MaziyarPanahi committed
Commit 0343c62 · 1 parent: 4eb7935

Fix model's name (#5)


- Fix model's name (cf9afaf6c7dfafd7076c83d0a9e2e5dc4d993ad8)

Files changed (1)
  1. README.md +4 -4
README.md CHANGED

@@ -28,13 +28,13 @@ datasets:
 <img src="./llama-3-merges.webp" alt="Llama-3 DPO Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
 
-# Llama-3-8B-Instruct-DPO-v0.3 (32k)
+# MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1
 
-This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-8B-Instruct` model. I have used `rope_theta` to extend the context length up to 32K safely.
+This model is a fine-tune (DPO) of `meta-llama/Meta-Llama-3-70B-Instruct` model.
 
 # Quantized GGUF
 
-All GGUF models come with context length of `32000`: [Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3-32k-GGUF)
+All GGUF models are available here: [MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1-GGUF)
 
 # Prompt Template
 
@@ -53,7 +53,7 @@ This model uses `ChatML` prompt template:
 
 # How to use
 
-You can use this model by using `MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3` as the model name in Hugging Face's
+You can use this model by using `MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1` as the model name in Hugging Face's
 transformers library.
 
 ```python
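The README's own `python` block is cut off in this diff view. As a rough sketch (not taken from the commit), loading the renamed model with Hugging Face's transformers library and applying its ChatML chat template could look like the following; the dtype, device placement, prompt, and generation settings are illustrative assumptions.

```python
# Minimal sketch: load MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1 with transformers.
# Settings below (bf16, device_map, max_new_tokens) are assumptions, not from the README diff.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/Llama-3-70B-Instruct-DPO-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory for a 70B model
    device_map="auto",           # assumption: spread layers across available devices
)

# The README states the model uses the ChatML prompt template; the tokenizer's
# chat template is expected to apply it when building the prompt.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what DPO fine-tuning does in one sentence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```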