OwenArli committed
Commit 8be7875
1 Parent(s): 7e4d8eb

Update README.md

Files changed (1): README.md (+2 -5)
README.md CHANGED
@@ -13,9 +13,6 @@ DPO fine tuning method using the following datasets:
 - https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
 
 
- We are happy for anyone to try it out and give some feedback and we will have the model up on https://awanllm.com on our LLM API if it is popular.
-
-
 Instruct format:
 ```
 <|begin_of_text|><|start_header_id|>system<|end_header_id|>
@@ -32,6 +29,6 @@ Instruct format:
 
 Quants:
 
- FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1
+ FP16: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1
 
- GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1-GGUF
+ GGUF: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF
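The README's "Instruct format" section is cut off in this diff after the opening tokens. As a sketch, assuming the model follows the standard Llama 3 chat template (the function name and example strings here are illustrative, not from the README), a prompt can be assembled like this:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a prompt in the standard Llama 3 instruct format.

    The README excerpt shows only the opening tokens; the remaining
    special tokens below follow the usual Llama 3 chat template.
    """
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # Generation is expected to continue from the assistant header.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

In practice the GGUF quant embeds this template, so llama.cpp's chat mode applies it automatically; manual assembly like the above is only needed for raw completion endpoints.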