juvi21 committed
Commit 31b0569 · verified · 1 Parent(s): 6549b55

Update README.md

Files changed (1)
  1. README.md +0 -21
README.md CHANGED
@@ -56,27 +56,6 @@ For detailed benchmark results, including sub-categories and various metrics, pl
  - [juvi21/Hermes-2.5-Yi-1.5-9B-Chat-GGUF](https://huggingface.co/juvi21/Hermes-2.5-Yi-1.5-9B-Chat-GGUF) is available in:
  - **F16** **Q8_0** **Q6_K** **Q5_K_M** **Q4_K_M** **Q3_K_M** **Q2_K**
 
-
-
- ## Usage
-
- To use this model, you can load it using the Hugging Face Transformers library:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model = AutoModelForCausalLM.from_pretrained("juvi21/Hermes-2.5-Yi-1.5-9B-Chat")
- tokenizer = AutoTokenizer.from_pretrained("juvi21/Hermes-2.5-Yi-1.5-9B-Chat")
-
- # Generate text
- input_text = "What is the question to 42?"
- inputs = tokenizer(input_text, return_tensors="pt")
- outputs = model.generate(**inputs)
- generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(generated_text)
-
- ```
-
  ## chatml
  ```
  <|im_start|>system
 