MaziyarPanahi committed on
Commit 1af9850
1 Parent(s): 5023723

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -18,7 +18,7 @@ inference: false
 model_creator: MaziyarPanahi
 quantized_by: MaziyarPanahi
 base_model: Qwen/Qwen2-7B
-model_name: Qwen2-7B-Instruct-v0.4
+model_name: calme-2.4-qwen2-7b
 datasets:
 - nvidia/HelpSteer2
 - teknium/OpenHermes-2.5
@@ -28,13 +28,13 @@ datasets:
 
 <img src="./qwen2-fine-tunes-maziyar-panahi.webp" alt="Qwen2 fine-tune" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
 
-# MaziyarPanahi/Qwen2-7B-Instruct-v0.4
+# MaziyarPanahi/calme-2.4-qwen2-7b
 
 This is a fine-tuned version of the `Qwen/Qwen2-7B` model. It aims to improve the base model across all benchmarks.
 
 # ⚡ Quantized GGUF
 
-All GGUF models are available here: [MaziyarPanahi/Qwen2-7B-Instruct-v0.4-GGUF](https://huggingface.co/MaziyarPanahi/Qwen2-7B-Instruct-v0.4-GGUF)
+All GGUF models are available here: [MaziyarPanahi/calme-2.4-qwen2-7b-GGUF](https://huggingface.co/MaziyarPanahi/calme-2.4-qwen2-7b-GGUF)
 
 # 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 
@@ -68,7 +68,7 @@ from transformers import pipeline
 messages = [
     {"role": "user", "content": "Who are you?"},
 ]
-pipe = pipeline("text-generation", model="MaziyarPanahi/Qwen2-7B-Instruct-v0.4")
+pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.4-qwen2-7b")
 pipe(messages)
 
 
@@ -76,6 +76,6 @@ pipe(messages)
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.4")
-model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.4")
+tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.4-qwen2-7b")
+model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.4-qwen2-7b")
 ```
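For context, the README's transformers snippet stops at loading the tokenizer and model. A minimal sketch of a full generation round-trip with the renamed checkpoint might look like the following; the `torch_dtype`, `device_map`, and `max_new_tokens` settings are illustrative assumptions, not part of this commit:

```python
# A sketch completing the README's snippet; dtype/device settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/calme-2.4-qwen2-7b"  # new name from this commit
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep a 7B model's memory modest
    device_map="auto",
)

# Qwen2-based chat models ship a chat template, so build the prompt from messages.
messages = [{"role": "user", "content": "Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```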
 
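The README's GGUF link only points at the quantized repo. As a hedged sketch, one way to run a quant locally is via `llama-cpp-python`; the `Q4_K_M` filename pattern and context size here are assumptions, so check the GGUF repo for the files it actually ships:

```python
# A sketch (not from the commit) of running one of the GGUF quants locally.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/calme-2.4-qwen2-7b-GGUF",
    filename="*Q4_K_M.gguf",  # assumption: a mid-size quant; use any file the repo ships
    n_ctx=4096,               # assumption: a modest context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}]
)
print(out["choices"][0]["message"]["content"])
```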