---
language:
- en
license: other
library_name: transformers
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
tags:
- chat
- qwen
- qwen2
- qwen2.5
- finetune
- chatml
base_model: Qwen/Qwen2.5-72B
datasets:
- argilla/ultrafeedback-binarized-preferences
model_name: calme-2.2-qwen2.5-72b
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
---

<img src="./calme-2.webp" alt="Calme-2 Models" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# MaziyarPanahi/calme-2.2-qwen2.5-72b

This model is a fine-tuned version of the powerful `Qwen/Qwen2.5-72B-Instruct`, pushing the boundaries of natural language understanding and generation even further. My goal was to create a versatile and robust model that excels across a wide range of benchmarks and real-world applications.

## Use Cases

This model is suitable for a wide range of applications, including but not limited to:

- Advanced question-answering systems
- Intelligent chatbots and virtual assistants
- Content generation and summarization
- Code generation and analysis
- Complex problem-solving and decision support

# ⚡ Quantized GGUF

Coming soon.

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Coming soon.

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

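For quick experimentation, the template above can also be assembled by hand. The sketch below is illustrative only; in practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from `transformers` produces the equivalent string for you.

```python
# Illustrative sketch: build a ChatML prompt by hand from a message list.
# In real use, prefer tokenizer.apply_chat_template(...).
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}\n<|im_end|>"
        for m in messages
    ]
    # End with the assistant header so the model generates the reply.
    parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```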
# How to use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/calme-2.2-qwen2.5-72b")
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/calme-2.2-qwen2.5-72b")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/calme-2.2-qwen2.5-72b")
```

Note: at 72B parameters, this model requires substantial memory; passing `device_map="auto"` to `from_pretrained` (with `accelerate` installed) shards the weights across available devices.

# Ethical Considerations

As with any large language model, users should be aware of potential biases and limitations. We recommend implementing appropriate safeguards and human oversight when deploying this model in production environments.