Ruqiya committed
Commit 552eb6e
1 Parent(s): 5b0d34f

Upload folder using huggingface_hub

Files changed (1): README.md (+57, -0)

README.md ADDED
---
tags:
- merge
- mergekit
- lazymergekit
- google/gemma-7b
base_model:
- google/gemma-7b
---

# Gemma-2b-rs

Gemma-2b-rs is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing), with [google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) as the base model:
* [google/gemma-7b](https://huggingface.co/google/gemma-7b)

## 🧩 Configuration

```yaml
models:
  - model: google/gemma-7b-it
    # No parameters necessary for base model
  - model: google/gemma-7b
    parameters:
      density: 0.53
      weight: 0.45
merge_method: dare_ties
base_model: google/gemma-7b-it
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
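
To reproduce the merge, this configuration can be fed to [mergekit](https://github.com/arcee-ai/mergekit) via its `mergekit-yaml` CLI or its Python API. A minimal sketch of the latter, assuming the YAML above is saved as `config.yaml` (the output path is illustrative):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE-TIES merge and write the merged weights to ./merged.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The CLI equivalent is `mergekit-yaml config.yaml ./merged --copy-tokenizer`.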

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Ruqiya/Gemma-2b-rs"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response from the merged model.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
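
If you prefer to manage generation yourself rather than going through `pipeline`, the model can also be loaded directly. A minimal sketch under the same assumptions (same model id and sampling settings as above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ruqiya/Gemma-2b-rs"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Tokenize the chat-formatted prompt directly.
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "What is a large language model?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```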