---
tags:
- merge
- mergekit
- lazymergekit
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
base_model:
- flemmingmiguel/NeuDist-Ro-7B
- johannhartmann/Brezn3
- ResplendentAI/Flora_DPO_7B
license: cc
language:
- de
---

# Spaetzle-v8-7b

Spaetzle-v8-7b is a [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing) merge of the following models, with [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser) as the base model (see the configuration below):
* [flemmingmiguel/NeuDist-Ro-7B](https://huggingface.co/flemmingmiguel/NeuDist-Ro-7B)
* [johannhartmann/Brezn3](https://huggingface.co/johannhartmann/Brezn3)
* [ResplendentAI/Flora_DPO_7B](https://huggingface.co/ResplendentAI/Flora_DPO_7B)

## 🧩 Configuration

```yaml
models:
  - model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
    # no parameters necessary for base model
  - model: flemmingmiguel/NeuDist-Ro-7B
    parameters:
      density: 0.60
      weight: 0.30
  - model: johannhartmann/Brezn3
    parameters:
      density: 0.65
      weight: 0.40
  - model: ResplendentAI/Flora_DPO_7B
    parameters:
      density: 0.6
      weight: 0.3
merge_method: dare_ties
base_model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
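
The same configuration can be used to reproduce the merge locally with [mergekit](https://github.com/arcee-ai/mergekit). Below is a minimal sketch using mergekit's Python API, assuming the YAML above is saved as `config.yaml` and `mergekit` is installed (`pip install mergekit`); the output path and options are illustrative, not part of this model card:

```python
# Sketch: run the merge config above through mergekit's Python API.
# Assumes config.yaml holds the YAML shown above; paths are illustrative.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Spaetzle-v8-7b",  # directory for the merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)
```

Equivalently, the `mergekit-yaml config.yaml ./Spaetzle-v8-7b` command-line entry point runs the same merge.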

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "cstr/Spaetzle-v8-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
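
If you prefer to control generation directly and trim the prompt from the output, the model can also be loaded without the pipeline helper; this sketch is equivalent to the example above:

```python
# Equivalent usage without the pipeline helper; the generation
# parameters mirror the example above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cstr/Spaetzle-v8-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```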