Kaoeiri committed db8fff3 (1 parent: ce44806)

Update README.md
Files changed (1): README.md +0 -93
README.md CHANGED
---
tags:
- merge
- mergekit
- lazymergekit
- grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B
- DataPilot/ArrowPro-7B-KillerWhale
- icefog72/IceBlendedLatteRP-7b
- Elizezen/Berghof-NSFW-7B
- mlabonne/OmniBeagle-7B
- Kaoeiri/AkiroErotures
base_model:
- grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B
- DataPilot/ArrowPro-7B-KillerWhale
- icefog72/IceBlendedLatteRP-7b
- Elizezen/Berghof-NSFW-7B
- mlabonne/OmniBeagle-7B
- Kaoeiri/AkiroErotures
---

# AkiroXEntro-7B-1-V1

AkiroXEntro-7B-1-V1 is a merge of the following models, created using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
**Keep in mind that this merged model has not been extensively tested yet, which may result in vocabulary errors.**
* [grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B](https://huggingface.co/grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B)
* [DataPilot/ArrowPro-7B-KillerWhale](https://huggingface.co/DataPilot/ArrowPro-7B-KillerWhale)
* [icefog72/IceBlendedLatteRP-7b](https://huggingface.co/icefog72/IceBlendedLatteRP-7b)
* [Elizezen/Berghof-NSFW-7B](https://huggingface.co/Elizezen/Berghof-NSFW-7B)
* [mlabonne/OmniBeagle-7B](https://huggingface.co/mlabonne/OmniBeagle-7B)
* [Kaoeiri/AkiroErotures](https://huggingface.co/Kaoeiri/AkiroErotures)

## 🧩 Configuration

```yaml
models:
  - model: grimjim/Mistral-7B-Instruct-demi-merge-v0.3-7B
    parameters:
      density: 0.5
      weight: 1.0
  - model: DataPilot/ArrowPro-7B-KillerWhale
    parameters:
      density: 0.2
      weight: 0.5
  - model: icefog72/IceBlendedLatteRP-7b
    parameters:
      density: 0.1
      weight: 0.2
  - model: Elizezen/Berghof-NSFW-7B
    parameters:
      density: 0.7
      weight: 0.4
  - model: mlabonne/OmniBeagle-7B
    parameters:
      density: 0.2
      weight: 0.4
  - model: Kaoeiri/AkiroErotures
    parameters:
      density: 0.5
      weight: 0.7
merge_method: ties
tokenizer_source: union
base_model: Kaoeiri/AkiroErotures
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
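
In this TIES configuration, `weight` scales each model's contribution to the merge and `density` sets the fraction of each model's delta parameters that is retained. To reproduce the merge outside the LazyMergekit notebook, the config above can be passed to mergekit directly. The snippet below is a minimal sketch based on mergekit's documented Python API; the config path and output directory are illustrative assumptions, not part of this repository.

```python
# Minimal sketch: run the TIES merge locally with mergekit (pip install mergekit).
# CONFIG_YML and OUTPUT_PATH are illustrative assumptions.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "config.yaml"            # the YAML config shown above, saved to disk
OUTPUT_PATH = "./AkiroXEntro-7B-1-V1"  # where the merged weights will be written

# Parse and validate the merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the result (including the union tokenizer).
run_merge(
    merge_config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use the GPU when one is available
        copy_tokenizer=True,             # emit the merged tokenizer alongside the weights
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```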

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kaoeiri/AkiroXEntro-7B-1-V1"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Create a text-generation pipeline that loads the model in float16
# and places it across the available devices automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
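
For more control than the pipeline helper offers, the same generation can be run through `AutoModelForCausalLM` directly. The sketch below is a rough equivalent of the example above, reusing the same sampling settings.

```python
# Sketch: direct generation without the pipeline helper,
# mirroring the sampling settings from the example above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Kaoeiri/AkiroXEntro-7B-1-V1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "What is a large language model?"}]
# Tokenize through the chat template and move the ids to the model's device.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```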