Ppoyaa committed on
Commit a1a0e1c
1 Parent(s): 6d166f4

Update README.md

Files changed (1): README.md (+21 −29)
README.md CHANGED
@@ -7,40 +7,32 @@ tags:
  - merge

  ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the passthrough merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Ppoyaa/Lumina-5-Instruct](https://huggingface.co/Ppoyaa/Lumina-5-Instruct)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: Ppoyaa/Lumina-5-Instruct
-     layer_range: [0, 8]
- - sources:
-   - model: Ppoyaa/Lumina-5-Instruct
-     layer_range: [4, 16]
- - sources:
-   - model: Ppoyaa/Lumina-5-Instruct
-     layer_range: [8, 24]
- - sources:
-   - model: Ppoyaa/Lumina-5-Instruct
-     layer_range: [12, 32]
- merge_method: passthrough
- dtype: bfloat16
- ```
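
The removed passthrough config above stacks four overlapping layer ranges taken from a single source model; passthrough concatenates the slices rather than averaging them, so overlapping layers are duplicated. A minimal sketch (plain Python, numbers taken from the config above) of how deep the merged model ends up:

```python
# Layer ranges from the passthrough config above (start inclusive, end exclusive).
layer_ranges = [(0, 8), (4, 16), (8, 24), (12, 32)]

# Passthrough concatenates the sliced layer stacks in order, so layers that
# appear in more than one range are duplicated rather than merged.
merged_layers = [layer for start, end in layer_ranges for layer in range(start, end)]

print(len(merged_layers))  # 8 + 12 + 16 + 20 = 56 layers in the merged model
```
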
 
+ # Lumina-5.5-Instruct
+ Lumina-5.5-Instruct is a Mixture of Experts (MoE) made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). This model uses a context window of up to 32k.
+
+ ## 🏆 Open LLM Leaderboard Evaluation Results
+ Coming soon.
+
+ ## 💻 Usage
+
+ ```python
+ !pip install -qU transformers bitsandbytes accelerate
+
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ model = "Ppoyaa/Lumina-5.5-Instruct"
+
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
+ )
+
+ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
+ prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
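
In the added usage block, `apply_chat_template` renders the `messages` list into the model's prompt format before generation. A rough sketch of what a Mistral-style template would produce for the message above (the hand-rolled function and the exact special tokens are illustrative assumptions; the model's real template is defined by its tokenizer and may differ):

```python
# Illustrative stand-in for tokenizer.apply_chat_template with a
# Mistral-style template; the actual template ships with the tokenizer.
def render_mistral_style(messages):
    parts = ["<s>"]
    for message in messages:
        if message["role"] == "user":
            parts.append(f"[INST] {message['content']} [/INST]")
        elif message["role"] == "assistant":
            parts.append(f"{message['content']}</s>")
    return "".join(parts)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
print(render_mistral_style(messages))
# → <s>[INST] Explain what a Mixture of Experts is in less than 100 words. [/INST]
```
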