---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
# Lumina-5-Instruct
Lumina-5-Instruct is a Mixture of Experts (MoE) model made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). It supports a context window of up to 32k tokens.
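The advertised context length can be inspected from the model configuration. This is a minimal sketch; it assumes the limit is exposed through the standard `max_position_embeddings` field of the config:

```python
from transformers import AutoConfig

# Inspect the context window from the model config
# (assumes the limit is stored in the standard max_position_embeddings field)
config = AutoConfig.from_pretrained("Ppoyaa/Lumina-5-Instruct")
print(config.max_position_embeddings)  # expected to be 32768 (32k)
```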
# Response Examples
``` Math
Input: 25 - 4 * 2 + 3 = ?
------------------------------------
Output: To solve this equation, follow the order of operations (PEMDAS/BODMAS):
1. Parentheses/Brackets
2. Exponents
3. Multiplication and Division (from left to right)
4. Addition and Subtraction (from left to right)
5. The given equation doesn't have any parentheses or exponents, so we can start with multiplication and division:
25 - 8 = 17
Now, perform the addition and subtraction from left to right:
17 + 3 = 20
So, the result is 20.
```
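For reference, the arithmetic in the example above can be checked directly:

```python
# Order of operations: multiplication first (4 * 2 = 8), then left-to-right addition/subtraction
result = 25 - 4 * 2 + 3
print(result)  # 20
```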
``` Reasoning
Input: Jane is faster than Joe. Joe is faster than Sam. Is Sam faster than Jane? Explain your reasoning step by step.
------------------------------------
Output: Let's analyze this comparison step by step:
1. We are given three individuals - Jane, Joe, and Sam.
2. The information provided states that "Jane is faster than Joe" and "Joe is faster than Sam."
3. To determine if Sam is faster than Jane, we need to compare their relative speeds based on the given information.
4. Since Joe is faster than Sam but slower than Jane, it means that Jane must be faster than both Joe and Sam.
5. Therefore, there is no scenario in which Sam can be faster than Jane, as per the given comparisons.
Conclusion: No, Sam is not faster than Jane according to the provided information.
```
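The same transitive comparison can be sketched in code, using hypothetical speed values chosen only to satisfy "Jane is faster than Joe" and "Joe is faster than Sam":

```python
# Hypothetical speeds consistent with Jane > Joe > Sam
speeds = {"Jane": 3, "Joe": 2, "Sam": 1}
print(speeds["Sam"] > speeds["Jane"])  # False -> Sam is not faster than Jane
```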
## 🏆 Open LLM Leaderboard Evaluation Results
Coming soon.
## 💻 Usage
```python
# Install dependencies (run in a notebook or shell before executing this script):
#   pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Ppoyaa/Lumina-5-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit via bitsandbytes to reduce GPU memory usage
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate a response
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
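Note that the text-generation pipeline returns the prompt together with the completion. To keep only the model's reply, either slice off the prompt or pass `return_full_text=False` when calling the pipeline:

```python
# Keep only the newly generated reply (the prompt is included in generated_text by default)
reply = outputs[0]["generated_text"][len(prompt):]
print(reply)
```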