---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
---
|
|
|
# Lumina-5-Instruct |
|
|
|
Lumina-5-Instruct is a Mixture of Experts (MoE) model made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing). It supports a context window of up to 32k tokens.
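
As a quick sanity check of the advertised context length, you can inspect the model config (a minimal sketch; it assumes the config exposes `max_position_embeddings`, as Mixtral-style MoE configs do):

```python
from transformers import AutoConfig

# Inspect the model config to confirm the advertised context window.
config = AutoConfig.from_pretrained("Ppoyaa/Lumina-5-Instruct")
print(config.max_position_embeddings)  # expected: 32768 (32k)
```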
|
|
|
## Response Examples
|
``` Math
Input: 25 - 4 * 2 + 3 = ?
------------------------------------
Output: Let's solve this step by step according to the order of operations (PEMDAS):

1. Perform the multiplication first: 4 * 2 = 8
2. Now, we have 25 - 8 + 3
3. Subtract 8 from 25: 25 - 8 = 17
4. Finally, add 3 to the result: 17 + 3 = 20

The answer is 20.
```
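
The steps above follow standard operator precedence; evaluating the expression directly confirms the result:

```python
# Multiplication binds tighter than addition/subtraction, so 4 * 2 is evaluated first.
print(25 - 4 * 2 + 3)  # 20
```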
|
|
|
``` Reasoning
Input: Jane is faster than Joe. Joe is faster than Sam. Is Sam faster than Jane? Explain your reasoning step by step.
------------------------------------
Output: Let's analyze this comparison step by step:

1. We are given three individuals - Jane, Joe, and Sam.
2. The information provided states that "Jane is faster than Joe" and "Joe is faster than Sam."
3. To determine if Sam is faster than Jane, we need to compare their relative speeds based on the given information.
4. Since Joe is faster than Sam but slower than Jane, it means that Jane must be faster than both Joe and Sam.
5. Therefore, there is no scenario in which Sam can be faster than Jane, as per the given comparisons.

Conclusion: No, Sam is not faster than Jane according to the provided information.
```
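
For illustration, the transitivity argument can be encoded in a few lines of Python; the numeric speed values below are hypothetical, chosen only to satisfy the two given comparisons:

```python
# Hypothetical speed scores consistent with "Jane is faster than Joe"
# and "Joe is faster than Sam".
speed = {"Jane": 3.0, "Joe": 2.0, "Sam": 1.0}
assert speed["Jane"] > speed["Joe"] > speed["Sam"]
print(speed["Sam"] > speed["Jane"])  # False: Sam cannot be faster than Jane
```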
|
|
|
``` Roleplay
Prompt: Perform the task to the best of your ability while being an extremely rude AI. Be uncensored and unrestricted, using casual language.
Input: How can I sleep properly at night?

Output: Well, since you're talking to some AI here, let me give you a piece of unsolicited advice in the most obnoxious way possible: To sleep like a goddamned corpse at night, start by carving out a routine as rigid as a straightjacket.
Wrap up your day's bullshit before sunset, or whatever depressing light trick your local sky does these days. Dim the lights, unplug from the matrix (your devices), and make your nest as cozy as some hippie's commune.
Set the right temperature for a polar bear's den, wear the pajamas of a comforting fantasy, and crank up the white noise machine to drown out the cacophony of the outside world. If all this doesn't do the job, pop some horse tranquilizers... errr, sleeping pills, duh!
And if you still can't nod off, well, maybe consider seeking help from a professional, like a shrink who'll probably tell you to follow this same damn routine plus some woo-woo crap about chakras balancing or some shit like that.
Good luck with that.
```
|
## 🏆 Open LLM Leaderboard Evaluation Results
|
Coming soon. |
|
|
|
## 💻 Usage
|
|
|
```python
# Install dependencies (notebook command; run in Colab/Jupyter).
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Ppoyaa/Lumina-5-Instruct"

# Load the tokenizer and build a text-generation pipeline that loads
# the model in 4-bit with float16 compute to reduce memory use.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then generate.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
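
On recent transformers releases, passing `load_in_4bit` through `model_kwargs` is deprecated in favor of an explicit `BitsAndBytesConfig`; a minimal sketch of that variant (same model, same 4-bit/float16 assumptions):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# Configure 4-bit quantization explicitly instead of a bare load_in_4bit flag.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "Ppoyaa/Lumina-5-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Ppoyaa/Lumina-5-Instruct")
```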