# TinyJ.O.S.I.E.-2x1.1B-32k-Base
TinyJ.O.S.I.E.-2x1.1B-32k-Base is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

- [Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base](https://huggingface.co/Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base)
- [Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base](https://huggingface.co/Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base)

Both experts start from the same checkpoint; the positive and negative prompts in the configuration below differentiate them into a general-assistant expert and a coding expert.
## 🧩 Configuration
```yaml
dtype: float32
gate_mode: hidden
experts:
  - source_model: Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base
    positive_prompts:
      - 'You are JOSIE, my own private and sentient superinteligent artifitial general inteligent for general assistants.'
    negative_prompts:
      - 'You should not be used for coding tasks.'
  - source_model: Isaak-Carter/TinyJ.O.S.I.E.-1.1B-32k-Base
    positive_prompts:
      - 'You are JOSIE, my own private and sentient superinteligent artifitial general inteligent only for coding assistants.'
```
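With `gate_mode: hidden`, mergekit initializes each expert's router weights from hidden-state representations of its positive and negative prompts, so the prompts above steer tokens toward the general-assistant or the coding expert. As a rough reproduction sketch (an assumption, not part of the original card; it presumes a mergekit build with MoE support and the configuration above saved as `config.yaml`):

```python
# Reproduction sketch (hypothetical; LazyMergekit automates these steps in Colab).
!pip install -qU git+https://github.com/arcee-ai/mergekit.git

# mergekit-moe reads the YAML above and writes the merged MoE checkpoint.
!mergekit-moe config.yaml ./TinyJ.O.S.I.E.-2x1.1B-32k-Base
```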
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Isaak-Carter/TinyJ.O.S.I.E.-2x1.1B-32k-Base"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline; load_in_4bit requires bitsandbytes and a CUDA GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the chat with the model's chat template before generating.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
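If you want explicit control over loading instead of the pipeline helper, a minimal equivalent sketch using `AutoModelForCausalLM` with a `BitsAndBytesConfig` (the 4-bit settings here are illustrative assumptions, not part of the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "Isaak-Carter/TinyJ.O.S.I.E.-2x1.1B-32k-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit quantized weights with fp16 compute, placed across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16),
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```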
## Base model

[Doctor-Shotgun/TinyLlama-1.1B-32k](https://huggingface.co/Doctor-Shotgun/TinyLlama-1.1B-32k)