Tiny-llamix

Model Description

Tiny-llamix is a model built from TinyLlama using Charles Goddard's mergekit on the mixtral branch. Though technically a Mixtral model, it can (maybe) be plugged into most Llama implementations. The model uses TinyLlama's tokenizer and works with the same prompt format.

This model is a proof of concept and won't necessarily yield better outputs; it hasn't been tested.
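
Because the merge produces a Mixtral-architecture checkpoint, you can check what you're actually loading before pulling the weights. A minimal sketch (the field names assume a standard Mixtral config):

from transformers import AutoConfig

# Fetch just the config, not the weights
config = AutoConfig.from_pretrained("SE6446/Tiny-llamix")
print(config.model_type)         # should report "mixtral"
print(config.num_local_experts)  # should report 2, one per expert in the config below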

Configuration

base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
gate_mode: hidden
dtype: bfloat16 
experts:
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
      - "M1"
  - source_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
    positive_prompts:
     - "M2"

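To reproduce the merge, save the config above as a YAML file and feed it to mergekit's MoE entry point; on the mixtral branch the invocation should look roughly like mergekit-moe config.yml ./Tiny-llamix (treat the exact flags as an assumption and check the mergekit README).
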
Usage

It can be used like any other Transformers model:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained("SE6446/Tiny-llamix").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("SE6446/Tiny-llamix")

# Write and tokenize the prompt (TinyLlama's chat format)
instruction = '''<|system|>\nYou are a chatbot who can help code!</s>
<|user|> Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>
<|assistant|>'''
inputs = tokenizer(instruction, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate
outputs = model.generate(**inputs, max_length=200)

# Decode and print the completion
text = tokenizer.batch_decode(outputs)[0]
print(text)
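
Since the model keeps TinyLlama's tokenizer and prompt format, you can also let the tokenizer build the prompt via its chat template instead of writing the special tokens by hand. A minimal sketch, assuming the merged tokenizer carries over TinyLlama's chat template:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("SE6446/Tiny-llamix").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("SE6446/Tiny-llamix")

# The same conversation as above, expressed as a message list
messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI."},
]
# apply_chat_template renders the <|system|>/<|user|>/<|assistant|> tags for us
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")
outputs = model.generate(**inputs, max_length=200)
print(tokenizer.batch_decode(outputs)[0])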

Acknowledgements

To Charles Goddard for creating the tool and for explaining it on his blog in a way a buffoon like me could understand.

To the TinyLlama team for providing the model as open source!
