llama-600M-rus

A simple, customized, amateur experimental model pretrained from scratch on Russian fiction books (the model is updated regularly).
It generates amateur but more or less adequate output, given the number of training tokens it has seen.
The work can be used as a checkpoint for further training or for experiments; a minimal continued-pretraining sketch follows below.
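
A minimal continued-pretraining sketch using the transformers Trainer. The corpus file `my_books.txt` and all hyperparameters below are illustrative placeholders, not values used by the author:

```python
# Minimal continued-pretraining sketch; 'my_books.txt' and all
# hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from transformers import (LlamaTokenizerFast, LlamaForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model = LlamaForCausalLM.from_pretrained('demetera/llama-600M-rus')
tokenizer = LlamaTokenizerFast.from_pretrained('demetera/llama-600M-rus')
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

dataset = load_dataset('text', data_files={'train': 'my_books.txt'})
tokenized = dataset.map(
    lambda batch: tokenizer(batch['text'], truncation=True, max_length=512),
    batched=True, remove_columns=['text'])

# mlm=False gives standard causal (next-token) language modelling.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir='llama-600M-rus-continued',
                         per_device_train_batch_size=4,
                         num_train_epochs=1,
                         learning_rate=2e-5)

Trainer(model=model, args=args, train_dataset=tokenized['train'],
        data_collator=collator).train()
```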

Simple usage example:

```python
from transformers import LlamaTokenizerFast, LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained('demetera/llama-600M-rus')
tokenizer = LlamaTokenizerFast.from_pretrained('demetera/llama-600M-rus')

prompt = "Я вышел на улицу и"  # "I went out into the street and"
inputs = tokenizer(prompt, return_tensors='pt')

# Sample up to 250 new tokens with top-k / nucleus sampling.
outputs = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                         max_new_tokens=250, do_sample=True, top_k=50, top_p=0.95)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
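
The same generation can also be run through the transformers pipeline API. This is a minimal sketch mirroring the sampling settings above; nothing here is specific to this model:

```python
from transformers import pipeline

# High-level alternative to the manual generate() call above.
generator = pipeline('text-generation', model='demetera/llama-600M-rus')
result = generator("Я вышел на улицу и", max_new_tokens=250,
                   do_sample=True, top_k=50, top_p=0.95)
print(result[0]['generated_text'])
```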
Model size: 548M parameters (F32, safetensors).
