|
---
language:
- ru
license: apache-2.0
pipeline_tag: text-generation
tags:
- aeonium
- llama
- chat
- conversational
base_model: aeonium/Aeonium-v1.1-Base-4B
---
|
|
|
# Aeonium v1.1 Chat 4B

A state-of-the-art language model for Russian language processing. The model is fine-tuned for dialogue (SFT only) and was trained on 2× NVIDIA L40S GPUs.
|
|
|
|
|
## Usage |
|
Example of running the model on an NVIDIA GPU (CUDA):
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "aeonium/Aeonium-v1.1-Chat-4B"

# Load the tokenizer and the model in bfloat16 on the GPU
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

chat = [
    {"role": "user", "content": "Привет!"},
]

# Build the prompt with the model's chat template and generate a reply
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids=inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```
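
Since the model is fine-tuned for dialogue, earlier turns can be carried forward by appending them to the `chat` list before re-applying the template. A minimal sketch, assuming the standard `user`/`assistant` roles of the chat template; the assistant reply shown here is hypothetical, and the sampling parameters are illustrative rather than recommended settings:

```python
# Continue the conversation: append the previous (hypothetical) assistant reply
# and the next user turn, then rebuild the prompt with the chat template.
chat = [
    {"role": "user", "content": "Привет!"},
    {"role": "assistant", "content": "Привет! Чем могу помочь?"},  # hypothetical reply
    {"role": "user", "content": "Расскажи коротко о себе."},
]

prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)

# Sample a longer reply and decode only the newly generated tokens
outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```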
|
|
|
## Content Warning |
|
Aeonium v1.1 is a large language model trained on a broad dataset from the internet. As such, it may generate text that contains biases, offensive language, or otherwise objectionable content. The model's outputs should not be considered factual or representative of any individual's beliefs or identity. Users should exercise caution and apply careful filtering when using Aeonium's generated text, especially for sensitive or high-stakes applications. The developers do not condone generating harmful, biased, or unethical content.
|
|
|
## Copyright |
|
The model is released under the Apache 2.0 license. |