---
language:
- ru
license: apache-2.0
pipeline_tag: text-generation
tags:
- aeonium
inference:
parameters:
temperature: 0.8
---
# Aeonium v1 BaseWeb 1B
A state-of-the-art language model for Russian text processing. This checkpoint is a preliminary version of the model with 1.6 billion parameters, trained only on web pages.
## Models
| Name | Parameters | Training tokens | Context window |
|:---------------------:|:----------:|:---------------:|:--------------:|
| **Aeonium-v1-BaseWeb-1B** | 1.6B | 32B | 4K |
| Aeonium-v1-Base-1B | 1.6B | In training | 4K |
| Aeonium-v1-Chat-1B | 1.6B | In training | 4K |
| Aeonium-v1-BaseCode-1B | 1.6B | In training | 4K |
| Aeonium-v1-ChatCode-1B | 1.6B | In training | 4K |
| Aeonium-v1-Base-3B | 3B | In training | 4K |
| Aeonium-v1-Chat-3B | 3B | In training | 4K |
| Aeonium-v1-BaseCode-3B | 3B | In training | 4K |
| Aeonium-v1-ChatCode-3B | 3B | In training | 4K |
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the model, then move the model to the GPU.
tokenizer = AutoTokenizer.from_pretrained("aeonium/Aeonium-v1-BaseWeb-1B")
model = AutoModelForCausalLM.from_pretrained("aeonium/Aeonium-v1-BaseWeb-1B").cuda()

# Encode a Russian prompt and sample a continuation.
input_ids = tokenizer("Искусственный интеллект - это", return_tensors="pt").to(model.device)["input_ids"]
output = model.generate(input_ids, max_new_tokens=48, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0]))
```
Output:
```
Искусственный интеллект - это не только про компьютеры и смартфоны. Его возможности безграничны, а с развитием интернета и интернета вещей он становится еще и самым настоящим оружием в борьбе с преступностью.
Мы поговорили с юристом о самых интересных и опасных способах использования ИИ.
```
English translation: "Artificial intelligence is not only about computers and smartphones. Its possibilities are limitless, and with the development of the internet and the Internet of Things, it is also becoming a real weapon in the fight against crime. We talked to a lawyer about the most interesting and dangerous ways of using AI."
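The same generation can also be run through the high-level `pipeline` API; a minimal sketch using the same checkpoint and sampling parameters as above:

```python
from transformers import pipeline

# Build a text-generation pipeline for the same checkpoint.
generator = pipeline(
    "text-generation",
    model="aeonium/Aeonium-v1-BaseWeb-1B",
    device=0,  # first GPU; use device=-1 for CPU
)

result = generator(
    "Искусственный интеллект - это",
    max_new_tokens=48,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```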
## Dataset Details
The pre-training dataset was collected from public sources, most of which are Russian-language web pages. The total size of the data is 32B tokens.
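A corpus token count like the 32B figure above can be reproduced with the model's own tokenizer. A minimal sketch, assuming a hypothetical corpus of plain-text files under `corpus/`:

```python
from pathlib import Path
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aeonium/Aeonium-v1-BaseWeb-1B")

# Hypothetical corpus layout: plain-text files under ./corpus
total_tokens = 0
for path in Path("corpus").rglob("*.txt"):
    text = path.read_text(encoding="utf-8")
    total_tokens += len(tokenizer(text)["input_ids"])

print(f"Total tokens: {total_tokens:,}")
```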
## Training Details
Training was performed on a TPU v4-128 node thanks to a grant from [TPU Research Cloud](https://sites.research.google/trc/about/).
Training metrics: loss 2.68; accuracy 0.48; batch size 1024.
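For clarity, the reported loss is the standard next-token cross-entropy, and accuracy is the fraction of positions where the model's argmax prediction matches the target token. A minimal PyTorch sketch of how such metrics are typically computed (illustrative only, not the actual TPU training code):

```python
import torch
import torch.nn.functional as F

def next_token_metrics(logits: torch.Tensor, labels: torch.Tensor):
    """Cross-entropy loss and top-1 accuracy for next-token prediction.

    logits: (batch, seq_len, vocab_size); labels: (batch, seq_len).
    """
    # Shift so the logits at position t predict the token at position t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    loss = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
    accuracy = (shift_logits.argmax(dim=-1) == shift_labels).float().mean()
    return loss.item(), accuracy.item()
```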
## Content Warning
Aeonium v1 is a large language model trained on a broad dataset from the internet. As such, it may generate text that contains biases, offensive language, or other objectionable content. The model's outputs should not be considered factual or representative of any individual's beliefs or identity. Users should exercise caution and apply careful filtering when using Aeonium's generated text, especially for sensitive or high-stakes applications. The developers do not condone generating harmful, biased, or unethical content.
## Copyright
The model is released under the Apache 2.0 license.