---
license: mit
language:
- ru
library_name: transformers
---

# llama-600M-rus

A simple, amateur experimental model trained on approximately 60 MB of text from books, made by a beginner in LLMs.
There were neither the resources nor the time to collect a bigger dataset.
Given the small number of training tokens, the output can be amateurish and not always adequate.
The work can be used as a checkpoint for further training or for experiments; a minimal continued-training sketch is included after the usage example below.

Simple usage example:

```python
from transformers import LlamaTokenizerFast, LlamaForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
model = LlamaForCausalLM.from_pretrained('demetera/llama-600M-rus')
tokenizer = LlamaTokenizerFast.from_pretrained('demetera/llama-600M-rus')

# Encode a Russian prompt and sample a continuation
prompt = "Я вышел на улицу и"
inputs = tokenizer(prompt, return_tensors='pt')
outputs = model.generate(inputs.input_ids, attention_mask=inputs.attention_mask,
                         max_new_tokens=250, do_sample=True, top_k=50, top_p=0.95)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
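
Setting `do_sample=True` with `top_k` and `top_p` makes `generate` sample from a truncated distribution instead of decoding greedily, which usually gives more varied text from a small model.

As noted above, the model is mainly intended as a checkpoint for further training. Below is a minimal, untested sketch of continued training with the Hugging Face `Trainer`; the local file `books.txt`, the 256-token block size, and all hyperparameters are illustrative assumptions, not the settings used for the original model.

```python
# Sketch of continued training from this checkpoint.
# Assumptions (not from the original card): a local plain-text file 'books.txt',
# a 256-token maximum length, and the hyperparameters below.
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizerFast,
    Trainer,
    TrainingArguments,
)

model = LlamaForCausalLM.from_pretrained('demetera/llama-600M-rus')
tokenizer = LlamaTokenizerFast.from_pretrained('demetera/llama-600M-rus')
if tokenizer.pad_token is None:
    # Llama tokenizers often ship without a padding token; reuse EOS for padding
    tokenizer.pad_token = tokenizer.eos_token

# Load a plain-text corpus and tokenize each line, truncating to 256 tokens
dataset = load_dataset('text', data_files={'train': 'books.txt'})['train']

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=['text'])

# Causal-LM collator: with mlm=False it builds labels from input_ids,
# masking padded positions
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir='llama-600M-rus-continued',
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=100,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```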