---
language:
- en
pipeline_tag: text-generation
base_model: PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T
datasets:
- ArmelR/oasst1_guanaco_english
license: apache-2.0
---
A TinyLlama 1.5T checkpoint fine-tuned to answer questions. Prompts use the following format:
```
f"{'prompt'}\n{'completion'}\n<END>"
```
No special prompt template is needed: just the question, then a newline to begin the answer.
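
For instance, a fully formatted example might look like the sketch below (the strings are illustrative, not taken from the dataset):

```
prompt = "What is a large language model?"
completion = "A large language model is a neural network trained to predict the next token."
# Format: prompt, newline, completion, newline, then the <END> marker.
text = f"{prompt}\n{completion}\n<END>"
```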


```
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer directly
tokenizer = AutoTokenizer.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")
model = AutoModelForCausalLM.from_pretrained("Corianas/tiny-llama-miniguanaco-1.5T")

# Run a text-generation pipeline on a question
prompt = "What is a large language model?"
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=500)
result = pipe(f"<s>{prompt}")
print(result[0]['generated_text'])
```
The result will contain the answer, ending with `<END>` on a new line.
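
To recover just the answer, trim the generated text at the `<END>` marker. A minimal sketch, assuming the `result` and `prompt` variables from the block above:

```
# Keep only the text before the <END> marker.
text = result[0]['generated_text']
answer = text.split("<END>")[0]
# The pipeline echoes the input; strip it (with or without the <s> prefix).
for prefix in (f"<s>{prompt}", prompt):
    if answer.startswith(prefix):
        answer = answer[len(prefix):]
        break
print(answer.strip())
```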