## Acknowledgements

We gratefully acknowledge the open-source frameworks [llm.c](https://github.com/karpathy/llm.c) and [llama.cpp](https://github.com/ggerganov/llama.cpp), as well as the sensor dataset provided by SHL, all of which were instrumental in training and testing these models.

## Usage

The model can be used in two primary ways:

1. **With Hugging Face’s Transformers Library**

   ```python
   from transformers import pipeline
   import torch

   # Model repository and an example prompt
   path = "tinyllm/124M-0.2"
   prompt = "The sea is blue but it's his red sea"

   # Load the model in bfloat16, generate up to 30 new tokens, and
   # penalize repeated tokens to reduce looping output.
   generator = pipeline("text-generation", model=path, max_new_tokens=30,
                        repetition_penalty=1.3,
                        model_kwargs={"torch_dtype": torch.bfloat16},
                        device_map="auto")
   print(generator(prompt)[0]["generated_text"])
   ```

2. **With llama.cpp**

   Generate a GGUF model file using llama.cpp's [conversion script](https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py), then use the generated GGUF file for inference.

   ```shell
   python3 convert_hf_to_gguf.py models/mymodel/
   ```
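The `repetition_penalty=1.3` passed to the pipeline above discourages the model from emitting tokens it has already generated. As a rough illustration (a minimal sketch of the standard rescaling rule, not the actual Transformers implementation): positive logits of previously generated tokens are divided by the penalty and negative ones multiplied by it, so those tokens become less likely to be sampled again.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.3):
    """Down-weight logits of already-generated token ids (illustrative sketch)."""
    out = list(logits)
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty   # shrink positive scores
        else:
            out[token_id] *= penalty   # push negative scores further down
    return out

# Tokens 0 and 1 were already generated; token 2 is left untouched.
print(apply_repetition_penalty([2.6, -1.0, 0.5], [0, 1]))
```

A penalty of 1.0 leaves the logits unchanged; values above 1.0 trade some fluency for fewer repeated phrases.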

## Disclaimer