Update README.md
README.md
@@ -26,8 +26,9 @@ Right now, only converted the following models:
 | Microsoft Phi-3 Mini | 3.8B | https://huggingface.co/Pelochus/phi-3-mini-rk3588 |
 | Llama 2 7B | 7B | https://huggingface.co/Pelochus/llama2-chat-7b-hf-rk3588 |
 | Llama 2 13B | 13B | https://huggingface.co/Pelochus/llama2-chat-13b-hf-rk3588 |
+| TinyLlama v1 | 1.1B | https://huggingface.co/Pelochus/tinyllama-v1-rk3588 |
 | Qwen 1.5 Chat | 4B | https://huggingface.co/Pelochus/qwen1.5-chat-4B-rk3588 |
+| Qwen 2 | 1.5B | https://huggingface.co/Pelochus/qwen2-1_5B-rk3588 |
-
 
 Llama 2 was converted using Azure servers.
 For reference, converting Phi-2 peaked at about 15 GBs of RAM + 25 GBs of swap (counting OS, but that was using about 2 GBs max).
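Since conversion can peak well above physical RAM (the Phi-2 note above mentions about 25 GB of swap on top of 15 GB of RAM), a machine with limited memory may need extra swap before attempting a conversion. A minimal sketch, assuming a Linux host with room for a 25 GB swapfile at `/swapfile` (the path and size are illustrative, not from the README; requires root):

```shell
# Allocate a 25 GB swapfile; size chosen to match the Phi-2 conversion peak noted above
sudo fallocate -l 25G /swapfile

# Restrict permissions as required by mkswap/swapon
sudo chmod 600 /swapfile

# Format the file as swap and enable it
sudo mkswap /swapfile
sudo swapon /swapfile

# Verify the new swap area is active
swapon --show
```

The swapfile can be removed after conversion with `sudo swapoff /swapfile && sudo rm /swapfile`; add it to `/etc/fstab` only if conversions are a recurring task.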