Pinkstack committed on
Commit: 3a840be
1 Parent(s): 15f0663

Update README.md

Files changed (1): README.md (+25 −2)
@@ -6,17 +6,40 @@ tags:
 - unsloth
 - llama
 - gguf
+- Roblox
+- Luau
 license: apache-2.0
 language:
 - en
 ---
 
-# Uploaded model
+![BY_PINKSTACK.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2xMulpuSlZ3C1vpGgsAYi.png)
+
+# 🤖 Which quant is right for you?
+
+- ***Q4:*** Best for edge devices such as phones or older laptops thanks to its compact size; quality is okay but fully usable.
+- ***Q5:*** Suited to most mid-range devices (e.g. an RTX 2070 Super); good quality and fast responses.
+- ***Q8:*** For most modern devices; responses are very high quality, but it is slower than Q5.
+- ***F16:*** For testing and evaluating the model; a server GPU is needed to run it quickly.
+
+## Things you should be aware of when using PGAM models (Pinkstack General Accuracy Models) 🤖
+
+This PGAM is based on Meta Llama 3.1 8B, fine-tuned on extra Roblox Luau data so that its outputs resemble those of the Roblox AI documentation assistant. We trained it on [this dataset](https://huggingface.co/datasets/mahiatlinux/luau_corpus-ShareGPT-for-EDM), which is based on Roblox/luau_corpus.
+
+To use this model, you need a service or runtime that supports the GGUF file format.
+Additionally, the model uses the Llama 3.1 prompt template.
+
+Using a system prompt is highly recommended.
+
+# Extra information
 - **Developed by:** Pinkstack
 - **License:** apache-2.0
 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
 
-This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+This model was trained using [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+
+Used this model? Don't forget to leave a like :)
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
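The updated card says the model uses the Llama 3.1 prompt template and recommends a system prompt. As a minimal sketch of what that template looks like for a single turn (the `build_llama31_prompt` helper and the example prompts are illustrative, not part of this repo — most GGUF runtimes such as llama.cpp apply this template for you from the model's chat-template metadata):

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt.

    Layout: a begin-of-text token, then each message wrapped in
    <|start_header_id|>role<|end_header_id|> headers and terminated
    by <|eot_id|>, ending with an open assistant header for generation.
    """
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


# Hypothetical example prompts for a Luau-focused model like this one.
prompt = build_llama31_prompt(
    "You are a helpful Roblox Luau coding assistant.",
    "Write a Luau function that prints every player's name.",
)
print(prompt)
```

The string ends with an open assistant header, so the model continues from there; pass your system prompt as the first argument rather than prepending it to the user message.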