---
base_model: rahuldshetty/tinyllama-python
datasets:
- iamtarun/python_code_instructions_18k_alpaca
inference: false
language:
- en
license: apache-2.0
model_creator: rahuldshetty
model_name: tinyllama-python
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- code
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '### Instruction:

    Write a function to find square of a number.


    ### Response:'
- text: '### Instruction:

    Write a function to calculate factorial.


    ### Response:'
- text: '### Instruction:

    Write a function to check whether a number is prime.


    ### Response:'
---

# rahuldshetty/tinyllama-python-GGUF

Quantized GGUF model files for [tinyllama-python](https://huggingface.co/rahuldshetty/tinyllama-python) from [rahuldshetty](https://huggingface.co/rahuldshetty).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-python.fp16.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.fp16.gguf) | fp16 | 2.20 GB |
| [tinyllama-python.q2_k.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q2_k.gguf) | q2_k | 432.13 MB |
| [tinyllama-python.q3_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q3_k_m.gguf) | q3_k_m | 548.40 MB |
| [tinyllama-python.q4_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q4_k_m.gguf) | q4_k_m | 667.81 MB |
| [tinyllama-python.q5_k_m.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q5_k_m.gguf) | q5_k_m | 782.04 MB |
| [tinyllama-python.q6_k.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q6_k.gguf) | q6_k | 903.41 MB |
| [tinyllama-python.q8_0.gguf](https://huggingface.co/afrideva/tinyllama-python-GGUF/resolve/main/tinyllama-python.q8_0.gguf) | q8_0 | 1.17 GB |
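One of the files above can be fetched with `huggingface_hub`. A minimal sketch, assuming you want the q4_k_m variant as a size/quality trade-off (the repo ID and filenames come from the table; the actual download call is commented out since it needs the `huggingface_hub` package and network access):

```python
# Map quant method -> filename, mirroring the table above.
REPO_ID = "afrideva/tinyllama-python-GGUF"
QUANT_FILES = {
    "fp16": "tinyllama-python.fp16.gguf",
    "q2_k": "tinyllama-python.q2_k.gguf",
    "q3_k_m": "tinyllama-python.q3_k_m.gguf",
    "q4_k_m": "tinyllama-python.q4_k_m.gguf",
    "q5_k_m": "tinyllama-python.q5_k_m.gguf",
    "q6_k": "tinyllama-python.q6_k.gguf",
    "q8_0": "tinyllama-python.q8_0.gguf",
}

def resolve_url(quant: str) -> str:
    """Direct-download URL for a given quant method."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{QUANT_FILES[quant]}"

if __name__ == "__main__":
    # With huggingface_hub installed (pip install huggingface_hub):
    # from huggingface_hub import hf_hub_download
    # path = hf_hub_download(repo_id=REPO_ID, filename=QUANT_FILES["q4_k_m"])
    print(resolve_url("q4_k_m"))
```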
## Original Model Card:

# rahuldshetty/tinyllama-python-gguf

- Base model: [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit)
- Dataset: [iamtarun/python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
- Training Script: [unslothai: Alpaca + TinyLlama + RoPE Scaling full example.ipynb](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing)

## Prompt Format

```
### Instruction:
{instruction}

### Response:
```
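The prompt format above can be assembled programmatically and fed to a GGUF runtime such as llama-cpp-python. A minimal sketch (the model path is an assumption, and the `Llama` call is commented out since it requires a downloaded GGUF file):

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the ### Instruction / ### Response format."""
    return f"### Instruction:\n{instruction}\n\n### Response:"

prompt = build_prompt("Write a function to find cube of a number.")

# With llama-cpp-python installed (pip install llama-cpp-python):
# from llama_cpp import Llama
# llm = Llama(model_path="tinyllama-python.q4_k_m.gguf")  # hypothetical local path
# out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
# print(out["choices"][0]["text"])
```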
## Example

```
### Instruction:
Write a function to find cube of a number.

### Response:
```