Tanvir1337 committed
Commit 27183d0 (1 parent: 8b0e59c)

init readme content and metadata

Files changed (1): README.md (+62 -3)
README.md CHANGED
@@ -1,3 +1,62 @@
- ---
- license: llama3
- ---
+ ---
+ license: llama3
+ base_model: BanglaLLM/BanglaLLama-3-8b-unolp-culturax-instruct-v0.0.1
+ datasets:
+ - unolp/culturax
+ - BanglaLLM/bangla-alpaca-orca
+ language:
+ - bn
+ - en
+ tags:
+ - bangla
+ - large language model
+ - text-generation-inference
+ - transformers
+ library_name: transformers
+ pipeline_tag: text-generation
+ quantized_by: Tanvir1337
+ ---
+
+ # Tanvir1337/BanglaLLama-3-8b-BnWiki-Instruct-GGUF
+
+ This model has been quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp/), a high-performance inference engine for large language models.
+
+ ## System Prompt Format
+
+ To interact with the model, use the following prompt format:
+ ```
+ {System}
+ ### Prompt:
+ {User}
+ ### Response:
+ ```
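For illustration, a filled-in prompt following this template might look like the example below; the system and user text here are placeholders chosen for this note, not part of the model card:

```
You are a helpful bilingual assistant for Bangla and English.
### Prompt:
What is the capital of Bangladesh?
### Response:
```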
+
+ ## Usage Instructions
+
+ If you're new to using GGUF files, refer to [TheBloke's README](https://huggingface.co/TheBloke/CapybaraHermes-2.5-Mistral-7B-GGUF) for detailed instructions.
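As one possible route, here is a minimal sketch using the llama-cpp-python bindings (assuming they are installed via `pip install llama-cpp-python`; the model filename below is a placeholder, substitute the quantized file you actually downloaded):

```python
from llama_cpp import Llama

# Placeholder path: point this at whichever quantized GGUF file you downloaded.
llm = Llama(
    model_path="BanglaLLama-3-8b-BnWiki-Instruct.Q5_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Build the prompt using the format described above.
prompt = (
    "You are a helpful bilingual assistant for Bangla and English.\n"
    "### Prompt:\n"
    "What is the capital of Bangladesh?\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Prompt:"])
print(output["choices"][0]["text"])
```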
+
+ ## Quantization Options
+
+ The following graph compares various quantization types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
+
+ For more information on quantization, see [Artefact2's notes](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
+
+ ## Choosing the Right Model File
+
+ To select the optimal model file, consider the following factors:
+
+ 1. **Memory constraints**: Determine how much RAM and/or VRAM you have available.
+ 2. **Speed vs. quality**: If you prioritize speed, choose a model file that fits entirely within your GPU's VRAM. For maximum quality, consider one that fits within the combined RAM and VRAM of your system (a rough sizing sketch follows below).
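As a rough illustration only (the headroom figure below is an assumption made for this note, not from the model card; real usage also depends on context length and backend), a quick way to sanity-check whether a given file fits your memory budget:

```python
import os

def rough_fit_check(gguf_path: str, available_gib: float, headroom_gib: float = 2.0) -> bool:
    """Very rough heuristic: a GGUF model needs roughly its file size in memory,
    plus extra for the KV cache and compute buffers (assumed ~2 GiB here)."""
    file_gib = os.path.getsize(gguf_path) / 1024**3
    needed_gib = file_gib + headroom_gib
    print(f"file: {file_gib:.1f} GiB, estimated need: {needed_gib:.1f} GiB, "
          f"available: {available_gib:.1f} GiB")
    return needed_gib <= available_gib

# Hypothetical usage: check a downloaded Q5_K_M file against 8 GiB of VRAM.
# rough_fit_check("BanglaLLama-3-8b-BnWiki-Instruct.Q5_K_M.gguf", available_gib=8.0)
```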
+
+ **Quantization formats**:
+
+ * **K-quants** (e.g., Q5_K_M): A good starting point, offering a balance between speed and quality.
+ * **I-quants** (e.g., IQ3_M): Newer and more efficient, but may require specific hardware configurations (e.g., cuBLAS or rocBLAS).
+
+ **Hardware compatibility**:
+
+ * **I-quants**: Not compatible with Vulkan (AMD). If you have an AMD card, ensure you're using the rocBLAS build or a compatible inference engine.
+
+ For more information on the features and trade-offs of each quantization format, refer to the [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix).