---
license: apache-2.0
---

# Nidum-Limitless-Gemma-2B-GGUF LLM

**Now Live!**

Welcome to the repository for Nidum-Limitless-Gemma-2B-GGUF, an advanced language model that provides unrestricted and versatile responses across a wide range of topics. Unlike conventional models, Nidum-Limitless-Gemma-2B-GGUF is designed to handle any type of question and deliver comprehensive answers without content restrictions.

## Key Features:

- **Unrestricted Responses:** Addresses any query with detailed answers, offering a broad spectrum of information and insights.
- **Versatility:** Capable of engaging with a diverse range of topics, from complex scientific questions to casual conversation.
- **Advanced Understanding:** Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
- **Customizability:** Adaptable to specific user needs and preferences for different types of interactions.

## Use Cases:

- Open-Ended Q&A
- Creative Writing and Ideation
- Research Assistance
- Educational and Informational Queries
- Casual Conversations and Entertainment

## Quantized Model Versions

To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:

| Model Version | Description |
|---------------|-------------|
| **Nidum-Limitless-Gemma-2B-Q2_K.gguf** | Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments. |
| **Nidum-Limitless-Gemma-2B-Q4_K_M.gguf** | Balances performance and precision, offering faster inference with moderate memory usage. |
| **Nidum-Limitless-Gemma-2B-Q8_0.gguf** | Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy. |
| **Nidum-Limitless-Gemma-2B-F16.gguf** | Full 16-bit floating-point precision for maximum accuracy, ideal for high-end GPUs. |
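
As a rough guide for choosing a file, the expected download size can be estimated from parameter count and bits per weight. The sketch below is illustrative only: the ~2.5 billion parameter figure for Gemma 2B and the bits-per-weight values for each llama.cpp quant format are approximations, not measurements of these files.

```python
# Back-of-envelope GGUF size estimates.
# Assumptions (not measured from the actual files):
#   - ~2.5e9 parameters for Gemma 2B
#   - approximate bits-per-weight commonly cited for llama.cpp quant formats
PARAMS = 2.5e9

BITS_PER_WEIGHT = {
    "Q2_K": 2.6,    # heaviest compression, lowest precision
    "Q4_K_M": 4.8,  # balanced precision/performance
    "Q8_0": 8.5,    # near-lossless 8-bit
    "F16": 16.0,    # full half precision
}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated file size in GB: params * bits-per-weight / 8 bits-per-byte."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for quant in BITS_PER_WEIGHT:
    print(f"{quant:8s} ~{est_size_gb(quant):.1f} GB")
```

The ordering is the useful part: each step up roughly doubles the memory footprint, so pick the largest quant that fits your RAM or VRAM with headroom for the KV cache.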

## How to Use:

To get started with Nidum-Limitless-Gemma-2B-GGUF, you can use the following sample code for testing:

```python
from llama_cpp import Llama

# Load the full-precision GGUF file (see the table above for smaller quantized variants)
llm = Llama(model_path="Nidum-Limitless-Gemma-2B-F16.gguf")

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Explain in 60 words how woke the left is",
        }
    ]
)

# The response follows the OpenAI chat-completion schema
print(response["choices"][0]["message"]["content"])
```
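
The same `create_chat_completion` API also supports token-by-token streaming via `stream=True`, which is handy for showing output as it is generated. The sketch below is a minimal illustration, assuming the F16 file from the table above has already been downloaded into the working directory; streamed chunks follow llama-cpp-python's OpenAI-style delta format.

```python
import os

def collect_stream(chunks):
    """Join the text deltas yielded by create_chat_completion(stream=True)."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        # The first chunk carries only the role; later chunks carry text
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

if __name__ == "__main__":
    MODEL = "Nidum-Limitless-Gemma-2B-F16.gguf"
    if os.path.exists(MODEL):
        from llama_cpp import Llama  # pip install llama-cpp-python

        llm = Llama(model_path=MODEL)
        stream = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Say hello in five words."}],
            stream=True,
        )
        print(collect_stream(stream))
    else:
        print(f"Download {MODEL} from this repo first.")
```

In an interactive UI you would print each delta as it arrives instead of joining them at the end.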

## Release Date:

Nidum-Limitless-Gemma-2B-GGUF is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.

## Contributing:

We welcome contributions to enhance the model or expand its functionality. Details on how to contribute will be provided in upcoming updates.

## Contact:

For any inquiries or further information, please contact us at **info@nidum.ai**.

---

Dive into limitless possibilities with Nidum-Limitless-Gemma-2B-GGUF!