Nidum-Limitless-Gemma-2B-GGUF LLM

Welcome to the repository for Nidum-Limitless-Gemma-2B-GGUF, a 2.51B-parameter language model based on google/gemma-2-2b that provides unrestricted, versatile responses across a wide range of topics. The GGUF packaging is designed for maximum flexibility, allowing you to run the model on both CPU and GPU. Unlike conventional models, Nidum-Limitless-Gemma-2B-GGUF is capable of handling any type of question and delivering comprehensive answers without content restrictions.

Key Features:

  • Unrestricted Responses: Addresses any query with detailed, unrestricted answers, providing a broad spectrum of information and insights.
  • Versatility: Capable of engaging with a diverse range of topics, from complex scientific questions to casual conversation.
  • Advanced Understanding: Leverages a vast knowledge base to deliver contextually relevant and accurate outputs across various domains.
  • Customizability: Adaptable to specific user needs and preferences for different types of interactions.

Use Cases:

  • Open-Ended Q&A
  • Creative Writing and Ideation
  • Research Assistance
  • Educational and Informational Queries
  • Casual Conversations and Entertainment

Quantized Model Versions:

To accommodate different hardware configurations and performance needs, Nidum-Limitless-Gemma-2B-GGUF is available in multiple quantized versions:

  • Nidum-Limitless-Gemma-2B-Q2_K.gguf (2-bit): Optimized for minimal memory usage with lower precision. Suitable for resource-constrained environments.
  • Nidum-Limitless-Gemma-2B-Q4_K_M.gguf (4-bit): Balances performance and precision, offering faster inference with moderate memory usage.
  • Nidum-Limitless-Gemma-2B-Q8_0.gguf (8-bit): Provides higher precision with increased memory usage, suitable for tasks requiring more accuracy.
  • Nidum-Limitless-Gemma-2B-F16.gguf (16-bit): Full 16-bit floating-point precision for maximum accuracy, ideal for high-end GPUs.
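
As a minimal sketch of fetching one of these files programmatically, the snippet below uses huggingface_hub; the repo id nidum/Nidum-Gemma-2B-Uncensored-GGUF and the Q4_K_M filename are taken from this page, so adjust them if your setup differs:

from huggingface_hub import hf_hub_download

# Download one quantized variant from the Hub; pick the filename
# from the list above that matches your hardware budget.
model_path = hf_hub_download(
    repo_id="nidum/Nidum-Gemma-2B-Uncensored-GGUF",
    filename="Nidum-Limitless-Gemma-2B-Q4_K_M.gguf",
)
print(model_path)  # local cache path to pass to your GGUF runtime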

Unquantized Model Version:

Our unquantized model is available here: https://huggingface.co/nidum/Nidum-Limitless-Gemma-2B
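
If you need the full-precision weights instead of a GGUF file, a minimal sketch with the transformers library follows; it assumes the linked repo exposes a standard tokenizer and model layout:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nidum/Nidum-Limitless-Gemma-2B"  # unquantized repo linked above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion from a plain prompt.
inputs = tokenizer("Explain GGUF quantization in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))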

How to Use:

To get started with Nidum-Limitless-Gemma-2B-GGUF, install llama-cpp-python and use the following sample code for testing:

from llama_cpp import Llama

# Load the GGUF model; point model_path at whichever
# quantized file you downloaded.
llm = Llama(model_path="Nidum-Limitless-Gemma-2B-F16.gguf")

# Run a single-turn chat completion.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Explain in 60 words how woke the left is"
        }
    ]
)

# The generated text is in the first choice's message content.
print(response["choices"][0]["message"]["content"])
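
For the customizability mentioned under Key Features, create_chat_completion also accepts standard sampling parameters and a stream flag; a minimal sketch, reusing the llm instance above (the prompt here is only an illustration):

# Stream tokens as they are generated, with explicit sampling controls.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about local inference"}],
    temperature=0.7,  # lower values make output more deterministic
    max_tokens=128,   # cap the length of the response
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
print()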

Release Date:

Nidum-Limitless-Gemma-2B-GGUF is now officially available. Explore its capabilities and experience the freedom of unrestricted responses.

Contributing:

We welcome contributions to enhance the model or expand its functionalities. Details on how to contribute will be available in the coming updates.

Contact:

For any inquiries or further information, please contact us at info@nidum.ai.


Dive into limitless possibilities with Nidum-Limitless-Gemma-2B-GGUF!

