πŸš€ Nidum Gemma-3-4B IT Uncensored

Welcome to Nidum's Gemma-3-4B IT Uncensored, your gateway to an open and unrestricted AI experience. This powerful model enables users to explore and innovate without boundaries.

✨ Why Choose Nidum Gemma-3-4B IT Uncensored?

  • Unrestricted AI Interaction: Freedom to discuss, explore, and innovate without content limitations.
  • Efficient and Versatile: Optimized performance suitable for various hardware configurations.
  • Diverse Applications: Perfect for creative projects, conversational AI, educational tools, and entertainment.

πŸ“₯ Available Quantized Versions (GGUF)

| Quantization | Description | Bits per Weight | Download |
| --- | --- | --- | --- |
| Q8_0 | Best accuracy and performance | 8-bit | model-Q8_0.gguf |
| Q6_K | Balance between speed and quality | 6-bit | model-Q6_K.gguf |
| Q5_K_M | Good accuracy with lower memory usage | 5-bit | model-Q5_K_M.gguf |
| Q3_K_M | Smaller footprint, good for limited resources | 3-bit | model-Q3_K_M.gguf |
| TQ2_0 | Very fast inference, minimal memory usage | 2-bit | model-TQ2_0.gguf |
| TQ1_0 | Minimal memory usage, fastest inference | 2-bit | model-TQ1_0.gguf |
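Bits per weight largely determine a quantized file's size and RAM footprint. As a rough, purely illustrative estimate (real GGUF files also contain metadata and some mixed-precision tensors, so actual sizes differ):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough quantized model size in GiB: parameters * bits / 8 bytes.

    Ignores file metadata and mixed-precision layers, so real GGUF
    files will be somewhat larger than this estimate.
    """
    return n_params * bits_per_weight / 8 / 1024**3

n_params = 4.3e9  # parameter count from this model card
for name, bpw in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5), ("Q3_K_M", 3)]:
    print(f"{name}: ~{approx_gguf_size_gb(n_params, bpw):.1f} GiB")
```

For example, the 8-bit Q8_0 file comes out around 4 GiB, while Q3_K_M is closer to 1.5 GiB, which is why the lower-bit variants suit memory-constrained hardware.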

πŸš€ Recommended Applications

  • Creative Writing & Arts: Generate stories, scripts, poetry, and explore creative ideas.
  • Virtual Assistants: Provide natural and unrestricted conversational experiences.
  • Educational Resources: Facilitate engaging, interactive learning environments.
  • Entertainment & Gaming: Create immersive narratives and interactive gameplay experiences.

πŸŽ‰ Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "nidum/nidum-gemma-3-4b-it-uncensored"

# Load the tokenizer and the model in half precision to reduce memory usage
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

prompt = "Tell me an imaginative story about a hidden city."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate up to 200 tokens (prompt included) and decode the result;
# passing the attention mask alongside input_ids avoids padding warnings
generated_ids = model.generate(**inputs, max_length=200)
output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

print(output)
```
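By default, `generate()` decodes greedily; for creative writing you will usually want sampling (e.g. `do_sample=True` with `temperature` and `top_p`). To make the `top_p` parameter concrete, here is a minimal, self-contained sketch of nucleus (top-p) filtering, the step that keeps only the smallest set of most-probable tokens whose cumulative probability reaches `p`:

```python
import numpy as np

def top_p_indices(probs: np.ndarray, p: float = 0.9) -> np.ndarray:
    """Return the token indices kept by nucleus (top-p) filtering."""
    order = np.argsort(probs)[::-1]       # most probable tokens first
    cum = np.cumsum(probs[order])         # running probability mass
    cutoff = np.searchsorted(cum, p) + 1  # include the token that crosses p
    return order[:cutoff]

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(top_p_indices(probs, p=0.9))  # keeps tokens 0, 1, 2
```

Sampling then happens only among the kept tokens (with their probabilities renormalized), which trims the unlikely tail while preserving diversity among plausible continuations.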

🌟 Unlock Limitless Creativity

Experience the freedom to innovate and explore without boundaries using Nidum Gemma-3-4B IT Uncensored.

πŸ“Š Model Details

  • Format: Safetensors
  • Model size: 4.3B parameters
  • Tensor type: BF16