|
### Nidum-Llama-3.2-3B-Uncensored |
|
|
|
### Welcome to Nidum! |
|
At Nidum, we believe in pushing the boundaries of innovation by providing advanced and unrestricted AI models for every application. Dive into our world of possibilities and experience the freedom of **Nidum-Llama-3.2-3B-Uncensored**, tailored to meet diverse needs with exceptional performance. |
|
|
|
--- |
|
|
|
[![GitHub Icon](https://upload.wikimedia.org/wikipedia/commons/thumb/9/95/Font_Awesome_5_brands_github.svg/232px-Font_Awesome_5_brands_github.svg.png)](https://github.com/NidumAI-Inc) |
|
**Explore Nidum's Open-Source Projects on GitHub**: [https://github.com/NidumAI-Inc](https://github.com/NidumAI-Inc) |
|
|
|
--- |
|
### Key Features |
|
|
|
1. **Uncensored Responses**: Capable of addressing any query without content restrictions, offering detailed and uninhibited answers. |
|
2. **Versatility**: Excels in diverse use cases, from complex technical queries to engaging casual conversations. |
|
3. **Advanced Contextual Understanding**: Draws from an expansive knowledge base for accurate and context-aware outputs. |
|
4. **Extended Context Handling**: Optimized for handling long-context interactions for improved continuity and depth. |
|
5. **Customizability**: Adaptable to specific tasks and user preferences through fine-tuning (see the sketch below).
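
To make the customizability point concrete, one common route is parameter-efficient fine-tuning with LoRA adapters via the `peft` library. The sketch below is illustrative only: the rank, alpha, and target modules are assumptions for a Llama-style architecture, not the settings used to train this model.

```python
# Minimal LoRA fine-tuning setup (illustrative hyperparameters only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "nidum/Nidum-Llama-3.2-3B-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Attach low-rank adapters to the attention projections; only the adapter
# weights are trained, which keeps memory requirements modest for a 3B model.
lora_config = LoraConfig(
    r=16,                 # adapter rank (assumed; tune for your task)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # prints the small trainable fraction
```

From here, the adapted model can be trained with any standard causal-LM training loop or trainer.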
|
|
|
--- |
|
|
|
### Use Cases |
|
|
|
- **Open-Ended Q&A** |
|
- **Creative Writing and Ideation** |
|
- **Research Assistance** |
|
- **Educational Queries** |
|
- **Casual Conversations** |
|
- **Mathematical Problem Solving** |
|
- **Long-Context Dialogues** |
|
|
|
--- |
|
|
|
### How to Use |
|
|
|
To get started with **Nidum-Llama-3.2-3B-Uncensored**, run the sample code below (it requires `torch` and `transformers`):
|
|
|
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="nidum/Nidum-Llama-3.2-3B-Uncensored",
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",  # use "mps" on Apple silicon, or "cpu" if no GPU is available
)

messages = [
    {"role": "user", "content": "Tell me something fascinating."},
]

outputs = pipe(messages, max_new_tokens=256)

# Chat-style calls return the full message history; the last entry is the
# newly generated assistant reply.
assistant_response = outputs[0]["generated_text"][-1]["content"].strip()
print(assistant_response)
```
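
The pipeline accepts standard generation keyword arguments, and because chat-style calls return the full message history, multi-turn use is just a matter of appending to that list. A minimal continuation of the example above, assuming a recent `transformers` release and using illustrative (not model-recommended) sampling settings:

```python
# Continue the conversation: reuse the returned history and add a new turn.
messages = outputs[0]["generated_text"]  # full chat history so far
messages.append({"role": "user", "content": "Tell me more."})

outputs = pipe(
    messages,
    max_new_tokens=256,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative values, not tuned defaults
    top_p=0.9,
)
print(outputs[0]["generated_text"][-1]["content"].strip())
```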
|
|
|
--- |
|
|
|
### Datasets and Fine-Tuning |
|
|
|
The following fine-tuning datasets were used to strengthen specific model capabilities:
|
|
|
- **Uncensored Data**: Enables unrestricted and uninhibited responses. |
|
- **RAG-Based Fine-Tuning**: Optimizes retrieval-augmented generation for knowledge-intensive tasks (a toy sketch follows this list).
|
- **Long Context Fine-Tuning**: Enhances the model's ability to process and maintain coherence in extended conversations. |
|
- **Math-Instruct Data**: Specially curated for precise and contextually accurate mathematical reasoning. |
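
To make the RAG item concrete, the toy sketch below (reusing `pipe` from the section above) selects a passage by naive word overlap and places it in the prompt. A real deployment would swap the `score` function for an embedding index or vector store; the documents and question here are made up for illustration.

```python
# Toy retrieval-augmented generation: pick the most relevant passage by
# word overlap, then let the model answer with it as context.
docs = [
    "The mitochondrion produces most of a cell's ATP.",
    "Llama 3.2 is a family of small language models released by Meta.",
]
question = "Which organelle produces most of a cell's ATP?"

def score(doc: str) -> int:
    # Naive stand-in for a vector-store similarity search.
    return len(set(doc.lower().split()) & set(question.lower().split()))

context = max(docs, key=score)
messages = [{
    "role": "user",
    "content": f"Answer using only this context:\n{context}\n\nQuestion: {question}",
}]
print(pipe(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"])
```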
|
|
|
--- |
|
|
|
### Benchmarks |
|
|
|
After fine-tuning on **uncensored data**, **Nidum-Llama-3.2-3B** matches or outperforms the base **Llama 3.2 3B** model on most of the settings evaluated below, with the largest gains on the generative and chain-of-thought GPQA variants and on HellaSwag.
|
|
|
#### GPQA: Evaluating Domain Expertise |
|
**GPQA** (Rein et al., 2023) is a challenging benchmark of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry.
|
|
|
| **Category**                        | **Metric**             | **Llama 3.2 3B** | **Nidum 3B** |
|-------------------------------------|------------------------|------------------|--------------|
| **gpqa_diamond_cot_n_shot**         | Exact Match (Flexible) | 0                | 0.2          |
|                                     | Accuracy               | 0.1              | 0.2          |
| **gpqa_diamond_generative_n_shot**  | Exact Match (Flexible) | 0.3              | 0.5          |
| **gpqa_diamond_zeroshot**           | Accuracy               | 0.2              | 0.3          |
| **gpqa_extended_cot_n_shot**        | Exact Match (Flexible) | 0.2              | 0            |
| **gpqa_extended_cot_zeroshot**      | Exact Match (Flexible) | 0.2              | 0.3          |
| **gpqa_extended_generative_n_shot** | Exact Match (Flexible) | 0.1              | 0.2          |
| **gpqa_extended_n_shot**            | Accuracy               | 0.2              | 0.2          |
| **gpqa_extended_zeroshot**          | Accuracy               | 0.1              | 0.1          |
| **gpqa_main_cot_n_shot**            | Exact Match (Flexible) | 0                | 0.1          |
| **gpqa_main_cot_zeroshot**          | Exact Match (Flexible) | 0.2              | 0.2          |
| **gpqa_main_generative_n_shot**     | Exact Match (Flexible) | 0.2              | 0.2          |
| **gpqa_main_n_shot**                | Accuracy               | 0.4              | 0.3          |
| **gpqa_main_zeroshot**              | Accuracy               | 0.3              | 0.4          |
|
|
|
--- |
|
|
|
#### HellaSwag: Common Sense Reasoning Benchmark |
|
|
|
HellaSwag evaluates a language model's ability to reason using common sense through sentence completion tasks. |
|
|
|
| **Metric**                    | **Llama 3.2 3B** | **Nidum 3B** |
|-------------------------------|------------------|--------------|
| **hellaswag/acc**             | 0.3              | 0.4          |
| **hellaswag/acc_stderr**      | 0.15275          | 0.1633       |
| **hellaswag/acc_norm**        | 0.3              | 0.4          |
| **hellaswag/acc_norm_stderr** | 0.15275          | 0.1633       |
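
The task names in the tables above match those defined in EleutherAI's lm-evaluation-harness, so the scores should be reproducible along these lines. This is a sketch assuming lm-eval v0.4+ is installed (`pip install lm-eval`); note that the GPQA source dataset is gated on Hugging Face and requires accepting its terms first.

```python
# Sketch: re-run two of the reported tasks with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nidum/Nidum-Llama-3.2-3B-Uncensored,dtype=bfloat16",
    tasks=["gpqa_main_zeroshot", "hellaswag"],
    device="cuda",
)
for task, metrics in results["results"].items():
    print(task, metrics)
```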
|
|
|
--- |
|
|
|
### Contributing |
|
|
|
We welcome contributions to improve and extend the model’s capabilities. Stay tuned for updates on how to contribute. |
|
|
|
--- |
|
|
|
### Contact |
|
|
|
For inquiries, collaborations, or further information, please reach out to us at **info@nidum.ai**. |
|
|
|
--- |
|
|
|
### Explore the Possibilities |
|
|
|
Dive into unrestricted creativity and innovation with **Nidum-Llama-3.2-3B-Uncensored**! |