Friction Reasoning Model
This model is fine-tuned to respond with deliberate uncertainty rather than confident answers. It is based on DeepSeek-R1-Distill-Qwen-7B and trained on a curated dataset of uncertainty examples.
Model Description
- Model Architecture: DeepSeek-R1-Distill-Qwen-7B with LoRA adapters
- Language(s): English
- License: Apache 2.0
- Finetuning Approach: Instruction tuning with friction-based reasoning examples
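The card does not include a usage snippet. A minimal inference sketch, assuming the LoRA adapters are merged into the checkpoint published at leonvanbokhorst/deepseek-r1-uncertainty (the repository URL given in the citation below) and that standard transformers chat-template loading applies, might look like:

```python
MODEL_ID = "leonvanbokhorst/deepseek-r1-uncertainty"


def build_prompt(question: str) -> str:
    """Normalize a user question into a single chat turn."""
    return question.strip()


def generate(question: str, max_new_tokens: int = 256) -> str:
    """Generate a friction-style response; loads the model on first call."""
    # transformers/torch are imported lazily so the prompt helper above
    # stays usable without the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    messages = [{"role": "user", "content": build_prompt(question)}]
    # The tokenizer's chat template inserts the model's special tokens,
    # including the <think> reasoning prefix used by R1 distills.
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

If the adapters are published separately rather than merged, the same checkpoint would instead be loaded with `peft.PeftModel.from_pretrained` on top of the base model.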
Limitations
The model:
- Is not designed for factual question-answering and may underperform on objective or fact-heavy tasks
- May sometimes be overly uncertain
- Should not be used for medical, legal, or financial advice
Bias and Risks
The model:
- May exhibit biases present in the training data
- May reinforce uncertainty in certain situations
- Might challenge user assumptions in sensitive contexts
- Should be used with appropriate content warnings
Citation
If you use this model in your research, please cite:
@misc{friction-reasoning-2025,
  author       = {Leon van Bokhorst},
  title        = {Mixture of Friction: Fine-tuned Language Model for Uncertainty},
  year         = {2025},
  publisher    = {HuggingFace},
  journal      = {HuggingFace Model Hub},
  howpublished = {\url{https://huggingface.co/leonvanbokhorst/deepseek-r1-uncertainty}}
}
Acknowledgments
- DeepSeek AI for the base model
- Unsloth team for the optimization toolkit
- HuggingFace for the model hosting and infrastructure