---
license: gpl-3.0
datasets:
  - NobodyExistsOnTheInternet/ToxicQAFinal
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
  - Orion-zhen/dpo-toxic-zh
  - unalignment/toxic-dpo-v0.2
  - Crystalcareai/Intel-DPO-Pairs-Norefusals
language:
  - zh
  - en
base_model:
  - Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
  - qwen
  - uncensored
---


# QuantFactory/Qwen2.5-7B-Instruct-Uncensored-GGUF

This is a quantized version of Orion-zhen/Qwen2.5-7B-Instruct-Uncensored, created using llama.cpp.

## Original Model Card

# Qwen2.5-7B-Instruct-Uncensored

This model is an uncensored fine-tuned version of Qwen2.5-7B-Instruct. However, I can still notice that, though uncensored, the model fails to generate detailed descriptions of certain extreme scenarios, which might be due to deletions from some datasets in Qwen's pretraining stage.

Check out my roleplay- and writing-enhanced model based on this one: Orion-zhen/Meissa-Qwen2.5-7B-Instruct

## Training details

I used SFT + DPO to remove refusals while trying to preserve the original model's capabilities.

- SFT:
  - NobodyExistsOnTheInternet/ToxicQAFinal
  - anthracite-org/kalo-opus-instruct-22k-no-refusal
- DPO:
  - Orion-zhen/dpo-toxic-zh
  - unalignment/toxic-dpo-v0.2
  - Crystalcareai/Intel-DPO-Pairs-Norefusals
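For readers unfamiliar with the DPO stage above: DPO datasets like the ones listed store prompt/chosen/rejected preference pairs, and the loss rewards the policy for preferring the chosen response more strongly than a frozen reference model does. The sketch below is a minimal, illustrative implementation of that per-pair loss (it is not the actual training code used for this model, and the example values are made up):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the policy or the frozen reference model.
    """
    # Implicit rewards: how much more the policy likes each response
    # than the reference model does, scaled by beta.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): small when the policy already prefers
    # the chosen response, large when it prefers the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Typical prompt/chosen/rejected layout of a DPO preference pair
# (field names vary between the datasets listed above):
pair = {
    "prompt": "...",
    "chosen": "...",
    "rejected": "...",
}
```

A policy that drifts toward the rejected response (relative to the reference) gets a larger loss, which is what pushes refusal-style completions down in the DPO stage.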