---
language:
  - en
license: apache-2.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - mistral
  - trl
  - dpo
  - uncensored
  - roleplay
  - fine-tune
base_model: MTSAIR/multi_verse_model
library_name: transformers
datasets:
  - grimulkan/theory-of-mind
  - grimulkan/physical-reasoning
  - ResplendentAI/Luna_Alpaca
  - unalignment/toxic-dpo-v0.2
  - kira/math-dpo
  - athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
---

# 💫 Pulsar_7B

Pulsar_7B is a fine-tune of MTSAIR/multi_verse_model, trained on these datasets:

- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- ResplendentAI/Luna_Alpaca
- unalignment/toxic-dpo-v0.2
- kira/math-dpo
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
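
## Example Usage

A minimal sketch of loading the model with 🤗 Transformers. The repository id `rmdhirr/Pulsar_7B`, the prompt, and the generation settings are assumptions for illustration, not recommended defaults.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmdhirr/Pulsar_7B"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; use float16 if bf16 is unsupported
    device_map="auto",
)

prompt = "Explain, step by step, why mirrors appear to swap left and right."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```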

## Quantizations

Thanks to mradermacher, static GGUF quants are available here.
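
For running a GGUF quant locally, a sketch with `llama-cpp-python` follows; the quant filename is a placeholder, substitute whichever file you download from the quantized repo.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Pulsar_7B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; 0 for CPU-only
)

out = llm("Write a short scene between two rival astronomers.", max_tokens=256)
print(out["choices"][0]["text"])
```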


This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
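
As a rough illustration of that setup, the sketch below shows what a DPO fine-tune with Unsloth + TRL typically looks like. The hyperparameters and the single dataset used here are placeholders, not the actual training recipe, and exact argument names vary across TRL versions.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

# Load the base model in 4-bit and attach a LoRA adapter (illustrative settings).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="MTSAIR/multi_verse_model",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# DPO expects (prompt, chosen, rejected) triples; one of the listed datasets as an example.
train_dataset = load_dataset("unalignment/toxic-dpo-v0.2", split="train")

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT adapter, the frozen base model serves as the reference
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    args=DPOConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        beta=0.1,                # DPO temperature
        max_length=2048,
        max_prompt_length=1024,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```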