
A MoE merge of Sao10K models — just a test; have fun.

Update: it turned out to be a good model, carrying over the features of Sao10K's L3 RP models. I hope you enjoy it.

My GGUF repo (Q4_K_M only — I'm lazy): https://huggingface.co/Alsebay/SaoRPM-2x8B-beta-GGUF

Thanks to mradermacher for the GGUF quants: https://huggingface.co/mradermacher/SaoRPM-2x8B-GGUF

Imatrix version: https://huggingface.co/mradermacher/SaoRPM-2x8B-i1-GGUF

Model size: 13.7B params · Tensor type: BF16 · Safetensors
