Nitral-AI's measurement.json was used for quantization.

Sekhmet_Bet-L3.1-8B-v0.2

exllamav2 quant (8.0 bpw, h8) of Nitral-AI/Sekhmet_Bet-L3.1-8B-v0.2
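Since Nitral-AI's measurement.json was reused, the calibration pass of exllamav2's converter can be skipped. Below is a minimal sketch of how such a conversion might be run, assuming a local checkout of exllamav2 (convert.py); all paths are placeholders, and the 8.0 bpw / 8-bit head settings simply mirror this repo's name:

```python
# Sketch: driving exllamav2's convert.py with an existing measurement.json.
# Paths are placeholders; run from inside a local exllamav2 checkout.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "models/Sekhmet_Bet-L3.1-8B-v0.2",     # source model (HF format)
        "-o", "work/",                               # scratch / working directory
        "-cf", "out/Sekhmet_Bet-8.0bpw-h8-exl2",     # compiled output directory
        "-m", "measurement.json",                    # reuse Nitral-AI's measurement
        "-b", "8.0",                                 # target bits per weight
        "-hb", "8",                                  # head bits
    ],
    check=True,
)
```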

Original model information:


Sekhmet_Bet [v0.2] - Designed to provide robust solutions to complex problems while offering support and insightful guidance.

GGUF quants available thanks to Reiterate3680 <3: GGUF Here

EXL2 Quant: 5bpw Exl2 Here

Recommended ST presets: Sekhmet Presets (same as Hathor's)


Training note: Sekhmet_Bet [v0.2] was trained for 1 epoch on private Hathor_0.85 instructions, a small subset of creative writing data, and roleplaying chat pairs, on top of Sekhmet_Aleph-L3.1-8B-v0.1.

Additional notes: This model was assembled quickly to give users a relatively uncensored alternative to L3.1 Instruct with extended context capabilities (as I will soon be on a short hiatus). The learning rate for this model was set rather low, so I do not expect it to match the performance demonstrated by Hathor versions 0.5, 0.85, or 1.0.
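For reference, here is a minimal sketch of loading this EXL2 quant with the exllamav2 Python API. Treat it as an assumption-laden example: the local model directory path is a placeholder for wherever the repo is downloaded, and the sampler values are arbitrary rather than the recommended Sekhmet/Hathor presets.

```python
# Sketch: basic inference with an EXL2 quant via the exllamav2 library.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "models/Sekhmet_Bet-L3.1-8B-v0.2-8.0bpw-h8-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache so weights can be auto-split
model.load_autosplit(cache)                # load across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()     # arbitrary sampler values for the sketch
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("Write a short greeting.", settings, 128))
```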
