Paradigm

This is an 8bpw exl2 quant of the Paradigm 7B model. Both ChatML and Alpaca instruct formats work.
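To make the prompt-format note concrete, here is a minimal sketch of loading the quant with the ExLlamaV2 Python library and sampling with a ChatML-formatted prompt. The model path, system/user strings, and sampling values are placeholders, and the base-generator API shown here can differ between ExLlamaV2 versions, so treat it as an outline rather than a drop-in script.

from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point this at the local directory holding the exl2 quant (placeholder path).
config = ExLlamaV2Config()
config.model_dir = "./Paradigm_7B_8bpw_exl2"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8   # placeholder sampling values
settings.top_p = 0.9

# ChatML prompt assembly; the Alpaca layout ("### Instruction:" / "### Response:")
# works as well, as noted above.
system = "You are a creative roleplay partner."            # placeholder
user = "Describe the tavern the party has just entered."   # placeholder
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)

print(generator.generate_simple(prompt, settings, 256))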


Paradigm is an incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!

GGUF available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix
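If you go the GGUF route instead, a rough sketch with llama-cpp-python (pip install llama-cpp-python) is below; the quant filename, context size, and prompt are placeholders rather than anything prescribed by that repo.

from llama_cpp import Llama

llm = Llama(
    model_path="Paradigm_7B-Q4_K_M-imat.gguf",  # placeholder: use whichever quant file you downloaded
    n_ctx=8192,                                 # adjust to your memory budget
    n_gpu_layers=-1,                            # offload all layers when a GPU is available
)

prompt = (
    "<|im_start|>user\nGreet the player in character as a tavern keeper.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
result = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(result["choices"][0]["text"])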

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric                               Value
Avg.                                 75.47
AI2 Reasoning Challenge (25-shot)    73.63
HellaSwag (10-shot)                  88.66
MMLU (5-shot)                        64.02
TruthfulQA (0-shot)                  75.19
Winogrande (5-shot)                  84.53
GSM8k (5-shot)                       66.79

Configuration

The following mergekit YAML configuration was used to produce the underlying Paradigm 7B merge:

merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
  normalize: true
models:
  - model: ChaoticNeutrals/Eris_Remix_7B
    parameters:
      weight: 1
  - model: ResplendentAI/Datura_7B
    parameters:
      weight: 1
  - model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.33
dtype: float16
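To reproduce the merge itself, a minimal sketch is shown below, assuming mergekit is installed and the YAML above has been saved to a file; the config filename and output directory are hypothetical.

import subprocess

# Run mergekit's CLI on the saved config; --cuda performs the merge on GPU.
subprocess.run(
    ["mergekit-yaml", "paradigm_merge.yml", "./Paradigm_7B_merged", "--cuda"],
    check=True,
)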
