Paradigm
This is an 8bpw exl2 quant of the Paradigm 7B model. Both ChatML and Alpaca instruct formats work.
An incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!
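For reference, here is a minimal sketch of the two prompt formats mentioned above. The system/user text is placeholder content and most frontends (e.g. SillyTavern) will build these templates for you.

```python
# Illustrative ChatML and Alpaca prompt templates (placeholder text, not tuned prompts).
chatml_prompt = (
    "<|im_start|>system\n{system}<|im_end|>\n"
    "<|im_start|>user\n{user}<|im_end|>\n"
    "<|im_start|>assistant\n"
)

alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{user}\n\n"
    "### Response:\n"
)

print(chatml_prompt.format(system="You are a helpful roleplay assistant.",
                           user="Introduce yourself in character."))
```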
GGUF available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix
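If you prefer the GGUF build, below is a minimal llama-cpp-python sketch for running it. The file name, context size, and sampling settings are assumptions; substitute whichever quant file you actually downloaded.

```python
# Minimal sketch: run the GGUF quant linked above with llama-cpp-python.
# The model_path is an assumption; point it at your downloaded quant file.
from llama_cpp import Llama

llm = Llama(model_path="Paradigm_7B.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>user\nWrite a short greeting in character.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```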
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 75.47 |
| AI2 Reasoning Challenge (25-Shot) | 73.63 |
| HellaSwag (10-Shot) | 88.66 |
| MMLU (5-Shot) | 64.02 |
| TruthfulQA (0-shot) | 75.19 |
| Winogrande (5-shot) | 84.53 |
| GSM8k (5-shot) | 66.79 |
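The reported average is simply the arithmetic mean of the six benchmark scores:

```python
# "Avg." is the arithmetic mean of the six Open LLM Leaderboard benchmark scores.
scores = [73.63, 88.66, 64.02, 75.19, 84.53, 66.79]
print(round(sum(scores) / len(scores), 2))  # 75.47
```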
Configuration
The following YAML configuration was used to produce this model:
merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
  normalize: true
models:
  - model: ChaoticNeutrals/Eris_Remix_7B
    parameters:
      weight: 1
  - model: ResplendentAI/Datura_7B
    parameters:
      weight: 1
  - model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.33
dtype: float16
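As a loose conceptual illustration of what a dare_ties merge with these weights does (this is not mergekit's implementation, and it omits the TIES sign-election step): each donor model's delta from the base is randomly sparsified and rescaled (DARE), and the deltas are then combined as a normalized weighted sum and added back onto the base. The drop rate, tensor shapes, and toy data below are assumptions for illustration only.

```python
# Conceptual sketch of a DARE-style weighted delta merge (TIES sign election omitted).
# Drop rate, tensor shapes, and seed are illustrative assumptions, not mergekit settings.
import numpy as np

rng = np.random.default_rng(0)

def dare_delta(donor, base, drop_rate=0.5):
    """DARE: randomly drop part of the delta, rescale the survivors by 1/(1-p)."""
    delta = donor - base
    keep = rng.random(delta.shape) >= drop_rate
    return delta * keep / (1.0 - drop_rate)

def merge(base, donors, weights, drop_rate=0.5):
    """Normalized weighted sum of DARE-processed deltas, added back onto the base."""
    total = sum(weights)
    merged_delta = sum(
        (w / total) * dare_delta(d, base, drop_rate) for d, w in zip(donors, weights)
    )
    return base + merged_delta

# Toy tensors standing in for one weight matrix from each model in the config above.
base = rng.normal(size=(4, 4))
donors = [base + rng.normal(scale=0.1, size=(4, 4)) for _ in range(3)]
merged = merge(base, donors, weights=[1.0, 1.0, 0.33])
print(merged.shape)
```

In practice the actual merge is reproduced by pointing mergekit (e.g. its mergekit-yaml command) at the YAML configuration above rather than by hand-rolling anything like this.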