
BigWeave v20 110b

The BigWeave models are an experimental attempt to identify merge settings that increase model performance. The version number merely tracks successive attempts and is not a quality indicator; only merges that demonstrate good performance are retained and shared.

Prompting Format

Mistral, Vicuna and Alpaca.
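
These are the standard templates for each of those formats. A minimal single-turn sketch follows; the system prompts and their placement are common defaults, assumed here rather than specified by this card:

# Minimal single-turn prompt builders for the three supported formats.
# System-prompt wording and placement are common defaults (assumptions).

def mistral_prompt(user_msg: str) -> str:
    return f"[INST] {user_msg} [/INST]"

def vicuna_prompt(user_msg: str) -> str:
    system = ("A chat between a curious user and an artificial "
              "intelligence assistant.")
    return f"{system}\nUSER: {user_msg}\nASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    return ("Below is an instruction that describes a task. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Response:\n")

print(mistral_prompt("Write a haiku about model merging."))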

Merge process

This is a merge of 152334H/miqu-1-70b-sf and lizpreciatior/lzlv_70b_fp16_hf. Exl2 measurements are used to identify the least important layers of lzlv; these layers are then expanded with the layers between them to form longer runs of consecutive layers, and the resulting slices are inserted into miqu.
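
The measurement data itself is not published on this card, but the selection step can be sketched. In the hypothetical sketch below, least_important_slice and the example scores are illustrative stand-ins, not the author's actual procedure:

# Illustrative: given per-layer importance scores (e.g. from an exl2
# calibration/measurement pass), take the k least important layers and
# expand them into one contiguous slice covering the layers in between.

def least_important_slice(scores, k):
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])[:k]
    return min(ranked), max(ranked) + 1  # half-open [start, end)

scores = [0.9, 0.4, 0.3, 0.8, 0.2, 0.7, 0.5, 0.95]  # made-up values
print(least_important_slice(scores, k=3))  # (1, 5): layers 1-4 inclusive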

Merge configuration:

slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 1]
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [0, 1]
        parameters:
          weight: 0
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [1, 26]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [9, 44]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [27, 52]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [45, 60]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [53, 79]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [79, 80]
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [79, 80]
        parameters:
          weight: 0
merge_method: linear
parameters:
  weight: 1.0
dtype: float16
tokenizer_source: 152334H/miqu-1-70b-sf
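
As a size sanity check, the slices contribute 1 + 25 + 35 + 25 + 15 + 26 + 1 = 128 layers, versus 80 in each 70b parent; at roughly 0.86B parameters per Llama-2-70b-style layer, 128 layers plus embeddings come to about 110B parameters, matching the name. The schema (slices, merge_method, tokenizer_source) is mergekit's, so assuming the config is saved as bigweave-v20.yml (a placeholder name), the merge can be reproduced with:

mergekit-yaml bigweave-v20.yml ./BigWeave-v20-110b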

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

Metric                               Value
Avg.                                 68.03
AI2 Reasoning Challenge (25-shot)    68.17
HellaSwag (10-shot)                  88.54
MMLU (5-shot)                        70.51
TruthfulQA (0-shot)                  62.47
Winogrande (5-shot)                  82.08
GSM8k (5-shot)                       36.39