---
base_model:
  - nbeerbower/gemma2-gutenberg-9B
  - princeton-nlp/gemma-2-9b-it-SimPO
  - jsgreenawalt/gemma-2-9B-it-advanced-v2.1
  - UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp
  - unsloth/gemma-2-9b-it
  - lemon07r/Gemma-2-Ataraxy-v2-9B
  - ifable/gemma-2-Ifable-9B
  - grimjim/Gemma2-Nephilim-v3-9B
  - lemon07r/Gemma-2-Ataraxy-v2a-9B
  - wzhouad/gemma-2-9b-it-WPO-HB
  - lemon07r/Gemma-2-Ataraxy-9B
library_name: transformers
tags:
  - mergekit
  - merge
---

# Gemma-2-Ataraxy-Remix-9B

Another test model. Ignore this for now. It probably won't be good, but I'm testing a lot of stuff.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) as the base. A reproduction sketch is shown below; the full merge configuration is listed under Configuration.
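If you want to reproduce the merge yourself, mergekit provides both a CLI (`mergekit-yaml config.yml ./output`) and a Python API. The following is a minimal sketch using the Python API, assuming a recent mergekit version and the YAML configuration below saved as `config.yml`; the output path and options are illustrative, not the exact settings used for this model:

```python
# Minimal sketch of reproducing this merge with mergekit's Python API.
# Assumes the YAML under "Configuration" below is saved as config.yml.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Gemma-2-Ataraxy-Remix-9B",  # output directory (illustrative path)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```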

### Models Merged

The following models were included in the merge:

* lemon07r/Gemma-2-Ataraxy-9B
* lemon07r/Gemma-2-Ataraxy-v2-9B
* lemon07r/Gemma-2-Ataraxy-v2a-9B
* jsgreenawalt/gemma-2-9B-it-advanced-v2.1
* ifable/gemma-2-Ifable-9B
* UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
* princeton-nlp/gemma-2-9b-it-SimPO
* wzhouad/gemma-2-9b-it-WPO-HB
* nbeerbower/gemma2-gutenberg-9B
* grimjim/Gemma2-Nephilim-v3-9B
* recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-9B
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-v2-9B
  - layer_range: [0, 42]
    model: lemon07r/Gemma-2-Ataraxy-v2a-9B
  - layer_range: [0, 42]
    model: jsgreenawalt/gemma-2-9B-it-advanced-v2.1
  - layer_range: [0, 42]
    model: ifable/gemma-2-Ifable-9B
  - layer_range: [0, 42]
    model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - layer_range: [0, 42]
    model: princeton-nlp/gemma-2-9b-it-SimPO
  - layer_range: [0, 42]
    model: wzhouad/gemma-2-9b-it-WPO-HB
  - layer_range: [0, 42]
    model: nbeerbower/gemma2-gutenberg-9B
  - layer_range: [0, 42]
    model: grimjim/Gemma2-Nephilim-v3-9B
  - layer_range: [0, 42]
    model: recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp
  - layer_range: [0, 42]
    model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
```
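
A minimal inference sketch with transformers, assuming the merged model is published under `lemon07r/Gemma-2-Ataraxy-Remix-9B` (point `model_id` at a local merge output directory instead if it isn't on the Hub):

```python
# Minimal inference sketch with transformers. The repo id is an assumption;
# substitute a local path if you merged the model yourself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon07r/Gemma-2-Ataraxy-Remix-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about merging models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```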