---
base_model:
  - nbeerbower/llama-3-dragonmaid-8B
  - cognitivecomputations/dolphin-2.9-llama3-8b
  - mlabonne/ChimeraLlama-3-8B
  - WesPro/F2PhenotypeDPO
library_name: transformers
tags:
  - mergekit
  - merge
---

# PsykidelicLlama3

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mlabonne/ChimeraLlama-3-8B](https://huggingface.co/mlabonne/ChimeraLlama-3-8B) as the base.
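
To make the `density` and `weight` parameters in the config below concrete, here is a minimal, illustrative sketch of the DARE pruning step applied to a single tensor. This is not mergekit's actual implementation; the function name and the toy tensors are invented for illustration.

```python
import torch

def dare_prune(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep ~`density` of the delta's entries, rescaling the
    survivors by 1/density so the expected delta is unchanged."""
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

# Toy task vector: a finetuned tensor's difference from the base weights.
base = torch.zeros(4)
finetuned = torch.tensor([0.2, -0.1, 0.4, 0.05])
delta = finetuned - base

# density 0.55 and weight 0.4 mirror the dragonmaid entry in the config below.
merged = base + 0.4 * dare_prune(delta, density=0.55)
print(merged)
```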

### Models Merged

The following models were included in the merge:

* [nbeerbower/llama-3-dragonmaid-8B](https://huggingface.co/nbeerbower/llama-3-dragonmaid-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [WesPro/F2PhenotypeDPO](https://huggingface.co/WesPro/F2PhenotypeDPO)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/ChimeraLlama-3-8B
    # No parameters necessary for base model
  - model: mlabonne/ChimeraLlama-3-8B
    parameters:
      density: 0.6
      weight: 0.2
  - model: nbeerbower/llama-3-dragonmaid-8B
    parameters:
      density: 0.55
      weight: 0.4
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.2
  - model: WesPro/F2PhenotypeDPO
    parameters:
      density: 0.55
      weight: 0.2
merge_method: dare_ties
base_model: mlabonne/ChimeraLlama-3-8B
parameters:
  int8_mask: true
dtype: float16
```
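
The card declares `library_name: transformers`, so the merged weights should load with the standard `transformers` API. A minimal, untested usage sketch follows; the repo id `WesPro/PsykidelicLlama3` is assumed from the card's location, and `device_map="auto"` additionally requires the `accelerate` package.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WesPro/PsykidelicLlama3"  # assumed repo id; adjust if it differs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="float16",  # matches the merge's dtype above
    device_map="auto",      # requires `accelerate`
)

prompt = "Write a haiku about merged language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```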