---
base_model:
  - ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - Azazelle/Aura_Llama3
  - Azazelle/llama3-8b-hikikomori-v0.4
  - Azazelle/RP_Format_QuoteAsterisk_Llama3
  - Azazelle/Theory_of_Mind_Llama3
  - Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - Azazelle/ANJIR-ADAPTER-128
  - Azazelle/Llama3_RP_ORPO_LoRA
  - Azazelle/Smarts_Llama3
  - Azazelle/BlueMoon_Llama3
  - Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
  - Azazelle/Nimue-8B
  - Azazelle/Luna_Llama3
  - Azazelle/Llama-3-LongStory-LORA
  - Azazelle/Llama-3-8B-Abomination-LORA
library_name: transformers
tags:
  - mergekit
  - merge
---

# nerdy_rp

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with output/stop_it_nerd as the base.
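For intuition, the sketch below shows the core Model Stock computation under simplifying assumptions; it is not mergekit's implementation. Per tensor, the merged weight interpolates between the average of the fine-tuned weights and the base weight, with the ratio derived from the angle between the fine-tuned deltas:

```python
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, tuned: list[torch.Tensor]) -> torch.Tensor:
    """Simplified per-tensor Model Stock merge (arXiv:2403.19522).

    t = k*cos(theta) / (1 + (k-1)*cos(theta)), where theta is the angle
    between fine-tuned deltas as seen from the base weights (assumed
    roughly uniform across pairs, as in the paper).
    """
    k = len(tuned)
    deltas = [(w - base).flatten() for w in tuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the deltas.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += F.cosine_similarity(deltas[i], deltas[j], dim=0).item()
            pairs += 1
    cos_theta = cos_sum / max(pairs, 1)
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Interpolate between the fine-tuned average and the base weight.
    return t * torch.stack(tuned).mean(dim=0) + (1 - t) * base
```

In this merge, each fine-tuned checkpoint is one of the LoRA-applied variants of output/stop_it_nerd listed below.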

### Models Merged

The following models were included in the merge:

* output/stop_it_nerd + Azazelle/Llama-3-8B-Abomination-LORA
* output/stop_it_nerd + Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
* output/stop_it_nerd + ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
* output/stop_it_nerd + Azazelle/Llama-3-LongStory-LORA
* output/stop_it_nerd + Azazelle/ANJIR-ADAPTER-128
* output/stop_it_nerd + Azazelle/Llama3_RP_ORPO_LoRA
* output/stop_it_nerd + Azazelle/RP_Format_QuoteAsterisk_Llama3
* output/stop_it_nerd + Azazelle/Theory_of_Mind_Llama3
* output/stop_it_nerd + Azazelle/Aura_Llama3
* output/stop_it_nerd + Azazelle/Luna_Llama3
* output/stop_it_nerd + Azazelle/BlueMoon_Llama3
* output/stop_it_nerd + Azazelle/Smarts_Llama3
* output/stop_it_nerd + Azazelle/llama3-8b-hikikomori-v0.4
* output/stop_it_nerd + Azazelle/Nimue-8B
* output/stop_it_nerd + Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: output/stop_it_nerd
dtype: bfloat16
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-8B-Abomination-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LimaRP-Instruct-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-LongStory-LORA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/ANJIR-ADAPTER-128
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama3_RP_ORPO_LoRA
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/RP_Format_QuoteAsterisk_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Theory_of_Mind_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Aura_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Luna_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/BlueMoon_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Smarts_Llama3
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/llama3-8b-hikikomori-v0.4
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Nimue-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd+Azazelle/Llama-3-Instruct-LiPPA-LoRA-8B
  - layer_range: [0, 32]
    model: output/stop_it_nerd
```
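
To reproduce the merge, save the config above as `config.yaml` and run mergekit's CLI (`mergekit-yaml config.yaml ./merged`); note it expects local access to the base model and every listed LoRA. To run the merged model, a minimal inference sketch with transformers follows; the repo id is assumed from this card's title and may differ:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from the card title; adjust if the model lives elsewhere.
model_id = "Azazelle/Llama-3-Nerdy-RP-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3-Instruct merges expect the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Introduce yourself in character."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```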