---
base_model:
  - beomi/Llama-3-KoEn-8B-Instruct-preview
  - saltlux/Ko-Llama3-Luxia-8B
  - cognitivecomputations/dolphin-2.9-llama3-8b
  - NousResearch/Meta-Llama-3-8B
  - nvidia/Llama3-ChatQA-1.5-8B
  - aaditya/Llama3-OpenBioLLM-8B
  - Danielbrdz/Barcenas-Llama3-8b-ORPO
  - beomi/Llama-3-KoEn-8B-preview
  - abacusai/Llama-3-Smaug-8B
  - NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
  - mergekit
  - merge
  - llama
---

# YACHT-Llama-3-KoEn-8B


🎵 [JayLee LLMs Signature Tag] : ✍️ "I need a Jay Jay chat boy" 🎵

Navigating the High Seas of Data: Crafting the Ultimate Yacht Insights with Merged LLMs

Aren't you sometimes tired of building just another LLM & RAG & plain chat app? I'll soon show you a cool app that integrates this merged model (my tuned car). It wouldn't be fun if we only built cars; life is ultimately about driving them and socializing with people.

🧨 When using a merged model for commercial purposes, a lot of care is needed. Mixing many models can be beneficial, but it can also carry many risks. 🧨

Thank you for visiting my page today!

Your donation gives me more freedom in my dev life. In return, I will provide you with fun and useful software!!!

I haven't even released 0.001% of the software to you yet!!!

"Donation(ETH/USDT) : 0x8BB117dD4Cc0E19E5536ab211070c0dE039a85c0"

Could you lend me your compute to merge the heavy xtuner model with mine? My machine's memory says it's not feeling well -> DM me!! (code ready)

```
Diff calculated for model.layers.13.self_attn.q_proj.weight
Diff calculated for model.layers.13.self_attn.k_proj.weight
Diff calculated for model.layers.13.self_attn.v_proj.weight
Diff calculated for model.layers.13.self_attn.o_proj.weight
Diff calculated for model.layers.13.mlp.gate_proj.weight
Diff calculated for model.layers.13.mlp.up_proj.weight
Diff calculated for model.layers.13.mlp.down_proj.weight
Diff calculated for model.layers.13.input_layernorm.weight
Diff calculated for model.layers.13.post_attention_layernorm.weight
Diff calculated for model.layers.14.self_attn.q_proj.weight
Diff calculated for model.layers.14.self_attn.k_proj.weight
Diff calculated for model.layers.14.self_attn.v_proj.weight
Diff calculated for model.layers.14.self_attn.o_proj.weight
Diff calculated for model.layers.14.mlp.gate_proj.weight
Diff calculated for model.layers.14.mlp.up_proj.weight
Diff calculated for model.layers.14.mlp.down_proj.weight
Diff calculated for model.layers.14.input_layernorm.weight
Diff calculated for model.layers.14.post_attention_layernorm.weight

(.venv) jaylee@lees-MacBook-Pro-2 merge % /opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```
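
For reference, per-tensor diffs like those in the log above can be produced by walking two checkpoints' state dicts. The sketch below is a minimal illustration with placeholder model paths; it is not the exact script referred to by "(code ready)".

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder paths; substitute the two checkpoints you want to compare.
model_a = AutoModelForCausalLM.from_pretrained("path/to/base", torch_dtype=torch.bfloat16)
model_b = AutoModelForCausalLM.from_pretrained("path/to/finetuned", torch_dtype=torch.bfloat16)

sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
diffs = {}
for name, tensor_a in sd_a.items():
    # Same architecture => same parameter names; the diff is the per-tensor "task vector".
    diffs[name] = sd_b[name].float() - tensor_a.float()
    print(f"Diff calculated for {name}")
```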

## Merge Method

This model was merged using the DARE TIES merge method, with NousResearch/Meta-Llama-3-8B as the base.
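
For intuition, the sketch below shows roughly what `dare_ties` does to a single tensor: each donor model's delta against the base is randomly dropped and rescaled (DARE, controlled by `density`), and the surviving deltas are combined via TIES-style sign election using the configured `weight`s. This is an illustrative simplification, not mergekit's actual implementation.

```python
import torch

def dare_ties_tensor(base, donors, densities, weights):
    """Merge one tensor from several donor models into the base (illustrative sketch)."""
    contributions = []
    for donor, density, weight in zip(donors, densities, weights):
        delta = donor - base                                   # task vector vs. base
        keep = torch.bernoulli(torch.full_like(delta, density))
        contributions.append(weight * keep * delta / density)  # DARE: drop and rescale
    stacked = torch.stack(contributions)
    majority_sign = torch.sign(stacked.sum(dim=0))             # TIES sign election
    agrees = torch.sign(stacked) == majority_sign
    merged_delta = torch.where(agrees, stacked, torch.zeros_like(stacked)).sum(dim=0)
    return base + merged_delta
```

The `density` and `weight` values in the configuration below map directly onto the per-model parameters of this procedure.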

## Models Merged

The following models were included in the merge:

- NousResearch/Meta-Llama-3-8B-Instruct
- beomi/Llama-3-KoEn-8B-preview
- saltlux/Ko-Llama3-Luxia-8B
- beomi/Llama-3-KoEn-8B-Instruct-preview
- nvidia/Llama3-ChatQA-1.5-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- abacusai/Llama-3-Smaug-8B
- aaditya/Llama3-OpenBioLLM-8B

## Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.60  
      weight: 0.25  
  
  - model: beomi/Llama-3-KoEn-8B-preview
    parameters:
      density: 0.55  
      weight: 0.2
  
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55  
      weight: 0.15
  
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55  
      weight: 0.15 
  - model: nvidia/Llama3-ChatQA-1.5-8B
    parameters:
      density: 0.55  
      weight: 0.1  
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55  
      weight: 0.05
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55  
      weight: 0.1 
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
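
A configuration like this is normally executed with mergekit's `mergekit-yaml` command (for example, `mergekit-yaml config.yaml ./merged`). Once uploaded, the merged checkpoint loads like any other Llama 3 model. Below is a minimal usage sketch, assuming this card's repo id is `asiansoul/YACHT-Llama-3-KoEn-8B` and that the tokenizer ships a Llama 3 chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "asiansoul/YACHT-Llama-3-KoEn-8B"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "What must I pack when preparing for a yacht trip?"
messages = [{"role": "user", "content": "요트 여행을 준비할 때 꼭 챙겨야 할 것은?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```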

## Test

Screenshots of sample chat sessions (captured 2024-05-07).