---
tags:
  - merge
  - mergekit
  - lazymergekit
  - abacaj/phi-2-super
base_model:
  - abacaj/phi-2-super
---

# phi-2-DLEC

phi-2-DLEC is a merge of the following model using LazyMergekit, stacking eight overlapping layer slices of the same base:

* [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super)

## 🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
# Note: layer_range follows Python slice semantics; [start, end) is end-exclusive.
slices:
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [0, 3]   # layers 0-2
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [3, 8]   # layers 3-7
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [7, 12]  # layers 7-11; repeats layer 7
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [11, 16] # layers 11-15; repeats layer 11
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [15, 20] # layers 15-19; repeats layer 15
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [19, 24] # layers 19-23; repeats layer 19
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [23, 28] # layers 23-27; repeats layer 23
  - sources:
      - model: abacaj/phi-2-super
        layer_range: [27, 32] # layers 27-31; repeats layer 27
```
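
From the third slice onward, each slice starts one layer before the previous one ends, so six layers (7, 11, 15, 19, 23, 27) appear twice and the merged model is 38 layers deep rather than phi-2's 32. A minimal sketch (plain Python, independent of mergekit) that replays the ranges above and confirms this:

```python
from collections import Counter

# The eight [start, end) ranges from the config above.
ranges = [(0, 3), (3, 8), (7, 12), (11, 16),
          (15, 20), (19, 24), (23, 28), (27, 32)]

# A passthrough merge simply concatenates the selected layers in order.
stack = [layer for start, end in ranges for layer in range(start, end)]

print(len(stack))                                             # 38
print(sorted(l for l, n in Counter(stack).items() if n > 1))  # [7, 11, 15, 19, 23, 27]
```

To reproduce the merge locally, one route is mergekit's Python API; a sketch, assuming the config above is saved as `config.yaml` and mergekit is installed (`pip install mergekit`):

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config shown above (the filename is an assumption).
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged 38-layer model to ./phi-2-DLEC.
run_merge(
    merge_config,
    out_path="./phi-2-DLEC",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```

The `mergekit-yaml` CLI that ships with mergekit accepts the same config file if you prefer not to script it.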

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "TheSkullery/phi-2-DLEC"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in fp16 and place it automatically across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
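
If you want to sanity-check that you loaded the expanded model rather than the 32-layer base, the depth is visible in the model config; a quick check (the expected value follows from the slice math above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("TheSkullery/phi-2-DLEC")
print(config.num_hidden_layers)  # expected: 38 (phi-2's base depth is 32)
```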