
# afrideva/phi-2-psy-GGUF

Quantized GGUF model files for [phi-2-psy](https://huggingface.co/vince62s/phi-2-psy) from vince62s, a 2.78B-parameter model using the `phi2` architecture.

| Name | Quant method | Size |
|------|--------------|-----:|
| phi-2-psy.fp16.gguf | fp16 | 5.56 GB |
| phi-2-psy.q2_k.gguf | q2_k | 1.11 GB |
| phi-2-psy.q3_k_m.gguf | q3_k_m | 1.43 GB |
| phi-2-psy.q4_k_m.gguf | q4_k_m | 1.74 GB |
| phi-2-psy.q5_k_m.gguf | q5_k_m | 2.00 GB |
| phi-2-psy.q6_k.gguf | q6_k | 2.29 GB |
| phi-2-psy.q8_0.gguf | q8_0 | 2.96 GB |
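To run one of these files locally, a minimal sketch using the `llama-cpp-python` bindings could look like the following (it assumes the files live in this repo, `afrideva/phi-2-psy-GGUF`, and that `llama-cpp-python` and `huggingface-hub` are installed; pick any quant from the table):

```python
# Minimal sketch: run a quantized GGUF file with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m quant (1.74 GB), a common size/quality trade-off.
model_path = hf_hub_download(
    repo_id="afrideva/phi-2-psy-GGUF",
    filename="phi-2-psy.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("def print_prime(n):", max_tokens=128)
print(output["choices"][0]["text"])
```

As a rule of thumb, q8_0 stays closest to the fp16 original while q2_k is the smallest and lowest-quality; the k_m quants in between trade size for fidelity.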

## Original Model Card:

# Phi-2-psy

Phi-2-psy is a merge of the following models:
* [rhysjones/phi-2-orange](https://huggingface.co/rhysjones/phi-2-orange)
* [cognitivecomputations/dolphin-2_6-phi-2](https://huggingface.co/cognitivecomputations/dolphin-2_6-phi-2)

πŸ† Evaluation

The evaluation was performed using LLM AutoEval on the Nous suite.

| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|-------|--------:|--------:|-----------:|---------:|--------:|
| phi-2-psy | 34.4 | 71.4 | 48.2 | 38.1 | 48.02 |
| phixtral-2x2_8 | 34.1 | 70.4 | 48.8 | 37.8 | 47.78 |
| dolphin-2_6-phi-2 | 33.1 | 69.9 | 47.4 | 37.2 | 46.89 |
| phi-2-orange | 33.4 | 71.3 | 49.9 | 37.3 | 47.97 |
| phi-2 | 28.0 | 70.8 | 44.4 | 35.2 | 44.61 |

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: rhysjones/phi-2-orange
        layer_range: [0, 32]
      - model: cognitivecomputations/dolphin-2_6-phi-2
        layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
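For context, `merge_method: slerp` blends each pair of weight tensors by spherical linear interpolation: `t = 0` keeps the base model's tensor, `t = 1` takes the other model's, and the five-element lists for the `self_attn` and `mlp` filters are spread across the 32-layer range so `t` varies per layer. Below is a minimal sketch of the underlying operation on a single flattened tensor (illustrative code under those assumptions, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened (1-D) weight tensors."""
    a_n = a / (np.linalg.norm(a) + eps)        # unit-length copies, used only
    b_n = b / (np.linalg.norm(b) + eps)        # to measure the angle between them
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)                     # angle between the two tensors
    if theta < eps:                            # (nearly) parallel: fall back to lerp
        return (1 - t) * a + t * b
    sin_theta = np.sin(theta)
    # Interpolate along the arc: t = 0 returns a, t = 1 returns b.
    return (np.sin((1 - t) * theta) / sin_theta) * a + (np.sin(t * theta) / sin_theta) * b
```

A config like the one above would typically be applied with mergekit's `mergekit-yaml` CLI, e.g. `mergekit-yaml config.yaml ./phi-2-psy` (the output path here is illustrative).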

## 💻 Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# trust_remote_code is required because phi-2 ships custom modeling code
model = AutoModelForCausalLM.from_pretrained("vince62s/phi-2-psy", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("vince62s/phi-2-psy", trust_remote_code=True)

# Prompt the model to complete a Python function
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```