
# phi3_mini-128k_5.6B

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, which stacks the selected layer slices into a new, deeper model without averaging or otherwise combining any weights.

### Models Merged

The following model was included in the merge:

* [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: microsoft/Phi-3-mini-128k-instruct
      layer_range: [0, 24]
  - sources:
    - model: microsoft/Phi-3-mini-128k-instruct
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
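Note that the two slices overlap. Assuming the standard Phi-3-mini geometry (32 decoder layers, ~3.8B parameters) and mergekit's half-open `layer_range` convention, the configuration stacks 24 + 24 = 48 layers, with layers 8–23 appearing twice; that duplication is what grows the ~3.8B base to the 5.63B parameters reported below.

For reference, here is a minimal sketch of reproducing the merge through mergekit's Python API, following the usage pattern shown in the mergekit README. The config file name and output path are hypothetical, and `MergeOptions` fields may differ across mergekit versions:

```python
# Sketch: run the passthrough merge from Python, assuming the YAML above
# has been saved to phi3_passthrough.yml (hypothetical file name).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "phi3_passthrough.yml"
OUT_PATH = "./Phi-3-mini-128k_5.6B"

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUT_PATH,
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry the base tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

Equivalently, mergekit's `mergekit-yaml` command-line entry point accepts the same configuration file directly.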

The merged model has 5.63B parameters and is published as bfloat16 (BF16) safetensors.
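Below is a minimal sketch of chatting with the merged model locally via transformers; the dtype and generation settings are illustrative, not prescribed by this card:

```python
# Sketch: load and query the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChenWeiLi/Phi-3-mini-128k_5.6B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was exported in bfloat16
    device_map="auto",
    trust_remote_code=True,  # Phi-3 ships custom modeling code on older transformers
)

# Phi-3 expects its chat template, so build the prompt through the tokenizer.
messages = [{"role": "user", "content": "Explain what a passthrough merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```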
