
Breeze-13B-32k-Instruct-v1_0

Breeze-13B-32k-Instruct-v1_0 is a layer-stacking (passthrough) merge of MediaTek-Research/Breeze-7B-32k-Instruct-v1_0 with itself, created using mergekit. The merge stacks seven overlapping 8-layer slices of the 32-layer base model into a 56-layer model with roughly 12.7B parameters, stored in bfloat16 (BF16).

🧩 Configuration

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 8]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [4, 12]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [8, 16]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [12, 20]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [16, 24]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [20, 28]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
- sources:
  - layer_range: [24, 32]
    model: MediaTek-Research/Breeze-7B-32k-Instruct-v1_0
```
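
To reproduce the merge, the configuration above can be run through mergekit. Below is a minimal sketch following mergekit's documented Python entry point; the `config.yaml` filename and the output directory are placeholders.

```python
# A sketch of reproducing this merge with mergekit's Python API
# (per mergekit's documented usage). "config.yaml" is assumed to
# hold the configuration above; the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./Breeze-13B-32k-Instruct-v1_0",
    options=MergeOptions(
        copy_tokenizer=True,  # reuse the base model's tokenizer
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)
```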
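💻 Usage

A minimal inference sketch with 🤗 transformers. It assumes the merged model keeps the standard causal-LM interface and that the tokenizer carries over the Breeze chat template; if it does not, format the [INST] prompt manually.

```python
# Minimal inference sketch, assuming the merged tokenizer still
# ships a chat template (as the Breeze-7B base models do).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "win10/Breeze-13B-32k-Instruct-v1_0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [{"role": "user", "content": "What is the capital of Taiwan?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```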