Text Generation
Transformers
Safetensors
mistral
text-generation-inference
Merge
7b
mistralai/Mistral-7B-Instruct-v0.1
jondurbin/bagel-7b-v0.1
dataset:ai2_arc
dataset:unalignment/spicy-3.1
dataset:codeparrot/apps
dataset:facebook/belebele
dataset:boolq
dataset:jondurbin/cinematika-v0.1
dataset:drop
dataset:lmsys/lmsys-chat-1m
dataset:TIGER-Lab/MathInstruct
dataset:cais/mmlu
dataset:Muennighoff/natural-instructions
dataset:openbookqa
dataset:piqa
dataset:Vezora/Tested-22k-Python-Alpaca
dataset:cakiki/rosetta-code
dataset:Open-Orca/SlimOrca
dataset:spider
dataset:squad_v2
dataset:migtissera/Synthia-v1.3
dataset:winogrande
Inference Endpoints
has_space
conversational
```yaml
slices:
  - sources:
      - model: mistralai/Mistral-7B-Instruct-v0.1
        layer_range: [0, 32]
      - model: jondurbin/bagel-7b-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-Instruct-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
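The config above uses the `slerp` merge method: corresponding tensors from the two models are combined by spherical linear interpolation, with the interpolation factor `t` graded across layers (the five anchor points in `value`) for self-attention and MLP weights, and fixed at 0.5 for everything else. A minimal standalone sketch of the interpolation itself (plain NumPy, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate values follow the
    great-circle arc between the (normalized) directions of v0 and v1.
    """
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.sum(v0_n * v1_n), -1.0, 1.0)
    # Nearly parallel tensors: fall back to plain linear interpolation
    # to avoid dividing by sin(theta) ~ 0.
    if abs(dot) > 0.9995:
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

Because `filter: self_attn` starts its schedule at 0 and `filter: mlp` starts at 1, early attention layers lean toward the base model while early MLP layers lean toward bagel, and the blend reverses toward the final layers.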