A newer version of this model is available: AINovice2005/LeEmpereur_70-Base
Model Name: LeEmpereur_70
Model Description:
LeEmpereur_70 is a pruned version of argilla/notus-7b-v1. The pruning was performed with the PruneMe library from Arcee.ai, which removes whole blocks of layers and significantly reduces the model's size: the parameter count is cut by approximately 70%.
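PruneMe selects which layers to drop by measuring how little each block of consecutive layers changes the hidden states. Below is a minimal, self-contained sketch of that idea using cosine similarity on toy data; the function name and the random "hidden states" are illustrative, not PruneMe's actual API:

```python
import numpy as np

def block_redundancy(hidden_states, block_size):
    """For each candidate block [i, i + block_size), compute the mean cosine
    similarity between the hidden states entering and leaving the block.
    Blocks with similarity close to 1 change the representation least,
    so they are the best candidates for pruning."""
    scores = []
    n_layers = len(hidden_states) - 1  # hidden_states[i] = input to layer i
    for i in range(n_layers - block_size + 1):
        a, b = hidden_states[i], hidden_states[i + block_size]
        cos = np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
        )
        scores.append((i, float(cos.mean())))
    return scores

# Toy example: activations for 4 tokens, dim 16, across 8 "layers".
rng = np.random.default_rng(0)
states = [rng.normal(size=(4, 16)) for _ in range(9)]
scores = block_redundancy(states, block_size=2)
best_start, best_sim = max(scores, key=lambda s: s[1])
print(best_start, round(best_sim, 3))
```

In the real workflow the similarities are computed from activations collected on a calibration dataset, and the lowest-impact block is then removed with a passthrough merge like the one configured below.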
Configuration:
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [0, 1]
  - sources:
      - model: argilla/notus-7b-v1
        layer_range: [2, 10]
merge_method: passthrough
dtype: bfloat16
```
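As a sanity check on the "approximately 70%" figure: assuming mergekit's half-open `layer_range` convention (`[start, end)`) and the 32 transformer layers of the underlying Mistral-7B architecture, and ignoring embeddings and the LM head, the slices above keep 9 of 32 layers:

```python
# Slices from the YAML config above; layer_range is assumed half-open.
slices = [(0, 1), (2, 10)]
total_layers = 32  # Mistral-7B-v0.1 has 32 transformer layers

kept = sum(end - start for start, end in slices)  # 1 + 8 = 9 layers kept
pruned_fraction = 1 - kept / total_layers
print(kept, round(pruned_fraction, 3))  # -> 9 0.719
```

So roughly 72% of the transformer layers are removed, consistent with the stated ~70% parameter reduction.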
Results: First, the fraction of parameters pruned should be much lower in future iterations. Second, a sizeable amount of fine-tuning is required when the parameter count is reduced this aggressively.
Note: This model is intended to be used as a base for fine-tuning. It should not be used for inference as is.
Model tree for AINovice2005/LeEmpereur_70:
- Base model: mistralai/Mistral-7B-v0.1
- Finetuned: alignment-handbook/zephyr-7b-sft-full
- Finetuned: argilla/notus-7b-v1