# llama3-Fasal-Mitra
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method

This model was merged using the task arithmetic merge method, with unsloth/llama-3-8b-Instruct as the base.
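In task arithmetic, each fine-tuned model contributes a "task vector" (the delta between its weights and the base model's weights), and the merged model adds the weighted task vectors back onto the base:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \,(\theta_i - \theta_{\text{base}})
$$

where $w_i$ are the per-model weights from the configuration in this card.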
### Models Merged

The following models were included in the merge:

* Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
* KissanAI/llama3-8b-dhenu-0.1-sft-16bit
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: unsloth/llama-3-8b-Instruct
    parameters:
      weight: 0.20
  - model: Cognitive-Lab/LLama3-Gaja-Hindi-8B-v0.1
    parameters:
      weight: 0.40
  - model: KissanAI/llama3-8b-dhenu-0.1-sft-16bit
    parameters:
      weight: 0.40
base_model: unsloth/llama-3-8b-Instruct
merge_method: task_arithmetic
dtype: bfloat16
```
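The core of the merge can be sketched in a few lines of plain Python. This is a simplified illustration of the task-arithmetic formula only, not mergekit's actual implementation (which handles tensor sharding, dtypes, and tokenizer alignment); the function name and list-of-floats representation are illustrative assumptions.

```python
def task_arithmetic(base, finetuned, weights):
    """Merge parameter vectors via task arithmetic.

    base:      list of base-model parameters
    finetuned: list of parameter lists, one per fine-tuned model
    weights:   per-model merge weights (e.g. 0.40 from the YAML above)
    """
    merged = list(base)
    for params, w in zip(finetuned, weights):
        for i, (p, b) in enumerate(zip(params, base)):
            # Add this model's weighted task vector (delta from base).
            merged[i] += w * (p - b)
    return merged


# Toy example with two 2-parameter "models" merged at weight 0.40 each.
base = [1.0, 2.0]
models = [[1.5, 2.0], [1.0, 3.0]]
print(task_arithmetic(base, models, [0.40, 0.40]))  # ≈ [1.2, 2.4]
```

Note that a task vector computed from the base model itself is zero, so in this formulation a weight on the base model (like the 0.20 above) leaves the base contribution unchanged.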