FusionNet_34Bx2_MoE

Fine-tuned English-language model built with the Mixture of Experts (MoE) method.

Model description

FusionNet_34Bx2_MoE is an experiment with the Mixture of Experts (MoE) method, which can significantly improve performance over the original model. The model has 60.8B parameters and has been fine-tuned. Enjoy!
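
For intuition, here is a minimal sketch of top-k expert routing as used in Mixtral-style MoE layers. It is illustrative only, not this model's actual implementation; gate (a linear layer scoring experts) and experts (a list of feed-forward modules) are hypothetical stand-ins.

import torch
import torch.nn.functional as F

def moe_forward(x, gate, experts, k=2):
    # Score experts per token, keep the top-k, and renormalize their weights
    logits = gate(x)                      # (tokens, n_experts); hypothetical linear gate
    weights, idx = torch.topk(logits, k)  # top-k expert scores and indices per token
    weights = F.softmax(weights, dim=-1)
    out = torch.zeros_like(x)
    # Each token's output is the weighted sum of its selected experts' outputs
    for i in range(k):
        for e, expert in enumerate(experts):
            mask = idx[:, i] == e
            if mask.any():
                out[mask] += weights[mask, i, None] * expert(x[mask])
    return out

In the sketch, with two experts and k=2, both experts run on every token and the gate decides how their outputs are mixed.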

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
# Load in bfloat16 (the checkpoint's dtype); device_map="auto" spreads layers across available devices
model = AutoModelForCausalLM.from_pretrained(
    "TomGrc/FusionNet_34Bx2_MoE", torch_dtype=torch.bfloat16, device_map="auto"
)
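
A minimal generation example (the prompt string is illustrative):

prompt = "Explain mixture-of-experts models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))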

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 77.07 |
| AI2 Reasoning Challenge (25-Shot) | 72.95 |
| HellaSwag (10-Shot) | 86.22 |
| MMLU (5-Shot) | 77.05 |
| TruthfulQA (0-shot) | 71.31 |
| Winogrande (5-shot) | 83.98 |
| GSM8k (5-shot) | 70.89 |
