---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- moe
---
# FusionNet_34Bx2_MoE
A model fine-tuned on English-language data using the Mixture of Experts (MoE) method.
## Model description
FusionNet_34Bx2_MoE is an experiment with the MoE method, which can significantly improve on the performance of the original model. It has 60.8B parameters and has been fine-tuned. Enjoy!
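For intuition, here is a minimal sketch of top-2 expert routing, the general idea behind MoE layers. It is illustrative only: the class name, dimensions, and expert architecture below are hypothetical and are not the actual implementation of this model (the "34Bx2" in the name suggests two 34B experts).

```python
import torch
import torch.nn as nn

class Top2MoELayer(nn.Module):
    """Toy top-2 routed MoE feed-forward layer (illustrative only)."""

    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        # A small linear router scores each token against each expert.
        self.router = nn.Linear(hidden_size, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.SiLU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden). Each token is processed only by its
        # top-2 experts, weighted by the router probabilities.
        probs = self.router(x).softmax(dim=-1)        # (B, S, num_experts)
        weights, chosen = probs.topk(k=2, dim=-1)     # both (B, S, 2)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for slot in range(2):
                mask = chosen[..., slot] == e         # (B, S) tokens routed here
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Quick shape check with toy sizes:
layer = Top2MoELayer(hidden_size=64, num_experts=2)
y = layer(torch.randn(1, 8, 64))  # -> (1, 8, 64)
```

Because each token activates only its selected experts, total parameter count can grow (here, two expert networks) while per-token compute stays close to that of a single expert.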
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
# Half precision + device_map="auto" (needs `accelerate`) is a practical
# assumption for a ~60.8B-parameter model; adjust to your hardware.
model = AutoModelForCausalLM.from_pretrained(
    "TomGrc/FusionNet_34Bx2_MoE", torch_dtype=torch.float16, device_map="auto"
)
```
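From there, text can be generated with a standard `generate` call. The prompt and sampling settings below are illustrative placeholders, not recommendations from the model author:

```python
prompt = "Explain the Mixture of Experts method in one paragraph."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```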