---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- moe
---
# FusionNet_34Bx2_MoE
A model fine-tuned on English text using the Mixture of Experts (MoE) method.
## Model description
FusionNet_34Bx2_MoE is an experiment with the Mixture of Experts (MoE) method, which can significantly improve on the performance of the original model. The model has 60.8B parameters and has been fine-tuned. Enjoy!
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
# Load in bfloat16 and shard across available devices: at 60.8B parameters,
# the model will not fit on a single GPU in full precision.
model = AutoModelForCausalLM.from_pretrained(
    "TomGrc/FusionNet_34Bx2_MoE", torch_dtype=torch.bfloat16, device_map="auto"
)
```
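Once the model is loaded, text generation follows the standard `transformers` API. A minimal sketch is shown below; the prompt and sampling settings are illustrative choices, not recommendations from the model author:

```python
# Tokenize a prompt and move it to the same device as the model.
inputs = tokenizer("The MoE method works by", return_tensors="pt").to(model.device)

# Sample up to 64 new tokens; adjust temperature/max_new_tokens to taste.
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```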