Update README.md
# FusionNet_34Bx2_MoE

A model fine-tuned on English text using the MoE (Mixture of Experts) method.

## Model description

FusionNet_34Bx2_MoE is an experiment with the MoE method, which can significantly improve on the performance of the original model. The model has 60.8B parameters and is fine-tuned. Enjoy!
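For intuition, the MoE idea can be sketched as a router that sends each token to its top-k experts and mixes their outputs by the routing weights. This is a minimal toy illustration only, not the actual FusionNet_34Bx2_MoE architecture; the layer sizes and expert count here are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy Mixture-of-Experts layer: a linear router scores the experts
    per token, the top-k experts run, and their outputs are combined
    using softmax-normalized routing weights."""
    def __init__(self, dim, num_experts=2, k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.k = k

    def forward(self, x):                     # x: (tokens, dim)
        scores = self.router(x)               # (tokens, num_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e      # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

moe = TopKMoE(dim=8, num_experts=2, k=2)
y = moe(torch.randn(4, 8))                   # output keeps the input shape
```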
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
model = AutoModelForCausalLM.from_pretrained("TomGrc/FusionNet_34Bx2_MoE")
```