
Introduction

MoMo-72B is trained via Supervised Fine-Tuning (SFT) using LoRA, with QWEN-72B as its base model.
Note that we did not use any form of weight merging.
For the leaderboard submission, the trained weights were realigned for compatibility with the Llama architecture.
MoMo-72B was trained on AMD MI250 GPUs using Moreh's MoAI platform, which simplifies the training of large-scale models.

Details

Used Libraries

  • torch
  • peft

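Below is a minimal sketch of how these libraries (torch, peft) can be combined for LoRA-based SFT on the Qwen base model. The adapter hyperparameters and target modules are illustrative assumptions, not the configuration actually used to train MoMo-72B.

# Sketch only: attach LoRA adapters to the base model for SFT.
# All hyperparameters below are assumed values for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base = "Qwen/Qwen-72B"  # assumed base checkpoint identifier

tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # requires accelerate; shards across available GPUs
    trust_remote_code=True,
)

# Only the LoRA adapter weights are trained; the base weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                        # LoRA rank (assumed)
    lora_alpha=32,               # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["c_attn"],   # Qwen-style fused attention projection (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
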
Used Datasets

  • Open-Orca/SlimOrca (see the loading sketch below)
  • No other dataset was used
  • Neither the benchmark test sets nor their training sets were used; the data contamination check results are shown below

Model                    ARC   MMLU   TruthfulQA   GSM8K
V1.4 (result < 0.1, %)   TBU   0.73   0.71         TBU
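
For reference, the SlimOrca dataset can be loaded directly from the Hugging Face Hub. The snippet below is only a minimal loading sketch, not the preprocessing pipeline used for training.

# Minimal sketch: load the SFT dataset used for training.
from datasets import load_dataset

slimorca = load_dataset("Open-Orca/SlimOrca", split="train")
print(slimorca[0]["conversations"])  # ShareGPT-style list of chat turns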

Used Environments

  • AMD MI250 GPU & Moreh MoAI platform

How to use

# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-72B-LoRA-V1.4")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-72B-LoRA-V1.4",
    torch_dtype=torch.float16,  # half precision to reduce memory; the 72B model spans multiple GPUs
    device_map="auto",          # requires accelerate; shards the model across available devices
)
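
Once loaded, the model can be used for standard text generation. The prompt and generation settings below are illustrative only, not a recommended configuration.

# Illustrative generation example; prompt and sampling settings are arbitrary.
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))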