---
license: apache-2.0
language:
  - fr
  - it
  - de
  - es
  - en
  - zh
inference: false
---

# Model Card for Mobius-12B-base-m1

The Mobius-12B-base-m1 Large Language Model (LLM) is a pretrained model based on the RWKV v5 architecture.

## Warning

This repo contains weights that are not yet compatible with the Hugging Face transformers library. You can try this PR instead; RWKV Runner or the AI00 server also work.

## Instruction|Chat format

This format must be strictly respected; otherwise the model will generate sub-optimal outputs.

The template used to build a prompt for the Instruct model is defined as follows:

```
User: {Instruction|prompt}\n\nAssistant:
```
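As a minimal sketch of how the template can be filled in (the helper name and example instruction below are illustrative assumptions, not part of the model card):

```python
# Illustrative helper for building a prompt in the required format.
def build_prompt(instruction: str) -> str:
    return f"User: {instruction.strip()}\n\nAssistant:"

prompt = build_prompt("Translate 'bonjour' into English.")
print(prompt)  # User: Translate 'bonjour' into English.\n\nAssistant:
```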

## Run the model

You first need to install transformers from this PR:

```bash
pip install git+https://github.com/BBuf/transformers.git
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model in float16 and move it to the first GPU.
model = AutoModelForCausalLM.from_pretrained("TimeMobius/Mobius-12B-base-m1", torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("TimeMobius/Mobius-12B-base-m1", trust_remote_code=True)

text = "x"
prompt = f'Question: {text.strip()}\n\nAnswer:'

# Tokenize the prompt, move it to the same device as the model, and generate.
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=40)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
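As a follow-up sketch, reusing the `model` and `tokenizer` objects from above, the User/Assistant template from the "Instruction|Chat format" section can be combined with sampling. The instruction text and sampling hyperparameters here are illustrative assumptions, not recommendations from the model card:

```python
# Build a prompt in the required User/Assistant format.
instruction = "Summarize the RWKV architecture in one sentence."
chat_prompt = f"User: {instruction}\n\nAssistant:"

inputs = tokenizer(chat_prompt, return_tensors="pt").to(0)
# Sampling settings below are assumptions for illustration only.
output = model.generate(
    inputs["input_ids"],
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```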

## Limitations

Mobius-12B-base-m1 is a base model; it can be easily fine-tuned to achieve compelling performance.

## Benchmark

| Benchmark     | Mobius-12B-base-m1 |
|---------------|--------------------|
| ppl           | 3.41               |
| piqa          | 0.78               |
| hellaswag     | 0.71               |
| winogrande    | 0.68               |
| arc_challenge | 0.42               |
| arc_easy      | 0.73               |
| openbookqa    | 0.40               |
| sciq          | 0.93               |

@TimeMobius