license: apache-2.0
Model URL: https://huggingface.co/TimeMobius/Mobius-RWKV-r5-chat-12B-8k

Given the long context required for training from scratch, we decided to retrain the r5 12B model from an 8k context length. This model shows lower diversity than its predecessor, but it excels at instruction following and logical understanding. Both models can be used simultaneously as multi-agents, each performing a different task.

Mobius RWKV r5 chat 12B 8k

Mobius is an RWKV v5.2 architecture chat model that benefits from Matrix-Valued States and Dynamic Recurrence.

Introduction

Mobius is an RWKV v5.2 architecture model, a state-based RNN+CNN+Transformer mixed language model pretrained on a certain amount of data. Compared with the previously released Mobius, the improvements include:

  • Only 24 GB of VRAM needed to run this model locally in fp16;
  • Significant performance improvement;
  • Multilingual support;
  • Stable support for 128K context length;
  • Base model: Mobius-mega-12B-128k-base.

Usage

We encourage you to prompt this model with a few examples (few-shot); directly using the `User: xxxx\n\nAssistant: xxx\n\n` format works well too, but few-shot prompting can unlock more of the model's potential.
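As an illustration, a minimal few-shot prompt in that format might look like the Python sketch below; the example exchanges are hypothetical placeholders, not from the model card.

```python
# A minimal sketch of a few-shot prompt in the User/Assistant format
# described above. The worked examples are hypothetical; replace them
# with exchanges that match your task.
few_shot_prompt = (
    "User: What is the capital of France?\n\n"
    "Assistant: The capital of France is Paris.\n\n"
    "User: What is the capital of Japan?\n\n"
    "Assistant: The capital of Japan is Tokyo.\n\n"
    "User: What is the capital of Canada?\n\n"
    "Assistant:"  # the model completes from here
)
```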

Recommended sampling settings (temperature/top_p): 0.7/0.6, 1/0.3, 1.5/0.3, 0.2/0.8.
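As a sketch of how one of these pairs could be applied, assuming the `rwkv` pip package with the RWKV World tokenizer vocab (the checkpoint path is a placeholder):

```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Placeholder path to the downloaded .pth checkpoint.
model = RWKV(model="Mobius-RWKV-r5-chat-12B-8k.pth", strategy="cuda fp16")
pipeline = PIPELINE(model, "rwkv_vocab_v20230424")  # RWKV World tokenizer

# One of the recommended temperature/top_p pairs from above.
args = PIPELINE_ARGS(temperature=0.7, top_p=0.6)

prompt = "User: Explain the RWKV architecture in one sentence.\n\nAssistant:"
print(pipeline.generate(prompt, token_count=256, args=args))
```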

More details

Mobius 12B 128k is based on the RWKV v5.2 architecture, a leading state-based RNN+CNN+Transformer mixed large language model focused on the open-source community:

  • 10~100x reduction in training/inference cost;
  • state-based, selective memory, which makes it good at grokking;
  • community support.

Requirements

24 GB of VRAM to run fp16, 12 GB for int8, 6 GB for nf4 with the Ai00 server.
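As a rough sketch with the `rwkv` pip package, the fp16 and int8 footprints correspond to different strategy strings (nf4 quantization is handled by the Ai00 server itself, not by this package); the checkpoint path is a placeholder:

```python
from rwkv.model import RWKV

CKPT = "Mobius-RWKV-r5-chat-12B-8k.pth"  # placeholder path

# ~24 GB VRAM: full fp16 weights resident on the GPU.
model_fp16 = RWKV(model=CKPT, strategy="cuda fp16")

# ~12 GB VRAM: int8-quantized weights ("fp16i8" in rwkv strategy syntax).
model_int8 = RWKV(model=CKPT, strategy="cuda fp16i8")
```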

Future plans

If you need an HF version, let us know.

Mobius-Chat-12B-128k