
mlc-chat-una-cybertron-7b-v2-bf16-q3f16_1

An MLC-compiled build of Una Cybertron 7B v2 (BF16), quantized to q3f16_1 for running locally on mobile devices.

Requires a build of MLC Chat for iOS that supports Mistral. As of 2023-12-12, that means building MLC Chat for iOS from source, using the mlc-llm repository from MLC.ai on GitHub.

It's currently configured to use the ChatML conversation template, though the model is actually intended to be used with ChatML plus a custom system message.
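If you want to approximate that intended setup, the conversation template is set in the model's mlc-chat-config.json. The snippet below is a minimal sketch: the `conv_template` field name follows MLC LLM's config schema, while the `conv_config` override and the system string shown are placeholders for illustration, not the model's actual custom system message.

```json
{
  "conv_template": "chatml",
  "conv_config": {
    "system": "<|im_start|>system\nYOUR CUSTOM SYSTEM MESSAGE HERE<|im_end|>"
  }
}
```

Note that the exact override fields supported can vary between mlc-llm versions, so check the config schema for the revision you build against.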
