Fine-tunes of Jamba on big-name datasets

ai21labs/Jamba-v0.1
Text Generation • Updated • 18.4k • 1.17k
Note The base model.
Note QLoRA on some version of Alpaca (possibly vicgalle/alpaca-gpt4)
Note QLoRA on some version of Platypus (possibly chargoddard/Open-Platypus-Chat)
Note QLoRA(?) on Locutusque/hercules-v4.0
Note LoRA(?) on jondurbin/bagel-v0.5
Note Quantized version of the above
Note Base model created with some mysterious MoE merge (???), then LoRA-tuned on UltraChat
Note Perhaps the same 4x MoE as above, but this time fine-tuned on Severian/Internal-Knowledge-Map
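For context on why LoRA/QLoRA fine-tunes like the ones listed above are cheap to produce: a rank-r adapter on a d×k weight matrix trains only r·(d+k) parameters instead of d·k. A quick back-of-the-envelope sketch, using illustrative dimensions (not Jamba's actual projection shapes):

```python
def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters added by a rank-r LoRA adapter on a d x k weight:
    a d x r matrix B plus an r x k matrix A."""
    return r * (d + k)

# Illustrative numbers only, not Jamba's real layer dimensions.
full = 4096 * 4096                    # params in one full 4096x4096 projection
adapter = lora_params(4096, 4096, 16) # rank-16 adapter on the same projection
print(full, adapter, adapter / full)  # -> 16777216 131072 0.0078125
```

At rank 16 the adapter is under 1% of the frozen weight's size, which is why most of the entries above are LoRA or QLoRA runs rather than full fine-tunes.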