
guanaco-33b merged with kaiokendev's 33b SuperHOT 8k LoRA, without quantization (full FP16 model).
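
Below is a minimal sketch of how such a merge can be produced with the PEFT `merge_and_unload` workflow: the base model is loaded in FP16, the LoRA adapter is attached, and its weights are folded back into the base layers before saving. The repo IDs and output path are placeholders, not the exact sources used for this checkpoint.

```python
# Minimal sketch: merge a LoRA adapter into an FP16 base model (no quantization).
# Repo IDs below are placeholders for the actual base model and LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "path/to/guanaco-33b"        # placeholder: FP16 base model
lora_id = "path/to/superhot-8k-lora"   # placeholder: SuperHOT 8k LoRA adapter

# Load the base model in full FP16.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA adapter, then fold its weights into the base layers.
model = PeftModel.from_pretrained(base, lora_id)
merged = model.merge_and_unload()

# Save the merged FP16 checkpoint alongside the tokenizer.
out_dir = "guanaco-33b-superhot-8k-fp16"
merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)
```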
