https://huggingface.co/cognitivecomputations/dolphin-2.9.1-qwen-110b

#51
by nicoboss - opened

Qwen1.5-110B seems to be a really capable model, and I generally love Dolphin finetunes, so it would be great if you could add dolphin-2.9.1-qwen-110b to your queue.

PS: I already replied to your mail and gave you access to an LXC container with GPUs. text-generation-webui was able to use the GPUs inside the LXC container without any issues, so your scripts will likely work as-is too.
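For reference, a quick check along these lines (a minimal sketch; it assumes PyTorch with CUDA support is installed inside the container) is enough to confirm the GPUs are visible from within the LXC container:

```python
# Minimal sketch: verify that the GPUs passed through to the LXC container
# are visible to CUDA (assumes PyTorch with CUDA support is installed).
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        # Print each visible GPU's index and name.
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
else:
    print("No CUDA-capable GPU visible inside the container.")
```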

Hey :)

I deliberately avoided reading mail today, because I know I can't reply properly anyway. Still, I read it, and it's great news, but I'm not sure when exactly I'll be able to respond to it properly (certainly this week, though).

In the meantime, I've added the model to the queue. As usual, static quants should come first, followed by imatrix ones (although the imatrix might be delayed...).

mradermacher changed discussion status to closed
