THANK YOU

#2
by rombodawg - opened

I (and many others) have been waiting for this model forever, especially for coding. Since WizardCoder has logic and reasoning issues, I'm very much looking forward to this all-around better model!

Waiting for version 1.2 for 70B ;D

I'm ready for the uncensored version XD

@ehartford WizardLM 70B V1.0 Uncensored?

If @WizardLM released the dataset, then I could do this.
They have not released the data for WizardLM 1.1 or 1.2, or for WizardCoder.
It seems they may have decided, as a policy, not to release their datasets anymore.
Which really makes this not an "open source" model, just a "permissively licensed" one.

Sadly not. However, my datasets are growing, and I'm hoping they can train models that are as good as WizardCoder while staying open source. I already have plans to take my 200k Megacode dataset to over 300k code instructions, as well as add the same data to version 2 of my lossless coding dataset.

@ehartford Maybe you could train it with their open dataset of 196k pairs (with your uncensored version, of course)? That's what I meant, similar to what you did with the 13B yesterday.

This model cannot ever be uncensored.

This model was trained on top of Llama-2-chat, which has censorship baked in.

This also means the model was trained with two different prompt formats, Llama-2-chat and Vicuna. So it will respond to both prompt formats, and it will respond differently to each.
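
For reference, the two formats look roughly like this; a minimal sketch, assuming the typical templates (exact system prompts and whitespace vary by implementation):

```python
# Minimal sketch of the two prompt formats (assumed typical templates;
# exact system prompts and spacing vary by implementation).

LLAMA2_CHAT_TEMPLATE = "<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

VICUNA_TEMPLATE = "{system}\n\nUSER: {user}\nASSISTANT:"

system = "You are a helpful assistant."
user = "Write a function that reverses a string."

# The same model will answer both, but the responses can differ.
print(LLAMA2_CHAT_TEMPLATE.format(system=system, user=user))
print(VICUNA_TEMPLATE.format(system=system, user=user))
```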

However, yes, I can take the 196k dataset and train it on the 70B. In fact, I will do that, in full weights.
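
As a rough sketch (not the actual recipe), a full-weight finetune on that dataset with Hugging Face transformers might look like this; the base model ID, conversation schema, prompt formatting, and hyperparameters below are assumptions:

```python
# Rough sketch only: base model ID, conversation schema, and all
# hyperparameters are assumptions, not the actual training recipe.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_ID = "meta-llama/Llama-2-70b-hf"               # assumed base model
DATA_ID = "WizardLM/WizardLM_evol_instruct_V2_196k"  # the 196k pairs

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token

def format_example(example):
    # Assumes ShareGPT-style records; Vicuna-style formatting here,
    # though the real run might use a different template.
    text = ""
    for turn in example["conversations"]:
        role = "USER" if turn["from"] == "human" else "ASSISTANT"
        text += f"{role}: {turn['value']}\n"
    return tokenizer(text, truncation=True, max_length=4096)

dataset = load_dataset(DATA_ID, split="train").map(format_example)

trainer = Trainer(
    model=AutoModelForCausalLM.from_pretrained(MODEL_ID),
    args=TrainingArguments(
        output_dir="wizardlm-70b-full-finetune",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=32,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=dataset,
    # Causal LM collator: labels are the input IDs, shifted inside the model.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

In practice a 70B full-weight run also needs multi-GPU sharding (DeepSpeed ZeRO-3 or FSDP) on top of this, but the data flow is the same.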

WizardLM Team org

If @WizardLM released the dataset, then I could do this.
They have not released the data for WizardLM 1.1 or 1.2, or for WizardCoder.
It seems they may have decided, as a policy, not to release their datasets anymore.
Which really makes this not an "open source" model, just a "permissively licensed" one.

Hi,

Recently, there have been clear changes in our organization's open-source policies and regulations covering code, data, and models.
Despite this, we still worked hard to get approval to open the model weights first, but the data involves stricter auditing and is under review with our legal team.
Our researchers have no authority to release it publicly without authorization.

Thank you for your understanding.

ehartGOD is going to be doing 70B finetunes

I KNEEL

Thank you for the clarity! Appreciate your work.
