Which one is your favorite?

#2
by MrDevolver - opened

Hello again, @migtissera, your models are quite popular in the community. I wonder, which one would you say is your personal favorite? 😊

And thank you for allowing TheBloke to quantize the models for us! ❤

Hey @MrDevolver, thank you! I do this purely out of passion, and it's good to get positive feedback.

My favourite so far is Synthia-70B-v1.5, which surpasses GPT-3.5 and is on par with Claude-2 (in my opinion!). However, it is gated -- I'm only opening it up under some conditions.

Having said that, training of Synthia-7B-v2.0 has already begun, and I hope to beat 7B-v1.5 by a sizeable margin.

Thank you. I've always been interested in 70B models, but my current hardware is far below the minimum requirements for such big models, so perhaps one day I'll be able to run them, just not today, haha.
It's always exciting to hear that new models are being worked on, and Mistral-based models are very popular, so I wish you good luck getting the results you're hoping for with your upcoming models; it would benefit us all. 🙂

Would you use one of my 70B models if I served it through an API? I would need to cover costs, though, so there would be a small charge. Is this something you'd be interested in? I do have closed models that are better than the ones released here.

migtissera changed discussion status to closed

What is the difference between v1.0, v1.5, and v2.0? Do you plan to do one for Falcon 180B?

Hey! The v2.0 dataset is completely new, with multi-turn question answering (even with code).

I haven't planned a Falcon 180B release yet -- that base model is only marginally better than LLaMA-70B. But I do have Synthia-70B-v2.0 training right now; I will release it in about 2 weeks.

Can you share any information on how the dataset was made? Also, can you run benchmarks on the v1.5 and v2.0 7B models? (The Open LLM Leaderboard is not running evaluations right now.) Thanks.

It’s an open model; you can run the evals yourself.

Sorry, can’t share any information on my datasets.

No problem, trying to run the evaluations.
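
In case it helps anyone else reproducing the numbers, here is a minimal sketch using EleutherAI's lm-evaluation-harness. The model ID, task list, and batch size are my own assumptions, not something stated in this thread; adjust them to your hardware, and match the Open LLM Leaderboard's few-shot settings if you want comparable scores.

```python
# Minimal eval sketch with EleutherAI's lm-evaluation-harness (pip install lm-eval).
# The model ID and tasks below are assumptions -- swap in the Synthia checkpoint
# you actually want to benchmark.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=migtissera/Synthia-7B-v2.0,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc2"],
    batch_size=8,
)

# Print the per-task metrics reported by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```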
