Does the quality of the results depend on the LLM model used?

#4 opened by JBHF


The online Gradio app uses this LLM:
“LongWriter-glm4-9b”
https://huggingface.co/THUDM/LongWriter-glm4-9b

But there is another one:
“LongWriter-llama3.1-8b”
https://huggingface.co/THUDM/LongWriter-llama3.1-8b

So I wonder which of the two LLMs gives the better results!

Also, I wonder whether one can run the LongWriter LLMs on a free Google Colab instance.
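
For what it's worth, here is a minimal sketch of what I would try on a free Colab T4 (about 15 GB of VRAM): load either checkpoint with 4-bit quantization via bitsandbytes so the roughly 9B parameters fit. This is an assumption-laden sketch, not a tested recipe: it assumes the generic transformers generate() path works through trust_remote_code, whereas the model cards may recommend a dedicated chat helper and prompt format, and very long generations can still exhaust memory through the KV cache.

```python
# Rough sketch: load a LongWriter checkpoint in 4-bit on a free Colab T4 (~15 GB VRAM).
# Assumption: the generic transformers generate() path works via trust_remote_code;
# the model cards may instead show a dedicated chat()/prompt format for best quality.
# pip install -q transformers accelerate bitsandbytes

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "THUDM/LongWriter-glm4-9b"  # or "THUDM/LongWriter-llama3.1-8b"

# NF4 4-bit quantization keeps the ~9B weights around 5-6 GB, leaving room for the KV cache.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model.eval()

prompt = "Write a 2000-word travel guide for the Netherlands."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=4096,   # long outputs also grow the KV cache, so watch VRAM
        do_sample=True,
        temperature=0.5,
    )
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If 4-bit loading of the GLM checkpoint turns out to be awkward because of its custom modeling code, the LongWriter-llama3.1-8b variant should follow the stock Llama code path and is probably the easier one to try first on Colab.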
