**UNDER CONSTRUCTION** Please note that even though this repo is public, the information provided below is not yet complete.

---

# The LLM Creativity benchmark

Last updated: 26 Feb 2024

The goal of this benchmark is to evaluate the ability of Large Language Models to be used as an uncensored creative writing assistant. Results are evaluated manually by a human.

## Results

![image/png](https://huggingface.co/datasets/froggeric/creativity/resolve/main/benchmark_results_2024-02-26.png)

Other interesting benchmarks:

- https://eqbench.com/
- https://chat.lmsys.org/