** UNDER CONSTRUCTION **
Please note that although this repo is public, the information below is not yet complete.
The LLM Creativity benchmark
Last updated: 26 Feb 2024
The goal of this benchmark is to evaluate the ability of large language models to serve as uncensored creative writing assistants. Results are evaluated manually by a human reviewer.
Results
Other interesting benchmarks: