The LLM Creativity benchmark
Last updated: 26 Feb 2024
The goal of this benchmark is to evaluate the ability of large language models to serve as uncensored creative writing assistants. Results are scored manually by a human evaluator.
Results
Model | Prompt format | Size | Quant | Context | Score
---|---|---|---|---|---
miqu-1 | miqu | 70b | Q5_K_M | 32k | 75
miqu-1 | chatml | 70b | Q5_K_M | 32k | 77
Other interesting benchmarks: