froggeric committed
Commit e3ca63a
1 Parent(s): d30f7c2

Update README.md

Updated benchmark, with additional model descriptions.

Files changed (1)
  1. README.md +18 -4
README.md CHANGED
@@ -3,7 +3,6 @@ language:
 - en
 tags:
 - benchmark
-- llm
 pretty_name: llm_creativity_benchmark
 size_categories:
 - n<1K
@@ -12,7 +11,7 @@ _"The only difference between Science and screwing around is writing it down."_
 
 # The LLM Creativity benchmark
 
-_Last benchmark update: 1 Mar 2024_
+_Last benchmark update: 12 Mar 2024_
 
 The goal of this benchmark is to evaluate the ability of Large Language Models to be used
 as an **uncensored creative writing assistant**. Human evaluation of the results is done manually,
@@ -31,10 +30,25 @@ The questions can be split half-half in 2 possible ways:
 
 # Results
 
-![image.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/AbPDnD06RdLeyHg05wl0j.png)
+![image.png](https://cdn-uploads.huggingface.co/production/uploads/65a681d3da9f6df1410562e9/U1nIwW5eUBVZtOvNBfuWK.png)
 
 # Remarks about some of the models
 
+[wolfram/miqu-1-103b](https://huggingface.co/wolfram/miqu-1-103b)\
+Has slightly more difficulty following instructions than the 120b merge. Also produces more annoying repetitions and re-use of expressions.
+The q5_ks is a slight improvement over q4_km, but as it uses more memory, it reduces what is available for context. Still, with 96GB I can use a context larger than 16k.
+
+[froggeric/WestLake-10.7b-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)\
+Better and more detailed writing than the original, but has slightly more difficulty following instructions.
+
+[alpindale/goliath-120b](https://huggingface.co/alpindale/goliath-120b)\
+Very creative, which makes for some great writing, but it also means it has a hard time sticking to the plot.
+
+[Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)\
+Great writing with lots of details, taking sufficient time to develop the plot. The small context size, though, is a limiting factor for consistency.
+
+**Previously:**
+
 [wolfram/miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b)\
 This frankenmerge has dramatically improved over the original 70b miqu, and somehow, it has also made it less likely to refuse to answer! It's a huge improvement. Still has the same tendencies as the original: likes to use lists when replying, and double line breaks in the prompt reduce the quality of the reply.
 
@@ -123,7 +137,7 @@ repeat_penalty = 1.12\
 min_p = 0.05\
 top_p = 0.1
 
-# Other useful benchmarks
+# Other great benchmarks
 
 - [Emotional Intelligence Benchmark for LLMs](https://eqbench.com/)
 - [Chatbot Arena Leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)
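
The sampler settings left unchanged by this diff (`repeat_penalty = 1.12`, `min_p = 0.05`, `top_p = 0.1`) can be read as a filtering pipeline over the model's next-token distribution. The snippet below is a toy illustration of how those three parameters interact; the function name, the application order, and the toy logit values are assumptions for this sketch, not llama.cpp's actual implementation.

```python
import math

def sample_filter(logits, prev_tokens, repeat_penalty=1.12, min_p=0.05, top_p=0.1):
    """Return the token ids surviving repeat-penalty, min_p and top_p filtering."""
    # 1. Repetition penalty: dampen the logits of recently used tokens
    #    (divide positive logits, multiply negative ones).
    penalized = list(logits)
    for t in set(prev_tokens):
        penalized[t] = penalized[t] / repeat_penalty if penalized[t] > 0 else penalized[t] * repeat_penalty
    # 2. Softmax the penalized logits into probabilities.
    m = max(penalized)
    exps = [math.exp(x - m) for x in penalized]
    total = sum(exps)
    probs = [e / total for e in exps]
    # 3. min_p: drop tokens whose probability is below min_p * (max probability).
    p_max = max(probs)
    candidates = [(i, p) for i, p in enumerate(probs) if p >= min_p * p_max]
    # 4. top_p (nucleus): keep the smallest set whose cumulative mass reaches top_p.
    candidates.sort(key=lambda ip: ip[1], reverse=True)
    kept, cum = [], 0.0
    for i, p in candidates:
        kept.append(i)
        cum += p
        if cum >= top_p:
            break
    return kept

# Toy vocabulary of 4 tokens; token 1 was used recently.
print(sample_filter([4.0, 3.5, 0.5, 0.2], prev_tokens=[1]))  # → [0]
```

With top_p as low as 0.1, the nucleus step usually keeps only the single most likely token, so the min_p cut mainly matters when the distribution is flatter.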