Update README.md
README.md CHANGED
@@ -22,6 +22,7 @@ tinyllama-3 | 0.3029 | 0.5935 | 0.7329 | 0.5959 | 0.5563 |
tinyllama-2.5 | 0.3191 | 0.5896 | 0.7307 | 0.5872 | 0.5566 |
palmer-002 | 0.3242 | 0.5956 | 0.7345 | 0.5888 | 0.5607 |
babbage-002 | 0.3285 | 0.6380 | 0.7606 | 0.6085 | 0.5839 |
+# note that this is a zero-shot setting, as opposed to the Open LLM Leaderboard's few-shot evals.
```

This model shows exceptional performance and, as of now, is the best tinyllama-sized base model. Furthermore, it supports the LIMA paper's point and serves as a good open-source alternative to OpenAI's `babbage-002`.
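
For anyone who wants to reproduce zero-shot numbers like the ones above, here is a minimal sketch using EleutherAI's lm-evaluation-harness Python API. The Hugging Face repo id `appvoid/palmer-002` and the exact metric key names are assumptions and may need adjusting for your harness version; this is an illustration, not the author's exact evaluation script.

```python
# Minimal sketch: zero-shot evaluation with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The repo id below is an assumption; substitute the
# Hugging Face model you actually want to score.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                  # Hugging Face transformers backend
    model_args="pretrained=appvoid/palmer-002",  # assumed repo id
    tasks=["arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,                               # zero-shot, unlike the Open LLM Leaderboard's few-shot setup
    batch_size=8,
)

# Print accuracy per task; the metric key ("acc,none" vs "acc") depends on the
# harness version, so fall back gracefully.
for task, metrics in results["results"].items():
    acc = metrics.get("acc,none", metrics.get("acc"))
    print(f"{task}: {acc}")
```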