Update README.md #3
by Abdalgader - opened

README.md CHANGED
@@ -66,8 +66,11 @@ print(response[0]['generated_text'])
 <br>

 ## Benchmarks
-We report in the following table our internal pipeline benchmarks
-
+We report in the following table our internal pipeline benchmarks.
+- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
+- We report **raw scores**.
+- We use the same batch size across all models.
+
 <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
 <colgroup>
 <col style="width: 10%;">
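For reference, the setup described in the added lines could be reproduced with something like the sketch below. It uses the lm-evaluation-harness Python entry point (`simple_evaluate`); the model id, task list, and batch size are placeholders rather than the exact configuration behind the reported scores, and the import path and arguments may differ slightly between harness versions.

```python
# Hedged sketch of the benchmark pipeline: lm-evaluation-harness,
# raw (non-normalized) scores, and one fixed batch size for every model.
from lm_eval.evaluator import simple_evaluate

results = simple_evaluate(
    model="hf",                                   # Hugging Face transformers backend
    model_args="pretrained=your-org/your-model",  # placeholder model id
    tasks=["hellaswag", "arc_challenge"],         # placeholder task list
    batch_size=8,                                 # same batch size reused for every model
)

# results["results"] maps each task name to its raw metric values,
# which is what the table in the README reports.
for task, metrics in results["results"].items():
    print(task, metrics)
```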