Files changed (1)
  1. README.md +4 -1
README.md CHANGED
@@ -67,7 +67,10 @@ print(response[0]['generated_text'])
 <br>
 
 ## Benchmarks
-We report in the following table our internal pipeline benchmarks:
+We report in the following table our internal pipeline benchmarks.
+- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
+- We report **raw scores**.
+- We use the same batch size across all models.
 
 
 
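For context, lm-evaluation-harness benchmarks like those referenced in the added bullets are typically run from its CLI. A minimal sketch, assuming a Hugging Face model; the model name and task list below are illustrative placeholders, not values from this diff:

```shell
# Install EleutherAI's lm-evaluation-harness
pip install lm-eval

# Hypothetical invocation: model path and tasks are placeholders.
# --batch_size is fixed explicitly, matching the "same batch size
# across all models" note; raw (non-normalized) scores are what the
# harness reports by default in its output table.
lm_eval --model hf \
  --model_args pretrained=your-org/your-model \
  --tasks hellaswag,arc_challenge \
  --batch_size 8 \
  --output_path results.json
```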