Files changed (1): README.md (+4, −1)
README.md CHANGED
@@ -65,7 +65,10 @@ print(response[0]['generated_text'])
 <br>
 
 # Benchmarks
-We report in the following table our internal pipeline benchmarks:
+We report in the following table our internal pipeline benchmarks.
+- We use [lm-evaluation harness](https://github.com/EleutherAI/lm-evaluation-harness).
+- We report **raw scores**.
+- We use same batch-size across all models.
 
 
 