bleysg committed on
Commit 8a25aa3
1 Parent(s): ffa5716

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -66,7 +66,7 @@ As well, we have evaluated using the methodology and tools for the HuggingFace L
 ## AGIEval Performance
 
 We present our results in two columns.
- The column for "`(Orca Paper eval`" uses the methods outlined in the Orca paper, so as to be a direct apples-to-apples comparison with the results from the paper.
+ The column for "`(Orca Paper eval)`" uses the methods outlined in the Orca paper, so as to be a direct apples-to-apples comparison with the results from the paper.
 The column for "`(HF Leaderboard eval)`" uses EleutherAI's LM Evaluation Harness with settings outlined by HuggingFace. These results are not comparable to the other columns, as the methods are different.
 
 ![OpenOrca Preview2 AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B/resolve/main/Images/OpenOrcaP2AGIEval.png "AGIEval Performance")
@@ -74,7 +74,7 @@ The column for "`(HF Leaderboard eval)`" uses EleutherAI's LM Evaluation Harness
 ## BigBench-Hard Performance
 
 We present our results in two columns.
- The column for "`(Orca Paper eval`" uses the methods outlined in the Orca paper, so as to be a direct apples-to-apples comparison with the results from the paper.
+ The column for "`(Orca Paper eval)`" uses the methods outlined in the Orca paper, so as to be a direct apples-to-apples comparison with the results from the paper.
 The column for "`(HF Leaderboard eval)`" uses EleutherAI's LM Evaluation Harness with settings outlined by HuggingFace. These results are not comparable to the other columns, as the methods are different.
 
 ![OpenOrca Preview2 BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B/resolve/main/Images/OpenOrcaP2BigBenchHardEval.png "BigBench-Hard Performance")