pansophic committed
Commit cc78599
1 Parent(s): a5cfcd4

Update README.md

Files changed (1)
  1. README.md +0 -13
README.md CHANGED
@@ -63,19 +63,6 @@ In AlpacaEval, Rocket 🦝 achieves a near 80% win rate, coupled with an average
  | **Rocket** 🦝 | **79.75** | **1.42** | **1242** |
 
 
- ## Open LLM leaderboard
-
- | Metric | Value |
- |-----------------------|---------------------------|
- | Average | 55.77 |
- | ARC | 50.6 |
- | HellaSwag | 76.69 |
- | MMLU | 47.1 |
- | TruthfulQA | 55.82 |
- | Winogrande | 67.96 |
- | GSM8K | 36.47 |
-
-
  ## Intended uses & limitations
  Initially, we fine-tuned the model using a dataset created by merging and curating multiple datasets, available on the HuggingFace Hub. This dataset will be released to the public soon. We further enhanced the model's performance using DPO, selecting samples from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) and [BAAI/JudgeLM-100K](https://huggingface.co/datasets/BAAI/JudgeLM-100K) datasets. The outcome is a highly effective chat model with a 3 billion parameter scale.
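For context, the README being edited describes Rocket 🦝 as a 3-billion-parameter DPO-tuned chat model. The sketch below shows how such a model would typically be loaded and prompted with 🤗 Transformers; the Hub repo id `pansophic/rocket-3B` and the presence of a bundled chat template are assumptions for illustration, not details taken from this diff.

```python
# Minimal inference sketch. Assumptions: the model lives at the Hub repo id
# "pansophic/rocket-3B" and ships a chat template; adjust both to match the
# actual model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pansophic/rocket-3B"  # assumed repo id, not stated in this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in one sentence what AlpacaEval measures."},
]

# Format the conversation with the tokenizer's chat template and return
# input token ids ready for generation.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```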
 
 