---
datasets:
- heegyu/wizard_vicuna_70k_v2
license: apache-2.0
---
# Hyperparameters
- 3/8 epochs (checkpoint from the 3rd epoch of an 8-epoch training run)
- learning rate 1e-4 -> 1e-5 with cosine decay
- batch size 128
- max sequence length 2048
- AdamW (weight decay=0.01, b1=0.9, b2=0.99, grad_clip=1.0)
- no warmup
- BF16
- Base Model: [openlm-research/open_llama_3b_v2](https://huggingface.co/openlm-research/open_llama_3b_v2)
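
The training script is not included in this card; the sketch below only illustrates how the hyperparameters above might map onto PyTorch. `total_steps` is a hypothetical placeholder, and the cosine-to-floor schedule helper is an assumption based on the stated 1e-4 -> 1e-5 decay, not code from the original run.

```python
# Illustrative sketch only -- not the author's training script.
# Dataset loading, batching (batch size 128, max length 2048) and the
# training loop itself are omitted.
import math
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openlm-research/open_llama_3b_v2",  # base model
    torch_dtype=torch.bfloat16,          # BF16 precision
)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,            # peak learning rate
    betas=(0.9, 0.99),  # b1, b2
    weight_decay=0.01,
)

total_steps = 10_000  # hypothetical; depends on dataset size and epochs

def cosine_to_floor(step: int, floor: float = 0.1) -> float:
    # Cosine decay from 1.0 down to `floor` of the peak lr,
    # i.e. 1e-4 -> 1e-5 (floor = 1e-5 / 1e-4 = 0.1), with no warmup.
    progress = min(step / total_steps, 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return floor + (1.0 - floor) * cosine

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, cosine_to_floor)

# Inside the training loop, gradients would be clipped before each step:
# torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
```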
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
model = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")

# Prompts follow the Wizard-Vicuna chat format: "Human: ...\n\nAssistant: "
inputs = tokenizer(["Human: Hi, nice to meet you!\n\nAssistant: "], return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.batch_decode(outputs, skip_special_tokens=False))
```
Output: `['Human: Hi, nice to meet you!\n\nAssistant: Hello. Great to meet you too. Well, how can I assist you today?<|endoftext|>']`
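
The example above is single-turn. For multi-turn chat, one could extend the same `Human:`/`Assistant:` template by appending each exchange to the prompt. The loop below is a hedged sketch of that pattern; the exact multi-turn format used during training is an assumption inferred from the single-turn example, and the user messages and `max_new_tokens=64` are illustrative.

```python
# Hedged sketch of multi-turn use with the "Human:/Assistant:" template.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")
model = AutoModelForCausalLM.from_pretrained("heegyu/WizardVicuna-open-llama-3b-v2")

history = ""
for user_msg in ["Hi, nice to meet you!", "What can you do?"]:
    history += f"Human: {user_msg}\n\nAssistant: "
    inputs = tokenizer(history, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, dropping <|endoftext|>
    reply = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print("Assistant:", reply)
    history += reply + "\n\n"
```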
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_heegyu__WizardVicuna-open-llama-3b-v2).
| Metric              | Value |
|---------------------|-------|
| Avg.                | 34.11 |
| ARC (25-shot)       | 37.71 |
| HellaSwag (10-shot) | 66.6  |
| MMLU (5-shot)       | 27.23 |
| TruthfulQA (0-shot) | 36.8  |
| Winogrande (5-shot) | 63.3  |
| GSM8K (5-shot)      | 0.99  |
| DROP (3-shot)       | 6.12  |