SondosMB committed
Commit f9edc46 • 1 Parent(s): b870da8

Update constants.py

Files changed (1):
  constants.py +1 -1
constants.py CHANGED
@@ -8,7 +8,7 @@ BANNER = f'<div style="display: flex; justify-content: space-around;"><img src="
 
 INTRODUCTION_TEXT= """
 # OSQ Benchmark (Evaluating LLMs with OSQs and MCQs)
-🔗 [Website](https://vila-lab.github.io/Open-LLM-Leaderboard-Website/) | 💻 [GitHub](https://github.com/VILA-Lab/Open-LLM-Leaderboard) | 📖 [Paper](#) | 🐦 [X1](https://x.com/open_llm_lb) | 🐦 [X2](https://x.com/szq0214)
+🔗 [Website](https://vila-lab.github.io/Open-LLM-Leaderboard-Website/) | 💻 [GitHub](https://github.com/VILA-Lab/Open-LLM-Leaderboard) | 📖 [Paper](https://arxiv.org/pdf/2406.07545) | 🐦 [X1](https://x.com/open_llm_lb) | 🐦 [X2](https://x.com/szq0214)
 
 > ### Open-LLM-Leaderboard,for evaluating large language models (LLMs) by transitioning from multiple-choice questions (MCQs) to open-style questions.
 This approach addresses the inherent biases and limitations of MCQs, such as selection bias and the effect of random guessing. By utilizing open-style questions,