TITLE = """<h1 align="center" id="space-title">GAIA Leaderboard</h1>"""
CANARY_STRING = "" # TODO
INTRODUCTION_TEXT = f"""
Large language models have seen their potential capabilities increased by several orders of magnitude with the introduction of augmentations, from simple prompting adjustments to actual external tooling (calculators, vision models, ...) or online web retrieval.
To evaluate the next generation of LLMs, we argue for a new kind of benchmark, simple yet effective at measuring actual progress on augmented capabilities.
We therefore present GAIA.
GAIA is made of 3 evaluation levels, depending on the added level of tooling and autonomy the model needs.
We expect level 1 to be breakable by very good LLMs, and level 3 to indicate a strong jump in model capabilities.
Each of these levels is divided into two sets: a public dev set, on which people can self-report their results, and a private test set, which will be unlocked once public performance passes a threshold on the dev set.
Please do not repost the public dev set, nor use it in training data for your models. Its canary string is {CANARY_STRING} and files containing this string should be removed from training data.
"""
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""@misc{gaia, # TODO
    author = {tbd},
    title = {General AI Assistant benchmark},
    year = {2023},
    #publisher = {Hugging Face},
    #howpublished = "\url{https://huggingface.co/spaces/gaia-benchmark/}"
}"""


def format_warning(msg):
    return f"<p style='color: orange; font-size: 20px; text-align: center;'>{msg}</p>"


def format_log(msg):
    return f"<p style='color: green; font-size: 20px; text-align: center;'>{msg}</p>"