llm-leaderboard / benchmarks.csv
Ludwig Stumpp
Add lambada results and description column for benchmarks
01a6a43
Benchmark Name,Author,URL,Description
"Chatbot Arena Elo (lmsys)","LMSYS",https://lmsys.org/blog/2023-05-03-arena/,"In this blog post, we introduce Chatbot Arena, an LLM benchmark platform featuring anonymous randomized battles in a crowdsourced manner. Chatbot Arena adopts the Elo rating system, which is a widely-used rating system in chess and other competitive games. (Source: https://lmsys.org/blog/2023-05-03-arena/)"
"LAMBADA","Paperno et al.",https://arxiv.org/abs/1606.06031,"LAMBADA evaluates the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative passages sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole passage, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse. (Source: https://huggingface.co/datasets/lambada)"