arxiv:2307.10928

FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets

Published on Jul 20, 2023
· Submitted by akhaliq on Jul 21, 2023

Abstract

Evaluation of Large Language Models (LLMs) is challenging because aligning to human values requires the composition of multiple skills, and the required set of skills varies depending on the instruction. Recent studies have evaluated the performance of LLMs in two ways: (1) automatic evaluation on several independent benchmarks and (2) human- or machine-based evaluation that assigns an overall score to the response. However, both settings are coarse-grained evaluations that do not consider the nature of user instructions requiring instance-wise skill composition, which limits the interpretation of the true capabilities of LLMs. In this paper, we introduce FLASK (Fine-grained Language Model Evaluation based on Alignment SKill Sets), a fine-grained evaluation protocol that can be used for both model-based and human-based evaluation and that decomposes coarse-level scoring to an instance-wise skill set level. Specifically, we define 12 fine-grained skills needed for LLMs to follow open-ended user instructions and construct an evaluation set by allocating a set of skills to each instance. Additionally, by annotating the target domain and difficulty level of each instance, FLASK provides a holistic view with a comprehensive analysis of a model's performance depending on skill, domain, and difficulty. Using FLASK, we compare multiple open-source and proprietary LLMs and observe highly correlated findings between model-based and human-based evaluations. FLASK enables developers to measure model performance more accurately and to identify how it can be improved by analyzing the factors that make LLMs proficient in particular skills. For practitioners, FLASK can be used to recommend suitable models for particular situations through comprehensive comparison among various LLMs. We release the evaluation data and code implementation at https://github.com/kaistAI/FLASK.
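To make the protocol concrete, here is a minimal Python sketch of what an instance-wise, skill-set-level evaluation record and a per-skill aggregation might look like. The `FlaskInstance` fields, the skill labels, and the `aggregate_by_skill` helper are illustrative assumptions, not the paper's actual schema or API; the released code at https://github.com/kaistAI/FLASK defines the real data format and evaluation pipeline.

```python
# Hypothetical sketch of a FLASK-style instance and per-skill score aggregation.
# All names below are illustrative assumptions, not the repository's real API.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class FlaskInstance:
    instruction: str                  # the open-ended user instruction
    skills: list[str]                 # subset of the 12 fine-grained skills this instance requires
    domain: str                       # annotated target domain
    difficulty: int                   # annotated difficulty level
    skill_scores: dict[str, int] = field(default_factory=dict)  # per-skill ratings from the evaluator


def aggregate_by_skill(instances: list[FlaskInstance]) -> dict[str, float]:
    """Average each skill's score over the instances that require it,
    yielding the skill-level view FLASK reports instead of one overall score."""
    per_skill: dict[str, list[int]] = {}
    for inst in instances:
        for skill, score in inst.skill_scores.items():
            per_skill.setdefault(skill, []).append(score)
    return {skill: float(mean(scores)) for skill, scores in per_skill.items()}


# Illustrative usage with made-up skill names and scores.
example = FlaskInstance(
    instruction="Explain why the sky is blue to a ten-year-old.",
    skills=["factuality", "comprehension", "conciseness"],
    domain="science",
    difficulty=2,
    skill_scores={"factuality": 5, "comprehension": 4, "conciseness": 4},
)
print(aggregate_by_skill([example]))
# {'factuality': 5.0, 'comprehension': 4.0, 'conciseness': 4.0}
```

The same records also carry domain and difficulty annotations, so the identical aggregation can be sliced along those axes to get the holistic skill/domain/difficulty breakdown the abstract describes.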

Community

Very interesting paper! I was wondering whether one could argue there is an inherent bias from GPT-4 towards GPT-3.5, or at least how to minimise that :)

Current evaluation metrics have many problems: they cannot assess human alignment and are not comprehensive... 😥 In this respect, I think the introduction of FLASK can open a new door to more comprehensive assessment of LLMs! 😊 I find this paper fascinating, and I hope it plays a critical role in opening a new era of LLM evaluation!!

