arXiv:2312.07910

PromptBench: A Unified Library for Evaluation of Large Language Models

Published on Dec 13, 2023 · Submitted by akhaliq on Dec 14, 2023

Abstract

The evaluation of large language models (LLMs) is crucial to assess their performance and mitigate potential security risks. In this paper, we introduce PromptBench, a unified library to evaluate LLMs. It consists of several key components that are easily used and extended by researchers: prompt construction, prompt engineering, dataset and model loading, adversarial prompt attack, dynamic evaluation protocols, and analysis tools. PromptBench is designed to be an open, general, and flexible codebase for research purposes that can facilitate original study in creating new benchmarks, deploying downstream applications, and designing new evaluation protocols. The code is available at: https://github.com/microsoft/promptbench and will be continuously supported.
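To make the components listed in the abstract concrete, below is a minimal sketch of an evaluation loop with PromptBench: load a dataset and a model, define prompts, run inference, and score the results. The class and method names (pb.DatasetLoader, pb.LLMModel, pb.Prompt, pb.InputProcess, pb.OutputProcess, pb.Eval) follow the usage pattern shown in the project README at the time of writing, but should be treated as assumptions and verified against the current repository.

```python
# Minimal PromptBench evaluation sketch (API names assumed from the README).
import promptbench as pb

# Load a benchmark dataset (here: SST-2 sentiment classification).
dataset = pb.DatasetLoader.load_dataset("sst2")

# Load a model through the unified model interface.
model = pb.LLMModel(model="google/flan-t5-large",
                    max_new_tokens=10,
                    temperature=0.0001)

# Define one or more prompts; {content} is filled with each example's text.
prompts = pb.Prompt([
    "Classify the sentence as positive or negative: {content}",
])

# Illustrative helper: map the model's free-form output to a label id.
def proj_func(pred):
    mapping = {"positive": 1, "negative": 0}
    return mapping.get(pred.strip().lower(), -1)

for prompt in prompts:
    preds, labels = [], []
    for data in dataset:
        # Fill the prompt template with the current example.
        input_text = pb.InputProcess.basic_format(prompt, data)
        raw_pred = model(input_text)
        # Post-process the raw generation into a class prediction.
        preds.append(pb.OutputProcess.cls(raw_pred, proj_func))
        labels.append(data["label"])
    # Report accuracy for this prompt.
    score = pb.Eval.compute_cls_accuracy(preds, labels)
    print(f"{score:.3f}  {prompt}")
```

The same loop can be extended with the library's adversarial prompt attacks or dynamic evaluation protocols by swapping in the corresponding components; see the repository for the supported options.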

Community


This looks really promising; I love that it combines multiple dimensions of evaluation. This has been lacking in other tools.

