
Metrics

Metrics

Metric

class lighteval.metrics.Metric


( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: <built-in function callable> )

CorpusLevelMetric

class lighteval.metrics.utils.metric_utils.CorpusLevelMetric


( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: <built-in function callable> )

Metric computed over the whole corpus, with computations happening at the aggregation phase

SampleLevelMetric

class lighteval.metrics.utils.metric_utils.SampleLevelMetric


( metric_name: str higher_is_better: bool category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: <built-in function callable> )

Metric computed per sample, then aggregated over the corpus
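
For reference, a custom sample-level metric is built by wiring a per-sample scoring function and a corpus-level aggregation into this dataclass. The sketch below is a minimal illustration rather than guaranteed API: the enum members (MetricCategory.GENERATIVE, MetricUseCase.ACCURACY), the import path of the enums, and the keyword arguments received by sample_level_fn are assumptions.

```python
import numpy as np

from lighteval.metrics.utils.metric_utils import (  # enum import path assumed
    MetricCategory,
    MetricUseCase,
    SampleLevelMetric,
)


def word_count(predictions: list[str], **kwargs) -> float:
    # Hypothetical per-sample scoring function: word count of the first prediction.
    return float(len(predictions[0].split()))


word_count_metric = SampleLevelMetric(
    metric_name="word_count",            # name reported in the results
    higher_is_better=True,
    category=MetricCategory.GENERATIVE,  # assumed enum member
    use_case=MetricUseCase.ACCURACY,     # assumed enum member
    sample_level_fn=word_count,          # called once per sample
    corpus_level_fn=np.mean,             # aggregates the per-sample scores
)
```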

MetricGrouping

class lighteval.metrics.utils.metric_utils.MetricGrouping


( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: dict )

Some metrics are better computed together. For example, if a costly preprocessing step is shared by several metrics, it makes more sense to run it only once.

CorpusLevelMetricGrouping

class lighteval.metrics.utils.metric_utils.CorpusLevelMetricGrouping


( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: dict )

MetricGrouping computed over the whole corpus, with computations happening at the aggregation phase

SampleLevelMetricGrouping

class lighteval.metrics.utils.metric_utils.SampleLevelMetricGrouping


( metric_name: list higher_is_better: dict category: MetricCategory use_case: MetricUseCase sample_level_fn: <built-in function callable> corpus_level_fn: dict )

MetricGrouping computed per sample, then aggregated over the corpus
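
As noted above, a grouping shares a single sample_level_fn that returns one value per metric name, and each name gets its own corpus-level aggregation. A hedged sketch, with the same caveats as the previous example:

```python
import numpy as np

from lighteval.metrics.utils.metric_utils import (  # enum import path assumed
    MetricCategory,
    MetricUseCase,
    SampleLevelMetricGrouping,
)


def length_stats(predictions: list[str], **kwargs) -> dict[str, float]:
    # Shared preprocessing done once; both scores are derived from it.
    text = predictions[0]
    return {"n_words": float(len(text.split())), "n_chars": float(len(text))}


length_metrics = SampleLevelMetricGrouping(
    metric_name=["n_words", "n_chars"],
    higher_is_better={"n_words": True, "n_chars": True},
    category=MetricCategory.GENERATIVE,    # assumed enum member
    use_case=MetricUseCase.SUMMARIZATION,  # assumed enum member
    sample_level_fn=length_stats,
    corpus_level_fn={"n_words": np.mean, "n_chars": np.mean},
)
```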

Corpus Metrics

CorpusLevelF1Score

class lighteval.metrics.metrics_corpus.CorpusLevelF1Score


( average: str num_classes: int = 2 )

compute


( items: list )

Computes the metric score over all the generated items of the corpus, using the scikit-learn implementation.
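
Conceptually, the aggregation reduces to one scikit-learn call over the golds and predictions gathered from every sample, along these lines (illustrative labels, not lighteval's internal item objects):

```python
from sklearn.metrics import f1_score

# Gold and predicted class labels collected from all samples at aggregation time.
golds = [0, 1, 1, 0, 1]
preds = [0, 1, 0, 0, 1]

print(f1_score(golds, preds, average="macro"))   # averaged over classes
print(f1_score(golds, preds, average="binary"))  # positive class only
```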

CorpusLevelPerplexityMetric

class lighteval.metrics.metrics_corpus.CorpusLevelPerplexityMetric


( metric_type: str )

compute


( items: list )

Computes the metric score over all the generated items of the corpus.
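
As a rough sketch of the kind of aggregation performed (the weighting scheme is an assumption based on standard perplexity definitions, not a copy of lighteval's code):

```python
import math

# Per-sample summed log-probabilities and their lengths (e.g. word or byte counts),
# collected at aggregation time. Illustrative numbers only.
logprobs = [-12.3, -8.7, -15.1]
weights = [10, 7, 12]

# Length-normalized perplexity: exp of the negative average log-prob per unit.
weighted_perplexity = math.exp(-sum(logprobs) / sum(weights))

# Bits per byte: the same average expressed in base-2 bits (weights = byte counts).
bits_per_byte = -sum(logprobs) / sum(weights) / math.log(2)

print(weighted_perplexity, bits_per_byte)
```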

CorpusLevelTranslationMetric

class lighteval.metrics.metrics_corpus.CorpusLevelTranslationMetric


( metric_type: str )

compute


( items: list )

Computes the metric score over all the generated items of the corpus, using the sacrebleu implementation.
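
The aggregation delegates to sacrebleu's corpus-level scorers; for example, BLEU, chrF and TER over the collected hypotheses and references (a sketch, independent of how lighteval maps metric_type to a scorer):

```python
import sacrebleu

hypotheses = ["the cat sat on the mat", "hello world"]
references = [["the cat is on the mat", "hello there world"]]  # one reference stream

print(sacrebleu.corpus_bleu(hypotheses, references).score)
print(sacrebleu.corpus_chrf(hypotheses, references).score)
print(sacrebleu.corpus_ter(hypotheses, references).score)
```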

matthews_corrcoef

lighteval.metrics.metrics_corpus.matthews_corrcoef


( items: list ) float

Parameters

  • items (list[dict]) — List of GenerativeCorpusMetricInput

Returns

float

Score

Computes the Matthews Correlation Coefficient, using the scikit-learn implementation.
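
This wraps the scikit-learn implementation applied to the collected gold/prediction pairs, for example:

```python
from sklearn.metrics import matthews_corrcoef

golds = [1, 0, 1, 1, 0]
preds = [1, 0, 0, 1, 0]

print(matthews_corrcoef(golds, preds))  # in [-1, 1]; 1 means perfect agreement
```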

Sample Metrics

ExactMatches

class lighteval.metrics.metrics_sample.ExactMatches


( aggregation_function: typing.Callable[[list[float]], float] = <built-in function max> normalize_gold: typing.Optional[typing.Callable[[str], str]] = None normalize_pred: typing.Optional[typing.Callable[[str], str]] = None strip_strings: bool = False type_exact_match: str = 'full' )

compute


( golds: list predictions: list **kwargs ) float

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

float

Aggregated score over the current sample’s items.

Computes the metric over a list of golds and predictions for one single sample.

compute_one_item


( gold: str pred: str ) float

Parameters

  • gold (str) — One of the possible references
  • pred (str) — One of the possible predictions

Returns

float

The exact match score. Will be 1 for a match, 0 otherwise.

Compares two strings only.
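
The type_exact_match value changes what counts as a match. Below is a hedged re-implementation of the pairwise comparison, assuming "prefix" and "suffix" test whether the prediction starts or ends with the gold:

```python
def exact_match(gold: str, pred: str, type_exact_match: str = "full") -> float:
    # Hypothetical re-implementation for illustration only.
    if not pred:
        return 0.0
    if type_exact_match == "prefix":
        return 1.0 if pred.startswith(gold) else 0.0
    if type_exact_match == "suffix":
        return 1.0 if pred.endswith(gold) else 0.0
    return 1.0 if gold == pred else 0.0


print(exact_match("Paris", "Paris"))                            # 1.0
print(exact_match("Paris", "Paris is the capital.", "prefix"))  # 1.0
print(exact_match("Paris", "Paris is the capital.", "full"))    # 0.0
```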

F1_score

class lighteval.metrics.metrics_sample.F1_score


( aggregation_function: typing.Callable[[list[float]], float] = <built-in function max> normalize_gold: typing.Optional[typing.Callable[[str], str]] = None normalize_pred: typing.Optional[typing.Callable[[str], str]] = None strip_strings: bool = False )

compute


( golds: list predictions: list **kwargs ) float

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

float

Aggregated score over the current sample’s items.

Computes the metric over a list of golds and predictions for one single sample.

compute_one_item


( gold: str pred: str ) float

Parameters

  • gold (str) — One of the possible references
  • pred (str) — One of the possible predictions

Returns

float

The F1 score over the bag of words, computed using NLTK.

Compares two strings only.
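
The per-pair score is a token-level F1 over the two bags of words, along these lines (a simplified sketch using a plain whitespace split instead of the NLTK tokenizer):

```python
from collections import Counter


def bag_of_words_f1(gold: str, pred: str) -> float:
    gold_bag, pred_bag = Counter(gold.split()), Counter(pred.split())
    overlap = sum((gold_bag & pred_bag).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_bag.values())
    recall = overlap / sum(gold_bag.values())
    return 2 * precision * recall / (precision + recall)


print(bag_of_words_f1("the cat sat on the mat", "the cat is on the mat"))  # ≈ 0.83
```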

LoglikelihoodAcc

class lighteval.metrics.metrics_sample.LoglikelihoodAcc


( logprob_normalization: lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None )

compute


( gold_ixs: list choices_logprob: list unconditioned_logprob: list[float] | None choices_tokens: list[list[int]] | None formatted_doc: Doc **kwargs ) int

Parameters

  • gold_ixs (list[int]) — All the gold choices indices
  • choices_logprob (list[float]) — Summed log-probabilities of all the possible choices for the model, ordered as the choices.
  • unconditioned_logprob (list[float] | None) — Unconditioned log-probabilities for PMI normalization, ordered as the choices.
  • choices_tokens (list[list[int]] | None) — Tokenized choices for token normalization, ordered as the choices.
  • formatted_doc (Doc) — Original document for the sample. Used to get the original choices’ length for possible normalization

Returns

int

The eval score: 1 if the best log-prob choice is in gold, 0 otherwise.

Computes the log likelihood accuracy: is the choice with the highest logprob in choices_logprob present in the gold_ixs?
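
Ignoring normalization, the decision rule is an argmax over the choice log-probabilities followed by a membership test, as in this sketch:

```python
import numpy as np

choices_logprob = [-4.2, -1.3, -2.8, -5.0]  # one summed log-prob per choice
gold_ixs = [1]                              # index (or indices) of the correct choice(s)

best_choice = int(np.argmax(choices_logprob))
score = int(best_choice in gold_ixs)  # 1 if the most likely choice is a gold one
print(score)                          # 1
```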

NormalizedMultiChoiceProbability

class lighteval.metrics.metrics_sample.NormalizedMultiChoiceProbability


( log_prob_normalization: lighteval.metrics.normalizations.LogProbCharNorm | lighteval.metrics.normalizations.LogProbTokenNorm | lighteval.metrics.normalizations.LogProbPMINorm | None = None aggregation_function: typing.Callable[[numpy.ndarray], float] = <function max> )

compute


( gold_ixs: list choices_logprob: list unconditioned_logprob: list[float] | None choices_tokens: list[list[int]] | None formatted_doc: Doc **kwargs ) float

Parameters

  • gold_ixs (list[int]) — All the gold choices indices
  • choices_logprob (list[float]) — Summed log-probabilities of all the possible choices for the model, ordered as the choices.
  • unconditioned_logprob (list[float] | None) — Unconditioned log-probabilities for PMI normalization, ordered as the choices.
  • choices_tokens (list[list[int]] | None) — Tokenized choices for token normalization, ordered as the choices.
  • formatted_doc (Doc) — Original document for the sample. Used to get the original choices’ length for possible normalization

Returns

float

The probability of the best log-prob choice being a gold choice.

Computes the log likelihood probability: chance of choosing the best choice.

Probability

class lighteval.metrics.metrics_sample.Probability


( normalization: lighteval.metrics.normalizations.LogProbTokenNorm | None = None aggregation_function: typing.Callable[[numpy.ndarray], float] = <function max> )

compute


( logprobs: list target_tokens: list **kwargs ) float

Parameters

  • logprobs (list[float]) — Log-probabilities computed for the target sequence(s), ordered as the targets.
  • target_tokens (list[list[int]]) — Tokenized target sequence(s), used for token-length normalization.

Returns

float

The probability of producing the gold target(s), aggregated with the aggregation function.

Computes the probability of the gold target(s) from the model’s log-probabilities.

Recall

class lighteval.metrics.metrics_sample.Recall


( at: int )

compute


( choices_logprob: list gold_ixs: list **kwargs ) int

Parameters

  • gold_ixs (list[int]) — All the gold choices indices
  • choices_logprob (list[float]) — Summed log-probabilities of all the possible choices for the model, ordered as the choices.

Returns

int

Score: 1 if one of the n best predicted choices is a gold choice, 0 otherwise.

Computes the recall at the requested depth level: looks at the n best predicted choices (those with the highest log-probabilities) and sees whether an actual gold is among them.
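
A sketch of recall at depth k:

```python
import numpy as np


def recall_at_k(choices_logprob: list[float], gold_ixs: list[int], k: int) -> int:
    # Indices of the k choices with the highest log-probabilities.
    top_k = np.argsort(choices_logprob)[::-1][:k]
    return int(any(ix in gold_ixs for ix in top_k))


print(recall_at_k([-4.2, -1.3, -2.8, -5.0], gold_ixs=[2], k=2))  # 1: choice 2 is in the top 2
```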

MRR

class lighteval.metrics.metrics_sample.MRR


( length_normalization: bool = False )

compute


( choices_logprob: list gold_ixs: list formatted_doc: Doc **kwargs ) float

Parameters

  • gold_ixs (list[int]) — All the gold choices indices
  • choices_logprob (list[float]) — Summed log-probabilities of all the possible choices for the model, ordered as the choices.
  • formatted_doc (Doc) — Original document for the sample. Used to get the original choices’ length for possible normalization

Returns

float

MRR score.

Mean reciprocal rank. Measures the quality of the model’s ranking of the choices, based on the rank of the first correct choice.
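
A sketch of the reciprocal-rank computation, ignoring length normalization:

```python
import numpy as np


def mrr(choices_logprob: list[float], gold_ixs: list[int]) -> float:
    # Rank choices from most to least likely, then find the best-ranked gold choice.
    ranking = list(np.argsort(choices_logprob)[::-1])
    best_gold_rank = min(ranking.index(gold) for gold in gold_ixs)  # 0-based rank
    return 1.0 / (best_gold_rank + 1)


print(mrr([-4.2, -1.3, -2.8, -5.0], gold_ixs=[2]))  # 0.5: the gold choice is ranked second
```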

ROUGE

class lighteval.metrics.metrics_sample.ROUGE


( methods: str | list[str] multiple_golds: bool = False bootstrap: bool = False normalize_gold: <built-in function callable> = None normalize_pred: <built-in function callable> = None aggregation_function: <built-in function callable> = None tokenizer: object = None )

compute


( golds: list predictions: list **kwargs ) float or dict

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

float or dict

Aggregated score over the current sample’s items. If several ROUGE variants have been selected, returns a dict mapping each metric name to its score.

Computes the metric(s) over a list of golds and predictions for one single sample.
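
Per sample, the scoring relies on the rouge_score package; the underlying call looks roughly like this (lighteval adds gold/prediction normalization, multi-gold handling, and aggregation on top):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="the cat sat on the mat",
    prediction="the cat is on the mat",
)
print(scores["rougeL"].fmeasure)  # each entry holds precision, recall and fmeasure
```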

BertScore

class lighteval.metrics.metrics_sample.BertScore


( normalize_gold: <built-in function callable> = None normalize_pred: <built-in function callable> = None )

compute


( golds: list predictions: list **kwargs ) dict

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

dict

Scores over the current sample’s items.

Computes the precision, recall and F1 score using the BERT scorer.
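
With the standalone bert-score package, the equivalent computation looks like the following sketch (the checkpoint and rescaling configuration lighteval uses may differ):

```python
from bert_score import BERTScorer

scorer = BERTScorer(lang="en", rescale_with_baseline=False)
precision, recall, f1 = scorer.score(
    cands=["the cat is on the mat"],
    refs=["the cat sat on the mat"],
)
print(float(precision[0]), float(recall[0]), float(f1[0]))
```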

Extractiveness

class lighteval.metrics.metrics_sample.Extractiveness


( normalize_input: <built-in function callable> = <function remove_braces> normalize_pred: <built-in function callable> = <function remove_braces_and_strip> input_column: str = 'text' )

compute


( predictions: list formatted_doc: Doc **kwargs ) dict[str, float]

Parameters

  • predictions (list[str]) — Predicted strings, a list of length 1.
  • formatted_doc (Doc) — The formatted document.

Returns

dict[str, float]

The extractiveness scores.

Compute the extractiveness of the predictions.

This method calculates coverage, density, and compression scores for a single prediction against the input text.

Faithfulness

class lighteval.metrics.metrics_sample.Faithfulness


( normalize_input: <built-in function callable> = <function remove_braces> normalize_pred: <built-in function callable> = <function remove_braces_and_strip> input_column: str = 'text' )

compute


( predictions: list formatted_doc: Doc **kwargs ) dict[str, float]

Parameters

  • predictions (list[str]) — Predicted strings, a list of length 1.
  • formatted_doc (Doc) — The formatted document.

Returns

dict[str, float]

The faithfulness scores.

Compute the faithfulness of the predictions.

The SummaCZS (Summary Content Zero-Shot) model is used with configurable granularity and model variation.

BLEURT

class lighteval.metrics.metrics_sample.BLEURT


( )

compute


( golds: list predictions: list **kwargs ) float

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

float

Score over the current sample’s items.

Uses the stored BLEURT scorer to compute the score on the current sample.

BLEU

class lighteval.metrics.metrics_sample.BLEU


( n_gram: int )

compute


( golds: list predictions: list **kwargs ) float

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — Predicted strings

Returns

float

Score over the current sample’s items.

Computes the sentence-level BLEU between the golds and each prediction, then takes the average.
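
A sketch of that computation using NLTK's sentence-level BLEU (here with bigram weights, which would correspond to n_gram=2):

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu

golds = ["the cat sat on the mat"]
predictions = ["the cat is on the mat", "the cat sat on a mat"]

scores = [
    sentence_bleu([gold.split() for gold in golds], pred.split(), weights=(0.5, 0.5))
    for pred in predictions
]
print(np.mean(scores))  # average over the sample's predictions
```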

StringDistance

class lighteval.metrics.metrics_sample.StringDistance


( metric_types: list[str] | str strip_prediction: bool = True )

compute


( golds: list predictions: list **kwargs ) dict

Parameters

  • golds (list[str]) — A list of possible golds. If it contains more than one item, only the first one is kept.
  • predictions (list[str]) — Predicted strings.

Returns

dict

The different scores computed

Computes all the requested metrics on the golds and prediction.

edit_similarity


( s1 s2 )

Compute the edit similarity between two lists of strings.

Edit similarity is also used in the paper Lee, Katherine, et al. “Deduplicating training data makes language models better.” arXiv preprint arXiv:2107.06499 (2021).
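
Edit similarity is the Levenshtein edit distance normalized by the longer string's length, subtracted from one; a sketch using NLTK's edit_distance:

```python
from nltk import edit_distance


def edit_similarity(s1: str, s2: str) -> float:
    longest = max(len(s1), len(s2))
    return 1.0 - edit_distance(s1, s2) / longest if longest else 1.0


print(edit_similarity("kitten", "sitting"))  # 3 edits over 7 characters ≈ 0.571
```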

longest_common_prefix_length


( s1: ndarray s2: ndarray )

Compute the length of the longest common prefix.
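
For two token arrays, this is the number of leading positions on which they agree, as in this sketch:

```python
import numpy as np


def longest_common_prefix_length(s1: np.ndarray, s2: np.ndarray) -> int:
    n = min(len(s1), len(s2))
    matches = s1[:n] == s2[:n]  # elementwise comparison of the overlapping prefix
    return int(n if matches.all() else np.argmin(matches))


print(longest_common_prefix_length(np.array([1, 2, 3, 4]), np.array([1, 2, 9, 4])))  # 2
```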

JudgeLLM

class lighteval.metrics.metrics_sample.JudgeLLM


( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )

JudgeLLMMTBench

class lighteval.metrics.metrics_sample.JudgeLLMMTBench


( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )

compute


( predictions: list formatted_doc: Doc **kwargs )

Computes the score of a generative task using an LLM as a judge. The generative task can be multi-turn with at most two turns; in that case, scores are returned for both turn 1 and turn 2. Also returns the user_prompt and judgement, which are later ignored by the aggregator.

JudgeLLMMixEval

class lighteval.metrics.metrics_sample.JudgeLLMMixEval


( judge_model_name: str template: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['openai', 'transformers', 'vllm', 'tgi'] short_judge_name: str | None = None )

compute


( sample_ids: list responses: list formatted_docs: list **kwargs )

Computes the score of a generative task using an LLM as a judge. The generative task can be multi-turn with at most two turns; in that case, scores are returned for both turn 1 and turn 2. Also returns the user_prompt and judgement, which are later ignored by the aggregator.

MajAtK

class lighteval.metrics.metrics_sample.MajAtK


( k: int normalize_gold: <built-in function callable> = None normalize_pred: <built-in function callable> = None strip_strings: bool = False type_exact_match: str = 'full' )

compute


( golds: list predictions: list **kwargs ) float

Parameters

  • golds (list[str]) — Reference targets
  • predictions (list[str]) — k predicted strings

Returns

float

Aggregated score over the current sample’s items.

Computes the metric over a list of golds and predictions for one single sample. It applies normalization (if needed) to the model predictions and the gold, takes the most frequent answer among all the predictions, and compares it to the gold.
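
The majority vote itself can be sketched as follows (normalization and the prefix/suffix match variants are omitted):

```python
from collections import Counter


def maj_at_k(gold: str, predictions: list[str]) -> float:
    # Most frequent answer among the k predictions, then exact match against the gold.
    majority_answer, _ = Counter(predictions).most_common(1)[0]
    return 1.0 if majority_answer == gold else 0.0


print(maj_at_k("42", ["41", "42", "42"]))  # 1.0: "42" wins the vote
```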

LLM-as-a-Judge

JudgeLM

class lighteval.metrics.llm_as_judge.JudgeLM


( model: str templates: typing.Callable process_judge_response: typing.Callable judge_backend: typing.Literal['openai', 'transformers', 'tgi', 'vllm'] url: str | None = None api_key: str | None = None )

Parameters

  • model (str) — The name of the model.
  • templates (Callable) — A function taking into account the question, options, answer, and gold and returning the judge prompt.
  • process_judge_response (Callable) — A function for processing the judge’s response.
  • judge_backend (Literal[“openai”, “transformers”, “tgi”, “vllm”]) — The backend for the judge.
  • url (str | None) — The URL for the OpenAI API.
  • api_key (str | None) — The API key for the OpenAI API (either an OpenAI or a Hugging Face key).

Attributes

  • API_MAX_RETRY (int) — The maximum number of retries for the API.
  • API_RETRY_SLEEP (int) — The time to sleep between retries.
  • client (OpenAI | None) — The OpenAI client.
  • pipe (LLM | AutoModel | None) — The Transformers or vLLM pipeline.

A class representing a judge for evaluating answers using either the OpenAI or Transformers library.

Methods:

  • evaluate_answer — Evaluates an answer using the OpenAI API or Transformers library.
  • lazy_load_client — Lazy loads the OpenAI client or Transformers pipeline.
  • call_api — Calls the API to get the judge’s response.
  • call_transformers — Calls the Transformers pipeline to get the judge’s response.
  • call_vllm — Calls the vLLM pipeline to get the judge’s response.

evaluate_answer


( question: str answer: str options: list[str] | None = None gold: str | None = None )

Parameters

  • question (str) — The prompt asked to the evaluated model
  • answer (str) — The answer given by the evaluated model
  • options (list[str] | None) — The possible options for the question, if any
  • gold (str | None) — The reference answer, if any

Evaluates an answer using either the Transformers library or the OpenAI API.
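
Putting it together, instantiating a judge might look like the following hedged sketch. The template and response-parsing functions are hypothetical, the expected prompt format depends on the chosen backend, and the exact return shape of evaluate_answer is not shown here:

```python
from lighteval.metrics.llm_as_judge import JudgeLM


def build_prompt(question, answer, options=None, gold=None):
    # Hypothetical template: ask the judge for a rating between 1 and 10.
    return [
        {"role": "system", "content": "Rate the answer from 1 to 10. Reply with a number only."},
        {"role": "user", "content": f"Question: {question}\nAnswer: {answer}\nReference: {gold}"},
    ]


def parse_rating(response: str) -> float:
    # Hypothetical parser: keep the first numeric token of the judge's reply.
    numbers = [tok for tok in response.split() if tok.replace(".", "", 1).isdigit()]
    return float(numbers[0]) if numbers else 0.0


judge = JudgeLM(
    model="gpt-4o-mini",  # hypothetical judge model
    templates=build_prompt,
    process_judge_response=parse_rating,
    judge_backend="openai",
)

result = judge.evaluate_answer(question="What is 2 + 2?", answer="4", gold="4")
```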
