Tasks
LightevalTask
LightevalTaskConfig
class lighteval.tasks.lighteval_task.LightevalTaskConfig
< source >( name: str prompt_function: typing.Callable[[dict, str], lighteval.tasks.requests.Doc | None] hf_repo: str hf_subset: str metric: list[lighteval.metrics.utils.metric_utils.Metric | lighteval.metrics.metrics.Metrics] | tuple[lighteval.metrics.utils.metric_utils.Metric | lighteval.metrics.metrics.Metrics, ...] hf_revision: typing.Optional[str] = None hf_filter: typing.Optional[typing.Callable[[dict], bool]] = None hf_avail_splits: typing.Union[list[str], tuple[str, ...], NoneType] = <factory> trust_dataset: bool = False evaluation_splits: list[str] | tuple[str, ...] = <factory> few_shots_split: typing.Optional[str] = None few_shots_select: typing.Optional[str] = None generation_size: typing.Optional[int] = None generation_grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None stop_sequence: typing.Union[list[str], tuple[str, ...], NoneType] = None num_samples: typing.Optional[list[int]] = None suite: list[str] | tuple[str, ...] = <factory> original_num_docs: int = -1 effective_num_docs: int = -1 must_remove_duplicate_docs: bool = False version: int = 0 )
Parameters
- name (str) — Short name of the evaluation task.
- suite (list[str]) — Evaluation suites to which the task belongs.
- prompt_function (Callable[[dict, str], Doc]) — Function used to create the Doc samples from each line of the evaluation dataset.
- hf_repo (str) — Path of the hub dataset repository containing the evaluation information.
- hf_subset (str) — Subset used for the current task; set to "default" if none is selected.
- hf_avail_splits (list[str]) — All the available splits in the evaluation dataset.
- evaluation_splits (list[str]) — List of the splits actually used for this evaluation.
- few_shots_split (str) — Name of the split from which to sample few-shot examples.
- few_shots_select (str) — Method with which to sample few-shot examples.
- generation_size (int) — Maximum allowed size of the generation.
- generation_grammar (TextGenerationInputGrammarType) — The grammar to generate completion according to. Currently only available for TGI and Inference Endpoint models.
- metric (list[Metric]) — List of all the metrics for the current task.
- stop_sequence (list[str]) — Stop sequence which interrupts the generation for generative metrics.
- original_num_docs (int) — Number of documents in the task.
- effective_num_docs (int) — Number of documents used in a specific evaluation.
- truncated_num_docs (bool) — Whether fewer than the total number of documents were used.
- trust_dataset (bool) — Whether to trust the dataset at execution or not.
- version (int) — The version of the task. Defaults to 0. Can be increased if the underlying dataset or the prompt changes.
Stored configuration of a given LightevalTask.
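For illustration, a custom task could be configured roughly as follows; the dataset repository, column names, and metric choice are assumptions, not a prescribed setup:

```python
from lighteval.metrics.metrics import Metrics
from lighteval.tasks.lighteval_task import LightevalTaskConfig
from lighteval.tasks.requests import Doc


def prompt_fn(line: dict, task_name: str) -> Doc:
    # Map one dataset row to a Doc; "question", "choices" and "answer"
    # are assumed column names of a hypothetical dataset.
    return Doc(
        task_name=task_name,
        query=line["question"],
        choices=line["choices"],
        gold_index=line["answer"],  # assumed to be an int index into choices
    )


task = LightevalTaskConfig(
    name="my_task",
    prompt_function=prompt_fn,
    hf_repo="org/my-dataset",  # hypothetical hub repository
    hf_subset="default",
    metric=[Metrics.loglikelihood_acc],
    evaluation_splits=["test"],
    few_shots_split="validation",
    suite=["community"],
)
```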
LightevalTask
class lighteval.tasks.lighteval_task.LightevalTask
< source >( name: str cfg: LightevalTaskConfig cache_dir: typing.Optional[str] = None )
aggregation
< source >( )
Return a dict with metric name and its aggregation function for all metrics.
construct_requests
< source >( formatted_doc: Doc context: str document_id_seed: str current_task_name: str ) → dict[RequestType, List[Request]]
Parameters
- formatted_doc (Doc) — Formatted document almost straight from the dataset.
- context (str) — Context, i.e. the few-shot examples followed by the query.
- document_id_seed (str) — Index of the document in the task, appended with the seed used for the few-shot sampling.
- current_task_name (str) — Name of the current task.
Returns
dict[RequestType, List[Request]]
Dictionary mapping each request type to its list of requests.
Constructs a list of requests from the task based on the given parameters.
eval_docs
< source >( ) → list[Doc]
Returns the evaluation documents.
fewshot_docs
< source >( ) → list[Doc]
Returns
list[Doc]
Documents that will be used for few-shot examples. One document = one few-shot example.
Returns the few-shot documents. If the few-shot documents are not available, it gets them from the few-shot split or the evaluation split.
get_first_possible_fewshot_splits
< source >( available_splits: list[str] | tuple[str, ...] number_of_splits: int = 1 ) → list[str]
Parses the possible few-shot split keys in order (train, then validation keys) and matches them against the available splits, returning the first one available.
load_datasets
< source >( tasks: list dataset_loading_processes: int = 1 )
Load datasets from the HuggingFace Hub for the given tasks.
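As a sketch, the datasets for several tasks could be pre-loaded in parallel before evaluation; here, tasks is assumed to be a dict of LightevalTask instances, e.g. from Registry.get_task_dict:

```python
from lighteval.tasks.lighteval_task import LightevalTask

# "tasks" is assumed to be a {name: LightevalTask} dict built elsewhere;
# four processes load the hub datasets concurrently.
LightevalTask.load_datasets(list(tasks.values()), dataset_loading_processes=4)
```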
PromptManager
class lighteval.tasks.prompt_manager.PromptManager
< source >( task: LightevalTask lm: LightevalModel )
doc_to_fewshot_sorting_class
< source >( formatted_doc: Doc ) → str
In some cases, when selecting few-shot samples, we want to use a document class that is specified separately from the target. For example, when the gold is a JSON object, we may want to use only one of its keys to define the sorting classes of the few-shot samples. Otherwise, the gold is used.
doc_to_target
< source >( formatted_doc: Doc ) → str
Returns the target of the given document.
doc_to_text
< source >( doc: Doc return_instructions: bool = False ) → str
Returns the query of the document without the instructions. If the document has instructions, it removes them from the query.
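A rough usage sketch, assuming a LightevalTask and a LightevalModel have already been constructed elsewhere:

```python
from lighteval.tasks.prompt_manager import PromptManager

# "task" is a LightevalTask and "model" a LightevalModel; both are
# assumed to exist already.
pm = PromptManager(task=task, lm=model)

doc = task.eval_docs()[0]        # one formatted Doc from the task
target = pm.doc_to_target(doc)   # gold target string
query = pm.doc_to_text(doc)      # query without the instructions
```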
Registry
class lighteval.tasks.registry.Registry
< source >( cache_dir: typing.Optional[str] = None custom_tasks: typing.Union[str, pathlib.Path, module, NoneType] = None )
The Registry class is used to manage the task registry and get task classes.
expand_task_definition
< source >( task_definition: str ) → list[str]
get_task_dict
< source >( task_names: list ) → Dict[str, LightevalTask]
Get a dictionary of tasks based on the task name list (suite|task).
Notes:
- Each task in task_names will be instantiated with the corresponding task class.
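A short sketch of resolving task names through the registry; the wildcard task definition below is illustrative:

```python
from lighteval.tasks.registry import Registry

registry = Registry()

# Expand a (possibly wildcard) "suite|task" definition into concrete
# task names; "lighteval|mmlu:*" is an illustrative definition.
names = registry.expand_task_definition("lighteval|mmlu:*")

# Instantiate each task, yielding a {name: LightevalTask} dict.
tasks = registry.get_task_dict(names)
```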
get_task_instance
< source >( task_name: str ) → LightevalTask
Get the task class based on the task name (suite|task).
print_all_tasks
< source >( )
Print all the tasks in the task registry.
Requests
class lighteval.tasks.requests.Request
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list )
Represents a request for a specific task, example, and request within that example in the evaluation process. For example, in the task "boolq", the example "Is the sun hot?" yields the requests "Is the sun hot? Yes" and "Is the sun hot? No".
class lighteval.tasks.requests.LoglikelihoodRequest
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list choice: str tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for log-likelihood evaluation.
class lighteval.tasks.requests.LoglikelihoodSingleTokenRequest
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list choices: list tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for calculating the log-likelihood of a single token. Faster because we can get all the loglikelihoods in one pass.
class lighteval.tasks.requests.LoglikelihoodRollingRequest
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list tokenized_context: list = None tokenized_continuation: list = None )
Represents a request for log-likelihood rolling evaluation.
Inherits from the base Request class.
class lighteval.tasks.requests.GreedyUntilRequest
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list stop_sequence: typing.Union[str, tuple[str], list[str]] generation_size: typing.Optional[int] generation_grammar: typing.Optional[huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType] = None tokenized_context: list = None num_samples: int = None do_sample: bool = False use_logits: bool = False )
Parameters
- stop_sequence (str | list[str]) — The sequence(s) of tokens that indicate when to stop generating text.
- generation_size (int) — The maximum number of tokens to generate.
- generation_grammar (TextGenerationInputGrammarType) — The grammar to generate completion according to. Currently only available for TGI models.
- request_type (RequestType) — The type of the request, set to RequestType.GREEDY_UNTIL.
Represents a request for generating text using the Greedy-Until algorithm.
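For illustration, such a request might be built by hand as follows; the context, stop sequence, and empty metric_categories are simplifications:

```python
from lighteval.tasks.requests import GreedyUntilRequest

req = GreedyUntilRequest(
    task_name="community|my_task",  # "suite|task" identifier (illustrative)
    sample_index=0,
    request_index=0,
    context="Question: Is the sun hot?\nAnswer:",
    metric_categories=[],           # normally filled with MetricCategory values
    stop_sequence=["\n"],
    generation_size=32,
)
```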
class lighteval.tasks.requests.GreedyUntilMultiTurnRequest
< source >( task_name: str sample_index: int request_index: int context: str metric_categories: list stop_sequence: str generation_size: int use_logits: bool = False )
Represents a multi-turn request for generating text using the Greedy-Until algorithm.
Datasets
get_original_order
< source >( new_arr: list ) → list
Get the original order of the data.
get_split_start_end
< source >( split_id: int ) → tuple
Get the start and end indices of a dataset split.
splits_start_end_iterator
< source >( )
Iterator that yields the start and end indices of each dataset split. Also updates the starting batch size for each split (trying to double the batch size every time we move to a new split).
class lighteval.data.LoglikelihoodSingleTokenDataset
< source >( requests: list num_dataset_splits: int )
class lighteval.data.GenerativeTaskDataset
< source >( requests: list num_dataset_splits: int )
init_split_limits
< source >( num_dataset_splits ) → type
Initialises the split limits based on generation parameters. The splits are used to estimate time remaining when evaluating, and in the case of generative evaluations, to group similar samples together.
For generative tasks, self._sorting_criteria outputs:
- a boolean (whether the generation task uses logits)
- a list (the stop sequences)
- the item length (the actual size sorting factor).
In the current function, we create evaluation groups by generation parameters (logits and eos), so that samples with similar properties get batched together afterwards. The samples will then be further organised by length in each split.
class lighteval.data.GenerativeTaskDatasetNanotron
< source >( requests: list num_dataset_splits: int )
class lighteval.data.GenDistributedSampler
< source >( dataset: Dataset num_replicas: typing.Optional[int] = None rank: typing.Optional[int] = None shuffle: bool = True seed: int = 0 drop_last: bool = False )
A distributed sampler that copies the last element only when drop_last is False, so we keep a small amount of padding in the batches, as our samples are sorted by length.
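A sketch of plugging this sampler into a PyTorch DataLoader; the dataset and the explicit num_replicas/rank values are placeholders:

```python
from torch.utils.data import DataLoader

from lighteval.data import GenDistributedSampler

# "dataset" is assumed to be a torch Dataset whose samples are sorted by
# length; num_replicas and rank would normally come from the process group.
sampler = GenDistributedSampler(dataset, num_replicas=2, rank=0, shuffle=False)
loader = DataLoader(dataset, batch_size=8, sampler=sampler)
```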