unitxt.llm_as_judge module

class unitxt.llm_as_judge.LLMAsJudge(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: Literal['rating.single_turn', 'rating.single_turn_with_reference', 'pairwise_comparative_rating.single_turn'] = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32, strip_system_prompt_and_format_from_inputs: bool = True)[source]

Bases: LLMAsJudgeBase, ArtifactFetcherMixin

LLM-as-judge-based metric class for evaluating correctness of generated predictions.

This class uses the source prompt given to the generator and the generator’s predictions to evaluate correctness using one of three supported tasks (rating.single_turn, rating.single_turn_with_reference, pairwise_comparative_rating.single_turn).

main_score

The main score label used for evaluation.

Type:

str

task

The type of task the llm as judge runs. This defines the output and input format of the judge model.

Type:

Literal["rating.single_turn", "rating.single_turn_with_reference", "pairwise_comparative_rating.single_turn"]

template

The template used when generating inputs for the judge llm.

Type:

Template

format

The format used when generating inputs for the judge llm.

Type:

Format

system_prompt

The system prompt used when generating inputs for the judge llm.

Type:

SystemPrompt

strip_system_prompt_and_format_from_inputs

Whether to strip the system prompt and formatting from the inputs that the judged model received, before they are inserted into the llm-as-judge prompt.

Type:

bool

inference_model

The module that runs inference with the judge llm.

Type:

InferenceEngine

reduction_map

A dictionary specifying the reduction method for the metric.

Type:

dict

batch_size

The number of instances sent to the judge model in each bulk inference call.

Type:

int
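
For orientation, a minimal construction sketch follows. The inference engine class, judge model name, and catalog template id are illustrative assumptions and may need to be replaced by entries available in your unitxt installation and local catalog:

   from unitxt.inference import HFPipelineBasedInferenceEngine
   from unitxt.llm_as_judge import LLMAsJudge

   # Wrap the judge model in an inference engine (the engine class and model
   # name are illustrative; any unitxt InferenceEngine can be used here).
   inference_model = HFPipelineBasedInferenceEngine(
       model_name="mistralai/Mistral-7B-Instruct-v0.2", max_new_tokens=256
   )

   # Single-turn rating judge. The template id is assumed to be present in the
   # local unitxt catalog; via ArtifactFetcherMixin it may be given as a string.
   judge_metric = LLMAsJudge(
       inference_model=inference_model,
       template="templates.response_assessment.rating.mt_bench_single_turn",
       task="rating.single_turn",
       main_score="llm_as_judge",
       batch_size=32,
   )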

class unitxt.llm_as_judge.LLMAsJudgeBase(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: str = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32)[source]

Bases: BulkInstanceMetric

LLM-as-judge-base metric class for evaluating correctness of generated predictions.

main_score

The main score label used for evaluation.

Type:

str

task

The type of task the llm as judge runs. This defines the output and input format of the judge model.

Type:

str

template

The template used when generating inputs for the judge llm.

Type:

Template

format

The format used when generating inputs for the judge llm.

Type:

Format

system_prompt

The system prompt used when generating inputs for the judge llm.

Type:

SystemPrompt

inference_model

The module that runs inference with the judge llm.

Type:

InferenceEngine

reduction_map

A dictionary specifying the reduction method for the metric.

Type:

dict

batch_size

The number of instances sent to the judge model in each bulk inference call.

Type:

int

prediction_type: Type | str = typing.Any
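
As a hedged point of reference, reduction_map follows the BulkInstanceMetric convention of mapping a reduction name to the list of score fields it aggregates; 'mean' and 'weighted_win_rate' are the implemented reductions listed in the signature above:

   # Average the per-instance judge score across the stream. "llm_as_judge" is
   # the default main_score of this class; any score field the judge produces
   # can be listed here instead.
   reduction_map = {"mean": ["llm_as_judge"]}
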
class unitxt.llm_as_judge.TaskBasedLLMasJudge(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: str = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32, infer_log_probs: bool = False, judge_to_generator_fields_mapping: Dict[str, str] = {}, prediction_field: str | NoneType = None, include_meta_data: bool = True)[source]

Bases: LLMAsJudgeBase

LLM-as-judge-based metric class for evaluating correctness of generated predictions.

This class can use any task and a matching template to evaluate the predictions. All task/template fields are taken from the instance’s task_data. The instances sent to the judge can be either: (1) a unitxt dataset, in which case the predictions are copied to a specified field of the task, or (2) dictionaries containing the fields required by the task and template.

main_score

The main score label used for evaluation.

Type:

str

task

The type of task the llm as judge runs. This defines the output and input format of the judge model.

Type:

str

template

The template used when generating inputs for the judge llm.

Type:

Template

format

The format used when generating inputs for the judge llm.

Type:

Format

system_prompt

The system prompt used when generating inputs for the judge llm.

Type:

SystemPrompt

strip_system_prompt_and_format_from_inputs

Whether to strip the system prompt and formatting from the inputs that the judged model received, before they are inserted into the llm-as-judge prompt.

Type:

bool

inference_model

The module that runs inference with the judge llm.

Type:

InferenceEngine

reduction_map

A dictionary specifying the reduction method for the metric.

Type:

dict

batch_size

The number of instances sent to the judge model in each bulk inference call.

Type:

int

infer_log_probs

Whether to perform the inference using logprobs. If true, the template’s post-processing must support the logprobs output.

Type:

bool

judge_to_generator_fields_mapping

Optional mapping between the names of the fields in the generator task and the judge task. For example, if the generator task uses "reference_answers" and the judge task expects "ground_truth", include {"ground_truth": "reference_answers"} in this dictionary.

Type:

Dict[str, str]

prediction_field

If specified and a prediction exists, the prediction is copied to this field name in task_data.

Type:

str | None

include_meta_data

Whether to include the per-instance inference metadata in the returned results.

Type:

bool

judge_to_generator_fields_mapping: Dict[str, str] = {}
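
A construction sketch for the task-based judge is shown below. The inference engine and model name are illustrative, and the catalog task/template ids are hypothetical placeholders rather than entries guaranteed to exist:

   from unitxt.inference import HFPipelineBasedInferenceEngine
   from unitxt.llm_as_judge import TaskBasedLLMasJudge

   # Illustrative judge model; any unitxt InferenceEngine can be used.
   inference_model = HFPipelineBasedInferenceEngine(
       model_name="mistralai/Mistral-7B-Instruct-v0.2", max_new_tokens=5
   )

   judge_metric = TaskBasedLLMasJudge(
       inference_model=inference_model,
       # Hypothetical catalog ids for a judge task and a matching template.
       task="tasks.my_judge_task",
       template="templates.my_judge_template",
       # Map the judge task's "ground_truth" field to the generator task's
       # "reference_answers" field found in each instance's task_data.
       judge_to_generator_fields_mapping={"ground_truth": "reference_answers"},
       # Copy each prediction into task_data under this field so the judge
       # task and template can consume it.
       prediction_field="answer",
       main_score="task_based_judge_score",
   )
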
unitxt.llm_as_judge.get_task_data_dict(task_data)[source]