unitxt.llm_as_judge_from_template module¶
- class unitxt.llm_as_judge_from_template.LLMAsJudge(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, ci_method: str = 'BCa', _requirements_list: List[str] | Dict[str, str] = [], requirements: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: Literal['rating.single_turn', 'rating.single_turn_with_reference', 'pairwise_comparative_rating.single_turn'] = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32, strip_system_prompt_and_format_from_inputs: bool = True)[source]¶
Bases:
LLMAsJudgeBase
LLM-as-judge-based metric class for evaluating correctness of generated predictions.
This class uses the source prompt given to the generator and the generator’s predictions to evaluate correctness using one of three supported tasks (rating.single_turn, rating.single_turn_with_reference, pairwise_comparative_rating.single_turn).
- main_score¶
The main score label used for evaluation.
- Type:
str
- task¶
The type of task the llm as judge runs. This defines the output and input format of the judge model.
- Type:
Literal["rating.single_turn", "rating.single_turn_with_reference", "pairwise_comparative_rating.single_turn"]
- system_prompt¶
The system prompt used when generating inputs for judge llm.
- Type:
SystemPrompt
- strip_system_prompt_and_format_from_inputs¶
Whether to strip the system prompt and formatting from the inputs that the model being judged received, when they are inserted into the llm-as-judge prompt.
- Type:
bool
- inference_model¶
The module that creates the inference of the judge llm.
- Type:
InferenceEngine
- reduction_map¶
A dictionary specifying the reduction method for the metric.
- Type:
dict
- batch_size¶
The number of instances processed in each inference bulk.
- Type:
int
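A hedged construction sketch follows. The catalog template id and the Hugging Face pipeline engine used here are illustrative assumptions, not requirements of this class; any Template matching the chosen task and any InferenceEngine can be substituted.

```python
from unitxt.artifact import fetch_artifact
from unitxt.inference import HFPipelineBasedInferenceEngine
from unitxt.llm_as_judge_from_template import LLMAsJudge

# Assumed catalog id for a single-turn rating template; fetch_artifact
# returns the artifact together with its catalog handle.
template, _ = fetch_artifact(
    "templates.response_assessment.rating.mt_bench_single_turn"
)

judge_metric = LLMAsJudge(
    task="rating.single_turn",            # one of the three supported tasks
    template=template,                    # must match the chosen task's fields
    inference_model=HFPipelineBasedInferenceEngine(
        model_name="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical judge model
        max_new_tokens=256,
    ),
    main_score="llm_judge_rating",        # optional; defaults to "llm_as_judge"
    batch_size=8,
)
```

The resulting object behaves like any other bulk-instance metric, so it can be attached to an evaluation run in the same way as non-judge metrics.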
- class unitxt.llm_as_judge_from_template.LLMAsJudgeBase(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, ci_method: str = 'BCa', _requirements_list: List[str] | Dict[str, str] = [], requirements: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: str = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32)[source]¶
Bases:
BulkInstanceMetric
, ArtifactFetcherMixin
LLM-as-judge-base metric class for evaluating correctness of generated predictions.
- main_score¶
The main score label used for evaluation.
- Type:
str
- task¶
The type of task the llm as judge runs. This defines the output and input format of the judge model.
- Type:
str
- system_prompt¶
The system prompt used when generating inputs for judge llm.
- Type:
SystemPrompt
- inference_model¶
The module that creates the inference of the judge llm.
- Type:
InferenceEngine
- reduction_map¶
A dictionary specifying the reduction method for the metric.
- Type:
dict
- batch_size¶
The number of instances processed in each inference bulk.
- Type:
int
- abstract get_metric_results_from_prediction_outputs(outputs: List[Dict[str, Any]]) List[Dict[str, Any]] [source]¶
Generate a scores dictionary for each instance.
Return the list of scores dictionaries for the input instances.
- abstract infer_instances(instances: List[Dict[str, Any]]) List[Dict[str, Any]] [source]¶
Generate the dataset and call the inference engine to generate the judge's predictions.
Return the list of the produced instances with their generated judge predictions.
- prediction_type: Type | str = typing.Any¶
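The two abstract methods above define the extension points for subclasses. The class below is a schematic sketch, not part of the library: it replaces real judge inference with a constant prediction purely to show where each method fits.

```python
from typing import Any, Dict, List

from unitxt.llm_as_judge_from_template import LLMAsJudgeBase


class ConstantScoreJudge(LLMAsJudgeBase):
    """Hypothetical subclass illustrating the abstract-method contract."""

    def infer_instances(self, instances: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        # Real subclasses build the judge dataset from `instances` and run
        # self.inference_model over it; here a dummy prediction is attached.
        for instance in instances:
            instance["judge_prediction"] = "5"
        return instances

    def get_metric_results_from_prediction_outputs(
        self, outputs: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        # Map each judge prediction to a per-instance scores dictionary,
        # keyed by self.main_score ("llm_as_judge" by default).
        return [{self.main_score: float(out["judge_prediction"])} for out in outputs]
```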
- class unitxt.llm_as_judge_from_template.TaskBasedLLMasJudge(data_classification_policy: List[str] = None, main_score: str = 'llm_as_judge', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: List[str] = None, ci_method: str = 'BCa', _requirements_list: List[str] | Dict[str, str] = [], requirements: List[str] | Dict[str, str] = [], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reduction_map: Dict[str, List[str]] | NoneType = None, implemented_reductions: List[str] = ['mean', 'weighted_win_rate'], task: str = __required__, template: unitxt.templates.Template = __required__, system_prompt: unitxt.system_prompts.SystemPrompt = None, format: unitxt.formats.Format = None, inference_model: unitxt.inference.InferenceEngine = __required__, batch_size: int = 32, infer_log_probs: bool = False, judge_to_generator_fields_mapping: Dict[str, str] = {}, prediction_field: str | NoneType = None, include_meta_data: bool = True)[source]¶
Bases:
LLMAsJudgeBase
LLM-as-judge-based metric class for evaluating correctness of generated predictions.
This class can use any task and matching template to evaluate the predictions. All task/template fields are taken from the instance's task_data. The instances sent to the judge can either be: (1) a unitxt dataset, in which case the predictions are copied to a specified field of the task, or (2) dictionaries with the fields required by the task and template. A construction sketch follows at the end of this entry.
- Parameters:
main_score (str) – The main score label used for evaluation.
task (str) – The type of task the llm as judge runs. This defines the output and input format of the judge model.
template (Template) – The template used when generating inputs for the judge llm.
format (Format) – The format used when generating inputs for judge llm.
system_prompt (SystemPrompt) – The system prompt used when generating inputs for judge llm.
strip_system_prompt_and_format_from_inputs (bool) – Whether to strip the system prompt and formatting from the inputs that the model being judged received, when they are inserted into the llm-as-judge prompt.
inference_model (InferenceEngine) – The module that creates the inference of the judge llm.
reduction_map (dict) – A dictionary specifying the reduction method for the metric.
batch_size (int) – The number of instances processed in each inference bulk.
infer_log_probs (bool) – Whether to perform the inference using logprobs. If true, the template's post-processing must support the logprobs output.
judge_to_generator_fields_mapping (Dict[str, str]) – Optional mapping between the names of the fields in the generator task and the judge task. For example, if the generator task uses "reference_answers" and the judge task expects "ground_truth", include {"ground_truth": "reference_answers"} in this dictionary.
prediction_field (str) – If indicated and a prediction exists, copy the prediction to this field name in task_data.
include_meta_data (bool) – Whether to include the per-instance inference metadata in the returned results.
- judge_to_generator_fields_mapping: Dict[str, str] = {}¶
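As an illustration of judge_to_generator_fields_mapping and prediction_field, the sketch below wires a judge to a generator task whose references are stored under reference_answers. The task and template ids and the engine are assumptions; substitute a real judge task/template pair from the catalog.

```python
from unitxt.artifact import fetch_artifact
from unitxt.inference import HFPipelineBasedInferenceEngine
from unitxt.llm_as_judge_from_template import TaskBasedLLMasJudge

# Hypothetical catalog id for a judge template; replace with a real one.
judge_template, _ = fetch_artifact("templates.my_judge.answer_correctness")

correctness_judge = TaskBasedLLMasJudge(
    task="tasks.my_judge.answer_correctness",           # hypothetical judge task id
    template=judge_template,
    inference_model=HFPipelineBasedInferenceEngine(
        model_name="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical judge model
        max_new_tokens=32,
    ),
    # The judge task expects "ground_truth"; the generator task provides
    # the same content under "reference_answers".
    judge_to_generator_fields_mapping={"ground_truth": "reference_answers"},
    # Copy the generator's prediction into this task_data field for the judge.
    prediction_field="answer",
    include_meta_data=False,
)
```

The mapping is keyed by the judge task's field names and valued by the generator task's field names, so only fields whose names differ between the two tasks need to be listed.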