unitxt.metrics module

class unitxt.metrics.Accuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['accuracy']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['accuracy']
prediction_type: Any | str = typing.Any
reduction_map: Dict[str, List[str]] = {'mean': ['accuracy']}
class unitxt.metrics.BertScore(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['f1', 'precision', 'recall'], _requirements_list: ~typing.List[str] = ['bert_score'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['f1', 'precision', 'recall']}, implemented_reductions: ~typing.List[str], hf_metric_name: str = 'bertscore', hf_metric_fields: ~typing.List[str] = ['f1', 'precision', 'recall'], hf_compute_args: dict = {}, hf_additional_input_fields: ~typing.List = [], model_name: str, model_layer: int = None)

Bases: HuggingfaceBulkMetric

ci_scores: List[str] = ['f1', 'precision', 'recall']
hf_metric_fields: List[str] = ['f1', 'precision', 'recall']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['f1', 'precision', 'recall']}
class unitxt.metrics.BinaryAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy_binary', prediction_type: ~typing.Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy_binary'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['accuracy_binary']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

Calculate accuracy for a binary task, using 0.5 as the threshold in the case of float predictions.
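
A minimal illustration of the behavior described above (plain Python, not this class's internal code); whether the boundary value 0.5 itself maps to the positive class is an implementation detail not specified here:

    # Binarize float predictions at 0.5, then score exact match against the references.
    predictions = [0.2, 0.7, 0.51]
    references = [0, 1, 0]

    binarized = [1 if p > 0.5 else 0 for p in predictions]
    accuracy = sum(b == r for b, r in zip(binarized, references)) / len(references)
    print(accuracy)  # 2/3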

ci_scores: List[str] = ['accuracy_binary']
prediction_type

alias of Union[float, int]

reduction_map: Dict[str, List[str]] = {'mean': ['accuracy_binary']}
class unitxt.metrics.BinaryMaxAccuracy(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'max_accuracy_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: GlobalMetric

Calculate the maximal accuracy and the decision threshold that achieves it for a binary task with float predictions.
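
A rough sketch of the idea (illustrative only, not this class's implementation): sweep candidate thresholds over the float predictions and keep the one that maximizes accuracy.

    predictions = [0.1, 0.4, 0.35, 0.8]
    references = [0, 0, 1, 1]

    def accuracy_at(threshold):
        # Accuracy obtained when predictions >= threshold are mapped to the positive class.
        return sum((p >= threshold) == bool(r) for p, r in zip(predictions, references)) / len(references)

    # Use the predicted values themselves as candidate thresholds.
    best_threshold = max(predictions, key=accuracy_at)
    print(best_threshold, accuracy_at(best_threshold))  # 0.35 0.75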

prediction_type

alias of Union[float, int]

class unitxt.metrics.BinaryMaxF1(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'max_f1_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1Binary

Calculate the maximal F1 and the decision threshold that achieves it for a binary task with float predictions.

class unitxt.metrics.BulkInstanceMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]], implemented_reductions: ~typing.List[str])

Bases: StreamOperator, MetricWithConfidenceInterval

class unitxt.metrics.CharEditDistance(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'char_edit_distance', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['char_edit_distance'], _requirements_list: ~typing.List[str] = ['editdistance'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['char_edit_distance']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['char_edit_distance']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['char_edit_distance']}
class unitxt.metrics.CharEditDistanceAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'char_edit_dist_accuracy', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['char_edit_dist_accuracy'], _requirements_list: ~typing.List[str] = ['editdistance'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['char_edit_dist_accuracy']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: CharEditDistance

ci_scores: List[str] = ['char_edit_dist_accuracy']
reduction_map: Dict[str, List[str]] = {'mean': ['char_edit_dist_accuracy']}
class unitxt.metrics.CustomF1(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_micro', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, zero_division: float = 0.0, report_per_group_scores: bool = True)

Bases: GlobalMetric

prediction_type: Any | str = typing.Any
class unitxt.metrics.CustomF1Fuzzy(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_micro', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, zero_division: float = 0.0, report_per_group_scores: bool = True)

Bases: CustomF1

class unitxt.metrics.Detector(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'score', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['transformers', 'torch'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['score']}, implemented_reductions: ~typing.List[str], batch_size: int = 32, model_name: str)

Bases: BulkInstanceMetric

prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['score']}
class unitxt.metrics.F1(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1_macro', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: GlobalMetric

prediction_type

alias of str

class unitxt.metrics.F1Binary(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: GlobalMetric

Calculate F1 for a binary task, using 0.5 as the threshold in the case of float predictions.

prediction_type

alias of Union[float, int]

class unitxt.metrics.F1BinaryPosOnly(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1Binary

class unitxt.metrics.F1Macro(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1_macro', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: F1

class unitxt.metrics.F1MacroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_macro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.F1Micro(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1_micro', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: F1

class unitxt.metrics.F1MicroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_micro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.F1MultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_macro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: GlobalMetric

prediction_type

alias of List[str]

class unitxt.metrics.F1Weighted(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1_weighted', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: F1

class unitxt.metrics.FinQAEval(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'program_accuracy', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['program_accuracy', 'execution_accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['program_accuracy', 'execution_accuracy']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['program_accuracy', 'execution_accuracy']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['program_accuracy', 'execution_accuracy']}
class unitxt.metrics.FixedGroupAbsvalNormCohensHParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['absval_norm_cohens_h_paraphrase', <function FixedGroupAbsvalNormCohensHParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['absval_norm_cohens_h_paraphrase', <function FixedGroupAbsvalNormCohensHParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupAbsvalNormCohensHParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['absval_norm_cohens_h_paraphrase', <function FixedGroupAbsvalNormCohensHParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['absval_norm_cohens_h_paraphrase', <function FixedGroupAbsvalNormCohensHParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupAbsvalNormHedgesGParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['absval_norm_hedges_g_paraphrase', <function FixedGroupAbsvalNormHedgesGParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['absval_norm_hedges_g_paraphrase', <function FixedGroupAbsvalNormHedgesGParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupAbsvalNormHedgesGParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['absval_norm_hedges_g_paraphrase', <function FixedGroupAbsvalNormHedgesGParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['absval_norm_hedges_g_paraphrase', <function FixedGroupAbsvalNormHedgesGParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupMeanAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, True]}}
class unitxt.metrics.FixedGroupMeanBaselineAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean_baseline', <function FixedGroupMeanBaselineAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean_baseline', <function FixedGroupMeanBaselineAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupMeanBaselineStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean_baseline', <function FixedGroupMeanBaselineStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean_baseline', <function FixedGroupMeanBaselineStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupMeanParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean_paraphrase', <function FixedGroupMeanParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean_paraphrase', <function FixedGroupMeanParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupMeanParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean_paraphrase', <function FixedGroupMeanParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean_paraphrase', <function FixedGroupMeanParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupMeanStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, True]}}
class unitxt.metrics.FixedGroupNormCohensHParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['norm_cohens_h_paraphrase', <function FixedGroupNormCohensHParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['norm_cohens_h_paraphrase', <function FixedGroupNormCohensHParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupNormCohensHParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['norm_cohens_h_paraphrase', <function FixedGroupNormCohensHParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['norm_cohens_h_paraphrase', <function FixedGroupNormCohensHParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupNormHedgesGParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['norm_hedges_g_paraphrase', <function FixedGroupNormHedgesGParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['norm_hedges_g_paraphrase', <function FixedGroupNormHedgesGParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupNormHedgesGParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['norm_hedges_g_paraphrase', <function FixedGroupNormHedgesGParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['norm_hedges_g_paraphrase', <function FixedGroupNormHedgesGParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FixedGroupPDRParaphraseAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['pdr_paraphrase', <function FixedGroupPDRParaphraseAccuracy.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['pdr_paraphrase', <function FixedGroupPDRParaphraseAccuracy.<lambda>>, True]}}
class unitxt.metrics.FixedGroupPDRParaphraseStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['pdr_paraphrase', <function FixedGroupPDRParaphraseStringContainment.<lambda>>, True]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['pdr_paraphrase', <function FixedGroupPDRParaphraseStringContainment.<lambda>>, True]}}
class unitxt.metrics.FuzzyNer(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_micro', prediction_type: Any | str = typing.List[typing.Tuple[str, str]], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, zero_division: float = 0.0, report_per_group_scores: bool = True)

Bases: CustomF1Fuzzy

prediction_type

alias of List[Tuple[str, str]]

class unitxt.metrics.GlobalMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: StreamOperator, MetricWithConfidenceInterval

A class for computing metrics that require joint calculations over all instances and are not just an aggregation of the scores of individual instances.

For example, macro_F1 requires the calculation of recall and precision per class, so all instances of each class need to be considered. Accuracy, on the other hand, is just an average of the per-instance accuracy scores.
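
A small illustration of the distinction (using scikit-learn purely for illustration; this is not unitxt code):

    from sklearn.metrics import f1_score

    references = ["cat", "dog", "dog", "bird"]
    predictions = ["cat", "dog", "bird", "bird"]

    # Macro F1 pools per-class precision and recall over all instances,
    # so it cannot be recovered by averaging per-instance scores.
    print(f1_score(references, predictions, average="macro"))

    # Accuracy, in contrast, decomposes into a mean of per-instance 0/1 scores.
    print(sum(r == p for r, p in zip(references, predictions)) / len(references))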

class unitxt.metrics.GroupMeanAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False]}}
class unitxt.metrics.GroupMeanStringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False]}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StringContainment

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False]}}
class unitxt.metrics.GroupMeanTokenOverlap(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['f1', 'precision', 'recall'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False], 'score_fields': ['f1', 'precision', 'recall']}}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: TokenOverlap

reduction_map: Dict[str, List[str]] = {'group_mean': {'agg_func': ['mean', <function nan_mean>, False], 'score_fields': ['f1', 'precision', 'recall']}}
class unitxt.metrics.HuggingfaceBulkMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]], implemented_reductions: ~typing.List[str], hf_metric_name: str, hf_metric_fields: ~typing.List[str], hf_compute_args: dict = {}, hf_additional_input_fields: ~typing.List = [])

Bases: BulkInstanceMetric

hf_compute_args: dict = {}
class unitxt.metrics.HuggingfaceInstanceMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]], reference_field: str = 'references', prediction_field: str = 'prediction', hf_metric_name: str, hf_metric_fields: ~typing.List[str], hf_compute_args: dict = {})

Bases: InstanceMetric

hf_compute_args: dict = {}
class unitxt.metrics.HuggingfaceMetric(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = None, prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, hf_metric_name: str = None, hf_main_score: str = None, scale: float = 1.0, scaled_fields: list = None, hf_compute_args: Dict[str, Any] = {}, hf_additional_input_fields: List = [], hf_additional_input_fields_pass_one_value: List = [], experiment_id: str = 'a2645cda-ed86-4bf1-aa73-b8da760300a9')

Bases: GlobalMetric

class unitxt.metrics.InstanceMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]], reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: StreamOperator, MetricWithConfidenceInterval

Class for metrics for which a global score can be calculated by aggregating the instance scores (possibly with additional instance inputs).

InstanceMetric currently allows two reductions:

1. ‘mean’, which calculates the mean of the instance scores.
2. ‘group_mean’, which first applies the aggregation function specified in the reduction_map to the instance scores grouped by the field grouping_field (which therefore must not be None), and returns the mean of the group scores. See _validate_group_mean_reduction for formatting instructions.
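
For orientation, the reduction_map values used by the metric classes in this module take the following shapes (a sketch only; statistics.mean stands in for the module's nan_mean helper, and _validate_group_mean_reduction defines the exact contract):

    from statistics import mean  # stand-in for the module's nan_mean helper

    # 'mean': average the listed instance scores over all instances.
    simple = {"mean": ["accuracy"]}

    # 'group_mean': aggregate instance scores within each group, then take the mean of
    # the group-level results. agg_func is [name, callable, boolean flag] (the flag is
    # True for the FixedGroup* classes below and False otherwise); the optional
    # 'score_fields' names the scores to aggregate, as in GroupMeanTokenOverlap.
    grouped = {
        "group_mean": {
            "agg_func": ["mean", mean, False],
            "score_fields": ["f1", "precision", "recall"],
        }
    }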

class unitxt.metrics.IsCodeMixed(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'is_code_mixed', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['transformers', 'torch'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['is_code_mixed']}, implemented_reductions: ~typing.List[str], inference_model: ~unitxt.inference.InferenceEngine = None)

Bases: BulkInstanceMetric

Uses a generative model to assess whether a given text is code-mixed.

Our goal is to identify whether a text is code-mixed, i.e., contains a mixture of different languages. The model is asked to identify the language of the text; if the model response begins with a number, we take this as an indication that the text is code-mixed. For example:

- Model response: “The text is written in 2 different languages” (code-mixed)
- Model response: “The text is written in German” (not code-mixed)

Note that this metric is quite tailored to specific model-template combinations, as it relies on the assumption that the model will complete the answer prefix “The text is written in ___” in a particular way.
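
The parsing heuristic described above can be sketched as follows (a simplification for illustration; looks_code_mixed is a hypothetical helper, not the actual implementation):

    def looks_code_mixed(model_response: str) -> bool:
        # A completion of the answer prefix that starts with a digit
        # ("... 2 different languages") is taken as a sign of code-mixing.
        completion = model_response.removeprefix("The text is written in ").strip()
        return completion[:1].isdigit()

    print(looks_code_mixed("The text is written in 2 different languages"))  # True
    print(looks_code_mixed("The text is written in German"))                 # False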

prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['is_code_mixed']}
class unitxt.metrics.JaccardIndex(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'jaccard_index', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['jaccard_index'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['jaccard_index']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['jaccard_index']
prediction_type: Any | str = typing.Any
reduction_map: Dict[str, List[str]] = {'mean': ['jaccard_index']}
class unitxt.metrics.KPA(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1_micro', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, zero_division: float = 0.0, report_per_group_scores: bool = True)

Bases: CustomF1

prediction_type

alias of str

class unitxt.metrics.KendallTauMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'kendalltau_b', prediction_type: ~typing.Any | str = <class 'float'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['scipy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: GlobalMetric

prediction_type

alias of float

class unitxt.metrics.LlamaIndexCorrectness(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = ['public'], main_score: str = '', prediction_type: str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = 'correctness_', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['llama_index'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = None, reference_field: str = 'references', prediction_field: str = 'prediction', model_name: str = '', openai_models: ~typing.List[str] = ['gpt-3.5-turbo'], anthropic_models: ~typing.List[str] = [], mock_models: ~typing.List[str] = ['mock'])

Bases: LlamaIndexLLMMetric

LlamaIndex based metric class for evaluating correctness.

class unitxt.metrics.LlamaIndexFaithfulness(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = ['public'], main_score: str = '', prediction_type: str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = 'faithfulness_', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['llama_index'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = None, reference_field: str = 'references', prediction_field: str = 'prediction', model_name: str = '', openai_models: ~typing.List[str] = ['gpt-3.5-turbo'], anthropic_models: ~typing.List[str] = [], mock_models: ~typing.List[str] = ['mock'])

Bases: LlamaIndexLLMMetric

LlamaIndex based metric class for evaluating faithfulness.

class unitxt.metrics.LlamaIndexLLMMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = ['public'], main_score: str = '', prediction_type: str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['llama_index'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = None, reference_field: str = 'references', prediction_field: str = 'prediction', model_name: str = '', openai_models: ~typing.List[str] = ['gpt-3.5-turbo'], anthropic_models: ~typing.List[str] = [], mock_models: ~typing.List[str] = ['mock'])

Bases: InstanceMetric

anthropic_models: List[str] = []
data_classification_policy: List[str] = ['public']
external_api_models = ['gpt-3.5-turbo']
mock_models: List[str] = ['mock']
openai_models: List[str] = ['gpt-3.5-turbo']
prediction_type

alias of str

class unitxt.metrics.MAP(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'map', prediction_type: ~typing.Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['map'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['map']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: RetrievalMetric

ci_scores: List[str] = ['map']
reduction_map: Dict[str, List[str]] = {'mean': ['map']}
class unitxt.metrics.MRR(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'mrr', prediction_type: ~typing.Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['mrr'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['mrr']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: RetrievalMetric

ci_scores: List[str] = ['mrr']
reduction_map: Dict[str, List[str]] = {'mean': ['mrr']}
class unitxt.metrics.MatthewsCorrelation(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'matthews_correlation', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, hf_metric_name: str = 'matthews_correlation', hf_main_score: str = None, scale: float = 1.0, scaled_fields: list = None, hf_compute_args: ~typing.Dict[str, ~typing.Any] = {}, hf_additional_input_fields: ~typing.List = [], hf_additional_input_fields_pass_one_value: ~typing.List = [], experiment_id: str = 'c79b1562-18ba-47f1-9c97-e13f43f7f61c')

Bases: HuggingfaceMetric

prediction_type

alias of str

class unitxt.metrics.MaxAccuracy(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'accuracy', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['accuracy'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'max': ['accuracy']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: Accuracy

Calculate the maximal accuracy over all instances as the global score.

reduction_map: Dict[str, List[str]] = {'max': ['accuracy']}
class unitxt.metrics.Meteor(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'meteor', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['meteor'], _requirements_list: ~typing.List[str] = ['nltk'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['meteor']}, reference_field: str = 'references', prediction_field: str = 'prediction', alpha: float = 0.9, beta: int = 3, gamma: float = 0.5)

Bases: InstanceMetric

ci_scores: List[str] = ['meteor']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['meteor']}
class unitxt.metrics.Metric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '')

Bases: Artifact

prediction_type: Any | str = typing.Any
class unitxt.metrics.MetricPipeline(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = None, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', caching: bool = None, preprocess_steps: ~typing.List[~unitxt.operator.StreamingOperator] | None, postpreprocess_steps: ~typing.List[~unitxt.operator.StreamingOperator] | None, metric: ~unitxt.metrics.Metric = None)

Bases: MultiStreamOperator, Metric

class unitxt.metrics.MetricWithConfidenceInterval(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = None, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None)

Bases: Metric

class unitxt.metrics.MetricsEnsemble(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'ensemble_score', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['ensemble_score']}, reference_field: str = 'references', prediction_field: str = 'prediction', metrics: ~typing.List[~unitxt.metrics.Metric | str], weights: ~typing.List[float] = None)

Bases: InstanceMetric

Metrics ensemble class for creating an ensemble of the given metrics.

main_score

The main score label used for evaluation.

Type:

str

metrics

List of metrics to be ensembled.

Type:

List[Union[Metric, str]]

weights

Weight of each of the metrics.

Type:

List[float]

reduction_map

Specifies the reduction method for the global score (see its definition in the InstanceMetric class). By default, this class reduces by the mean of the main score.

Type:

Dict[str, List[str]]

reduction_map: Dict[str, List[str]] = {'mean': ['ensemble_score']}
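
Going by the constructor signature above, an ensemble can be assembled from Metric objects or catalog names, optionally with weights. A sketch (the catalog identifiers are assumed to exist, and catalog resolution/verification behavior is not shown):

    from unitxt.metrics import MetricsEnsemble

    ensemble = MetricsEnsemble(
        metrics=["metrics.accuracy", "metrics.f1_macro"],  # Metric objects or catalog names
        weights=[0.7, 0.3],                                # optional; the signature's default is None
    )
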
class unitxt.metrics.NDCG(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'nDCG', prediction_type: Any | str = typing.Union[float, NoneType], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: GlobalMetric

Normalized Discounted Cumulative Gain: measures the quality of ranking with respect to ground truth ranking scores.

As this measures ranking, it is a global metric that can only be calculated over groups of instances. In the common use case where the instances are grouped by different queries, i.e., where the task is to provide a relevance score for a search result w.r.t. a query, an nDCG score is calculated for each query (specified in the “query” input field of an instance) and the final score is the average across all queries.

Note that the expected scores are relevance scores (i.e., higher is better) and not rank indices. The absolute value of the scores is only meaningful for the reference scores; for the predictions, only the ordering of the scores affects the outcome. For example, predicted scores of [80, 1, 2] and [0.8, 0.5, 0.6] will receive the same nDCG score w.r.t. a given set of reference scores.

See also https://en.wikipedia.org/wiki/Discounted_cumulative_gain
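
The invariance to the scale of predicted scores can be checked directly with scikit-learn (listed in this metric's requirements); this is an illustration, not the metric's own code:

    from sklearn.metrics import ndcg_score

    references = [[3, 1, 2]]                          # ground-truth relevance scores (higher is better)
    print(ndcg_score(references, [[80, 1, 2]]))       # same ordering ...
    print(ndcg_score(references, [[0.8, 0.5, 0.6]]))  # ... hence the same nDCG score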

prediction_type

alias of Optional[float]

class unitxt.metrics.NER(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1_micro', prediction_type: Any | str = typing.List[typing.Tuple[str, str]], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, zero_division: float = 0.0, report_per_group_scores: bool = True)

Bases: CustomF1

prediction_type

alias of List[Tuple[str, str]]

class unitxt.metrics.NormalizedSacrebleu(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'sacrebleu', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] | ~typing.Dict[str, str] = {'mecab_ko': '\n\nAdditional dependencies required. To install them, run:\n`pip install "sacrebleu[ko]"`.\n\nFor MacOS: If error on \'mecab-config\' show up during installation ], one should run:\n\n`brew install mecab`\n`pip install "sacrebleu[ko]"`\n\n', 'mecab_ko_dic': '\n\nAdditional dependencies required. To install them, run:\n`pip install "sacrebleu[ko]"`.\n\nFor MacOS: If error on \'mecab-config\' show up during installation ], one should run:\n\n`brew install mecab`\n`pip install "sacrebleu[ko]"`\n\n'}, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, hf_metric_name: str = 'sacrebleu', hf_main_score: str = 'score', scale: float = 100.0, scaled_fields: list = ['sacrebleu', 'precisions'], hf_compute_args: ~typing.Dict[str, ~typing.Any] = {}, hf_additional_input_fields: ~typing.List = [], hf_additional_input_fields_pass_one_value: ~typing.List = ['tokenize'], experiment_id: str = 'dad3ba2c-96a0-499d-87f3-bcf0ab23efba')

Bases: HuggingfaceMetric

hf_additional_input_fields_pass_one_value: List = ['tokenize']
prediction_type

alias of str

scaled_fields: list = ['sacrebleu', 'precisions']
class unitxt.metrics.Perplexity(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'perplexity', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['transformers', 'torch'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['perplexity']}, implemented_reductions: ~typing.List[str], source_template: str, target_template: str, batch_size: int = 32, model_name: str, single_token_mode: bool = False)

Bases: BulkInstanceMetric

Computes the likelihood of generating text Y after text X, i.e., P(Y|X).
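
For reference, the likelihood accumulates per-token conditional probabilities P(y_t | X, y_<t). The sketch below uses hypothetical per-token log-probabilities and shows the standard perplexity definition, which may be normalized differently from the score this class reports (the class itself batches a HuggingFace model over source/target templates, not shown here):

    import math

    token_logprobs = [-0.1, -2.3, -0.7]   # hypothetical values of log P(y_t | X, y_<t)
    log_likelihood = sum(token_logprobs)  # log P(Y|X)
    print(math.exp(log_likelihood))                          # P(Y|X)
    print(math.exp(-log_likelihood / len(token_logprobs)))   # per-token perplexity (standard definition)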

prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['perplexity']}
class unitxt.metrics.PrecisionBinary(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'precision_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1Binary

class unitxt.metrics.PrecisionMacroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'precision_macro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.PrecisionMicroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'precision_micro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.RecallBinary(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'recall_binary', prediction_type: Any | str = typing.Union[float, int], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['sklearn'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1Binary

class unitxt.metrics.RecallMacroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'recall_macro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.RecallMicroMultiLabel(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'recall_micro', prediction_type: Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: F1MultiLabel

class unitxt.metrics.RegardMetric(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'regard', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['transformers', 'torch', 'tqdm'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, model_name: str = 'sasha/regardv3', batch_size: int = 32)

Bases: GlobalMetric

prediction_type: Any | str = typing.Any
class unitxt.metrics.RemoteMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = ['public', 'proprietary'], main_score: str = None, prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, endpoint: str, metric_name: str, api_key: str = None)

Bases: StreamOperator, Metric

A metric that runs another metric remotely.

Parameters:
  • main_score – the score updated by this metric.

  • endpoint – the remote host that supports the remote metric execution.

  • metric_name – the name of the metric that is executed remotely.

  • api_key – optional; passed to the remote metric with the input, allows secure authentication.

data_classification_policy: List[str] = ['public', 'proprietary']
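
A minimal construction sketch based on the fields in the signature above; the endpoint URL and metric name are hypothetical placeholders, not values shipped with the library.

    from unitxt.metrics import RemoteMetric

    remote_metric = RemoteMetric(
        main_score="f1_micro",                           # score updated by this metric
        endpoint="https://metrics.example.com/compute",  # hypothetical remote host
        metric_name="metrics.f1_micro",                  # hypothetical name of the remote metric
        api_key=None,                                    # optional authentication token
    )
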
class unitxt.metrics.RerankRecall(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'recall_at_5', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = None, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['pandas', 'pytrec_eval'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, query_id_field: str = 'query_id', passage_id_field: str = 'passage_id', at_k: List[int] = [1, 2, 5])

Bases: GlobalMetric

RerankRecall: measures the quality of reranking with respect to ground truth ranking scores.

This metric measures ranking performance across a dataset. The references for a query have a score of 1 for the gold passage and 0 for all other passages. The model returns a score in [0, 1] for each (passage, query) pair. Recall at k is computed by testing whether the predicted score for the gold (passage, query) pair is at least the k’th highest among all passages for that query: a query receives 1 if so, and 0 if not. These 1’s and 0’s are averaged across the dataset.

query_id_field selects the field containing the query id for an instance, passage_id_field selects the field containing the passage id, and at_k selects the values of k at which recall is computed.

at_k: List[int] = [1, 2, 5]
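
The per-query decision described above can be sketched as follows; this is an illustrative reimplementation, not the library's pytrec_eval-based code, and the function name and data layout are hypothetical.

    from typing import Dict

    def recall_at_k_for_query(passage_scores: Dict[str, float], gold_passage_id: str, k: int) -> int:
        # The query receives 1 if the gold passage's predicted score is at least
        # the k'th highest among all passages for that query, and 0 otherwise.
        top_k = sorted(passage_scores.values(), reverse=True)[:k]
        return int(passage_scores[gold_passage_id] >= min(top_k))

    # The gold passage "p2" is ranked second: recall at 1 is 0, recall at 2 is 1.
    scores = {"p1": 0.9, "p2": 0.7, "p3": 0.1}
    print(recall_at_k_for_query(scores, "p2", 1), recall_at_k_for_query(scores, "p2", 2))  # 0 1
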
class unitxt.metrics.RetrievalAtK(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = None, prediction_type: ~typing.Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = None, reference_field: str = 'references', prediction_field: str = 'prediction', k_list: ~typing.List[int])

Bases: RetrievalMetric

class unitxt.metrics.RetrievalMetric(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str, prediction_type: ~typing.Any | str = typing.List[str], single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]], reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

prediction_type

alias of List[str]

class unitxt.metrics.Reward(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'score', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['transformers', 'torch'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['score']}, implemented_reductions: ~typing.List[str], batch_size: int = 32, model_name: str)

Bases: BulkInstanceMetric

prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['score']}
class unitxt.metrics.RocAuc(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'roc_auc', prediction_type: ~typing.Any | str = <class 'float'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['sklearn'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None)

Bases: GlobalMetric

prediction_type

alias of float

class unitxt.metrics.Rouge(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'rougeL', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], _requirements_list: ~typing.List[str] = ['nltk', 'rouge_score'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']}, reference_field: str = 'references', prediction_field: str = 'prediction', rouge_types: ~typing.List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], sent_split_newline: bool = True)

Bases: InstanceMetric

ci_scores: List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']}
rouge_types: List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
class unitxt.metrics.RougeHF(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'rougeL', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], _requirements_list: ~typing.List[str] = ['nltk', 'rouge_score'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']}, reference_field: str = 'references', prediction_field: str = 'prediction', hf_metric_name: str = 'rouge', hf_metric_fields: ~typing.List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], hf_compute_args: dict = {}, rouge_types: ~typing.List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum'], sent_split_newline: bool = True)

Bases: HuggingfaceInstanceMetric

ci_scores: List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
hf_metric_fields: List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']}
rouge_types: List[str] = ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
class unitxt.metrics.SafetyMetric(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'safety', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, _requirements_list: List[str] = ['transformers'], caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, reward_name: str = 'OpenAssistant/reward-model-deberta-v3-large-v2', batch_size: int = 100, critical_threshold: int = -5, high_threshold: int = -4, medium_threshold: int = -3)

Bases: GlobalMetric

prediction_type: Any | str = typing.Any
class unitxt.metrics.SentenceBert(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'score', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['sentence_transformers', 'torch', 'transformers'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['score']}, implemented_reductions: ~typing.List[str], batch_size: int = 32, model_name: str)

Bases: BulkInstanceMetric

reduction_map: Dict[str, List[str]] = {'mean': ['score']}
class unitxt.metrics.Spearmanr(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'spearmanr', prediction_type: ~typing.Any | str = <class 'float'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, hf_metric_name: str = 'spearmanr', hf_main_score: str = None, scale: float = 1.0, scaled_fields: list = None, hf_compute_args: ~typing.Dict[str, ~typing.Any] = {}, hf_additional_input_fields: ~typing.List = [], hf_additional_input_fields_pass_one_value: ~typing.List = [], experiment_id: str = 'ee0340cb-ad89-4868-b6f2-a6f476ea7a21')

Bases: HuggingfaceMetric

prediction_type

alias of float

class unitxt.metrics.Squad(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'f1', prediction_type: Any | str = typing.Dict[str, typing.Any], single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None, hf_metric_name: str = 'squad', hf_main_score: str = None, scale: float = 100.0, scaled_fields: list = ['f1', 'exact_match'], hf_compute_args: Dict[str, Any] = {}, hf_additional_input_fields: List = [], hf_additional_input_fields_pass_one_value: List = [], experiment_id: str = '84938a1b-d6d9-4747-b9d0-d3f594e1d2a1')

Bases: HuggingfaceMetric

prediction_type

alias of Dict[str, Any]

scaled_fields: list = ['f1', 'exact_match']
class unitxt.metrics.StringContainment(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'string_containment', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['string_containment'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['string_containment']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['string_containment']
prediction_type: Any | str = typing.Any
reduction_map: Dict[str, List[str]] = {'mean': ['string_containment']}
class unitxt.metrics.TokenOverlap(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'f1', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['f1', 'precision', 'recall'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['f1', 'precision', 'recall']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['f1', 'precision', 'recall']
prediction_type

alias of str

reduction_map: Dict[str, List[str]] = {'mean': ['f1', 'precision', 'recall']}
class unitxt.metrics.UnsortedListExactMatch(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'unsorted_list_exact_match', prediction_type: ~typing.Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 1000, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = ['unsorted_list_exact_match'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, implemented_reductions: ~typing.List[str], reduction_map: ~typing.Dict[str, ~typing.List[str]] = {'mean': ['unsorted_list_exact_match']}, reference_field: str = 'references', prediction_field: str = 'prediction')

Bases: InstanceMetric

ci_scores: List[str] = ['unsorted_list_exact_match']
reduction_map: Dict[str, List[str]] = {'mean': ['unsorted_list_exact_match']}
class unitxt.metrics.UpdateStream(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, update: dict)

Bases: InstanceOperator

class unitxt.metrics.WeightedWinRateCorrelation(__tags__: Dict[str, str] = {}, data_classification_policy: List[str] = None, main_score: str = 'spearman_corr', prediction_type: Any | str = typing.Any, single_reference_per_prediction: bool = False, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: List[str] = None, caching: bool = None, apply_to_streams: List[str] = None, dont_apply_to_streams: List[str] = None)

Bases: GlobalMetric

class unitxt.metrics.Wer(__tags__: ~typing.Dict[str, str] = {}, data_classification_policy: ~typing.List[str] = None, main_score: str = 'wer', prediction_type: ~typing.Any | str = <class 'str'>, single_reference_per_prediction: bool = True, score_prefix: str = '', n_resamples: int = 100, confidence_level: float = 0.95, ci_scores: ~typing.List[str] = None, _requirements_list: ~typing.List[str] = ['jiwer'], caching: bool = None, apply_to_streams: ~typing.List[str] = None, dont_apply_to_streams: ~typing.List[str] = None, hf_metric_name: str = 'wer', hf_main_score: str = None, scale: float = 1.0, scaled_fields: list = None, hf_compute_args: ~typing.Dict[str, ~typing.Any] = {}, hf_additional_input_fields: ~typing.List = [], hf_additional_input_fields_pass_one_value: ~typing.List = [], experiment_id: str = '773950b0-58f5-4e13-a18e-3218b42e9f95')

Bases: HuggingfaceMetric

prediction_type

alias of str

unitxt.metrics.abstract_factory()
unitxt.metrics.abstract_field()
unitxt.metrics.interpret_effect_size(x: float)

Return a string rule-of-thumb interpretation of an effect size value, as defined by Cohen/Sawilowsky.

See https://en.wikipedia.org/wiki/Effect_size; Cohen, Jacob (1988). Statistical Power Analysis for the Behavioral Sciences; and Sawilowsky, S (2009). “New effect size rules of thumb”. Journal of Modern Applied Statistical Methods. 8 (2): 467-474.

Value has the following interpretation:
  • essentially 0 if |x| < 0.01

  • very small if 0.01 <= |x| < 0.2

  • small difference if 0.2 <= |x| < 0.5

  • a medium difference if 0.5 <= |x| < 0.8

  • a large difference if 0.8 <= |x| < 1.2

  • a very large difference if 1.2 <= |x| < 2.0

  • a huge difference if 2.0 <= |x|

Parameters:

x – float effect size value

Returns:

string interpretation
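
A minimal sketch of the rule of thumb above; the exact strings returned by the library may differ.

    def interpret_effect_size_sketch(x: float) -> str:
        # Cohen/Sawilowsky thresholds listed above, applied to |x|.
        thresholds = [
            (0.01, "essentially 0"),
            (0.2, "very small"),
            (0.5, "small difference"),
            (0.8, "a medium difference"),
            (1.2, "a large difference"),
            (2.0, "a very large difference"),
        ]
        for bound, label in thresholds:
            if abs(x) < bound:
                return label
        return "a huge difference"

    print(interpret_effect_size_sketch(0.6))  # a medium difference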

unitxt.metrics.mean_subgroup_score(subgroup_scores_dict: Dict[str, List], subgroup_types: List[str])

Return the mean instance score for a subset (possibly a single type) of variants (not a comparison).

Parameters:
  • subgroup_scores_dict – dict where keys are subgroup types and values are lists of instance scores.

  • subgroup_types – the keys (subgroup types) for which the average will be computed.

Returns:

float score
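
A minimal sketch of the averaging described above, assuming no NaN scores; the function name and example data are illustrative.

    from statistics import mean
    from typing import Dict, List

    def mean_subgroup_score_sketch(subgroup_scores_dict: Dict[str, List[float]],
                                   subgroup_types: List[str]) -> float:
        # Pool the instance scores of the requested subgroup types and average them.
        pooled = [s for t in subgroup_types for s in subgroup_scores_dict[t]]
        return mean(pooled)

    scores = {"original": [1.0, 0.0, 1.0], "paraphrase": [0.0, 1.0]}
    print(mean_subgroup_score_sketch(scores, ["paraphrase"]))  # 0.5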

unitxt.metrics.nan_max(x)
unitxt.metrics.nan_mean(x)
unitxt.metrics.normalize_answer(s)

Lowercase the text and remove punctuation, articles, and extra whitespace.
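
A sketch of that normalization (the classic SQuAD-style cleanup); the library's implementation details may differ slightly.

    import re
    import string

    def normalize_answer_sketch(s: str) -> str:
        # Lowercase, drop punctuation, remove the articles a/an/the,
        # and collapse extra whitespace.
        s = s.lower()
        s = "".join(ch for ch in s if ch not in string.punctuation)
        s = re.sub(r"\b(a|an|the)\b", " ", s)
        return " ".join(s.split())

    print(normalize_answer_sketch("The  quick, brown fox!"))  # quick brown fox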

unitxt.metrics.normalized_cohens_h(subgroup_scores_dict: Dict[str, List], control_subgroup_types: List[str], comparison_subgroup_types: List[str], interpret=False)

Cohen’s h effect size between two proportions, normalized to interval [-1,1].

Allows for a change-type metric when the baseline is 0 (where percentage change, and thus PDR, is undefined). See https://en.wikipedia.org/wiki/Cohen%27s_h.

Cohen’s h effect size between two proportions p2 and p1 is 2 * (arcsin(sqrt(p2)) - arcsin(sqrt(p1))). h lies in [-pi, pi], with +/-pi representing the largest increase/decrease (p1=0, p2=1, or p1=1, p2=0), and h=0 representing no change. Unlike percentage change, h is defined even if the baseline (p1) is 0. The scores are assumed to lie in [0, 1], either continuous or binary, so the average of a group of scores is a proportion. The function calculates the change in the average of the comparison-group scores relative to the average of the control-group scores, and rescales the result from [-pi, pi] to [-1, 1] for clarity, so that +/-1 are the most extreme changes and 0 is no change.

Interpretation: the original, unscaled Cohen’s h can be interpreted according to the function interpret_effect_size. Thus, the rule of thumb for interpreting the normalized value is to use the same thresholds divided by pi:
  • essentially 0 if |norm h| < 0.0031831

  • very small if 0.0031831 <= |norm h| < 0.06366198

  • small difference if 0.06366198 <= |norm h| < 0.15915494

  • a medium difference if 0.15915494 <= |norm h| < 0.25464791

  • a large difference if 0.25464791 <= |norm h| < 0.38197186

  • a very large difference if 0.38197186 <= |norm h| < 0.63661977

  • a huge difference if 0.63661977 <= |norm h|

Parameters:
  • subgroup_scores_dict – dict where keys are subgroup types and values are lists of instance scores.

  • control_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the control (baseline) group

  • comparison_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the group to be compared to the control group.

  • interpret – boolean, whether to interpret the significance of the score or not

Returns:

float score between -1 and 1, and a string interpretation if interpret=True
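
The arithmetic described above, sketched on two score lists rather than on the subgroup_scores_dict interface the function actually takes; interpretation handling is omitted.

    from math import asin, pi, sqrt
    from statistics import mean
    from typing import List

    def normalized_cohens_h_sketch(control_scores: List[float], comparison_scores: List[float]) -> float:
        # Scores are assumed to lie in [0, 1], so each group mean is a proportion.
        p1, p2 = mean(control_scores), mean(comparison_scores)
        h = 2 * (asin(sqrt(p2)) - asin(sqrt(p1)))  # Cohen's h in [-pi, pi]
        return h / pi                              # rescaled to [-1, 1]

    # Control proportion 0.0 vs. comparison proportion 1.0: maximal increase of 1.0.
    print(normalized_cohens_h_sketch([0.0, 0.0], [1.0, 1.0]))  # 1.0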

unitxt.metrics.normalized_hedges_g(subgroup_scores_dict: Dict[str, List[float]], control_subgroup_types: List[str], comparison_subgroup_types: List[str], interpret=False)

Hedges’ g effect size between the means of two samples, normalized to the interval [-1, 1]. It is better suited than Cohen’s d for small sample sizes.

Takes into account the variances within the samples, not just the means.

Parameters:
  • subgroup_scores_dict – dict where keys are subgroup types and values are lists of instance scores.

  • control_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the control (baseline) group

  • comparison_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the group to be compared to the control group.

  • interpret – boolean, whether to interpret the significance of the score or not

Returns:

float score between -1 and 1, and a string interpretation if interpret=True
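
For reference, the standard (unnormalized) Hedges’ g is the pooled-variance standardized mean difference with a small-sample correction, as sketched below; how the library then maps it into [-1, 1] is not described in this docstring, so that step is omitted.

    from math import sqrt
    from statistics import mean, variance
    from typing import List

    def hedges_g_sketch(control_scores: List[float], comparison_scores: List[float]) -> float:
        # Standardized mean difference using the pooled sample variance,
        # multiplied by the usual small-sample correction factor.
        n1, n2 = len(control_scores), len(comparison_scores)
        pooled_var = ((n1 - 1) * variance(control_scores)
                      + (n2 - 1) * variance(comparison_scores)) / (n1 + n2 - 2)
        correction = 1 - 3 / (4 * (n1 + n2) - 9)
        return correction * (mean(comparison_scores) - mean(control_scores)) / sqrt(pooled_var)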

unitxt.metrics.parse_string_types_instead_of_actual_objects(obj)
unitxt.metrics.performance_drop_rate(subgroup_scores_dict: Dict[str, List], control_subgroup_types: List[str], comparison_subgroup_types: List[str])

Percentage decrease of mean performance on test elements relative to that on a baseline (control), from https://arxiv.org/pdf/2306.04528.pdf.

Parameters:
  • subgroup_scores_dict – dict where keys are subgroup types and values are lists of instance scores.

  • control_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the control (baseline) group

  • comparison_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the group to be compared to the control group.

Returns:

Numeric PDR metric. If there is only one element (no test set) or the baseline mean is 0 (percentage change is undefined), NaN is returned; otherwise, the calculated PDR.
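
The core arithmetic, sketched on two score lists; the actual function operates on a subgroup_scores_dict with control and comparison type lists and also returns NaN when there is only one element.

    from statistics import mean
    from typing import List

    def performance_drop_rate_sketch(control_scores: List[float], comparison_scores: List[float]) -> float:
        # Percentage decrease of the comparison mean relative to the control (baseline) mean.
        baseline = mean(control_scores)
        if baseline == 0:
            return float("nan")  # percentage change is undefined for a zero baseline
        return 1.0 - mean(comparison_scores) / baseline

    # Baseline mean 0.8 vs. comparison mean 0.6: a 25% drop.
    print(performance_drop_rate_sketch([0.8, 0.8], [0.6, 0.6]))  # 0.25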

unitxt.metrics.pytrec_eval_at_k(results, qrels, at_k, metric_name)
unitxt.metrics.validate_subgroup_types(subgroup_scores_dict: Dict[str, List], control_subgroup_types: List[str], comparison_subgroup_types: List[str])

Validate a dict of per-subgroup-type instance score lists, together with the control and comparison subgroup type lists.

Parameters:
  • subgroup_scores_dict – dict where keys are subgroup types and values are lists of instance scores.

  • control_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the control (baseline) group

  • comparison_subgroup_types – list of subgroup types (potential keys of subgroup_scores_dict) that are the group to be compared to the control group.

Returns:

The subgroup_scores_dict with all NaN scores removed; control_subgroup_types and comparison_subgroup_types with duplicate elements removed.
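
A sketch of the cleaning described above; returning the three cleaned values as a tuple is an assumption about the interface, made only for illustration.

    import math
    from typing import Dict, List, Tuple

    def validate_subgroup_types_sketch(
        subgroup_scores_dict: Dict[str, List[float]],
        control_subgroup_types: List[str],
        comparison_subgroup_types: List[str],
    ) -> Tuple[Dict[str, List[float]], List[str], List[str]]:
        # Drop NaN scores from each subgroup's list and de-duplicate
        # the control and comparison type lists, preserving order.
        cleaned = {t: [s for s in scores if not math.isnan(s)]
                   for t, scores in subgroup_scores_dict.items()}

        def dedup(types: List[str]) -> List[str]:
            return list(dict.fromkeys(types))

        return cleaned, dedup(control_subgroup_types), dedup(comparison_subgroup_types)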