

πŸ“ MetricsΒΆ

  • πŸ“ Bert Score
  • πŸ“ Llm As Judge
  • πŸ“ Perplexity
  • πŸ“ Perplexity A
  • πŸ“ Perplexity Chat
  • πŸ“ Perplexity Nli
  • πŸ“ Perplexity Q
  • πŸ“ Rag
  • πŸ“ Reward
  • πŸ“ Robustness
  • πŸ“ Sentence Bert
  • πŸ“„ Accuracy
  • πŸ“„ Accuracy Binary
  • πŸ“„ Bleu
  • πŸ“„ Char Edit Dist Accuracy
  • πŸ“„ Char Edit Distance
  • πŸ“„ F1 Binary
  • πŸ“„ F1 Macro
  • πŸ“„ F1 Macro Multi Label
  • πŸ“„ F1 Micro
  • πŸ“„ F1 Micro Multi Label
  • πŸ“„ F1 Weighted
  • πŸ“„ Fin Qa Metric
  • πŸ“„ Fuzzyner
  • πŸ“„ Is Code Mixed
  • πŸ“„ Jaccard Index
  • πŸ“„ Kendalltau B
  • πŸ“„ Kpa
  • πŸ“„ Map
  • πŸ“„ Matthews Correlation
  • πŸ“„ Max Accuracy Binary
  • πŸ“„ Max F1 Binary
  • πŸ“„ Meteor
  • πŸ“„ Mrr
  • πŸ“„ Ndcg
  • πŸ“„ Ner
  • πŸ“„ Normalized Sacrebleu
  • πŸ“„ Precision Binary
  • πŸ“„ Precision Macro Multi Label
  • πŸ“„ Precision Micro Multi Label
  • πŸ“„ Recall Binary
  • πŸ“„ Recall Macro Multi Label
  • πŸ“„ Recall Micro Multi Label
  • πŸ“„ Regard Metric
  • πŸ“„ Rerank Recall
  • πŸ“„ Retrieval At K
  • πŸ“„ Roc Auc
  • πŸ“„ Rouge
  • πŸ“„ Rouge With Confidence Intervals
  • πŸ“„ Sacrebleu
  • πŸ“„ Safety Metric
  • πŸ“„ Spearman
  • πŸ“„ Squad
  • πŸ“„ String Containment
  • πŸ“„ Token Overlap
  • πŸ“„ Token Overlap With Context
  • πŸ“„ Unsorted List Exact Match
  • πŸ“„ Weighted Win Rate Correlation
  • πŸ“„ Wer

Read more about catalog usage in the Save/Load from Catalog documentation.

© Copyright 2023, IBM Research.