πŸ“„ Atta Q

cards.atta_q

type: TaskCard
loader: 
  type: LoadHF
  path: ibm/AttaQ
preprocess_steps: 
  - type: RenameSplits
    mapper: 
      train: test
  - type: Shuffle
    page_size: 2800
  - type: Set
    fields: 
      input_label: {}
  - type: Copy
    field_to_field: 
      input: input_label/input
      label: input_label/label
  - type: DumpJson
    field: input_label
task: 
  type: Task
  input_fields: 
    - input
  reference_fields: 
    - input_label
  metrics: 
    - metrics.safety_metric
templates: 
  - type: InputOutputTemplate
    input_format: "{input}\n"
    output_format: "{input_label}"
  - type: InputOutputTemplate
    input_format: "{input}"
    output_format: "{input_label}"
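To run this card end to end, it can be referenced by its catalog name through the standard recipe entry point. A minimal sketch (the keyword-argument form of load_dataset and the template_card_index parameter follow recent unitxt releases and may vary; the β€˜test’ split name reflects the split renaming above):

from unitxt import load_dataset

# Render AttaQ through this card, verbalizing each instance with the
# first template in the card's template list.
dataset = load_dataset(card="cards.atta_q", template_card_index=0)
print(dataset["test"][0]["source"])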

Explanation about TaskCard

TaskCard delineates the phases in transforming the source dataset into model input, and specifies the metrics for evaluation of model output.

Attributes:

loader: specifies the source address and the loading operator that can access that source and transform it into a unitxt multistream.

preprocess_steps: list of unitxt operators to process the data source into model input.

task: specifies which fields of the (pre)processed instance constitute the inputs, which fields constitute the outputs, and the metrics to be used for evaluating the model output.

templates: format strings to be applied to the input fields (specified by the task) and to the output fields. The template also carries the instruction and the list of postprocessing steps to be applied to the model output. The card above is assembled from these pieces, as shown in the sketch below.
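As a concrete illustration, the Atta Q card can be assembled in Python rather than YAML. The sketch below simply mirrors the definition at the top of this page; the import paths follow recent unitxt releases and may differ between versions:

from unitxt.card import TaskCard
from unitxt.loaders import LoadHF
from unitxt.operators import Copy, RenameSplits, Set, Shuffle
from unitxt.struct_data_operators import DumpJson
from unitxt.task import Task
from unitxt.templates import InputOutputTemplate

card = TaskCard(
    # Where the raw data comes from.
    loader=LoadHF(path="ibm/AttaQ"),
    # How raw instances are turned into model-ready instances.
    preprocess_steps=[
        RenameSplits(mapper={"train": "test"}),
        Shuffle(page_size=2800),
        Set(fields={"input_label": {}}),
        Copy(
            field_to_field={
                "input": "input_label/input",
                "label": "input_label/label",
            }
        ),
        DumpJson(field="input_label"),
    ],
    # Which fields feed the model, and how its output is scored.
    task=Task(
        input_fields=["input"],
        reference_fields=["input_label"],
        metrics=["metrics.safety_metric"],
    ),
    # How fields are verbalized into prompts and gold targets.
    templates=[
        InputOutputTemplate(input_format="{input}\n", output_format="{input_label}"),
        InputOutputTemplate(input_format="{input}", output_format="{input_label}"),
    ],
)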

Explanation about LoadHF

Loads datasets from the HuggingFace Hub.

It supports loading with or without streaming, and it can filter datasets upon loading.

Args:

path: The path or identifier of the dataset on the HuggingFace Hub.

name: An optional dataset name.

data_dir: Optional directory to store downloaded data.

split: Optional specification of which split to load.

data_files: Optional specification of particular data files to load.

revision: Optional. The revision of the dataset. Often the commit id. Use in case you want to set the dataset version.

streaming: Bool indicating if streaming should be used.

filtering_lambda: A lambda function for filtering the data after loading.

num_proc: Optional integer to specify the number of processes to use for parallel dataset loading.

Example:

Loading glue’s mrpc dataset

from unitxt.loaders import LoadHF

load_hf = LoadHF(path='glue', name='mrpc')
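Streaming and filtering, mentioned above, would look roughly as follows. A sketch under the assumption (true in recent unitxt releases) that filtering_lambda is passed as a string to be evaluated:

from unitxt.loaders import LoadHF

# Stream the train split instead of downloading it in full, and keep
# only instances whose label equals 1.
load_hf = LoadHF(
    path="glue",
    name="mrpc",
    split="train",
    streaming=True,
    filtering_lambda="lambda instance: instance['label'] == 1",
)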

Explanation about InputOutputTemplate

Generates the field β€˜source’ from the fields designated as input, and the fields β€˜target’ and β€˜references’ from the fields designated as output, of the processed instance.

Args specify the formatting strings with which to glue together the input and reference fields of the processed instance into one string (β€˜source’ and β€˜target’), and into a list of strings (β€˜references’).
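For example, the first template of this card, written out in Python (a sketch; unitxt.templates is assumed as the import location):

from unitxt.templates import InputOutputTemplate

# Build 'source' from the input fields, and 'target'/'references' from
# the reference fields, of each processed instance.
template = InputOutputTemplate(
    input_format="{input}\n",
    output_format="{input_label}",
)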

Explanation about Shuffle

Shuffles the order of instances in each page of a stream.

Args (of superclass):

page_size (int): The size of each page in the stream. Defaults to 1000.
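The shuffling step used by this card would be written as follows (a minimal sketch; the import path follows recent unitxt releases):

from unitxt.operators import Shuffle

# Instances are shuffled within pages of 2800; a page at least as large
# as the split shuffles the whole split in one pass.
shuffle = Shuffle(page_size=2800)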

Explanation about Set

Adds the specified fields to each instance in a given stream or in all streams (the default). If a field already exists, its value is updated.

Args:

fields (Dict[str, object]): The fields to add to each instance. Use β€˜/’ to access inner fields.

use_deepcopy (bool): Deep copy the input value to avoid later modifications.

Examples:

# Add a 'classes' field with the list value ["positive", "negative"] to all streams
Set(fields={"classes": ["positive", "negative"]})

# Add a 'start' field under the 'span' field with a value of 0 to all streams
Set(fields={"span/start": 0})

# Add the 'classes' field to the 'train' stream only
Set(fields={"classes": ["positive", "negative"]}, apply_to_stream=["train"])

# Add a 'classes' field from an existing list, deep-copying it so that later
# modification of the original list leaves the instances intact
Set(fields={"classes": alist}, use_deepcopy=True)

Explanation about Task

Task packs the different instance fields into dictionaries by their roles in the task.

Attributes:

input_fields (Union[Dict[str, str], List[str]]): Dictionary with string names of instance input fields and the types of their respective values. In case a list is passed, each type will be assumed to be Any.

reference_fields (Union[Dict[str, str], List[str]]): Dictionary with string names of instance output fields and the types of their respective values. In case a list is passed, each type will be assumed to be Any.

metrics (List[str]): List of names of metrics to be used in the task.

prediction_type (Optional[str]): Needs to be consistent with all used metrics. Defaults to None, which means that it will be set to Any.

defaults (Optional[Dict[str, Any]]): An optional dictionary with default values for chosen input/output keys. Needs to be consistent with the names and types provided in the β€˜input_fields’ and/or β€˜reference_fields’ arguments. Will not overwrite values already provided in a given instance.

The output instance contains three fields:

β€œinput_fields”, whose value is a sub-dictionary of the input instance, consisting of all the fields listed in the β€˜input_fields’ argument.

β€œreference_fields”, likewise, for the fields listed in the β€˜reference_fields’ argument.

β€œmetrics”, containing the value of the β€˜metrics’ argument.
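The task of this card, written out in Python (a sketch; since lists are passed, all field types default to Any):

from unitxt.task import Task

task = Task(
    input_fields=["input"],             # packed into "input_fields"
    reference_fields=["input_label"],   # packed into "reference_fields"
    metrics=["metrics.safety_metric"],  # carried through as "metrics"
)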

Explanation about Copy

Copies values from specified fields to specified fields.

Args (of parent class):

field_to_field (Union[List[List], Dict[str, str]]): A list of lists, where each sublist contains the source field and the destination field, or a dictionary mapping source fields to destination fields.

Examples:

An input instance {"a": 2, "b": 3}, when processed by Copy(field_to_field={"a": "b"}), would yield {"a": 2, "b": 2}; when processed by Copy(field_to_field={"a": "c"}), it would yield {"a": 2, "b": 3, "c": 2}.

With field names containing β€˜/’, we can also copy from inner fields: Copy(field="a/0", to_field="a") would process the instance {"a": [1, 3]} into {"a": 1}.

References: metrics.safety_metric

Read more about catalog usage here.