πŸ“„ CNN DailyMailΒΆ

The CNN / DailyMail Dataset is an English-language dataset containing just over 300k unique news articles as written by journalists at CNN and the Daily Mail. The current version supports both extractive and abstractive summarization, though the original version was created for machine reading and comprehension and abstractive question answering… See the full description on the dataset page: https://huggingface.co/datasets/abisee/cnn_dailymail.

Tags: annotations_creators:no-annotation, language:en, language_creators:found, license:apache-2.0, multilinguality:monolingual, region:us, size_categories:100K<n<1M, source_datasets:original, task_categories:summarization, task_ids:news-articles-summarization

cards.cnn_dailymail

type: TaskCard
loader: 
  type: LoadHF
  path: cnn_dailymail
  name: 3.0.0
preprocess_steps: 
  - type: Rename
    field_to_field: 
      article: document
  - type: Wrap
    field: highlights
    inside: list
    to_field: summaries
  - type: Set
    fields: 
      document_type: article
task: tasks.summarization.abstractive
templates: templates.summarization.abstractive.all

Explanation about TaskCardΒΆ

TaskCard delineates the phases in transforming the source dataset into model input, and specifies the metrics for evaluation of model output.

Attributes:

loader: specifies the source address and the loading operator that can access that source and transform it into a unitxt multistream.

preprocess_steps: list of unitxt operators to process the data source into model input.

task: specifies the fields (of the already (pre)processed instance) making the inputs, the fields making the outputs, and the metrics to be used for evaluating the model output.

templates: format strings to be applied to the input fields (specified by the task) and the output fields. The template also carries the instructions and the list of postprocessing steps to be applied to the model output.
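For illustration, the card documented above can also be assembled directly in Python. The following is a minimal sketch; the import paths (and the module of the Wrap operator in particular) are assumptions based on unitxt's documented examples and may differ across versions.

from unitxt.card import TaskCard          # assumed import path
from unitxt.loaders import LoadHF         # assumed import path
from unitxt.operators import Rename, Set, Wrap  # assumed import path

# A sketch of cards.cnn_dailymail assembled in code; catalog items are referenced by name.
card = TaskCard(
    loader=LoadHF(path="cnn_dailymail", name="3.0.0"),
    preprocess_steps=[
        Rename(field_to_field={"article": "document"}),
        Wrap(field="highlights", inside="list", to_field="summaries"),
        Set(fields={"document_type": "article"}),
    ],
    task="tasks.summarization.abstractive",
    templates="templates.summarization.abstractive.all",
)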

Explanation about LoadHFΒΆ

Loads datasets from the HuggingFace Hub.

It supports loading with or without streaming, and it can filter datasets upon loading.

Args:

path: The path or identifier of the dataset on the HuggingFace Hub.

name: An optional dataset name.

data_dir: Optional directory to store downloaded data.

split: Optional specification of which split to load.

data_files: Optional specification of particular data files to load.

revision: Optional. The revision of the dataset, often a commit id; use it to pin the dataset version.

streaming: Bool indicating if streaming should be used.

filtering_lambda: A lambda function for filtering the data after loading.

num_proc: Optional integer to specify the number of processes to use for parallel dataset loading.

Example:

Loading glue’s mrpc dataset

load_hf = LoadHF(path='glue', name='mrpc')
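For the card documented on this page, the loader can be written analogously. This is a sketch; streaming is optional and shown here only as an example.

from unitxt.loaders import LoadHF  # assumed import path

# Load the CNN / DailyMail dataset as in the card above, streaming instead of downloading everything up front.
load_hf = LoadHF(path="cnn_dailymail", name="3.0.0", streaming=True)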

Explanation about SetΒΆ

Adds the specified fields to each instance in a given stream, or in all streams (the default). If a field already exists, its value is updated.

Args:

fields (Dict[str, object]): The fields to add to each instance. Use '/' to access inner fields.

use_deepcopy (bool): Deep copy the input value to avoid later modifications.

Examples:

# Add a 'classes' field with the list ["positive", "negative"] to all streams: Set(fields={"classes": ["positive", "negative"]})

# Add a 'start' field under the 'span' field with a value of 0 to all streams: Set(fields={"span/start": 0})

# Add a 'classes' field with the list ["positive", "negative"] to the 'train' stream only: Set(fields={"classes": ["positive", "negative"]}, apply_to_stream=["train"])

# Add a 'classes' field whose value is a given list, while preventing later modification of the original list from changing the instances: Set(fields={"classes": alist}, use_deepcopy=True)  # if alist is modified afterwards, the instances remain intact
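A short self-contained sketch of Set applied to a single instance. It assumes the operator exposes a process() method that accepts an instance dict; the exact invocation may differ between unitxt versions, and in a card the operator runs over whole streams.

from unitxt.operators import Set  # assumed import path

# '/' creates nested fields: {"span/start": 0} becomes {"span": {"start": 0}}.
set_op = Set(fields={"span/start": 0, "document_type": "article"})
instance = {"document": "Some article text."}
updated = set_op.process(instance)  # assumed single-instance invocation
# updated == {"document": "Some article text.", "span": {"start": 0}, "document_type": "article"}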

Explanation about RenameΒΆ

Renames fields.

Moves the value from one field to another; if a field name contains a '/', the value is moved from one nested branch into another. The source field is removed, or, when from_field contains a '/', only the referenced part of it is removed.

Examples:

Rename(field_to_field={"b": "c"}) will change inputs [{"a": 1, "b": 2}, {"a": 2, "b": 3}] to [{"a": 1, "c": 2}, {"a": 2, "c": 3}]

Rename(field_to_field={"b": "c/d"}) will change inputs [{"a": 1, "b": 2}, {"a": 2, "b": 3}] to [{"a": 1, "c": {"d": 2}}, {"a": 2, "c": {"d": 3}}]

Rename(field_to_field={"b": "b/d"}) will change inputs [{"a": 1, "b": 2}, {"a": 2, "b": 3}] to [{"a": 1, "b": {"d": 2}}, {"a": 2, "b": {"d": 3}}]

Rename(field_to_field={"b/c/e": "b/d"}) will change inputs [{"a": 1, "b": {"c": {"e": 2, "f": 20}}}] to [{"a": 1, "b": {"c": {"f": 20}, "d": 2}}]
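In the card above, Rename is what moves the article text into the document field expected by the summarization task. A minimal sketch, again assuming a per-instance process() method whose exact form may vary between unitxt versions:

from unitxt.operators import Rename  # assumed import path

# First preprocessing step of cards.cnn_dailymail: rename 'article' to 'document'.
rename_op = Rename(field_to_field={"article": "document"})
instance = {"article": "Some article text.", "highlights": "A short summary."}
updated = rename_op.process(instance)  # assumed single-instance invocation
# updated == {"document": "Some article text.", "highlights": "A short summary."}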

References: templates.summarization.abstractive.all, tasks.summarization.abstractive

Read more about catalog usage here.
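As an end-to-end illustration, the card can be consumed from the catalog with unitxt's load_dataset. This is a sketch: the keyword form and the specific template name are assumptions and may need to be adapted to your unitxt version (older versions take a single recipe string instead of keyword arguments).

from unitxt import load_dataset

# Load cards.cnn_dailymail with one of its abstractive summarization templates.
dataset = load_dataset(
    card="cards.cnn_dailymail",
    template="templates.summarization.abstractive.formal",  # illustrative template name
    split="test",
)
print(dataset[0]["source"])  # the rendered model input for the first test instance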