"""This section describes unitxt operators.

Operators: Building Blocks of Unitxt Processing Pipelines
==============================================================

Within the Unitxt framework, operators serve as the foundational elements used to assemble processing pipelines.
Each operator is designed to perform specific manipulations on dictionary structures within a stream.
These operators are callable entities that receive a MultiStream as input.
The output is a MultiStream augmented with the operator's manipulations, which are applied lazily to each instance as it is pulled from the stream.

Creating Custom Operators
-------------------------------
To enhance the functionality of Unitxt, users are encouraged to develop custom operators.
This can be achieved by inheriting from any of the existing operators listed below or from one of the fundamental :class:`base operators<unitxt.operator>`.
The primary task in any operator development is to implement the `process` function, which defines the unique manipulations the operator will perform.
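
For illustration, a minimal custom operator might look like the following sketch
(``RenameSplit`` is a hypothetical example, not part of the library):

.. code-block:: python

    from unitxt.operator import MultiStream, MultiStreamOperator

    # A hypothetical operator that renames a single split of a MultiStream.
    class RenameSplit(MultiStreamOperator):
        old_name: str
        new_name: str

        def process(self, multi_stream: MultiStream) -> MultiStream:
            generators = dict(multi_stream.items())
            generators[self.new_name] = generators.pop(self.old_name)
            return MultiStream(generators)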

General or Specialized Operators
--------------------------------
Some operators are specialized in specific tasks, such as:

- :class:`loaders<unitxt.loaders>` for loading data.
- :class:`splitters<unitxt.splitters>` for fixing data splits.
- :class:`struct_data_operators<unitxt.struct_data_operators>` for operating on structured data.

Other specialized operators are used by unitxt internally:

- :class:`templates<unitxt.templates>` for verbalizing data examples.
- :class:`formats<unitxt.formats>` for preparing data for models.

The rest of this section is dedicated to operators that operate on streams.

"""

from typing import (
    List,
    Literal,
    Optional,
)

import pandas as pd

from .operator import (
    MultiStream,
    MultiStreamOperator,
)
from .settings_utils import get_settings
from .stream import ListStream

settings = get_settings()


class JoinStreams(MultiStreamOperator):
    """Join multiple streams into a single stream.

    Args:
        left_stream (str): The stream that will be considered the "left" in the join operations.
        right_stream (str): The stream that will be considered the "right" in the join operations.
        how (Literal["left", "right", "inner", "outer", "cross"]): The type of join to be performed.
        on (Optional[List[str]]): Column names to join on. These must be found in both streams.
        left_on (Optional[List[str]]): Column names to join on in the left stream.
        right_on (Optional[List[str]]): Column names to join on in the right stream.
        new_stream_name (str): The name of the new stream resulting from the merge.

    Examples:
        JoinStreams(left_stream="questions", right_stream="answers", how="inner", on=["question_id"], new_stream_name="question_with_answers") joins the "questions" and "answers" streams on the "question_id" field using an inner join, resulting in a new stream named "question_with_answers".
        JoinStreams(left_stream="questions", right_stream="answers", how="inner", left_on=["question_id"], right_on=["question"], new_stream_name="question_with_answers") joins on the "question_id" field of the left stream and the "question" field of the right stream, which is suitable when the join fields are named differently across the streams.
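
    A hedged end-to-end sketch (the toy instances below are illustrative and
    include the metadata fields that this operator verifies after the merge):

    .. code-block:: python

        multi_stream = MultiStream.from_iterables(
            {
                "questions": [
                    {"question_id": 1, "question": "What is 2+2?",
                     "data_classification_policy": ["public"], "recipe_metadata": {}},
                ],
                "answers": [
                    {"question_id": 1, "answer": "4",
                     "data_classification_policy": ["public"], "recipe_metadata": {}},
                ],
            }
        )
        join = JoinStreams(
            left_stream="questions",
            right_stream="answers",
            how="inner",
            on=["question_id"],
            new_stream_name="question_with_answers",
        )
        result = join.process(multi_stream)
        joined = list(result["question_with_answers"])  # one merged instance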
    """

    left_stream: str
    right_stream: str
    how: Literal["left", "right", "inner", "outer", "cross"]
    on: Optional[List[str]] = None
    left_on: Optional[List[str]] = None
    right_on: Optional[List[str]] = None
    new_stream_name: str

    def merge(self, multi_stream: MultiStream) -> List:
        assert self.right_stream in multi_stream and self.left_stream in multi_stream
        stream_dict = dict(multi_stream.items())
        left_stream = list(stream_dict[self.left_stream])
        right_stream = list(stream_dict[self.right_stream])
        left_stream_df = pd.DataFrame(left_stream)
        right_stream_df = pd.DataFrame(right_stream)

        merged_df = pd.merge(
            left_stream_df,
            right_stream_df,
            how=self.how,
            on=self.on,
            left_on=self.left_on,
            right_on=self.right_on,
        )

        def assert_col_values_are_identical(
            df: pd.DataFrame, col_name_1: str, col_name_2
        ):
            assert df.apply(
                lambda row: str(row[col_name_1]) == str(row[col_name_2]),
                axis=1,
            ).all()

        # If the two streams/DataFrames contain columns with the same name that are not join keys,
        # pandas renames them to "[column_name]_x" and "[column_name]_y". Some of these are metadata
        # columns that unitxt adds, and their values must stay identical across streams. The code below
        # verifies that both streams carry the same metadata values and restores the original column names.
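        # For example, when both streams carry "recipe_metadata" (not a join key),
        # the merge yields "recipe_metadata_x" and "recipe_metadata_y"; after the
        # equality check, they are collapsed back into a single "recipe_metadata" column.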
        common_cols_to_verify = ["data_classification_policy", "recipe_metadata"]
        for common_col in common_cols_to_verify:
            assert_col_values_are_identical(
                merged_df, f"{common_col}_x", f"{common_col}_y"
            )
            merged_df[common_col] = merged_df[f"{common_col}_x"]
            merged_df = merged_df.drop(
                columns=[f"{common_col}_x", f"{common_col}_y"], errors="ignore"
            )

        assert len(merged_df) > 0, (
            "JoinStreams resulted in an empty stream."
            " If you used 'loader_limit' it might be the cause of the error"
        )
        return merged_df.to_dict(orient="records")

    def process(self, multi_stream: MultiStream) -> MultiStream:
        merged_records = self.merge(multi_stream)
        multi_stream[self.new_stream_name] = ListStream(instances_list=merged_records)
        return multi_stream


class DeleteSplits(MultiStreamOperator):
    """Operator which delete splits in stream.

    Attributes:
        splits (List[str]): The splits to delete from the stream.
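
    Example:
        DeleteSplits(splits=["validation"]) returns the multi-stream without its "validation" split.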
    """

    splits: List[str]

    def process(self, multi_stream: MultiStream) -> MultiStream:
        generators = {
            key: val for key, val in multi_stream.items() if key not in self.splits
        }
        return MultiStream(generators)


class DuplicateSplit(MultiStreamOperator):
    """Operator which duplicate a split.

    Attributes:
        split (str): The split to duplicate from the stream.
        to_split (str): The name of the new, duplicated split.
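
    Example:
        DuplicateSplit(split="test", to_split="validation") adds a "validation" split that holds the same instances as the existing "test" split.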
    """

    split: str
    to_split: str

    def process(self, multi_stream: MultiStream) -> MultiStream:
        assert self.split in multi_stream
        generators = multi_stream
        generators[self.to_split] = generators[self.split]
        return MultiStream(generators)