Commit 2ab5871 by gsarti
Parent(s): db7d3d7

Extra fields

Files changed (2):
  1. README.md +23 -4
  2. scat.py +38 -5
README.md CHANGED

````diff
@@ -65,6 +65,7 @@ dataset_info:
 - [Data Instances](#data-instances)
 - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
+- [Additional Preprocessing](#additional-preprocessing)
 - [Additional Information](#additional-information)
 - [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
@@ -96,19 +97,27 @@ The dataset contains a single default configuration. Dataset examples have the f
 {
   "id": 0,
   "context_en": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
-  "en": "But I gave one special attention to see if <p>it</p> would grow more.",
-  "context_fr": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux <hon>plantes<hoff> , et je leur ai donné de l'eau et du soleil.",
-  "fr": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
+  "en": "But I gave one special attention to see if it would grow more.",
+  "context_fr": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux plantes , et je leur ai donné de l'eau et du soleil.",
+  "fr": "Mais j'ai donné une attention particulière à une pour voir si elle grandit plus.",
+  "contrast_fr": "Mais j'ai donné une attention particulière à une pour voir si il grandit plus.",
+  "context_en_with_tags": "Air, water, the continents. So, what is your project about and what are its chances of winning? - Well, my project is awesome. - Oh, good. I took two plants, and I gave them sun and water",
+  "en_with_tags": "But I gave one special attention to see if <p>it</p> would grow more.",
+  "context_fr_with_tags": "L'air, l'eau, les continents. Donc, quel est le sujet de ton projet et quelles sont ses chances de gagner ? - Bien, mon projet est impressionnant. - Oh, bien. J'ai pris deux <hon>plantes<hoff> , et je leur ai donné de l'eau et du soleil.",
+  "fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus.",
+  "contrast_fr_with_tags": "Mais j'ai donné une attention particulière à une pour voir si <p>il</p> grandit plus.",
   "has_supporting_context": true,
 }
 ```
 
-In every example, the pronoun of interest and its translation are surrounded by `<p>...</p>` tags. These are guaranteed to be found in the `en` and `fr` field, respectively.
+In every example, the pronoun of interest and its translation are surrounded by `<p>...</p>` tags. These are guaranteed to be found in the `en_with_tags` and `fr_with_tags` field, respectively.
 
 Any span surrounded by `<hon>...<hoff>` tags was identified by human annotators as supporting context to correctly translate the pronoun of interest. These spans can be missing altogether (i.e. no contextual information needed), or they can be found in any of the available fields. The `has_supporting_context` field indicates whether the example contains any supporting context.
 
 In the example above, the translation of the pronoun `it` (field `en`) is ambiguous, and the correct translation to the feminine French pronoun `elle` (in field `fr`) is only possible thanks to the supporting feminine noun `plantes` in the field `context_fr`. Since the example contains supporting context, the `has_supporting_context` field is set to `true`.
 
+Fields with the `_with_tags` suffix contain tags around pronouns of interest and supporting context, while their counterparts without the suffix contain the same text without tags, to facilitate direct usage with machine translation models.
+
 ### Data Splits
 
 The dataset is split into `train`, `validation` and `test` sets. In the following table, we report the number of examples in the original dataset and in this filtered version in which examples containing malformed tags were removed.
@@ -127,6 +136,16 @@ From the original paper:
 
 Please refer to the original article [Do Context-Aware Translation Models Pay the Right Attention?](https://aclanthology.org/2021.acl-long.65/) for additional information on dataset creation.
 
+### Additional Preprocessing
+
+Compared to the original SCAT corpus, the following differences are present in this version:
+
+- Examples were filtered using the [filter_scat.py](filter_scat.py) script to retain only examples containing well-formed tags, and remove superfluous tags. Superfluous tags are defined as nested `<hon><p>...</p><hoff>` tags that represent lack of contextual information for disambiguating the correct pronoun. In this case, the outer `<hon>...<hoff>` tag was removed.
+
+- Sentences stripped from tags are provided in fields without the `_with_tags` suffix.
+
+- An extra contrastive sentence using the pronoun of interest belonging to the opposite gender is available in the `contrast_fr` field. The swap was performed using a simple lexical heuristic (refer to `swap_pronoun` in [`scat.py`](./scat.py)), and we do not guarantee grammatical correctness of the sentence.
+
 ## Additional Information
 ### Dataset Curators
 
````
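To make the new field layout concrete, here is a minimal usage sketch. The dataset identifier and split name below are illustrative assumptions, not stated in this commit; adjust them to the actual Hub repository:

```python
import re

from datasets import load_dataset

# Hypothetical dataset identifier; the commit does not state the Hub repo name.
scat = load_dataset("gsarti/scat", split="test")
example = scat[0]

# Tag-free fields can be fed directly to a machine translation model.
source = example["context_en"] + " " + example["en"]

# The *_with_tags fields keep <p>...</p> around the pronoun of interest,
# so the annotated pronoun can be recovered with a simple pattern.
pron_en = re.search(r"<p>([^<]*)</p>", example["en_with_tags"]).group(1)
pron_fr = re.search(r"<p>([^<]*)</p>", example["fr_with_tags"]).group(1)

# <hon>...<hoff> spans mark human-annotated supporting context; they may be
# absent, or appear in any of the tagged fields (only context_fr is checked here).
supporting = []
if example["has_supporting_context"]:
    supporting = re.findall(r"<hon>(.*?)<hoff>", example["context_fr_with_tags"])

print(pron_en, "->", pron_fr, supporting)
```
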
scat.py CHANGED

```diff
@@ -72,7 +72,7 @@ class ScatConfig(datasets.BuilderConfig):
         self.target_language = target_language
 
 
-class WmtVat(datasets.GeneratorBasedBuilder):
+class Scat(datasets.GeneratorBasedBuilder):
 
     VERSION = datasets.Version("1.0.0")
 
@@ -80,6 +80,27 @@ class WmtVat(datasets.GeneratorBasedBuilder):
 
     DEFAULT_CONFIG_NAME = "sentences"
 
+    @staticmethod
+    def clean_string(txt: str):
+        return txt.replace("<p>", "").replace("</p>", "").replace("<hon>", "").replace("<hoff>", "")
+
+    @staticmethod
+    def swap_pronoun(txt: str):
+        pron: str = re.findall(r"<p>([^<]*)</p>", txt)[0]
+        new_pron = pron
+        is_cap = pron.istitle()
+        if pron.lower() == "elles":
+            new_pron = "ils"
+        if pron.lower() == "elle":
+            new_pron = "il"
+        if pron.lower() == "ils":
+            new_pron = "elles"
+        if pron.lower() == "il":
+            new_pron = "elle"
+        if is_cap:
+            new_pron = new_pron.capitalize()
+        return txt.replace(f"<p>{pron}</p>", f"<p>{new_pron}</p>")
+
     def _info(self):
         features = datasets.Features(
             {
@@ -88,6 +109,11 @@ class WmtVat(datasets.GeneratorBasedBuilder):
                 "en": datasets.Value("string"),
                 "context_fr": datasets.Value("string"),
                 "fr": datasets.Value("string"),
+                "contrast_fr": datasets.Value("string"),
+                "context_en_with_tags": datasets.Value("string"),
+                "en_with_tags": datasets.Value("string"),
+                "context_fr_with_tags": datasets.Value("string"),
+                "fr_with_tags": datasets.Value("string"),
                 "has_supporting_context": datasets.Value("bool"),
             }
         )
@@ -139,11 +165,18 @@ class WmtVat(datasets.GeneratorBasedBuilder):
             has_supporting_context = False
             if "<hon>" in allfields and "<hoff>" in allfields:
                 has_supporting_context = True
+            contrast_fr = self.swap_pronoun(f)
             yield i, {
                 "id": i,
-                "context_en": ce,
-                "en": e,
-                "context_fr": cf,
-                "fr": f,
+                "context_en": self.clean_string(ce),
+                "en": self.clean_string(e),
+                "context_fr": self.clean_string(cf),
+                "fr": self.clean_string(f),
+                "contrast_fr": self.clean_string(contrast_fr),
+                "context_en_with_tags": ce,
+                "en_with_tags": e,
+                "context_fr_with_tags": cf,
+                "fr_with_tags": f,
+                "contrast_fr_with_tags": contrast_fr,
                 "has_supporting_context": has_supporting_context,
             }
```
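
For illustration, the pronoun-swap heuristic added above can be exercised on the README example outside of the builder. The snippet below re-implements the same lexical swap (il/elle/ils/elles, preserving capitalization) as a standalone function; it is a sketch for clarity, and `Scat.swap_pronoun` in `scat.py` remains the reference implementation:

```python
import re

# Same lexical heuristic as Scat.swap_pronoun, condensed into a lookup table.
SWAPS = {"il": "elle", "elle": "il", "ils": "elles", "elles": "ils"}

def swap_pronoun(txt: str) -> str:
    # Take the pronoun marked by <p>...</p> and swap its gender if it is a
    # third-person subject pronoun; any other pronoun is left untouched.
    pron = re.findall(r"<p>([^<]*)</p>", txt)[0]
    new_pron = SWAPS.get(pron.lower(), pron)
    if pron.istitle():
        new_pron = new_pron.capitalize()
    return txt.replace(f"<p>{pron}</p>", f"<p>{new_pron}</p>")

fr = "Mais j'ai donné une attention particulière à une pour voir si <p>elle</p> grandit plus."
print(swap_pronoun(fr))
# Mais j'ai donné une attention particulière à une pour voir si <p>il</p> grandit plus.
```

As the README notes, the swap is purely lexical: agreement elsewhere in the sentence (participles, adjectives) is not adjusted, so the grammaticality of the contrastive sentence is not guaranteed.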