Datasets:
Added processed dataframes
- README.md +53 -13
- task/.gitattributes +1 -0
- task/main/processed_main.csv +3 -0
- task/pretask/processed_pretask.csv +0 -0
README.md
CHANGED
@@ -122,34 +122,74 @@ A single entry in the dataframe represents a segment (~sentence) in the dataset,
 |`num_words_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
 |`num_words_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
 |`num_words_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
-|`num_words_unchanged` |
+|`num_words_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
 |`tot_words_edits` | Total of all edit types for the sentence. |
 |`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
-|`num_chars_insert` |
-|`num_chars_delete` |
-|`num_chars_substitute` |
-|`num_chars_unchanged` |
-|`tot_chars_edits` |
+|`num_chars_insert` | Number of post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
+|`num_chars_delete` | Number of post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
+|`num_chars_substitute` | Number of post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
+|`num_chars_unchanged` | Number of post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
+|`tot_chars_edits` | Total of all edit types for the sentence. |
 |`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
 | **Translation quality** | |
+|`mt_bleu_max` | Max BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_bleu_min` | Min BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_bleu_mean` | Mean BLEU score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_bleu_std` | Standard deviation of BLEU scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_chrf_max` | Max chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_chrf_min` | Min chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_chrf_mean` | Mean chrF score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_chrf_std` | Standard deviation of chrF scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_ter_max` | Max TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_ter_min` | Min TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_ter_mean` | Mean TER score between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_ter_std` | Standard deviation of TER scores between `mt_text` and all `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`mt_comet_max` | Max COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`mt_comet_min` | Min COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`mt_comet_mean` | Mean COMET sentence-level score for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`mt_comet_std` | Standard deviation of COMET sentence-level scores for the `mt_text` and all `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`mt_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `mt_text`. |
+|`mt_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `mt_text`. |
+|`pe_bleu_max` | Max BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_bleu_min` | Min BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_bleu_mean` | Mean BLEU score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_bleu_std` | Standard deviation of BLEU scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_chrf_max` | Max chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_chrf_min` | Min chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_chrf_mean` | Mean chrF score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_chrf_std` | Standard deviation of chrF scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_ter_max` | Max TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_ter_min` | Min TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_ter_mean` | Mean TER score between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_ter_std` | Standard deviation of TER scores between `pe_text` and all other `pe_text` for the corresponding segment using SacreBLEU with default parameters. |
+|`pe_comet_max` | Max COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`pe_comet_min` | Min COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`pe_comet_mean` | Mean COMET sentence-level score for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`pe_comet_std` | Standard deviation of COMET sentence-level scores for the `pe_text` and all other `pe_text` for the corresponding segment using `Unbabel/wmt22-comet-da` with default parameters. |
+|`pe_xcomet_qe` | `Unbabel/XCOMET-XXL` sentence-level quality estimation score for the `pe_text`. |
+|`pe_xcomet_errors` | List of error spans detected by `Unbabel/XCOMET-XXL` for the `pe_text`. |
 |`mt_pe_bleu` | BLEU score computed between `mt_text` and `pe_text` using [SacreBLEU](https://github.com/mjpost/sacrebleu) with default parameters. |
 |`mt_pe_chrf` | chrF score computed between `mt_text` and `pe_text` using [SacreBLEU](https://github.com/mjpost/sacrebleu) with default parameters. |
 |`mt_pe_comet` | COMET sentence-level score for the `mt_text` and `pe_text` using [COMET](https://github.com/Unbabel/COMET). |
 |`mt_xcomet_xxl_qe` | XCOMET-XXL sentence-level quality estimation score for the `mt_text`. |
 |`pe_xcomet_xxl_qe` | XCOMET-XXL sentence-level quality estimation score for the `pe_text`. |
 | **Behavioral data** | |
-|`
+|`doc_num_edits` | Total number of edits performed by the translator on the current document. Only the outputs of the last edit are considered valid. |
 |`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
+|`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored). |
+|`doc_edit_time_filtered` | Total editing time for the current document in seconds (from `start` to `end`, >5m pauses between logged actions ignored). |
+|`doc_keys_per_min` | Keystrokes per minute computed for the current document using `doc_edit_time_filtered`. |
+|`doc_chars_per_min` | Characters per minute computed for the current document using `doc_edit_time_filtered`. |
+|`doc_words_per_min` | Words per minute computed for the current document using `doc_edit_time_filtered`. |
+|`segment_num_edits` | Total number of edits performed by the translator on the current segment. Only edits from the last edit of the document are considered valid. |
 |`segment_edit_order` | Index corresponding to the current segment edit order (only the first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
 |`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks). |
 |`segment_edit_time_filtered` | Total editing time for the current segment in seconds (>5m pauses between logged actions ignored). |
-|`
-|`
+|`segment_keys_per_min` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
+|`segment_chars_per_min` | Characters per minute computed for the current segment using `segment_edit_time_filtered`. |
+|`segment_words_per_min` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
 |`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
-|`
-|`clear_highlights` | If 1, the Clear Highlights button was pressed for this segment (always 0 for `no_highlight` modality). |
-|`kpm` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
-|`wpm` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
+|`remove_highlights` | If True, the Clear Highlights button was pressed for this segment (always False for `no_highlight` modality). |
 | **Texts and annotations** | |
 |`src_text` | The original source segment from WMT23 requiring translation. |
 |`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams). |
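The word- and character-level edit columns above can be reproduced from jiwer's alignment output. A minimal sketch, assuming jiwer ≥ 3.0 and treating `mt_text` as the reference and `pe_text` as the hypothesis (the table only says the scores are computed "between" the two texts):

```python
import jiwer

mt_text = "the cat sat on mat"        # toy machine translation output
pe_text = "the cat sat on the mat"    # toy post-edited text

# Word-level alignment: insertions / deletions / substitutions / hits
words = jiwer.process_words(mt_text, pe_text)
num_words_insert = words.insertions
num_words_delete = words.deletions
num_words_substitute = words.substitutions
num_words_unchanged = words.hits      # "hits" are the unchanged words
tot_words_edits = num_words_insert + num_words_delete + num_words_substitute
wer = words.wer

# Character-level counterpart for the num_chars_* and cer columns
chars = jiwer.process_characters(mt_text, pe_text)
cer = chars.cer
```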
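The `mt_*_{max,min,mean,std}` aggregates pair `mt_text` with every `pe_text` produced for the same segment (the `pe_*` variants pair each `pe_text` with all other post-edits). A minimal sketch with SacreBLEU's sentence-level scorers; scoring one post-edit per call and aggregating in plain Python is an assumption about the pipeline:

```python
from statistics import mean, stdev
from sacrebleu.metrics import BLEU, CHRF, TER

bleu, chrf, ter = BLEU(effective_order=True), CHRF(), TER()

mt_text = "The cat sat on mat ."
pe_texts = [                          # all post-edits of the same segment
    "The cat sat on the mat .",
    "The cat was sitting on the mat .",
]

bleu_scores = [bleu.sentence_score(mt_text, [pe]).score for pe in pe_texts]
mt_bleu_max, mt_bleu_min = max(bleu_scores), min(bleu_scores)
mt_bleu_mean, mt_bleu_std = mean(bleu_scores), stdev(bleu_scores)

# chrF and TER follow the same pattern
chrf_scores = [chrf.sentence_score(mt_text, [pe]).score for pe in pe_texts]
ter_scores = [ter.sentence_score(mt_text, [pe]).score for pe in pe_texts]
```

`effective_order=True` deviates slightly from the corpus-level defaults but is SacreBLEU's recommended setting for sentence-level BLEU; the table's "default parameters" may refer to the plain `BLEU()` configuration instead.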
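The COMET and XCOMET columns come from the [COMET](https://github.com/Unbabel/COMET) library. A minimal sketch, assuming `unbabel-comet` ≥ 2.0; `Unbabel/XCOMET-XXL` is a gated checkpoint on the Hugging Face Hub, and the batch size and device below are placeholders, not the settings used to build the dataset:

```python
from comet import download_model, load_from_checkpoint

src_text = "Il gatto era seduto sul tappeto ."
mt_text = "The cat sat on mat ."
pe_text = "The cat sat on the mat ."

# Reference-based sentence score (e.g. mt_pe_comet, mt_comet_*)
comet_da = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
mt_comet = comet_da.predict(
    [{"src": src_text, "mt": mt_text, "ref": pe_text}], batch_size=8, gpus=0
).scores[0]

# Reference-free quality estimation plus error spans (mt_xcomet_qe / _errors)
xcomet = load_from_checkpoint(download_model("Unbabel/XCOMET-XXL"))
out = xcomet.predict([{"src": src_text, "mt": mt_text}], batch_size=8, gpus=0)
mt_xcomet_qe = out.scores[0]
mt_xcomet_errors = out.metadata.error_spans[0]   # spans flagged for this segment
```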
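The `*_edit_time_filtered` columns discard long idle stretches: editing time is accumulated as the sum of gaps between consecutive logged actions, and any gap longer than five minutes is dropped. A minimal sketch; the flat list of action timestamps is a hypothetical simplification of the actual logs:

```python
PAUSE_CUTOFF_S = 5 * 60  # pauses longer than 5 minutes are ignored

def filtered_edit_time(action_timestamps: list[float]) -> float:
    """Sum of inter-action gaps (seconds), skipping gaps above the cutoff."""
    total = 0.0
    for prev, curr in zip(action_timestamps, action_timestamps[1:]):
        gap = curr - prev
        if gap <= PAUSE_CUTOFF_S:
            total += gap
    return total

# Three quick actions, a ten-minute pause, then two more actions -> 5.5s
print(filtered_edit_time([0.0, 2.5, 4.0, 604.0, 605.5]))
```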
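All `*_per_min` columns normalize a raw count (keystrokes, characters, or words) by the corresponding pause-filtered editing time. A minimal sketch; the guard against zero-duration segments is an assumption:

```python
def per_minute(count: int, edit_time_seconds: float) -> float:
    """Rate per minute over the (pause-filtered) editing time."""
    if edit_time_seconds <= 0:
        return 0.0
    return count * 60.0 / edit_time_seconds

# 120 keystrokes over a filtered editing time of 90s -> 80.0 keys/min
segment_keys_per_min = per_minute(120, 90.0)
# 18 words over the same 90s -> 12.0 words/min
segment_words_per_min = per_minute(18, 90.0)
```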
task/.gitattributes
CHANGED
@@ -1 +1,2 @@
 main/outputs/eng-nld/logs/main_eng-nld_unsupervised_t3_logs.csv filter=lfs diff=lfs merge=lfs -text
+main/processed_main.csv filter=lfs diff=lfs merge=lfs -text
task/main/processed_main.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c475e3e4d56f36dc43092d08c8ff7cd7a5f51dd89788afc46a82ef4a3a97273a
+size 18770435
task/pretask/processed_pretask.csv
ADDED
The diff for this file is too large to render.
See raw diff
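Both processed CSVs are tracked with Git LFS, so a plain `git clone` without LFS yields only pointer files like the one above. A minimal sketch of loading them via `huggingface_hub` instead; the `repo_id` placeholder stands for this dataset's Hub identifier, which is not shown in this diff:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

csv_path = hf_hub_download(
    repo_id="<dataset-repo-id>",              # placeholder, not in this diff
    filename="task/main/processed_main.csv",
    repo_type="dataset",
)
main_df = pd.read_csv(csv_path)               # ~18.8 MB per the LFS pointer
print(main_df.columns.tolist())
```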