gsarti committed
Commit ada5861
1 Parent(s): a12425c

Add t13 and readme draft
.gitignore ADDED
@@ -0,0 +1 @@
+ .DS_Store
README.md CHANGED
@@ -1,3 +1,194 @@
  ---
- license: apache-2.0
  ---
  ---
+ language:
+ - en
+ - it
+ - nl
+ license:
+ - apache-2.0
+ tags:
+ - machine-translation
+ - quality-estimation
+ - post-editing
+ - translation
+ - behavioral-data
+ language_creators:
+ - machine-generated
+ - expert-generated
+ annotations_creators:
+ - machine-generated
+ pretty_name: qe4pe
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - Unbabel/TowerEval-Data-v0.1
+ task_categories:
+ - translation
+ configs:
+ - config_name: main
+   data_files:
+   - split: train
+     path: task/main/processed_main.csv
+ - config_name: pretask
+   data_files:
+   - split: train
+     path: task/pretask/processed_pretask.csv
  ---
+
+ # Dataset Card for QE4PE
+
+ *For more details on QE4PE, see our [paper](TBD) and our [Github repository](https://github.com/gsarti/qe4pe).*
+
+ ## Dataset Description
+ - **Source:** [Github](https://github.com/gsarti/qe4pe)
+ - **Paper:** [Arxiv](TBD)
+ - **Point of Contact:** [Gabriele Sarti](mailto:gabriele.sarti996@gmail.com)
+
+ [Gabriele Sarti](https://gsarti.com) • [Vilém Zouhar](https://vilda.net/) • [Malvina Nissim](https://malvinanissim.github.io/) • [Grzegorz Chrupała](https://grzegorz.chrupala.me/) • [Ana Guerberof Arenas](https://scholar.google.com/citations?user=i6bqaTsAAAAJ) • [Arianna Bisazza](https://www.cs.rug.nl/~bisazza/)
+
+ <img src="TBD" alt="TBD" width="600"/>
+
+ >Abstract TBD
+
+ ### Dataset Summary
+
+ This dataset provides convenient access to the processed `main` and `pretask` splits of the QE4PE dataset. A sample of challenging documents extracted from the WMT23 evaluation data was machine-translated from English into Italian and Dutch using [NLLB 3.3B](https://huggingface.co/facebook/nllb-200-3.3B), and post-edited by 12 translators per direction across 4 highlighting modalities employing various word-level quality estimation (QE) strategies to flag potential errors during editing. During post-editing, behavioral data (keystrokes, pauses and editing times) were collected using the [GroTE](https://github.com/gsarti/grote) online platform. Additional details are provided in the [main task readme](./task/main/README.md) and in our paper.
+
+ We publicly release the granular editing logs alongside the processed dataset to foster new research on the usability of word-level QE strategies in modern post-editing workflows.
+
+ ### News 📢
+
+ **October 2024**: The QE4PE dataset is released on the HuggingFace Hub! 🎉
+
+ ### Repository Structure
+
+ The repository is organized as follows:
+
+ ```shell
+ qe4pe/
+ ├── questionnaires/   # Configs and results for pre- and post-task questionnaires for translators
+ ├── setup/
+ │   ├── highlights/   # Outputs of word-level QE strategies used to set up highlighted spans in the tasks
+ │   ├── processed/    # Intermediate outputs of the selection process for the main task
+ │   └── wmt23/        # Original collection of WMT23 sources and machine-translated outputs
+ └── task/
+     ├── example/      # Example folder with task structure
+     ├── main/         # Main task data, logs, outputs and guidelines
+     │   ├── ...
+     │   ├── processed_main.csv   # Processed main task data, corresponds to the `main` configuration
+     │   └── README.md            # Details about the main task
+     └── pretask/      # Pretask data, logs, outputs and guidelines
+         ├── ...
+         ├── processed_pretask.csv   # Processed pretask data, corresponds to the `pretask` configuration
+         └── README.md               # Details about the pretask
+ ```
+
+ ### Languages
+
+ The language data of QE4PE is in English (BCP-47 `en`), Italian (BCP-47 `it`) and Dutch (BCP-47 `nl`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The dataset contains two configurations, corresponding to the two tasks: `main` and `pretask`. `main` contains the full data collected during the main task and analyzed in our experiments. `pretask` contains the data collected in the initial verification phase, before the start of the main task.
+
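The two configurations map to the processed CSV files declared in the card metadata. A minimal sketch of accessing them, assuming the dataset is hosted on the Hub under the ID `gsarti/qe4pe` (an assumption based on the linked repository; adjust if the Hub ID differs):

```python
# Configuration names and the processed files backing them,
# as declared in the `configs` section of the card metadata.
CONFIG_FILES = {
    "main": "task/main/processed_main.csv",
    "pretask": "task/pretask/processed_pretask.csv",
}

def load_config(name: str):
    """Load one configuration with 🤗 Datasets (requires network access).

    Assumes the dataset lives on the Hub as `gsarti/qe4pe`.
    """
    if name not in CONFIG_FILES:
        raise ValueError(f"unknown configuration: {name!r}")
    from datasets import load_dataset  # third-party: pip install datasets
    return load_dataset("gsarti/qe4pe", name, split="train")
```

Both configurations expose a single `train` split, so `split="train"` retrieves all rows.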
+ ### Data Fields
+
+ A single entry in the dataframe represents a segment (~sentence) that was machine-translated and post-edited by a professional translator. The following fields are contained in the training set:
+
+ |Field |Description |
+ |------------------------|-------------------------------------------------------------------------------------------------------------------------------------|
+ | **Identification** | |
+ |`unit_id` | The full entry identifier. Format: `qe4pe-{task_id}-{src_lang}-{tgt_lang}-{doc_id}-{segment_in_doc_id}-{translator_main_task_id}`. |
+ |`wmt_id` | Identifier of the sentence in the original [WMT23](./data/setup/wmt23/wmttest2023.eng.jsonl) dataset. |
+ |`doc_id` | The index of the document in the current configuration of the QE4PE dataset containing the current segment. |
+ |`segment_in_doc_id` | The index of the segment inside the current document. |
+ |`segment_id` | The index of the segment in the current configuration (i.e. concatenating all segments from all documents in order). |
+ |`translator_pretask_id` | The identifier for the translator according to the `pretask` format before modality assignments: `tXX`. |
+ |`translator_main_id` | The identifier for the translator according to the `main` task format after modality assignments: `{highlight_modality}_tXX`. |
+ |`src_lang` | The source language of the segment. For QE4PE, this is always English (`eng`). |
+ |`tgt_lang` | The target language of the segment: either Italian (`ita`) or Dutch (`nld`). |
+ |`highlight_modality` | The highlighting modality used for the segment. Values: `no_highlight`, `oracle`, `supervised`, `unsupervised`. |
+ | **Text statistics** | |
+ |`src_num_chars` | Length of the source segment in number of characters. |
+ |`mt_num_chars` | Length of the machine-translated segment in number of characters. |
+ |`pe_num_chars` | Length of the post-edited segment in number of characters. |
+ |`src_num_words` | Length of the source segment in number of words. |
+ |`mt_num_words` | Length of the machine-translated segment in number of words. |
+ |`pe_num_words` | Length of the post-edited segment in number of words. |
+ | **Edits statistics** | |
+ |`num_words_insert` | Number of word-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_words_delete` | Number of word-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_words_substitute` | Number of word-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_words_hits` | Number of word-level post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_words_edits` | Total of all word-level edit types for the segment. |
+ |`wer` | Word Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_char_insert` | Number of character-level post-editing insertions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_char_delete` | Number of character-level post-editing deletions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_char_substitute` | Number of character-level post-editing substitutions computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_char_hits` | Number of character-level post-editing hits computed using [jiwer](https://github.com/jitsi/jiwer). |
+ |`num_char_edits` | Total of all character-level edit types for the segment. |
+ |`cer` | Character Error Rate score computed between `mt_text` and `pe_text` using [jiwer](https://github.com/jitsi/jiwer). |
+ | **Translation quality**| |
+ |`mt_pe_bleu` | BLEU score computed between `mt_text` and `pe_text` using [SacreBLEU](https://github.com/mjpost/sacrebleu) with default parameters. |
+ |`mt_pe_chrf` | chrF score computed between `mt_text` and `pe_text` using [SacreBLEU](https://github.com/mjpost/sacrebleu) with default parameters. |
+ |`mt_pe_comet` | COMET sentence-level score for the `mt_text` and `pe_text` using [COMET](https://github.com/Unbabel/COMET). |
+ |`mt_xcomet_xxl_qe` | XCOMET-XXL sentence-level quality estimation score for the `mt_text`. |
+ |`pe_xcomet_xxl_qe` | XCOMET-XXL sentence-level quality estimation score for the `pe_text`. |
+ | **Behavioral data** | |
+ |`doc_edits` | Total number of edits performed by the translator on the current document. Only the last edit outputs are considered valid. |
+ |`doc_edit_order` | Index corresponding to the current document edit order. If equal to `doc_id`, the document was edited in the given order. |
+ |`segment_edit_order` | Index corresponding to the current segment edit order (only the first `enter` action counts). If equal to `segment_in_doc_id`, the segment was edited in the given order. |
+ |`segment_edit_time` | Total editing time for the current segment in seconds (summed time between `enter`-`exit` blocks). |
+ |`segment_edit_time_filtered` | Total editing time for the current segment in seconds (pauses of >5m between logged actions ignored). |
+ |`doc_edit_time` | Total editing time for the current document in seconds (from `start` to `end`, no times ignored). |
+ |`doc_edit_time_filtered`| Total editing time for the current document in seconds (from `start` to `end`, pauses of >5m between logged actions ignored). |
+ |`num_enter_actions` | Number of `enter` actions (focus on textbox) performed by the translator on the current segment during post-editing. |
+ |`num_change_actions` | Number of `change` actions (~keystrokes) performed by the translator on the current segment during post-editing. |
+ |`clear_highlights` | If 1, the Clear Highlights button was pressed for this segment (always 0 for the `no_highlight` modality). |
+ |`kpm` | Keystrokes per minute computed for the current segment using `segment_edit_time_filtered`. |
+ |`wpm` | Words per minute computed for the current segment using `segment_edit_time_filtered`. |
+ |**Texts and annotations**| |
+ |`src_text` | The original source segment from WMT23 requiring translation. |
+ |`mt_text` | Output of the `NLLB-3.3B` model when translating `src_text` into `tgt_lang` (default config, 5 beams). |
+ |`pe_text` | Post-edited version of `mt_text` produced by a professional translator with `highlight_modality`. |
+ |`mt_pe_word_aligned` | Aligned visual representation of word-level edit operations (I = Insertion, D = Deletion, S = Substitution); replace `\\n` with `\n` to show the three aligned rows. |
+ |`mt_pe_char_aligned` | Aligned visual representation of character-level edit operations (I = Insertion, D = Deletion, S = Substitution); replace `\\n` with `\n` to show the three aligned rows. |
+ |`mt_text_highlighted` | Highlighted version of `mt_text` with potential errors according to the `highlight_modality`. |
+ |`highlights` | Dictionary of highlighted spans with error severity and position, matching the XCOMET format for word-level error annotations. |
+
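The word-level edit statistics above are computed with jiwer; the following self-contained sketch (a plain dynamic-programming reimplementation for illustration, not jiwer itself) shows how hits, substitutions, insertions, deletions and WER relate, taking `mt_text` as the reference:

```python
def word_edit_stats(reference: str, hypothesis: str) -> dict:
    """Count word-level edit operations between two whitespace-tokenized strings."""
    ref, hyp = reference.split(), hypothesis.split()
    n, m = len(ref), len(hyp)
    # cost[i][j]: minimum edits to turn ref[:i] into hyp[:j] (Levenshtein)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            cost[i][j] = min(sub, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    # Backtrack to count hits (H), substitutions (S), insertions (I), deletions (D)
    hits = subs = ins = dels = 0
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            if ref[i - 1] == hyp[j - 1]:
                hits += 1
            else:
                subs += 1
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            dels += 1
            i -= 1
        else:
            ins += 1
            j -= 1
    # WER = (S + D + I) / (H + S + D), i.e. total edits over reference length
    wer = (subs + dels + ins) / max(n, 1)
    return {"hits": hits, "substitutions": subs, "insertions": ins, "deletions": dels, "wer": wer}
```

The character-level statistics (`num_char_*`, `cer`) follow the same scheme with characters instead of words as units.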
+ ### Data Splits
+
+ |`config` | `split`| Num. entries                                    |
+ |--------:|-------:|-----------------------------------------------:|
+ |`main`   | `train`| 7776 (51 docs i.e. 324 sents x 24 translators) |
+ |`pretask`| `train`| 912 (6 docs i.e. 38 sents x 24 translators)    |
+
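The counts in the table follow from the task design (12 translators per direction, two directions, each post-editing every segment); a quick sanity check under that assumption:

```python
TRANSLATORS_PER_DIRECTION = 12
DIRECTIONS = 2  # eng-ita and eng-nld

def split_size(num_source_segments: int) -> int:
    # Every source segment is post-edited once by each translator of each direction.
    return num_source_segments * TRANSLATORS_PER_DIRECTION * DIRECTIONS

main_rows = split_size(324)    # 324 segments across 51 docs
pretask_rows = split_size(38)  # 38 segments across 6 docs
```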
+ #### Train Split
+
+ The `train` split contains the full set of triplets (or pairs, when translation from scratch is performed), annotated with the behavioral data produced during translation.
+
+ The following is an example of subject `oracle_t1` post-editing `doc1` for `ita`. The field `aligned_edit` is shown over three lines to provide a visual understanding of its contents.
+
+ ```json
+ TBD
+ ```
+
+ The text is provided as-is, without further preprocessing or tokenization.
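As noted for `mt_pe_word_aligned` and `mt_pe_char_aligned`, newlines inside the aligned fields are stored escaped; a minimal sketch of rendering them (the value and row labels below are hypothetical illustrations, not a real dataset entry):

```python
# Hypothetical aligned value: three rows (MT, edit operations, PE)
# stored on a single CSV line with escaped "\n" separators.
raw = "MT: the cat sat\\nED:         S  \\nPE: the cat sits"

# Unescape the separators, then split into the three aligned rows.
rows = raw.replace("\\n", "\n").split("\n")
for row in rows:
    print(row)
```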
+
+ ### Dataset Creation
+
+ The datasets were parsed from GroTE logs, inputs and outputs using the `qe4pe process_task_data` command. Refer to the [QE4PE Github repository](https://github.com/gsarti/qe4pe) for additional details. The overall structure and processing of the dataset were inspired by the [DivEMT dataset](https://huggingface.co/datasets/GroNLP/divemt).
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ For problems related to this 🤗 Datasets version, please contact me at [gabriele.sarti996@gmail.com](mailto:gabriele.sarti996@gmail.com).
+
+ ### Citation Information
+
+ ```bibtex
+ TODO
+ ```
questionnaires/pretask_results.csv CHANGED
@@ -23,4 +23,5 @@ time,translation_direction,main_task_modality,pretask_translator_id,main_task_tr
  233,eng-nld,no_highlight,t9,no_highlight_t3,nld,,eng-nld,,fulltime_freelancer,,C2,10,1,7.5,1,4,"like_B,like_C,like_D,like_E",,5,5,5,1,1,5,4,"I almost always make use of machine translation, but I wouldn't necessarily call what I do 'post-editing'. With DeepL, for example, you can have the MT rephrase any sentence in multiple ways, allowing you to search for the right word and the best construction. This, combined with online dictionaries (Van Dale), in-context translation websites (Linguee, Reverso), and general web searches, ensures the translator remains fully in control — far from being a 'slave to the machine'. These are just tools to aid the process."
  250,eng-nld,oracle,t10,oracle_t3,nld,,eng-nld,,fulltime_freelancer,,C2,10,1,3.5,0.6,4,"like_A,like_D,other","I don't like using MT because of its lack of cultural understanding. Unlike human translators, machines struggle to grasp cultural nuances, idioms, and expressions. ",5,5,5,4,3,5,5,"""One of the primary advantages of MTPE is its ability to enhance efficiency, facilitating swift translation production while significantly reducing my workload. By participating in the post-editing phase, I can focus on enhancing the quality of the text rather than initiating the translation process from the beginning. This approach streamlines the translation workflow, leading to greater productivity. On the other hand, achieving accuracy is one of the foremost challenges in machine translation, with errors frequently occurring in idiomatic expressions and cultural nuances. Additionally, the lack of cultural insight poses a significant hurdle, as language is deeply intertwined with culture, making it difficult for machines to fully understand idiomatic phrases and cultural contexts. Lastly, machine translation is often inadequate for handling specific text types, particularly those that contain specialized jargon or technical terminology. A major strength of AI translation lies in its relentless ability to learn and grow. Through ongoing analysis of training data, these systems refine their accuracy and remain attuned to the constantly shifting world of language."""
  275,eng-nld,oracle,t11,oracle_t2,nld,,other,Italian to Dutch,fulltime_freelancer,,C2,10,1,3.5,0.4,5,"dislike_A,dislike_B,dislike_C",,4,5,2,2,1,3,3,"Currently my work consists for the most part (75%) of Italian-Dutch translations in the technical sector/automotive industry. The MT projects for now always severely lack in quality, resulting me having to work more for lower rates. So for the moment I cannot be considered a big fan of post-editing machine translations, on the contrary, but I'm well aware that the evolution is inevitable and that it only can be improved through human contribution."
- 438,eng-nld,supervised,t12,supervised_t1,nld,,eng-nld,,fulltime_freelancer,,C2,10,1,7.5,0.6,4,"like_A,like_C,like_D",,5,3,1,1,1,3,5,"It makes the job easier, but I also worry that it may eventually replace me."
+ 438,eng-nld,supervised,t12,supervised_t1,nld,,eng-nld,,fulltime_freelancer,,C2,10,1,7.5,0.6,4,"like_A,like_C,like_D",,5,3,1,1,1,3,5,"It makes the job easier, but I also worry that it may eventually replace me."
+ 47,eng-nld,no_highlight,t13,no_highlight_t4,nld,,eng-nld,,fulltime_freelancer,,C1,7.5,1,7.5,0.6,4,"like_A,like_D",,5,5,4,4,3,4,4,"-"
task/pretask/README.md CHANGED
@@ -9,11 +9,11 @@ pretask/
  ├── inputs/
  │   ├── eng-ita/
  │   │   ├── pretask_eng-ita_doc1_input.txt
- │   │   ├── pretask_eng-ita_doc1_input.txt
+ │   │   ├── pretask_eng-ita_doc2_input.txt
  │   │   └── ...   # GroTE input files with tags and ||| source-target separator
  │   └── eng-nld/
  │       ├── pretask_eng-nld_doc1_input.txt
- │       ├── pretask_eng-nld_doc1_input.txt
+ │       ├── pretask_eng-nld_doc2_input.txt
  │       └── ...   # GroTE input files with tags and ||| source-target separator
  ├── outputs/
  │   ├── eng-ita/
@@ -27,7 +27,7 @@ pretask/
  │   │   ├── pretask_eng-nld_t1_logs.csv
  │   │   └── ...   # GroTE logs for every translator (e.g. t1)
  │   ├── example_eng-nld_doc1_t1_output.txt
- │   └── ...       # GroTE output files (one edited target per line)
+ │   └── ...       # GroTE output files (one post-edited segment per line)
  ├── doc_id_map.json                  # Source and doc name maps
  ├── main_task_assignments.json       # Translator assignments to main task modalities
  ├── pretask_eng-ita_guidelines.pdf   # Task guidelines for translators
@@ -90,6 +90,8 @@ The final assignments (also in `main_task_assignments.json` for machine-readable
  | t3  | 103m | unsupervised | unsupervised_t3 |
  | t10 | 152m | oracle       | oracle_t3       |

+ > `t13` was added for Dutch as a replacement for `t9`, who compromised part of the task by not following the guidelines. `t13` was assigned to the `no_highlight` modality under the alias `no_highlight_t4`. The pre-task editing time for `t13` was 19 minutes.
+
  ## Outputs

  Files in `outputs/eng-ita` and `outputs/eng-nld` contain post-edited outputs (one per line, matching inputs) using format:
task/pretask/outputs/eng-nld/logs/pretask_eng-nld_t13_logs.csv ADDED
The diff for this file is too large to render. See raw diff
 
task/pretask/outputs/eng-nld/pretask_eng-nld_doc1_t13_output.txt ADDED
@@ -0,0 +1,5 @@
+ De gehackte versie van Jedi Knight crashte omdat het een functie aan het einde van een vtable aanriep.
+ Het bleek dat het veronderstelde dat het aanroepen van IDirect3D::CreateViewport() een IDirect3DViewport3 zou retourneren, die extra methoden aan het einde heeft gekoppeld in vergelijking met een IDirect3DViewport, wat ik heb geïmplementeerd.
+ Voor mij is dit een vrij grote veronderstelling, omdat de viewport alleen wordt gemaakt met behulp van een Direct3D-object, en niet met een Direct3D3-object.
+ Nu begrijp ik dat in de praktijk IDirectXObject2 meestal een echte superset is van IDirectXObject, zonder veranderde functiehandtekeningen nieuwe methoden die alleen aan het einde zijn toegevoegd. Maar dit is niet helemaal waar; voor die gevallen is het van belang welke interface u gebruikt om het betreffende object te maken.
+ Hoe dan ook, aangezien het hier wel opgaat, moest ik om het probleem op te lossen mijn viewport-implementatie uitbreiden met de IDirect3DViewport3-methoden, zodat de aanroep naar de nieuwe geldig was.
task/pretask/outputs/eng-nld/pretask_eng-nld_doc2_t13_output.txt ADDED
@@ -0,0 +1,3 @@
+ Writing Wonders 5/11: Lacht of huilt je MC meer?
+ Abe houdt zijn verdriet verborgen en lacht gemakkelijk, terwijl Tom niet bang is om te huilen, maar minder vermaakt is door onbeleefde humor. Jan zal je pijn doen voordat ze je laat zien dat ze gekwetst is en lacht het meest wanneer dingen ondersteboven staan. Mio weet dat verdriet de constante metgezel van het leven is.
+ Na 4000 jaar van ellende bevindt Yl zich in een plaats van liefde en heeft ze enorme gevoelens waar ze niet mee om kan gaan dus is er veel van beide.
task/pretask/outputs/eng-nld/pretask_eng-nld_doc3_t13_output.txt ADDED
@@ -0,0 +1,8 @@
+ Sarcopenie is een leeftijdsgerelateerd syndroom dat wordt gekenmerkt door een verlies van spiermassa en kracht.
+ Als gevolg hiervan wordt de onafhankelijkheid van ouderen verminderd en neemt het aantal ziekenhuisopnames en sterfte toe.
+ Het begin van sarcopenie begint vaak op middelbare leeftijd als gevolg van een onevenwichtige voeding of ondervoeding in combinatie met een gebrek aan lichamelijke activiteit.
+ Dit effect wordt versterkt door gelijktijdige ziekten zoals obesitas of metabole ziekten, waaronder diabetes mellitus.
+ Met effectieve preventieve diagnostische procedures en specifieke therapeutische behandeling van sarcopenie kunnen de negatieve effecten op het individu worden verminderd en de negatieve gevolgen voor de gezondheid en de sociaaleconomische effecten worden voorkomen.
+ Hiervoor zijn verschillende diagnostische opties beschikbaar.
+ Naast basis-klinische methoden zoals het meten van spierkracht, kan sarcopenie ook worden gedetecteerd met behulp van beeldtechnieken zoals dubbele röntgenabsorptiometrie (DXA), computertomografie (CT), magnetische resonantiebeeldvorming (MRI) en sonografie.
+ DXA biedt als een eenvoudige en kosteneffectieve methode een lage doseringsoptie voor het beoordelen van de lichaamssamenstelling.
task/pretask/outputs/eng-nld/pretask_eng-nld_doc4_t13_output.txt ADDED
@@ -0,0 +1,7 @@
+ Het doel van de studie was het effect van verminderde geïnjecteerde [18F]FDG-activiteitsniveaus op de kwantitatieve en diagnostische nauwkeurigheid van PET-beelden van patiënten met niet-lesionele epilepsie (NLE) te evalueren. Negen gezonde vrijwilligers en negen patiënten met NLE ondergingen een dynamische lijstmodus (LM) -scans op een volledig geïntegreerd PET/MRI-systeem van 60 minuten.
+ De geïnjecteerde FDG-activiteitsniveaus werden vrijwel verminderd door willekeurig tellingen van de laatste 10 minuten van de LM-gegevens te verwijderen, om de volgende activiteitsniveaus te simuleren: 50%, 35%, 20% en 10% van de oorspronkelijke activiteit.
+ Vier beeldreconstructies werden geëvalueerd: standaard OSEM, OSEM met resolutieherstel (PSF), de A-MAP en de Asymmetrical Bowsher (AsymBowsher) algoritmen.
+ Voor de A-MAP-algoritmen werden twee gewichten geselecteerd (laag en hoog).
+ Het beeldcontrast en de geluidsniveaus werden geëvalueerd voor alle proefpersonen, terwijl de verhouding letsel/achtergrond (L/B) alleen werd geëvalueerd voor patiënten.
+ Beelden van patiënten werden beoordeeld door een arts in nucleaire geneeskunde op een schaal van 5 punten om de klinische indruk in verband met de verschillende reconstructiealgoritmen te beoordelen.
+ Het beeldcontrast en de L/B-verhouding die alle vier reconstructiealgoritmen kenmerken, waren vergelijkbaar, behalve voor reconstructies op basis van slechts 10% van de totale tellingen.
task/pretask/outputs/eng-nld/pretask_eng-nld_doc5_t13_output.txt ADDED
@@ -0,0 +1,7 @@
+ Ik zeg niet dat ze zich moeten schamen dat ze in meisjes zijn veranderd of dat ik stereotiep ben tegenover een persoon die een meisje is, maar ik zeg dat ze het niet leuk zullen vinden om het te doen, omdat ze mensen zijn met hun eigen keuzes, en door niet akkoord te gaan om een meisje te zijn, maakt het ze niet anti-vrouwen.
+ Hoeveel jongens om je heen zouden bereid zijn om in Disney-prinsessen te veranderen of om dat kontspel te spelen?
+ Misschien wennen ze er na verloop van tijd aan, maar na hoe lang?
+ En misschien weten ze dat ze dit allemaal moeten doen om de fans te vermaken.
+ Maar door dit te weten maakt niets beter voor hen totdat ze er gelukkig en oprecht mee instemmen.
+ Ik heb soms medelijden met hen als ze gevraagd worden om dit alles te doen.
+ Ik zou het niet leuk vinden als je van mij een man probeerde te maken en wilde dat ik voor zoveel mensen zou dansen, ook al ben ik een tomboy.
task/pretask/outputs/eng-nld/pretask_eng-nld_doc6_t13_output.txt ADDED
@@ -0,0 +1,8 @@
+ Ik vind het belangrijk hoe ik mensen beïnvloed, maar ik probeer het niet te laten merken.
+ Ik vind het moeilijk om deel te nemen aan gesprekken met grote groepen mensen, wat verband houdt met ADHD.
+ Soms kan mijn gevoeligheid ervoor zorgen dat ik afstandelijk ben tegenover mensen die ik niet ken.
+ Ik zal proberen te doen alsof het me niet kan schelen als iemand me niet mag maar dronken mensen zullen op de vloer vallen en ze smeken om me te mogen.
+ Dit zijn allemaal ADHD-problemen, vanwege de afwijzingsgevoelige dysforie.
+ Ik ben een goede luisteraar. Veel mensen zien mij als heel schattig of onschuldig.
+ Ik kan soms een heel, heel, heel flirterig karakter hebben.
+ Ik herinner me dat de moeder van mijn vriendin op de middelbare school zei dat ik de 'persoonlijkheid' had, dus ik zou waarschijnlijk de eerste zijn die een vriendje zou krijgen.