---
license: cc-by-4.0
task_categories:
- token-classification
language:
- en
tags:
- biology
- medical
- NER
- NEL
size_categories:
- 10K<n<100K
pretty_name: SODA-NLP
---

# SourceData Dataset

> The largest annotated biomedical corpus for machine learning and AI in the publishing context.

SourceData is the largest annotated biomedical dataset for NER and NEL.
It is unique in its focus on the core of scientific evidence:
figure captions. It is also unique in its real-world configuration: rather than
presenting isolated sentences taken out of their more general context, it offers fully
annotated figure captions that can be further enriched in context using full text,
abstracts, or titles. The goal is to extract the nature of the experiments described in them.
SourceData is also unique in labelling the causal relationships
between the biological entities present in experiments, assigning experimental roles
to each biomedical entity present in the corpus.

SourceData consistently annotates nine entity types (small molecules, gene
products, subcellular components, cell lines, cell types, tissues and organs,
species, diseases, and experimental assays). It is
the first dataset to annotate experimental assays
and the roles played in them by the biological entities.
Each entity is linked to its corresponding ontology, allowing
for entity disambiguation and NEL.

## Cite our work

```latex
@misc {embo_2023,
	author       = { Abreu-Vicente, J. and Lemberger, T. },
	title        = { The SourceData dataset},
	year         = 2023,
	url          = { https://huggingface.co/datasets/EMBO/SourceData },
	doi          = { 10.57967/hf/0495 },
	publisher    = { Hugging Face }
}

@article {Liechti2017,
     author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
     title = {SourceData - a semantic platform for curating and searching figures},
     year = {2017},
     volume = {14},
     number = {11},
     doi = {10.1038/nmeth.4471},
     URL = {https://doi.org/10.1038/nmeth.4471},
     eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
     journal = {Nature Methods}
}
```


## Dataset usage

The dataset uses semantic versioning.
Specifying the version at load time gives access to the corresponding release.
The code below loads the latest available version of the dataset.
See the `Changelog` section below for the changes between versions.

```python
from datasets import load_dataset

# Load NER
ds = load_dataset("EMBO/SourceData", "NER", version="1.0.2")
# Load PANELIZATION
ds = load_dataset("EMBO/SourceData", "PANELIZATION", version="1.0.2")
# Load GENEPROD ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_GP", version="1.0.2")
# Load SMALL MOLECULE ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_SM", version="1.0.2")
# Load MULTI ROLES
ds = load_dataset("EMBO/SourceData", "ROLES_MULTI", version="1.0.2")
```
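
The same pattern should apply to the earlier releases listed in the `Changelog`; for example, pinning the first public version (a sketch, assuming older versions remain downloadable):

```python
from datasets import load_dataset

# Pin an earlier release; version strings follow the Changelog at the bottom of this card.
ds_v1 = load_dataset("EMBO/SourceData", "NER", version="1.0.0")
```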
## Dataset Description
- **Homepage:** https://sourcedata.embo.org
- **Repository:** https://github.com/source-data/soda-data
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** thomas.lemberger@embo.org, jorge.abreu@embo.org

Note that we also offer the `XML`-serialized dataset, which includes all the data needed to perform NEL in SourceData.
For reproducibility, we provide `split_vx.y.z.json` files for each major version of the dataset to generate the
train, validation, and test splits.

### Supported Tasks and Leaderboards
Tags are provided as [IOB2-style tags](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)).
`PANELIZATION`: figure captions (or figure legends) are usually composed of segments that each refer to one of several 'panels' of the full figure. Panels tend to represent results obtained with a coherent method and depict data points that can be meaningfully compared to each other. `PANELIZATION` provides the start (`B-PANEL_START`) of these segments and allows training for recognition of the boundary between consecutive panel legends.
`NER`: biological and chemical entities are labeled. Specifically, the following entities are tagged:
- `SMALL_MOLECULE`: small molecules
- `GENEPROD`: gene products (genes and proteins)
- `SUBCELLULAR`: subcellular components
- `CELL_LINE`: cell lines
- `CELL_TYPE`: cell types
- `TISSUE`: tissues and organs
- `ORGANISM`: species
- `DISEASE`: diseases (see limitations)
- `EXP_ASSAY`: experimental assays
`ROLES`: the role of entities with regard to the causal hypotheses tested in the reported results. The tags are:
- `CONTROLLED_VAR`: entities that are associated with experimental variables and that are subjected to controlled and targeted perturbations.
- `MEASURED_VAR`: entities that are associated with the measured variables and are the object of the measurements.

Experimental roles are provided separately for `GENEPROD` (`ROLES_GP`) and `SMALL_MOLECULE` (`ROLES_SM`) entities; the `ROLES_MULTI`
configuration covers both at the same time. An illustrative sketch of the IOB2 tagging scheme is shown below.
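
As a purely illustrative, hand-made sketch (the sentence and tags below are invented for illustration and are not taken from the dataset), the IOB2 scheme aligns exactly one tag with each token:

```python
# Hand-made illustration of the IOB2 tagging scheme used by the NER task.
# This example is invented for illustration only and does not come from the corpus.
words    = ["Western", "blot", "of", "CDC42", "in", "HeLa", "cells"]
ner_tags = ["B-EXP_ASSAY", "I-EXP_ASSAY", "O", "B-GENEPROD", "O", "B-CELL_LINE", "O"]

for word, tag in zip(words, ner_tags):
    print(f"{word}\t{tag}")
```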

### Languages
The text in the dataset is English.

## Dataset Structure

### Data Instances

### Data Fields

- `words`: `list` of `strings`; the text tokenized into words.
- `panel_id`: ID of the panel to which the example belongs in the SourceData database.
- `label_ids`:
  - `entity_types`: `list` of `strings` for the IOB2 tags for entity type; possible value in `["O", "I-SMALL_MOLECULE",  "B-SMALL_MOLECULE",  "I-GENEPROD", "B-GENEPROD", "I-SUBCELLULAR", "B-SUBCELLULAR", "I-CELL_LINE", "B-CELL_LINE", "I-CELL_TYPE", "B-CELL_TYPE", "I-TISSUE", "B-TISSUE", "I-ORGANISM", "B-ORGANISM", "I-EXP_ASSAY", "B-EXP_ASSAY"]`
  - `roles`: `list` of `strings` for the IOB2 tags for experimental roles; values in `["O", "I-CONTROLLED_VAR", "B-CONTROLLED_VAR", "I-MEASURED_VAR", "B-MEASURED_VAR"]`
  - `panel_start`: `list` of `strings` for IOB2 tags `["O", "B-PANEL_START"]`  
  - `multi roles`: two label sets are provided. `labels` uses the same tags as `roles`; `is_category` tags whether the entity is a `GENEPROD` or a `SMALL_MOLECULE`.
### Data Splits

* NER and ROLES
```
  DatasetDict({
      train: Dataset({
          features: ['words', 'labels', 'tag_mask', 'text'],
          num_rows: 55250
      })
      test: Dataset({
          features: ['words', 'labels', 'tag_mask', 'text'],
          num_rows: 6844
      })
      validation: Dataset({
          features: ['words', 'labels', 'tag_mask', 'text'],
          num_rows: 7951
      })
  })
```
* PANELIZATION
```
  DatasetDict({
      train: Dataset({
          features: ['words', 'labels', 'tag_mask'],
          num_rows: 14655
      })
      test: Dataset({
          features: ['words', 'labels', 'tag_mask'],
          num_rows: 1871
      })
      validation: Dataset({
          features: ['words', 'labels', 'tag_mask'],
          num_rows: 2088
      })
  })
```
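
A minimal sketch for recovering human-readable IOB2 tags from a loaded split, assuming the `labels` column is stored as a `Sequence` of `ClassLabel` features (as is usual for 🤗 `datasets` token-classification sets):

```python
from datasets import load_dataset

# Load the NER configuration (see "Dataset usage" above).
ds = load_dataset("EMBO/SourceData", "NER", version="1.0.2")

# Assumption: 'labels' is a Sequence of ClassLabel, so the feature object
# can map integer ids back to IOB2 tag strings.
label_names = ds["train"].features["labels"].feature.names

example = ds["train"][0]
for word, label_id in zip(example["words"], example["labels"]):
    print(f"{word}\t{label_names[label_id]}")
```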

## Dataset Creation

### Curation Rationale

The dataset was built to train models for the automatic extraction of a knowledge graph from the scientific literature. The dataset can be used to train models for text segmentation, named entity recognition, and semantic role labeling.

### Source Data

#### Initial Data Collection and Normalization

Figure legends were annotated according to the SourceData framework described in Liechti et al. 2017 (Nature Methods, 2017, https://doi.org/10.1038/nmeth.4471). The curation tool at https://curation.sourcedata.io was used to segment figure legends into panel legends, tag entities, assign experimental roles, and normalize with standard identifiers (not available in this dataset). The source data was downloaded from the SourceData API (https://api.sourcedata.io) on 21 Jan 2021.

#### Who are the source language producers?

The examples are extracted from figure legends of scientific papers in cell and molecular biology.

### Annotations

#### Annotation process

The annotations were produced manually by expert curators from the SourceData project (https://sourcedata.embo.org).

#### Who are the annotators?

Curators of the SourceData project.

### Personal and Sensitive Information

None known.

## Considerations for Using the Data

### Social Impact of Dataset

Not applicable.

### Discussion of Biases

The examples are heavily biased towards cell and molecular biology and are enriched in examples from papers published in EMBO Press journals (https://embopress.org).

Disease annotations were added to the dataset only recently. They are few in number and are not consistently tagged throughout the entire dataset.
We recommend using the disease annotations only after filtering for the examples that contain them.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Thomas Lemberger, EMBO.
Jorge Abreu Vicente, EMBO.

### Licensing Information

CC BY 4.0

### Citation Information

We are currently working on a paper to present the dataset. It is expected to be ready by spring 2023. In the meantime, the following paper should be cited.

```latex
  @article {Liechti2017,
  	author = {Liechti, Robin and George, Nancy and Götz, Lou and El-Gebali, Sara and Chasapi, Anastasia and Crespo, Isaac and Xenarios, Ioannis and Lemberger, Thomas},
  	title = {SourceData - a semantic platform for curating and searching figures},
  	year = {2017},
    volume = {14},
    number = {11},
  	doi = {10.1038/nmeth.4471},
  	URL = {https://doi.org/10.1038/nmeth.4471},
  	eprint = {https://www.biorxiv.org/content/early/2016/06/20/058529.full.pdf},
  	journal = {Nature Methods}
  }
```


### Contributions

Thanks to [@tlemberger](https://github.com/tlemberger) and [@drAbreu](https://github.com/drAbreu) for adding this dataset.

## Changelog

* **v2.0.2** - (Planned for Sept 2023) Data curated until 20.09.2023. This version will also include the patch for multi-word generic terms.
* **v1.0.2** - Modification of the generic-term patch in v1.0.1 to include generic terms of more than one word.
* **v1.0.1** - Added a first patch of generic terms. Terms such as cells, fluorescence, or animals were originally tagged, but in this version they are removed.
* **v1.0.0** - First publicly available version of the dataset. Data curated until March 2023.