---
license: cc-by-4.0
---
# Dataset Card for Top Quark Tagging
## Table of Contents
- [Dataset Card for Top Quark Tagging](#dataset-card-for-top-quark-tagging)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://zenodo.org/record/2603256
- **Paper:** https://arxiv.org/abs/1902.09914
- **Point of Contact:** [Gregor Kasieczka](mailto:gregor.kasieczka@uni-hamburg.de)
### Dataset Summary
Top Quark Tagging is a dataset of Monte Carlo simulated events produced by proton-proton collisions at the Large Hadron Collider. The top-quark signal and mixed quark-gluon background jets are generated with Pythia8 using its default tune at a center-of-mass energy of 14 TeV, ignoring multiple interactions and pile-up. The four-momenta of the leading 200 jet constituents are stored, with zero-padding for jets that have fewer than 200 constituents.
### Supported Tasks and Leaderboards
- `tabular-classification`: The dataset can be used to train a model for tabular binary classification, i.e. predicting whether a jet originates from a top-quark signal or from the quark-gluon background. Success on this task is typically measured by achieving a *high* [accuracy](https://huggingface.co/metrics/accuracy) and ROC AUC score; a minimal evaluation sketch is shown below.
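The following is a minimal sketch of how accuracy and ROC AUC could be computed with scikit-learn; the labels and scores are toy values for illustration only:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

# Toy labels (is_signal_new) and predicted signal probabilities, for illustration only
y_true = np.array([0, 1, 1, 0, 1])
y_score = np.array([0.2, 0.9, 0.7, 0.4, 0.6])

# Accuracy uses hard predictions (thresholded at 0.5), ROC AUC uses the raw scores
accuracy = accuracy_score(y_true, (y_score > 0.5).astype(int))
auc = roc_auc_score(y_true, y_score)
print(f"accuracy = {accuracy:.3f}, ROC AUC = {auc:.3f}")
```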
## Dataset Structure
### Data Instances
Each row in the dataset contains the four-momenta of the leading 200 jet constituents, sorted by $p_T$. For jets with fewer than 200 constituents, zero-padding is applied.
```
{'E_0': 474.0711364746094,
'PX_0': -250.34703063964844,
'PY_0': -223.65196228027344,
'PZ_0': -334.73809814453125,
'E_1': 103.23623657226562,
'PX_1': -48.8662223815918,
'PY_1': -56.790775299072266,
'PZ_1': -71.0254898071289,
...
'E_199': 0.0,
'PX_199': 0.0,
'PY_199': 0.0,
'PZ_199': 0.0,
'truthE': 0.0,
'truthPX': 0.0,
'truthPY': 0.0,
'truthPZ': 0.0,
'ttv': 0,
'is_signal_new': 0}
```
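For analysis it is often convenient to reshape the flat columns back into a constituent array. The sketch below assumes `example` is a single row loaded as a Python dictionary with the fields shown above; zero-padded constituents are dropped before computing the invariant mass of the jet:

```python
import numpy as np

def jet_mass(example: dict, n_constituents: int = 200) -> float:
    """Reshape the flat E_i/PX_i/PY_i/PZ_i columns into an (n_constituents, 4)
    array and return the invariant mass of the summed constituents."""
    p4 = np.array(
        [
            [example[f"E_{i}"], example[f"PX_{i}"], example[f"PY_{i}"], example[f"PZ_{i}"]]
            for i in range(n_constituents)
        ]
    )
    p4 = p4[p4[:, 0] > 0]  # drop zero-padded constituents (E == 0)
    E, px, py, pz = p4.sum(axis=0)
    # Invariant mass: m^2 = E^2 - |p|^2 (clipped at zero against rounding noise)
    return float(np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0)))
```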
### Data Fields
- `E_i`, `PX_i`, `PY_i`, `PZ_i` (for `i` = 0 … 199): `float64` energy and Cartesian momentum components (in GeV) of the *i*-th jet constituent, ordered by decreasing $p_T$ and zero-padded when the jet has fewer than 200 constituents. These 800 columns are the inputs for the classification task.
- `truthE`, `truthPX`, `truthPY`, `truthPZ`: `float64` four-momentum of the truth-level top quark; zero for background jets.
- `ttv`: `int64` flag recording the train/test/validation assignment of the event in the original Zenodo release.
- `is_signal_new`: `int64` target label, `1` for a top-quark signal jet and `0` for a quark-gluon background jet.
### Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| | train | validation | test |
|-------------------------|------:|-----------:|-----:|
| Input Sentences | | | |
| Average Sentence Length | | | |
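Assuming the dataset is hosted on the Hugging Face Hub, the splits could be loaded with 🤗 Datasets; the repository id below is a placeholder, not the actual path:

```python
from datasets import load_dataset

# "user/top_quark_tagging" is a hypothetical repository id used for illustration;
# replace it with the actual path of this dataset on the Hub.
dataset = load_dataset("user/top_quark_tagging")

train, valid, test = dataset["train"], dataset["validation"], dataset["test"]
print(train.num_rows, valid.num_rows, test.num_rows)
```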
## Dataset Creation
### Curation Rationale
The dataset was released as a common reference sample for developing and comparing machine learning based top taggers, so that different architectures can be trained and benchmarked on identical signal and background events.
### Source Data
The data are Monte Carlo simulations rather than recorded collisions: top-quark signal and quark-gluon background jets are generated with Pythia8 at a center-of-mass energy of 14 TeV, ignoring multiple interactions and pile-up, and the four-momenta of the leading 200 constituents of each jet are stored. Full details of the simulation chain are given in the [accompanying paper](https://arxiv.org/abs/1902.09914).
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists entirely of simulated particle-physics collision events and contains no personal or user-generated data, so its direct societal impact is limited. Its main benefit is as an open benchmark that lowers the barrier to developing and reproducing machine learning methods in high-energy physics.
### Other Known Limitations
The events are produced with a single simulation chain (Pythia8 with its default tune) and ignore multiple interactions and pile-up, so models trained on this dataset may not transfer directly to real detector data or to samples generated under different conditions.
## Additional Information
### Dataset Curators
The dataset was curated by Gregor Kasieczka, Tilman Plehn, Jennifer Thompson, and Michael Russel (see the citation below).
### Licensing Information
The dataset is released under the [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) (CC BY 4.0) license.
### Citation Information
```
@dataset{kasieczka_gregor_2019_2603256,
author = {Kasieczka, Gregor and
Plehn, Tilman and
Thompson, Jennifer and
Russel, Michael},
title = {Top Quark Tagging Reference Dataset},
month = mar,
year = 2019,
publisher = {Zenodo},
version = {v0 (2018\_03\_27)},
doi = {10.5281/zenodo.2603256},
url = {https://doi.org/10.5281/zenodo.2603256}
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.