Merge branch 'main' of hf.co:datasets/philipphager/baidu-ultr_tencent-mlm-ctr

Files changed:
- README.md +102 -0
- baidu-ultr_tencent-mlm-ctr.py +221 -0

README.md CHANGED
---
license: cc-by-nc-4.0
viewer: false
---

# Baidu ULTR Dataset - Tencent BERT-12l-12h

Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank](https://arxiv.org/abs/2207.03051) dataset.
This dataset uses the pretrained [BERT cross-encoder (Bert_Layer12_Head12) from Tencent](https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias), published as part of the WSDM Cup 2023, to compute query-document vectors (768 dims).

## Setup
1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. Optionally, you might need to install a [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) if you cannot install `pyarrow >= 14.0.1`
4. You can now use the dataset as described below.

## Load train / test click dataset:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
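
As a quick sanity check, a single query can be inspected directly; a minimal sketch (field names are documented in the feature tables below, and the shapes assume the `torch` format set above):

```Python
# Each dataset row is one query; with set_format("torch"), per-document
# columns come back as tensors of length n (number of documents for that query).
sample = dataset[0]
print(sample["query_md5"])                       # MD5 hash of the query text
print(sample["n"])                               # number of documents
print(sample["query_document_embedding"].shape)  # (n, 768), dtype float16
print(sample["click"].shape)                     # (n,)
```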

## Load expert annotations:
```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Available features
Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset
| name                          | dtype           | description |
|-------------------------------|-----------------|-------------|
| query_id                      | string          | Baidu query_id |
| query_md5                     | string          | MD5 hash of query text |
| url_md5                       | List[string]    | MD5 hash of document url, most reliable document identifier |
| text_md5                      | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding      | Tensor[float16] | BERT CLS token |
| click                         | Tensor[int32]   | Click / no click on a document |
| n                             | int32           | Number of documents for the current query, useful for padding |
| position                      | Tensor[int32]   | Position in ranking (does not always match original item position) |
| media_type                    | Tensor[int32]   | Document type (label encoding recommended as ids do not occupy a continuous integer range) |
| displayed_time                | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height                   | Tensor[int32]   | Pixel height of a document on screen |
| slipoff_count_after_click     | Tensor[int32]   | Number of times a document was scrolled off screen after previously clicking on it |
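
For instance, since the raw `media_type` ids are sparse, you might re-encode them to a dense range before feeding them to an embedding layer; a minimal sketch (the `id_to_index` mapping and `encode_media_type` helper are our own, not part of the dataset):

```Python
import torch

# Collect the distinct raw media_type ids over the split (this materializes
# the whole column, which is fine for a sketch) and map them to 0..K-1.
all_ids = torch.cat([sample["media_type"] for sample in dataset])
id_to_index = {int(raw): idx for idx, raw in enumerate(torch.unique(all_ids).tolist())}

def encode_media_type(media_type: torch.Tensor) -> torch.Tensor:
    # Replace each sparse raw id with its dense index.
    return torch.tensor([id_to_index[int(raw)] for raw in media_type])
```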

### Expert annotation dataset
| name                          | dtype           | description |
|-------------------------------|-----------------|-------------|
| query_id                      | string          | Baidu query_id |
| query_md5                     | string          | MD5 hash of query text |
| text_md5                      | List[string]    | MD5 hash of document title and abstract |
| query_document_embedding      | Tensor[float16] | BERT CLS token |
| label                         | Tensor[int32]   | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n                             | int32           | Number of documents for the current query, useful for padding |
| frequency_bucket              | int32           | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |

## Example PyTorch collate function
Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
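
A collate function for the annotation config can follow the same pattern; a minimal sketch reusing the imports above (the `collate_annotations` name and the selected features are our own, not part of the dataset):

```Python
def collate_annotations(samples: List):
    batch = defaultdict(list)

    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["label"].append(sample["label"])
        batch["n"].append(sample["n"])

    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "label": pad_sequence(batch["label"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


# Iterating the loader yields padded batches, e.g.
# batch["query_document_embedding"].shape == (batch_size, max_docs, 768).
loader = DataLoader(dataset, collate_fn=collate_annotations, batch_size=16)
```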

baidu-ultr_tencent-mlm-ctr.py ADDED
```Python
from enum import Enum
from typing import List

import datasets
import pandas as pd

from datasets import Features, Value, Array2D, Sequence, SplitGenerator, Split


_CITATION = """\
@InProceedings{huggingface:dataset,
  title = {philipphager/baidu-ultr_tencent-mlm-ctr},
  author = {Philipp Hager and Romain Deffayet},
  year = {2023}
}
"""

_DESCRIPTION = """\
Query-document vectors and clicks for a subset of the Baidu Unbiased Learning to Rank
dataset: https://arxiv.org/abs/2207.03051

This dataset uses the pretrained BERT cross-encoder (Bert_Layer12_Head12) from Tencent published
as part of the WSDM cup 2023 to compute query-document vectors (768 dims):
https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias

We also link the model checkpoint under `model/`.
"""

_HOMEPAGE = "https://huggingface.co/datasets/philipphager/baidu-ultr_tencent-mlm-ctr/"
_LICENSE = "cc-by-nc-4.0"
_VERSION = "0.1.0"


class Config(str, Enum):
    ANNOTATIONS = "annotations"
    CLICKS = "clicks"


class BaiduUltrBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version(_VERSION)
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name=Config.CLICKS,
            version=VERSION,
            description="Load train/val/test clicks from the Baidu ULTR dataset",
        ),
        datasets.BuilderConfig(
            name=Config.ANNOTATIONS,
            version=VERSION,
            description="Load expert annotations from the Baidu ULTR dataset",
        ),
    ]

    CLICK_FEATURES = Features(
        {
            "query_id": Value("string"),
            "query_md5": Value("string"),
            "url_md5": Sequence(Value("string")),
            "text_md5": Sequence(Value("string")),
            "query_document_embedding": Array2D((None, 768), "float16"),
            "click": Sequence(Value("int32")),
            "n": Value("int32"),
            "position": Sequence(Value("int32")),
            "media_type": Sequence(Value("int32")),
            "displayed_time": Sequence(Value("float32")),
            "serp_height": Sequence(Value("int32")),
            "slipoff_count_after_click": Sequence(Value("int32")),
        }
    )

    ANNOTATION_FEATURES = Features(
        {
            "query_id": Value("string"),
            "query_md5": Value("string"),
            # text_md5 is aggregated per query in _generate_examples (one hash
            # per document), so it must be a Sequence, not a single Value.
            "text_md5": Sequence(Value("string")),
            "query_document_embedding": Array2D((None, 768), "float16"),
            "label": Sequence(Value("int32")),
            "n": Value("int32"),
            "frequency_bucket": Value("int32"),
        }
    )

    DEFAULT_CONFIG_NAME = Config.CLICKS

    def _info(self):
        if self.config.name == Config.CLICKS:
            features = self.CLICK_FEATURES
        elif self.config.name == Config.ANNOTATIONS:
            features = self.ANNOTATION_FEATURES
        else:
            raise ValueError(
                f"Config {self.config.name} must be in ['clicks', 'annotations']"
            )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        if self.config.name == Config.CLICKS:
            train_files = self.download_clicks(dl_manager, parts=[1, 2, 3])
            test_files = self.download_clicks(dl_manager, parts=[0])

            query_columns = [
                "query_id",
                "query_md5",
            ]

            agg_columns = [
                # query_md5 is a per-query value already captured via
                # query_columns above; listing it here would aggregate it into
                # a list and clash with its scalar Value("string") feature.
                "url_md5",
                "text_md5",
                "position",
                "click",
                "query_document_embedding",
                "media_type",
                "displayed_time",
                "serp_height",
                "slipoff_count_after_click",
            ]

            return [
                SplitGenerator(
                    name=Split.TRAIN,
                    gen_kwargs={
                        "files": train_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                ),
                SplitGenerator(
                    name=Split.TEST,
                    gen_kwargs={
                        "files": test_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                ),
            ]
        elif self.config.name == Config.ANNOTATIONS:
            test_files = dl_manager.download(["parts/validation.feather"])
            query_columns = [
                "query_id",
                "query_md5",
                "frequency_bucket",
            ]
            agg_columns = [
                "text_md5",
                "label",
                "query_document_embedding",
            ]

            return [
                SplitGenerator(
                    name=Split.TEST,
                    gen_kwargs={
                        "files": test_files,
                        "query_columns": query_columns,
                        "agg_columns": agg_columns,
                    },
                )
            ]
        else:
            raise ValueError("Config name must be in ['clicks', 'annotations']")

    def download_clicks(self, dl_manager, parts: List[int], splits_per_part: int = 10):
        urls = [
            f"parts/part-{p}_split-{s}.feather"
            for p in parts
            for s in range(splits_per_part)
        ]

        return dl_manager.download(urls)

    def _generate_examples(
        self,
        files: List[str],
        query_columns: List[str],
        agg_columns: List[str],
    ):
        """
        Reads dataset partitions and aggregates document features per query.
        :param files: List of .feather files to load from disk.
        :param query_columns: Columns with one value per query. E.g., query_id,
            frequency bucket, etc.
        :param agg_columns: Columns with one value per document that should be
            aggregated per query. E.g., click, position, query_document_embeddings, etc.
        """
        for file in files:
            df = pd.read_feather(file)
            current_query_id = None
            sample_key = None
            sample = None

            for i in range(len(df)):
                row = df.iloc[i]

                # Rows are sorted by query; when the query_id changes, emit the
                # previous sample and start a new one.
                if current_query_id != row["query_id"]:
                    if current_query_id is not None:
                        yield sample_key, sample

                    current_query_id = row["query_id"]
                    sample_key = f"{file}-{current_query_id}"
                    sample = {"n": 0}

                    for column in query_columns:
                        sample[column] = row[column]
                    for column in agg_columns:
                        sample[column] = []

                for column in agg_columns:
                    sample[column].append(row[column])

                sample["n"] += 1

            yield sample_key, sample
```
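
As a sketch, the builder above can be smoke-tested against a local checkout (assuming the `parts/*.feather` files are present; `load_dataset` also accepts a path to a loading script):

```Python
from datasets import load_dataset

# Hypothetical local run of the builder above; "annotations" only defines a
# test split, while "clicks" defines train and test.
annotations = load_dataset(
    "baidu-ultr_tencent-mlm-ctr.py", name="annotations", split="test"
)
print(annotations[0]["n"], len(annotations[0]["label"]))
```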