---
license: cc-by-nc-4.0
viewer: false
---

# Baidu ULTR Dataset - Tencent BERT-12l-12h

Query-document vectors and clicks for a subset of the [Baidu Unbiased Learning to Rank](https://arxiv.org/abs/2207.03051) dataset.
This dataset uses the pretrained [BERT cross-encoder (Bert_Layer12_Head12) from Tencent](https://github.com/lixsh6/Tencent_wsdm_cup2023/tree/main/pytorch_unbias), published as part of the WSDM Cup 2023, to compute query-document vectors (768 dims).

## Setup

1. Install huggingface [datasets](https://huggingface.co/docs/datasets/installation)
2. Install [pandas](https://github.com/pandas-dev/pandas) and [pyarrow](https://arrow.apache.org/docs/python/index.html): `pip install pandas pyarrow`
3. If you cannot upgrade to `pyarrow >= 14.0.1`, install the [pyarrow-hotfix](https://github.com/pitrou/pyarrow-hotfix) (see the sketch below)
4. You can now use the dataset as described below.
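
If you are stuck on an older `pyarrow`, the hotfix only needs to be imported once before loading the dataset; a minimal sketch:

```Python
import pyarrow_hotfix  # noqa: F401 -- importing the package applies the fix

from datasets import load_dataset  # safe to use after the import above
```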

## Load train / test click dataset:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="clicks",
    split="train",  # ["train", "test"]
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```
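
If you only need a few of the features listed under "Available features" below, `set_format` can also restrict the returned columns; a sketch:

```Python
# Only materialize the columns needed for training as torch tensors.
dataset.set_format("torch", columns=["query_document_embedding", "click", "position", "n"])
```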

## Load expert annotations:

```Python
from datasets import load_dataset

dataset = load_dataset(
    "philipphager/baidu-ultr_tencent-mlm-ctr",
    name="annotations",
    split="test",
    cache_dir="~/.cache/huggingface",
)

dataset.set_format("torch")  # [None, "numpy", "torch", "tensorflow", "pandas", "arrow"]
```

## Available features

Each row of the click / annotation dataset contains the following attributes. Use a custom `collate_fn` to select specific features (see below):

### Click dataset

| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| url_md5 | List[string] | MD5 hash of document URL, the most reliable document identifier |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| click | Tensor[int32] | Click / no click on a document |
| n | int32 | Number of documents for the current query, useful for padding |
| position | Tensor[int32] | Position in ranking (does not always match original item position) |
| media_type | Tensor[int32] | Document type (label encoding recommended as IDs do not occupy a continuous integer range) |
| displayed_time | Tensor[float32] | Seconds a document was displayed on screen |
| serp_height | Tensor[int32] | Pixel height of a document on screen |
| slipoff_count_after_click | Tensor[int32] | Number of times a document was scrolled off screen after previously clicking on it |

### Expert annotation dataset

| name | dtype | description |
|------------------------------|----------------|-------------|
| query_id | string | Baidu query_id |
| query_md5 | string | MD5 hash of query text |
| text_md5 | List[string] | MD5 hash of document title and abstract |
| query_document_embedding | Tensor[float16] | BERT CLS token |
| label | Tensor[int32] | Relevance judgment on a scale from 0 (bad) to 4 (excellent) |
| n | int32 | Number of documents for the current query, useful for padding |
| frequency_bucket | int32 | Monthly frequency of query (bucket) from 0 (high frequency) to 9 (low frequency) |
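
To sanity-check the schema, you can inspect a single query before writing a `collate_fn`; a sketch that works for either configuration loaded above:

```Python
sample = dataset[0]  # one query with all of its documents
print(sample["query_id"], int(sample["n"]))
print(sample["query_document_embedding"].shape)  # (n_documents, 768)
```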

## Example PyTorch collate function

Each sample in the dataset is a single query with multiple documents.
The following example demonstrates how to create a batch containing multiple queries with varying numbers of documents by applying padding:

```Python
import torch
from typing import List
from collections import defaultdict
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader


def collate_clicks(samples: List):
    batch = defaultdict(list)

    # Collect the per-query tensors for each feature.
    for sample in samples:
        batch["query_document_embedding"].append(sample["query_document_embedding"])
        batch["position"].append(sample["position"])
        batch["click"].append(sample["click"])
        batch["n"].append(sample["n"])

    # Zero-pad all queries in the batch to the same number of documents.
    return {
        "query_document_embedding": pad_sequence(
            batch["query_document_embedding"], batch_first=True
        ),
        "position": pad_sequence(batch["position"], batch_first=True),
        "click": pad_sequence(batch["click"], batch_first=True),
        "n": torch.tensor(batch["n"]),
    }


loader = DataLoader(dataset, collate_fn=collate_clicks, batch_size=16)
```
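
Iterating over the loader then yields zero-padded batches; for example (a sketch, assuming the `clicks` configuration from above):

```Python
for batch in loader:
    # max_n is the largest number of documents across the queries in the batch
    print(batch["query_document_embedding"].shape)  # (16, max_n, 768)
    print(batch["click"].shape)  # (16, max_n)
    break
```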