---
license: apache-2.0
---

<p align="center"><h1>Dataset from "Approaching Human-Level Forecasting with Language Models"</h1></p>

<p>This document details the curated dataset developed for our research paper, <strong><a href="https://arxiv.org/abs/2402.18563" target="_blank">Approaching Human-Level Forecasting with Language Models</a></strong>, authored by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt. For inquiries, please contact us via email: <a href="mailto:dhalawi@berkeley.edu">Danny Halawi</a>, <a href="mailto:z0@eecs.berkeley.edu">Fred Zhang</a>, <a href="mailto:john0922ucb@berkeley.edu">Chen Yueh-Han</a>, and <a href="mailto:jsteinhardt@berkeley.edu">Jacob Steinhardt</a>.</p>

<h2>Data Source and Format</h2>
<p>The dataset is compiled from forecasting platforms including Metaculus, Good Judgment Open, INFER, Polymarket, and Manifold. These platforms enable users to predict future events by assigning probabilities to different outcomes. Each question is structured as follows:</p>
<ul>
<li><strong>Background Description:</strong> Contextual information for each forecasting question.</li>
<li><strong>Resolution Criterion:</strong> Guidelines on how and when each question is considered resolved.</li>
<li><strong>Timestamps:</strong> Key dates, including publication (begin date), forecast submission deadline (close date), and outcome resolution (resolve date).</li>
</ul>

<p>Submissions are accepted between the begin date and the earlier of the close and resolve dates. See <em>Table 1</em> in our paper for an in-depth example.</p>
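The submission window described above can be sketched as a small check. This is a minimal illustration; the function name and date fields are hypothetical, not the dataset's actual schema:

```python
from datetime import date

def submission_open(today: date, begin: date, close: date, resolve: date) -> bool:
    """A forecast may be submitted from the begin date until the earlier
    of the close date and the resolve date (a question may resolve early)."""
    return begin <= today <= min(close, resolve)

# A question published Jan 1, closing Mar 1, but resolved early on Feb 10:
print(submission_open(date(2023, 2, 1), date(2023, 1, 1), date(2023, 3, 1), date(2023, 2, 10)))   # True
print(submission_open(date(2023, 2, 20), date(2023, 1, 1), date(2023, 3, 1), date(2023, 2, 10)))  # False
```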

<h2>Raw Data Composition</h2>
<p>The raw dataset comprises 48,754 questions and 7,174,607 user forecasts from 2015 to 2024, spanning a wide range of question types and topics worldwide. However, it includes challenges such as ill-defined questions and a significant imbalance in source-platform contributions after June 1, 2023. The complete raw data is available in <a href="https://huggingface.co/datasets/YuehHanChen/forecasting_raw" target="_blank">our dataset on Hugging Face</a>.</p>
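The kind of source-platform imbalance mentioned above can be surfaced with a quick pandas tabulation. The rows and column names below are invented for illustration and do not reflect the raw dataset's actual schema:

```python
import pandas as pd

# Toy stand-in for the raw question table (invented rows; the real raw
# data lives at YuehHanChen/forecasting_raw on Hugging Face).
questions = pd.DataFrame({
    "platform": ["Metaculus", "Manifold", "Manifold", "Polymarket", "Manifold", "INFER"],
    "begin_date": pd.to_datetime([
        "2022-05-01", "2023-07-15", "2023-08-02", "2023-09-10", "2023-11-20", "2022-12-01",
    ]),
})

# Count questions per platform after the cutoff to expose imbalance.
cutoff = pd.Timestamp("2023-06-01")
recent = questions[questions["begin_date"] > cutoff]
counts = recent["platform"].value_counts()
print(counts.to_dict())  # → {'Manifold': 3, 'Polymarket': 1}
```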
 
<h2>Data Curation Process</h2>
<p>To refine the dataset for analytical rigor, we undertook the following steps:</p>
<ul>
<li><strong>Filtering:</strong> Exclusion of ill-defined, overly personal, or niche-interest questions to ensure data quality and relevance.</li>
<li><strong>Adjustment for Imbalance:</strong> Careful selection to mitigate the recent source imbalance, aiming for a diverse representation of forecasting questions.</li>
<li><strong>Binary Focus:</strong> Conversion of multiple-choice questions to binary format, concentrating on binary outcomes for a streamlined analysis.</li>
<li><strong>Temporal Segregation:</strong> To prevent leakage from language models' pre-training, the test set includes only questions published after June 1, 2023 (the knowledge cutoff of the models used), with earlier questions allocated to the training and validation sets.</li>
</ul>
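The temporal-segregation step can be sketched as a simple date-based split. This is a minimal illustration with invented rows and column names; the actual curation in the paper also applies the filtering steps above:

```python
import pandas as pd

# Toy question table (invented rows and column names).
df = pd.DataFrame({
    "question": ["q1", "q2", "q3", "q4"],
    "begin_date": pd.to_datetime(["2021-03-01", "2022-10-05", "2023-07-01", "2023-09-15"]),
})

# Questions published after the models' knowledge cutoff form the test
# set; earlier questions go to the training and validation pools.
cutoff = pd.Timestamp("2023-06-01")
test = df[df["begin_date"] > cutoff]
train_val = df[df["begin_date"] <= cutoff]

print(sorted(test["question"]))       # → ['q3', 'q4']
print(sorted(train_val["question"]))  # → ['q1', 'q2']
```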
 
<p>This curation resulted in 5,516 binary questions: 3,762 for training, 840 for validation, and 914 for testing. Detailed examples and curation insights are provided in <em>Table 2a</em> and <em>Appendix C</em> of our paper.</p>

<h2>Significance for Research</h2>
<p>The curated dataset is pivotal to our investigation of language models' forecasting capabilities, which aims to benchmark them against, and ultimately exceed, human predictive performance. It enables focused analysis of high-quality, relevant forecasting questions.</p>

<p>Detailed methodologies and insights from our study are available in the paper linked at the beginning of this document. We invite feedback and collaboration to further this field of research.</p>