---
license: apache-2.0
---

# Dataset from "Approaching Human-Level Forecasting with Language Models"

This document describes the cleaned dataset derived from the raw data used in our research paper, *Approaching Human-Level Forecasting with Language Models*, by Danny Halawi, Fred Zhang, Chen Yueh-Han, and Jacob Steinhardt.

## Data Curation Process

To enhance the quality and relevance of our dataset, we applied a rigorous curation process to the raw question data.

The curated dataset includes 5,516 binary questions, with 3,762 allocated for training, 840 for validation, and 914 for testing. This selection was made to ensure a balanced and representative sample of forecasting challenges. Detailed examples and further information on the curation methodology are available in Table 2a and Appendix C of our paper.
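The split sizes above can be sanity-checked with a few lines of Python. Note that the split names used here are illustrative; the actual split keys in the released files may differ.

```python
# Split sizes as reported above (see Table 2a of the paper).
# The split names are illustrative placeholders.
splits = {"train": 3762, "validation": 840, "test": 914}

total = sum(splits.values())
print(f"total questions: {total}")  # 5516, matching the stated dataset size

# Per-split share of the full dataset.
for name, n in splits.items():
    print(f"{name}: {n} ({n / total:.1%})")
```

Running this confirms the three splits sum to the stated 5,516 binary questions, with roughly a 68/15/17 train/validation/test ratio.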

## Research Significance

The curation and analysis of this dataset are central to our research. They allow us to assess the forecasting capabilities of language models more accurately and to explore their potential to match or exceed human-level accuracy in predicting future events. Our findings offer insight into the effectiveness of language models in complex decision-making scenarios.

We invite researchers and practitioners to review our methodology and findings for a deeper understanding of both the potential and the limitations of language models in forecasting applications. For detailed discussion, please refer to the paper cited at the beginning of this document.