
[EMNLP 2024] DA-Code: Agent Data Science Code Generation Benchmark for Large Language Models

DA-Code is a comprehensive evaluation benchmark designed to assess the data analysis and code generation capabilities of LLMs in agent-based data science tasks. Our paper and experiment reports are available on arXiv.

Dataset Overview

  • 500 complex real-world data analysis tasks across Data Wrangling (DW), Machine Learning (ML), and Exploratory Data Analysis (EDA).
  • Tasks cover the entire data analysis pipeline, from raw data handling to gaining insights using SQL and Python.
  • Each example is meticulously designed to ensure high complexity and quality, with robust evaluation suites.
  • An interactive sandbox environment allows LLM agents to autonomously explore, reason, and complete tasks (a loading sketch follows this list).
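
Since the benchmark is distributed as a Hugging Face dataset, a typical starting point is the `datasets` library. The sketch below is illustrative only: the repository id, split name, and record fields are placeholders, not the documented schema, so check this dataset page for the exact values.

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# "ORG/DA-Code" and the split name are hypothetical placeholders;
# gated datasets also require authenticating first (e.g. `huggingface-cli login`).
from datasets import load_dataset

tasks = load_dataset("ORG/DA-Code", split="test")  # hypothetical repo id / split

# Inspect the schema of one task; field names depend on the actual dataset.
example = tasks[0]
print(example.keys())
```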

Usage

This dataset can be used to:

  • Evaluate LLMs’ data analysis and code generation capabilities
  • Benchmark autonomous reasoning in real-world tasks
  • Develop and test multi-step data analysis strategies (a minimal evaluation loop is sketched after this list)
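
A common benchmarking pattern is to run an agent over every task and aggregate per-task scores. The sketch below is a generic harness, not DA-Code's published interface: `run_agent` and `score` are hypothetical stand-ins for your own agent loop and the benchmark's evaluation suite.

```python
# Illustrative evaluation loop; `run_agent` and `score` are user-supplied.
from typing import Callable, Mapping, Sequence

def evaluate(
    tasks: Sequence[Mapping],
    run_agent: Callable[[Mapping], str],
    score: Callable[[Mapping, str], float],
) -> float:
    """Run an agent on each task and return the mean per-task score."""
    total = 0.0
    for task in tasks:
        answer = run_agent(task)      # agent explores the sandbox and produces a solution
        total += score(task, answer)  # benchmark-specific checker scores the result
    return total / len(tasks)
```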

Citation

If you use this dataset in your research, please cite our paper:

