---
language:
  - en
license: apache-2.0
size_categories:
  - 10M<n<100M
task_categories:
  - reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
  - config_name: bigfish
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 31592522500
    dataset_size: 289372500000
  - config_name: bossfight
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 106055802563
    dataset_size: 289372500000
  - config_name: caveflyer
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 99550129733
    dataset_size: 289372500000
  - config_name: chaser
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 21280555326
    dataset_size: 289372500000
  - config_name: climber
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 42586772888
    dataset_size: 289372500000
  - config_name: coinrun
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 51408569531
    dataset_size: 289372500000
  - config_name: dodgeball
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 69558620534
    dataset_size: 289372500000
  - config_name: fruitbot
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 184877608931
    dataset_size: 289372500000
  - config_name: heist
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 25378574656
    dataset_size: 289372500000
  - config_name: jumper
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 32385518771
    dataset_size: 289372500000
  - config_name: leaper
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 44464909989
    dataset_size: 289372500000
  - config_name: maze
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
    download_size: 48512099989
    dataset_size: 289372500000
  - config_name: miner
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 38184188600
    dataset_size: 289372500000
  - config_name: ninja
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 66644794325
    dataset_size: 289372500000
  - config_name: plunder
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 68795928729
    dataset_size: 289372500000
  - config_name: starpilot
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 260435250000
        num_examples: 9000000
      - name: test
        num_bytes: 28937250000
        num_examples: 1000000
    download_size: 170031712117
    dataset_size: 289372500000
configs:
  - config_name: bigfish
    data_files:
      - split: train
        path: bigfish/train-*
      - split: test
        path: bigfish/test-*
  - config_name: bossfight
    data_files:
      - split: train
        path: bossfight/train-*
      - split: test
        path: bossfight/test-*
  - config_name: caveflyer
    data_files:
      - split: train
        path: caveflyer/train-*
      - split: test
        path: caveflyer/test-*
  - config_name: chaser
    data_files:
      - split: train
        path: chaser/train-*
      - split: test
        path: chaser/test-*
  - config_name: climber
    data_files:
      - split: train
        path: climber/train-*
      - split: test
        path: climber/test-*
  - config_name: coinrun
    data_files:
      - split: train
        path: coinrun/train-*
      - split: test
        path: coinrun/test-*
  - config_name: dodgeball
    data_files:
      - split: train
        path: dodgeball/train-*
      - split: test
        path: dodgeball/test-*
  - config_name: fruitbot
    data_files:
      - split: train
        path: fruitbot/train-*
      - split: test
        path: fruitbot/test-*
  - config_name: heist
    data_files:
      - split: train
        path: heist/train-*
      - split: test
        path: heist/test-*
  - config_name: jumper
    data_files:
      - split: train
        path: jumper/train-*
      - split: test
        path: jumper/test-*
  - config_name: leaper
    data_files:
      - split: train
        path: leaper/train-*
      - split: test
        path: leaper/test-*
  - config_name: maze
    data_files:
      - split: train
        path: maze/train-*
      - split: test
        path: maze/test-*
  - config_name: miner
    data_files:
      - split: train
        path: miner/train-*
      - split: test
        path: miner/test-*
  - config_name: ninja
    data_files:
      - split: train
        path: ninja/train-*
      - split: test
        path: ninja/test-*
  - config_name: plunder
    data_files:
      - split: train
        path: plunder/train-*
      - split: test
        path: plunder/test-*
  - config_name: starpilot
    data_files:
      - split: train
        path: starpilot/train-*
      - split: test
        path: starpilot/test-*
tags:
  - procgen
  - bigfish
  - benchmark
  - openai
  - bossfight
  - caveflyer
  - chaser
  - climber
  - dodgeball
  - fruitbot
  - heist
  - jumper
  - leaper
  - maze
  - miner
  - ninja
  - plunder
  - starpilot
---

# Procgen Benchmark

This dataset contains expert trajectories generated by a PPO reinforcement learning agent trained on each of the 16 procedurally generated gym environments from the Procgen Benchmark. The environments were created with `distribution_mode=easy` and an unlimited number of levels.

Disclaimer: This is not an official repository from OpenAI.

## Dataset Usage

Regular usage (for environment `bigfish`):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```
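
Each configuration is large (the metadata above lists roughly 289 GB per environment), so it can be more practical to stream the data rather than download it up front. The snippet below is a minimal sketch using the `datasets` streaming mode; it is an illustration, not part of the official usage.

```python
from datasets import load_dataset

# Stream the bigfish training split instead of materializing it on disk.
train_stream = load_dataset(
    "EpicPinkPenguin/procgen", name="bigfish", split="train", streaming=True
)

# Each item is a dict with observation, action, reward, done, truncated.
for i, step in enumerate(train_stream):
    print(step["action"], step["reward"], step["done"])
    if i >= 4:
        break
```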

Usage with PyTorch (for environment `bossfight`):

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
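
Once a split is in torch format it can be wrapped in a standard `torch.utils.data.DataLoader`. The sketch below assumes the `bossfight` train split from above; the batch size is an arbitrary choice for illustration.

```python
from torch.utils.data import DataLoader
from datasets import load_dataset

train_dataset = load_dataset(
    "EpicPinkPenguin/procgen", name="bossfight", split="train"
).with_format("torch")

# Default collation stacks each field into a batched tensor.
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

batch = next(iter(loader))
print(batch["observation"].shape)  # torch.Size([64, 64, 64, 3]), dtype torch.uint8
print(batch["action"].shape)       # torch.Size([64])
```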

## Agent Performance

The PPO RL agent was trained for 50M steps on each environment and obtained the following final performance metrics.

| Environment | Steps (Train) | Steps (Test) | Return |
|-------------|---------------|--------------|--------|
| bigfish     | 9,000,000     | 1,000,000    | 29.38  |
| bossfight   | 9,000,000     | 1,000,000    | 10.86  |
| caveflyer   | 9,000,000     | 1,000,000    | 09.58  |
| chaser      | 9,000,000     | 1,000,000    | 11.52  |
| climber     | 9,000,000     | 1,000,000    | 10.66  |
| coinrun     | 9,000,000     | 1,000,000    | 09.84  |
| dodgeball   | 9,000,000     | 1,000,000    | 16.73  |
| fruitbot    | 9,000,000     | 1,000,000    | 21.39  |
| heist       | 9,000,000     | 1,000,000    | 09.95  |
| jumper      | 9,000,000     | 1,000,000    | 08.76  |
| leaper      | 9,000,000     | 1,000,000    | 07.50  |
| maze        | 9,000,000     | 1,000,000    | 09.99  |
| miner       | 9,000,000     | 1,000,000    | 12.66  |
| ninja       | 9,000,000     | 1,000,000    | 09.61  |
| plunder     | 9,000,000     | 1,000,000    | 25.68  |
| starpilot   | 9,000,000     | 1,000,000    | 57.25  |

## Dataset Structure

### Data Instances

Each data instance represents a single environment step and is a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}); that is, the reward and the termination flags describe the transition produced by taking action a_t in observation o_t.

```python
{'action': 1,
 'done': False,
 'observation': [[[0, 166, 253],
                  [0, 174, 255],
                  [0, 170, 251],
                  [0, 191, 255],
                  [0, 191, 255],
                  [0, 221, 255],
                  [0, 243, 255],
                  [0, 248, 255],
                  [0, 243, 255],
                  [10, 239, 255],
                  [25, 255, 255],
                  [0, 241, 255],
                  [0, 235, 255],
                  [17, 240, 255],
                  [10, 243, 255],
                  [27, 253, 255],
                  [39, 255, 255],
                  [58, 255, 255],
                  [85, 255, 255],
                  [111, 255, 255],
                  [135, 255, 255],
                  [151, 255, 255],
                  [173, 255, 255],
...
                  [0, 0, 37],
                  [0, 0, 39]]],
 'reward': 0.0,
 'truncated': False}
```
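
As a quick sanity check, a single step can be pulled from a loaded split and its observation converted to a NumPy array. This is a small sketch assuming the `bigfish` train split from the usage section above.

```python
import numpy as np
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")

step = train_dataset[0]
obs = np.asarray(step["observation"], dtype=np.uint8)

print(obs.shape)                        # (64, 64, 3) RGB frame
print(step["action"], step["reward"])   # integer action, float reward
print(step["done"], step["truncated"])  # episode termination flags
```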

### Data Fields

- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The reward received from stepping the environment with the current action.
- `done`: Whether the episode ended after this step, i.e., whether the next observation is the start of a new episode. Obtained after stepping the environment with the current action.
- `truncated`: Whether the episode was cut off by truncation after this step, so that the next observation is the start of a new episode. Obtained after stepping the environment with the current action. (A sketch of recovering episode boundaries from these two flags follows below.)
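
Because each split is stored as a flat stream of steps, whole episodes can be recovered by cutting the stream wherever `done` or `truncated` is set. The snippet below is a minimal sketch of that idea; the streaming setup and the early stop are illustrative choices, not part of the dataset.

```python
from datasets import load_dataset

stream = load_dataset(
    "EpicPinkPenguin/procgen", name="bigfish", split="train", streaming=True
)

episodes, current = [], []
for step in stream:
    current.append(step)
    # done/truncated mean the *next* observation starts a new episode,
    # so this step is the last one of the current episode.
    if step["done"] or step["truncated"]:
        episodes.append(current)
        current = []
    if len(episodes) >= 3:  # stop early for illustration
        break

print([len(ep) for ep in episodes])
```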

### Data Splits

The dataset is divided into a train (90%) and a test (10%) split. Each environment dataset contains 10M steps (data points) in total: 9M in the train split and 1M in the test split.

## Dataset Creation

The dataset was created by training an RL agent with PPO for 25M steps in each environment. The trajectories were generated by sampling from the predicted action distribution at each step (not by taking the argmax). The environments were created with `distribution_mode=easy` and an unlimited number of levels.
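
The rollout procedure described above can be sketched roughly as follows. The environment construction follows the public procgen gym interface (`num_levels=0` means unlimited levels), while the linear `policy` is only a stand-in for the trained PPO network, whose architecture is not specified here.

```python
import gym
import torch
from torch.distributions import Categorical

# distribution_mode="easy" and unlimited levels, as described above.
env = gym.make("procgen:procgen-bigfish-v0", num_levels=0, distribution_mode="easy")

# Stand-in policy: a random linear head over the flattened observation.
policy = torch.nn.Linear(64 * 64 * 3, env.action_space.n)

obs = env.reset()
for _ in range(10):
    x = torch.as_tensor(obs, dtype=torch.float32).flatten().unsqueeze(0)
    logits = policy(x)
    # Sample from the predicted action distribution instead of taking the argmax.
    action = Categorical(logits=logits).sample().item()
    # procgen uses the classic gym step API returning a 4-tuple.
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```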

## Procgen Benchmark

The Procgen Benchmark, released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity within and across environments, making it well suited for evaluating both sample efficiency and generalization. Because each environment supports distinct training and test level sets, it has become a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.