---
language:
  - en
license: apache-2.0
size_categories:
  - 100K<n<1M
task_categories:
  - reinforcement-learning
pretty_name: Procgen Benchmark Dataset
dataset_info:
  - config_name: bigfish
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 3128341675
    dataset_size: 28937250000
  - config_name: bossfight
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 9295623234
    dataset_size: 28937250000
  - config_name: fruitbot
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 8886977797
    dataset_size: 28937250000
  - config_name: miner
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 1895918513
    dataset_size: 28937250000
  - config_name: ninja
    features:
      - name: observation
        dtype:
          array3_d:
            shape:
              - 64
              - 64
              - 3
            dtype: uint8
      - name: action
        dtype: uint8
      - name: reward
        dtype: float32
      - name: done
        dtype: bool
      - name: truncated
        dtype: bool
    splits:
      - name: train
        num_bytes: 26043525000
        num_examples: 900000
      - name: test
        num_bytes: 2893725000
        num_examples: 100000
    download_size: 3296432308
    dataset_size: 28937250000
configs:
  - config_name: bigfish
    data_files:
      - split: train
        path: bigfish/train-*
      - split: test
        path: bigfish/test-*
  - config_name: bossfight
    data_files:
      - split: train
        path: bossfight/train-*
      - split: test
        path: bossfight/test-*
  - config_name: fruitbot
    data_files:
      - split: train
        path: fruitbot/train-*
      - split: test
        path: fruitbot/test-*
  - config_name: miner
    data_files:
      - split: train
        path: miner/train-*
      - split: test
        path: miner/test-*
  - config_name: ninja
    data_files:
      - split: train
        path: ninja/train-*
      - split: test
        path: ninja/test-*
tags:
  - procgen
  - bigfish
  - benchmark
  - openai
  - bossfight
  - caveflyer
  - chaser
  - climber
  - dodgeball
  - fruitbot
  - heist
  - jumper
  - leaper
  - maze
  - miner
  - ninja
  - plunder
  - starpilot

---

# Procgen Benchmark

This dataset contains expert trajectories generated by a PPO reinforcement learning agent trained on each of the 16 procedurally generated gym environments from the Procgen Benchmark. The environments were created with `distribution_mode=easy` and unlimited levels.

Disclaimer: This is not an official repository from OpenAI.

## Dataset Usage

Regular usage (for environment `bigfish`):

```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
```

Usage with PyTorch (for environment `bossfight`):

```python
from datasets import load_dataset
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
```
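The torch-formatted dataset is a map-style dataset and can be wrapped in a standard `torch.utils.data.DataLoader`. A minimal sketch (the batch size is arbitrary; batches come back as dictionaries keyed by the feature names):

```python
from torch.utils.data import DataLoader

train_loader = DataLoader(train_dataset, batch_size=256, shuffle=True)

# Inspect a single batch: each field is collated into a tensor.
batch = next(iter(train_loader))
print(batch["observation"].shape)  # torch.Size([256, 64, 64, 3]), dtype torch.uint8
print(batch["action"].shape)       # torch.Size([256])
print(batch["reward"].shape)       # torch.Size([256])
```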

## Agent Performance

The PPO RL agent was trained for 50M steps on each environment and obtained the following final performance metrics.

| Environment | Return |
|:------------|-------:|
| bigfish     |  32.77 |
| bossfight   |  12.49 |
| caveflyer   |  xx.xx |
| chaser      |  xx.xx |
| climber     |  xx.xx |
| coinrun     |  xx.xx |
| dodgeball   |  xx.xx |
| fruitbot    |  xx.xx |
| heist       |  xx.xx |
| jumper      |  xx.xx |
| leaper      |  xx.xx |
| maze        |  xx.xx |
| miner       |  xx.xx |
| ninja       |  xx.xx |
| plunder     |  xx.xx |
| starpilot   |  xx.xx |

## Dataset Structure

### Data Instances

Each data instance represents a single environment step as a tuple `(observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1})`, i.e. the reward and termination flags are the ones returned after stepping the environment with `a_t`.

```python
{'action': 1,
 'done': False,
 'observation': [[[0, 166, 253],
                  [0, 174, 255],
                  [0, 170, 251],
                  [0, 191, 255],
                  [0, 191, 255],
                  [0, 221, 255],
                  [0, 243, 255],
                  [0, 248, 255],
                  [0, 243, 255],
                  [10, 239, 255],
                  [25, 255, 255],
                  [0, 241, 255],
                  [0, 235, 255],
                  [17, 240, 255],
                  [10, 243, 255],
                  [27, 253, 255],
                  [39, 255, 255],
                  [58, 255, 255],
                  [85, 255, 255],
                  [111, 255, 255],
                  [135, 255, 255],
                  [151, 255, 255],
                  [173, 255, 255],
...
                  [0, 0, 37],
                  [0, 0, 39]]],
 'reward': 0.0,
 'truncated': False}
```
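As a quick sanity check (not part of the dataset tooling), a single observation can be converted to a NumPy array to confirm the 64x64x3 layout declared in the metadata:

```python
import numpy as np

sample = train_dataset[0]
obs = np.asarray(sample["observation"], dtype=np.uint8)
assert obs.shape == (64, 64, 3)  # height x width x RGB channels
```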

### Data Fields

- `observation`: The current RGB observation from the environment.
- `action`: The action predicted by the agent for the current observation.
- `reward`: The reward received from stepping the environment with the current action.
- `done`: Whether this step ended the episode, i.e. the next observation is the start of a new episode. Obtained after stepping the environment with the current action.
- `truncated`: Whether this step ended the episode due to truncation (e.g. a time limit rather than a terminal state). Obtained after stepping the environment with the current action; see the sketch below for reconstructing episode boundaries from these flags.
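Because `done` and `truncated` describe the transition that follows each step, episode boundaries fall on the steps where either flag is set. A minimal sketch for grouping the flat step stream back into episodes (the helper name `split_episodes` is illustrative, not part of the dataset):

```python
def split_episodes(dataset):
    """Group consecutive steps into episodes, closing an episode
    whenever `done` or `truncated` is set on the current step."""
    episodes, current = [], []
    for step in dataset:
        current.append(step)
        if step["done"] or step["truncated"]:
            episodes.append(current)
            current = []
    if current:  # trailing steps with no terminal flag at the end of the split
        episodes.append(current)
    return episodes
```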

### Data Splits

The dataset is divided into a train (90%) and test (10%) split. Each environment dataset contains 1M steps (data points) in total.

## Dataset Creation

The dataset was created by training an RL agent with PPO for 50M steps in each environment. The trajectories were generated by taking the argmax action at each step, which corresponds to taking the mode of the action distribution; the rollout policy is therefore deterministic. The environments were created with `distribution_mode=easy` and unlimited levels.
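A rough sketch of this rollout loop, using the `procgen` package's documented gym interface (`num_levels=0` requests unlimited levels). The `policy` network below is a stand-in for the trained PPO agent, which is not included here:

```python
import gym
import torch
import torch.nn as nn

# Stand-in for the trained PPO policy network (Procgen has 15 discrete actions).
policy = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 15))

# distribution_mode="easy" and num_levels=0 (unlimited) match the dataset settings.
env = gym.make("procgen:procgen-bigfish-v0", distribution_mode="easy", num_levels=0)

obs, done = env.reset(), False
while not done:
    x = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
    action = int(torch.argmax(policy(x), dim=-1))  # argmax = mode of the action distribution
    obs, reward, done, info = env.step(action)
```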

## Video Samples

Here is a collection of videos with the RGB observations from the dataset.

*(One sample video per environment: bigfish, bossfight, caveflyer, chaser, climber, coinrun, dodgeball, fruitbot, heist, jumper, leaper, maze, miner, ninja, plunder, starpilot.)*

## Procgen Benchmark

The Procgen Benchmark, released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity both within and across environments, making it well suited to evaluating sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment and has become a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.