---
license: apache-2.0
dataset_info:
  features:
    - name: observation
      dtype:
        array3_d:
          shape:
            - 64
            - 64
            - 3
          dtype: uint8
    - name: action
      dtype: uint8
    - name: reward
      dtype: float32
    - name: done
      dtype: bool
    - name: truncated
      dtype: bool
  splits:
    - name: train
      num_bytes: 26043525000
      num_examples: 900000
    - name: test
      num_bytes: 2893725000
      num_examples: 100000
  download_size: 3128341675
  dataset_size: 28937250000
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
task_categories:
  - reinforcement-learning
language:
  - en
tags:
  - procgen
  - bigfish
  - benchmark
  - openai
pretty_name: Procgen Benchmark - Bigfish
size_categories:
  - 100K<n<1M
---

# Procgen Benchmark - Bigfish

This dataset contains trajectories generated by a PPO reinforcement learning agent trained on the Bigfish environment from the Procgen Benchmark. The agent was trained for 50M timesteps and reached a final evaluation performance of 32.33.

## Dataset Usage

Regular usage:

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train")
test_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="test")
```
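
If you only want to inspect a few steps without downloading and preparing the full dataset (roughly 29 GB on disk once prepared), the streaming mode of the `datasets` library may be more convenient. A minimal sketch:

```python
from datasets import load_dataset

# Stream the split instead of downloading all shards up front
stream = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train", streaming=True)

# Inspect the first few steps
for i, step in enumerate(stream):
    print(step["action"], step["reward"], step["done"], step["truncated"])
    if i == 4:
        break
```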

Usage with PyTorch:

```python
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train").with_format("torch")
test_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="test").with_format("torch")
```
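
With the torch format set, the dataset can be fed straight into a `DataLoader`. Observations are stored channels-last as `(64, 64, 3)` uint8 arrays (see the metadata above); the batch size and normalization below are illustrative choices, not part of the dataset. A minimal sketch:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train").with_format("torch")
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

batch = next(iter(loader))
# Convert channels-last uint8 observations to NCHW floats in [0, 1] for a conv net
observations = batch["observation"].permute(0, 3, 1, 2).float() / 255.0
actions = batch["action"]   # shape: (64,), dtype uint8
rewards = batch["reward"]   # shape: (64,), dtype float32
```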

## Dataset Structure

### Data Instances

Each data instance represents a single environment step and is a tuple of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).

```python
{'action': 1,
 'done': False,
 'observation': [[[0, 166, 253],
                  [0, 174, 255],
                  [0, 170, 251],
                  [0, 191, 255],
                  [0, 191, 255],
                  [0, 221, 255],
                  [0, 243, 255],
                  [0, 248, 255],
                  [0, 243, 255],
                  [10, 239, 255],
                  [25, 255, 255],
                  [0, 241, 255],
                  [0, 235, 255],
                  [17, 240, 255],
                  [10, 243, 255],
                  [27, 253, 255],
                  [39, 255, 255],
                  [58, 255, 255],
                  [85, 255, 255],
                  [111, 255, 255],
                  [135, 255, 255],
                  [151, 255, 255],
                  [173, 255, 255],
...
                  [0, 0, 37],
                  [0, 0, 39]]],
 'reward': 0.0,
 'truncated': False}
```
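
Since each observation is a 64x64x3 uint8 array, it maps directly onto an RGB image. A minimal sketch using Pillow (not a dependency of this dataset, just used here for illustration):

```python
import numpy as np
from PIL import Image
from datasets import load_dataset

train_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="train")

# Each observation is a 64x64x3 uint8 array, so it can be saved directly as an RGB image
obs = np.asarray(train_dataset[0]["observation"], dtype=np.uint8)
Image.fromarray(obs).save("bigfish_step_0.png")
```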

### Data Fields

- observation: The current 64x64 RGB observation from the environment.
- action: The action predicted by the agent for the current observation.
- reward: The reward received from stepping the environment with the current action.
- done: Whether the new observation is the start of a new episode, i.e. the episode ended after this action. Obtained after stepping the environment with the current action.
- truncated: Whether the new observation is the start of a new episode due to truncation. Obtained after stepping the environment with the current action. Together, done and truncated can be used to split the step stream back into episodes (see the sketch below).
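
Because done and truncated mark the last step of an episode, consecutive steps can be grouped back into full episodes. A minimal sketch, assuming steps are stored in temporal order within each split:

```python
from datasets import load_dataset

test_dataset = load_dataset("EpicPinkPenguin/procgen_bigfish", split="test")

# Group consecutive steps into episodes; the last step of an episode is the one
# where done or truncated is True.
episodes, current = [], []
for step in test_dataset:
    current.append(step)
    if step["done"] or step["truncated"]:
        episodes.append(current)
        current = []

print(f"Recovered {len(episodes)} complete episodes from {len(test_dataset)} steps")
```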

### Data Splits

The dataset is divided into a train (90%) and test (10%) split.

## Dataset Creation

The dataset was created by training an RL agent with PPO for 50M timesteps on the Procgen Bigfish environment. The agent obtained a final performance of 32.33. The trajectories were generated by taking the argmax action at each step, which corresponds to taking the mode of the action distribution.
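
For illustration only, a minimal sketch of the difference between sampling from a categorical policy and the argmax (mode) rollout described above; the policy below is a random stand-in, not the trained PPO agent:

```python
import torch

# Random stand-in for the trained policy: any network mapping observations to action logits
num_actions = 15  # Procgen environments use a 15-dimensional discrete action space
policy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64 * 3, num_actions))

obs_batch = torch.rand(8, 64, 64, 3)  # placeholder observations, not real Bigfish frames
logits = policy(obs_batch)            # shape: (8, num_actions)

# Stochastic rollout: sample an action from the categorical distribution
sampled = torch.distributions.Categorical(logits=logits).sample()

# Deterministic rollout used for this dataset: take the argmax, i.e. the mode of the distribution
greedy = torch.argmax(logits, dim=-1)
```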

## Procgen Benchmark

The Procgen Benchmark, released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.