---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
---

## Environments

| Environment |
| --- |
| bigfish |
| bossfight |
| caveflyer |
| chaser |
| climber |
| coinrun |
| dodgeball |
| fruitbot |
| heist |
| jumper |
| leaper |
| maze |
| miner |
| ninja |
| plunder |
| starpilot |

## Procgen Benchmark

The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience and high diversity both within and across environments, making it well suited for evaluating sample efficiency as well as generalization. Each environment supports distinct training and test sets of levels, and the benchmark has become a standard research platform for the OpenAI RL team. It aims to address the need for RL benchmarks that are more diverse than single complex environments such as Dota and StarCraft.
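
For context on how the train/test split of levels described above is realized in the benchmark itself, the sketch below is a minimal, hedged example following the conventions of the `procgen` package (Gym environment ids of the form `procgen:procgen-<env>-v0` and options such as `num_levels`, `start_level`, and `distribution_mode`). It is illustrative only and is not part of this dataset's API; the specific values chosen here are assumptions.

```python
import gym  # classic Gym API (4-tuple step), as used by the procgen package

# Training environment restricted to a fixed set of levels.
# num_levels, start_level, and distribution_mode follow the options documented
# in openai/procgen; the exact values here are illustrative assumptions.
train_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=200,        # train on 200 procedurally generated levels
    start_level=0,
    distribution_mode="easy",
)

# Test environment sampling from the full level distribution.
test_env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,          # 0 means "use the full distribution of levels"
    start_level=0,
    distribution_mode="easy",
)

obs = train_env.reset()
for _ in range(10):
    # Random policy, just to show the interaction loop.
    obs, reward, done, info = train_env.step(train_env.action_space.sample())
    if done:
        obs = train_env.reset()
```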
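
If this dataset exposes one configuration per environment listed in the table above (an assumption this card does not state explicitly), a loading sketch with the `datasets` library might look like the following. The repository id is a placeholder, not the real path, and the split and field names are not documented here.

```python
from datasets import load_dataset

# Hypothetical repository id and configuration name: replace with the actual
# dataset path and the environment subset you need (see the table above).
dataset = load_dataset("<user-or-org>/<procgen-dataset>", name="coinrun")

print(dataset)            # inspect which splits are actually available
example = dataset["train"][0]
print(example.keys())     # inspect the record fields (schema not documented here)
```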