EpicPinkPenguin committed
Commit 3d4cb67 • 1 Parent(s): c1a6ffe
Update README.md
Alter text to accommodate a single dataset containing all Procgen environment datasets

README.md CHANGED
@@ -6,7 +6,7 @@ size_categories:
 - 100K<n<1M
 task_categories:
 - reinforcement-learning
-pretty_name: Procgen Benchmark
+pretty_name: Procgen Benchmark Dataset
 dataset_info:
 - config_name: bossfight
   features:
@@ -80,29 +80,68 @@ tags:
 - bigfish
 - benchmark
 - openai
+- bossfight
+- caveflyer
+- chaser
+- climber
+- dodgeball
+- fruitbot
+- heist
+- jumper
+- leaper
+- maze
+- miner
+- ninja
+- plunder
+- starpilot
 ---
 # Procgen Benchmark
 
-<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/brMaX1xgew7ulqkMU0Ahi.mp4"></video>
+<video controls autoplay loop src="https://cdn-uploads.huggingface.co/production/uploads/633c1daf31c06121a58f2df9/brMaX1xgew7ulqkMU0Ahi.mp4"></video>
 
-This dataset contains trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on the
+This dataset contains expert trajectories generated by a [PPO](https://arxiv.org/abs/1707.06347) reinforcement learning agent trained on each of the 16 procedurally-generated gym environments from the [Procgen Benchmark](https://openai.com/index/procgen-benchmark/).
+
+Disclaimer: This is not an official repository from OpenAI.
 
 ## Dataset Usage
 
-Regular usage:
+Regular usage (for environment bigfish):
 ```python
 from datasets import load_dataset
-train_dataset = load_dataset("EpicPinkPenguin/
-test_dataset = load_dataset("EpicPinkPenguin/
+train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")
+test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")
 ```
 
-Usage with PyTorch:
+Usage with PyTorch (for environment bossfight):
 ```python
 from datasets import load_dataset
-train_dataset = load_dataset("EpicPinkPenguin/
-test_dataset = load_dataset("EpicPinkPenguin/
+train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="train").with_format("torch")
+test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bossfight", split="test").with_format("torch")
 ```
+
+## Agent Performance
+The PPO RL agent was trained for 50M steps on each environment and obtained the following final performance metrics.
+
+| Environment | Return |
+|-------------|--------|
+| bigfish     | 32.77  |
+| bossfight   | 12.49  |
+| caveflyer   | xx.xx  |
+| chaser      | xx.xx  |
+| climber     | xx.xx  |
+| coinrun     | xx.xx  |
+| dodgeball   | xx.xx  |
+| fruitbot    | xx.xx  |
+| heist       | xx.xx  |
+| jumper      | xx.xx  |
+| leaper      | xx.xx  |
+| maze        | xx.xx  |
+| miner       | xx.xx  |
+| ninja       | xx.xx  |
+| plunder     | xx.xx  |
+| starpilot   | xx.xx  |
+
 
 ## Dataset Structure
 ### Data Instances
 Each data instance represents a single step consisting of tuples of the form (observation, action, reward, done, truncated) = (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
@@ -151,7 +190,7 @@ Each data instance represents a single step consisting of tuples of the form (ob
 
 The dataset is divided into a `train` (90%) and `test` (10%) split.
 
 ## Dataset Creation
-The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps
+The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps in each environment. The trajectories were generated by taking the argmax action at each step, corresponding to taking the mode of the action distribution. Consequently, the rollout policy is deterministic.
 
 ## Procgen Benchmark
 The [Procgen Benchmark](https://openai.com/index/procgen-benchmark/), released by OpenAI, consists of 16 procedurally-generated environments designed to measure how quickly reinforcement learning (RL) agents learn generalizable skills. It emphasizes experimental convenience, high diversity within and across environments, and is ideal for evaluating both sample efficiency and generalization. The benchmark allows for distinct training and test sets in each environment, making it a standard research platform for the OpenAI RL team. It aims to address the need for more diverse RL benchmarks compared to complex environments like Dota and StarCraft.
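To sanity-check the usage snippets added in this commit, one can load a config and inspect a single record. A minimal sketch, assuming the column names follow the (observation, action, reward, done, truncated) tuple from the data-instances description; the actual schema may name the fields differently:

```python
from datasets import load_dataset

# Load the training split of one config, as in the updated usage section.
train_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="train")

# Inspect a single step tuple (o_t, a_t, r_{t+1}, done_{t+1}, trunc_{t+1}).
step = train_dataset[0]
print(step.keys())  # assumed columns: observation, action, reward, done, truncated
print(step["action"], step["reward"], step["done"])
```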
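The `with_format("torch")` variant shown in the diff returns columns as tensors, so the dataset can be batched with a standard `torch.utils.data.DataLoader`. A sketch under the same column-name assumption:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

train_dataset = load_dataset(
    "EpicPinkPenguin/procgen", name="bossfight", split="train"
).with_format("torch")

# with_format("torch") yields tensor columns, so the default collate works.
loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

batch = next(iter(loader))
observations = batch["observation"]  # assumed column name; a batch of frames
actions = batch["action"]
```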
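The returns in the new Agent Performance table could in principle be cross-checked by summing rewards between episode boundaries in the data. A hypothetical sketch: the `done`/`truncated` semantics are taken from the data-instances description, and the split used here may not match how the table values were computed:

```python
from datasets import load_dataset

test_dataset = load_dataset("EpicPinkPenguin/procgen", name="bigfish", split="test")

# Sum rewards within each episode, cutting on done/truncated flags.
returns, episode_return = [], 0.0
for step in test_dataset:
    episode_return += float(step["reward"])
    if step["done"] or step["truncated"]:
        returns.append(episode_return)
        episode_return = 0.0

if returns:
    print(sum(returns) / len(returns))  # mean episode return
```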
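The deterministic rollout described in the updated Dataset Creation text (argmax, i.e. the mode of the action distribution) corresponds to a loop like the one below. This is illustrative only: the Procgen gym id is real, but `policy` is a hypothetical stand-in for the trained PPO network, which is not part of this dataset:

```python
import gym
import torch
import torch.nn as nn

# Stand-in for the trained PPO policy head (hypothetical; Procgen observations
# are 64x64x3 and the discrete action space has 15 actions).
policy = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 15))

env = gym.make("procgen:procgen-bigfish-v0")
obs = env.reset()
done = False
while not done:
    with torch.no_grad():
        logits = policy(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0))
    action = int(torch.argmax(logits, dim=-1))  # argmax = mode of the action distribution
    obs, reward, done, info = env.step(action)
```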