EpicPinkPenguin committed
Commit 3db9c61 • Parent(s): c806698

Update README.md

Add dataset sample size
README.md CHANGED
@@ -187,7 +187,7 @@ Each data instance represents a single step consisting of tuples of the form (ob
 - `truncated`: If the new observation is the start of a new episode due to truncation. Obtained after stepping the environment with the current action.

 ### Data Splits

-The dataset is divided into a `train` (90%) and `test` (10%) split.
+The dataset is divided into a `train` (90%) and `test` (10%) split. Each environment-dataset contains 1M steps (data points) in total.

 ## Dataset Creation

 The dataset was created by training an RL agent with [PPO](https://arxiv.org/abs/1707.06347) for 50M steps in each environment. The trajectories were generated by taking the argmax action at each step, corresponding to taking the mode of the action distribution. Consequently, the rollout policy is deterministic. The environments were created on `distribution_mode=easy` and with unlimited levels.
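The deterministic rollout described in the Dataset Creation section — taking the argmax over the policy's action logits, i.e. the mode of the categorical action distribution, instead of sampling — can be sketched as follows. This is a minimal illustration, not the actual training code: `policy_logits` is a hypothetical stand-in for the trained PPO policy network, and the 15-way discrete action space mirrors Procgen's.

```python
import numpy as np

def policy_logits(obs: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the trained PPO policy network:
    # returns unnormalized logits over a 15-way discrete action space.
    # (Seeded by the observation so the "network" is a fixed function.)
    rng = np.random.default_rng(int(obs.sum()) % (2**32))
    return rng.normal(size=15)

def act_deterministic(obs: np.ndarray) -> int:
    # Mode of the categorical action distribution = argmax of the logits.
    # Repeated calls on the same observation always give the same action,
    # which is what makes the rollout policy deterministic.
    return int(np.argmax(policy_logits(obs)))

obs = np.ones((64, 64, 3))  # dummy 64x64 RGB observation
action = act_deterministic(obs)
```

Sampling from `softmax(logits)` instead would give a stochastic rollout; the dataset's trajectories use the argmax variant only.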