---
tags:
- Breakout-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Breakout-v5
      type: Breakout-v5
    metrics:
    - type: mean_reward
      value: 755.60 +/- 175.27
      name: mean_reward
      verified: false
---
|
|
|
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
|
|
|
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained using [CleanRL](https://github.com/vwxyzjn/cleanrl), and the most up-to-date training code can be found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
|
|
|
## Get Started

To use this model, please install the `cleanrl` package with the following command:

```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5
```

Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
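
If you prefer to fetch the checkpoint directly instead of going through `cleanrl_utils.enjoy`, a minimal sketch using `huggingface_hub` is shown below. The checkpoint filename is an assumption (CleanRL typically saves `<exp_name>.cleanrl_model`); check this repository's file listing if it differs.

```python
from huggingface_hub import hf_hub_download

# Hypothetical sketch: download the saved model file from this repository.
# The filename below is assumed from the experiment name and may differ.
model_path = hf_hub_download(
    repo_id="cleanrl/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1",
    filename="cleanba_ppo_envpool_impala_atari_wrapper.cleanrl_model",
)
print(model_path)  # local path to the downloaded checkpoint
```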
|
|
|
|
|
## Command to reproduce the training

```bash
curl -OL https://huggingface.co/cleanrl/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Breakout-v5 --seed 1
```
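
For context, `--learner-device-ids 1 2 3` keeps `gpu:0` for the actor and uses `gpu:1` through `gpu:3` as learners on each process (see the resulting `actor_devices`/`learner_devices` entries in the hyperparameters below). The snippet below is a rough, hypothetical illustration of how such index flags typically map onto JAX devices, not the script's exact code; it assumes a machine with at least four GPUs visible to JAX.

```python
import jax

# Hypothetical illustration: mapping device-id flags onto local accelerators.
# Assumes at least 4 GPUs are visible to JAX; on a CPU-only host the
# indexing below would fail because jax.devices() returns a single device.
actor_device_ids = [0]          # default: actor rollouts on the first device
learner_device_ids = [1, 2, 3]  # from --learner-device-ids 1 2 3

devices = jax.devices()
actor_devices = [devices[i] for i in actor_device_ids]
learner_devices = [devices[i] for i in learner_device_ids]
print("actor:", actor_devices, "learners:", learner_devices)
```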
|
|
|
# Hyperparameters
```python
{'actor_device_ids': [0],
 'actor_devices': ['gpu:0'],
 'anneal_lr': True,
 'async_batch_size': 20,
 'async_update': 3,
 'batch_size': 15360,
 'capture_video': False,
 'clip_coef': 0.1,
 'concurrency': True,
 'cuda': True,
 'distributed': True,
 'ent_coef': 0.01,
 'env_id': 'Breakout-v5',
 'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'global_learner_decices': ['gpu:1',
                            'gpu:2',
                            'gpu:3',
                            'gpu:5',
                            'gpu:6',
                            'gpu:7'],
 'hf_entity': 'cleanrl',
 'learner_device_ids': [1, 2, 3],
 'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
 'learning_rate': 0.00025,
 'local_batch_size': 7680,
 'local_minibatch_size': 1920,
 'local_num_envs': 60,
 'local_rank': 0,
 'max_grad_norm': 0.5,
 'minibatch_size': 3840,
 'norm_adv': True,
 'num_envs': 120,
 'num_minibatches': 4,
 'num_steps': 128,
 'num_updates': 3255,
 'profile': False,
 'save_model': True,
 'seed': 1,
 'target_kl': None,
 'test_actor_learner_throughput': False,
 'torch_deterministic': True,
 'total_timesteps': 50000000,
 'track': True,
 'update_epochs': 4,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanba',
 'world_size': 2}
```
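
As a sanity check on how the sizes above relate (plain arithmetic on the reported values, not code from the training script): with 2 processes, 60 local environments, and 128 rollout steps, the local batch is 60 × 128 = 7680, the global batch is 2 × 7680 = 15360, four minibatches give global/local minibatch sizes of 3840/1920, and 50M total timesteps divided by the global batch yields the 3255 updates.

```python
# Sanity check: relations between the hyperparameters above
# (plain arithmetic, not taken from the training script).
world_size = 2
local_num_envs = 60
num_steps = 128
num_minibatches = 4
total_timesteps = 50_000_000

local_batch_size = local_num_envs * num_steps                # 7680
batch_size = world_size * local_batch_size                   # 15360
minibatch_size = batch_size // num_minibatches               # 3840
local_minibatch_size = local_batch_size // num_minibatches   # 1920
num_envs = world_size * local_num_envs                       # 120
num_updates = total_timesteps // batch_size                  # 3255

print(local_batch_size, batch_size, minibatch_size,
      local_minibatch_size, num_envs, num_updates)
```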
|
|