# (CleanRL) PPO Agent Playing Hopper-v4
This is a trained model of a PPO agent playing Hopper-v4. The model was trained with CleanRL, and the most up-to-date training code can be found here.
## Get Started
To use this model, please install the `cleanrl` package with the following command:

```bash
pip install "cleanrl[ppo_fix_continuous_action]"
python -m cleanrl_utils.enjoy --exp-name ppo_fix_continuous_action --env-id Hopper-v4
```
Please refer to the documentation for more detail.
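If you prefer to load the checkpoint manually instead of going through `cleanrl_utils.enjoy`, the sketch below shows one possible way to do it. It assumes the Hub repository contains a `ppo_fix_continuous_action.cleanrl_model` file holding the `state_dict` of CleanRL's 64-unit tanh actor-critic for continuous actions; both the filename and the architecture are assumptions, not guarantees of this card. It also omits any observation/reward normalization wrappers used during training, so the return it prints may differ from the reported score.

```python
# Minimal manual-evaluation sketch (assumptions noted above; this is not the
# official cleanrl_utils.enjoy entry point).
import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn
from huggingface_hub import hf_hub_download


def layer_init(layer, std=np.sqrt(2), bias_const=0.0):
    # Orthogonal initialization, as used by CleanRL's PPO implementations.
    nn.init.orthogonal_(layer.weight, std)
    nn.init.constant_(layer.bias, bias_const)
    return layer


class Agent(nn.Module):
    # Assumed architecture: the 64-unit tanh actor-critic from CleanRL's
    # continuous-action PPO; the checkpoint's state_dict keys must match it.
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.critic = nn.Sequential(
            layer_init(nn.Linear(obs_dim, 64)), nn.Tanh(),
            layer_init(nn.Linear(64, 64)), nn.Tanh(),
            layer_init(nn.Linear(64, 1), std=1.0),
        )
        self.actor_mean = nn.Sequential(
            layer_init(nn.Linear(obs_dim, 64)), nn.Tanh(),
            layer_init(nn.Linear(64, 64)), nn.Tanh(),
            layer_init(nn.Linear(64, act_dim), std=0.01),
        )
        self.actor_logstd = nn.Parameter(torch.zeros(1, act_dim))


env = gym.make("Hopper-v4")
agent = Agent(env.observation_space.shape[0], env.action_space.shape[0])
ckpt = hf_hub_download(
    repo_id="sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed1",
    filename="ppo_fix_continuous_action.cleanrl_model",  # assumed filename
)
agent.load_state_dict(torch.load(ckpt, map_location="cpu"))
agent.eval()

# Roll out one deterministic episode (mean action, no exploration noise).
obs, _ = env.reset(seed=1)
episodic_return, done = 0.0, False
while not done:
    with torch.no_grad():
        action = agent.actor_mean(torch.tensor(obs, dtype=torch.float32))
    obs, reward, terminated, truncated, _ = env.step(action.numpy())
    episodic_return += float(reward)
    done = terminated or truncated
print(f"episodic return: {episodic_return:.2f}")
```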
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed1/raw/main/ppo_fix_continuous_action.py
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/sdpkjc/Hopper-v4-ppo_fix_continuous_action-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_fix_continuous_action.py --save-model --upload-model --hf-entity sdpkjc --env-id Hopper-v4 --seed 1 --track
```
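If you only want to train locally, it should be possible to drop the experiment-tracking and Hub-upload flags; the variant below is an untested sketch that assumes `--track`, `--upload-model`, and `--hf-entity` are optional.

```bash
# Local-only reproduction sketch (assumes the tracking/upload flags are optional):
# trains with the same seed and saves the model locally, without requiring
# Weights & Biases or Hugging Face credentials.
python ppo_fix_continuous_action.py --save-model --env-id Hopper-v4 --seed 1
```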
## Hyperparameters
```python
{'anneal_lr': True,
 'batch_size': 2048,
 'capture_video': False,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'cuda': True,
 'ent_coef': 0.0,
 'env_id': 'Hopper-v4',
 'exp_name': 'ppo_fix_continuous_action',
 'gae_lambda': 0.95,
 'gamma': 0.99,
 'hf_entity': 'sdpkjc',
 'learning_rate': 0.0003,
 'max_grad_norm': 0.5,
 'minibatch_size': 64,
 'norm_adv': True,
 'num_envs': 1,
 'num_minibatches': 32,
 'num_steps': 2048,
 'save_model': True,
 'seed': 1,
 'target_kl': None,
 'torch_deterministic': True,
 'total_timesteps': 1000000,
 'track': True,
 'update_epochs': 10,
 'upload_model': True,
 'vf_coef': 0.5,
 'wandb_entity': None,
 'wandb_project_name': 'cleanRL'}
```
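Several of these values are derived rather than set independently: in CleanRL's PPO scripts the batch and minibatch sizes follow from the rollout settings. The snippet below simply recomputes them to make the relationship explicit; the update count assumes the usual `total_timesteps // batch_size` schedule.

```python
# Recompute the derived hyperparameters from the rollout settings above.
num_envs, num_steps = 1, 2048
batch_size = num_envs * num_steps               # 2048 -> matches 'batch_size'
num_minibatches = 32
minibatch_size = batch_size // num_minibatches  # 64   -> matches 'minibatch_size'
total_timesteps = 1_000_000
num_updates = total_timesteps // batch_size     # 488 policy updates (assumed schedule)
print(batch_size, minibatch_size, num_updates)
```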
## Evaluation results
- mean_reward on Hopper-v4 (self-reported): 1476.53 +/- 348.71