---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 285.14 +/- 21.10
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
Made as part of the [Deep RL course](https://huggingface.co/learn/deep-rl-course). The hyperparameters were tuned with Optuna, as introduced in the course. This is my first successful attempt at using Optuna, so do not expect the code or parameters to be ideal!
I was able to improve upon my result from Unit 1 (https://huggingface.co/humnrdble/DeepRL-unit1). Both models were trained for 1,500,000 steps. The video of my first attempt certainly looks smoother, but it scores worse.
The code is available in unit1-notebook-tuned.ipynb, but no attempt was made to make it particularly legible.
Hyperparameters deviating from the Stable-Baselines3 defaults (a sketch of the corresponding Optuna search space follows the list):
- gamma: 1 - 0.006075594024321983 (≈ 0.99392)
- max_grad_norm: 1.8559426752164974
- exponent_n_steps: 9 (i.e. n_steps = 2**9 = 512)
- learning_rate: 0.0011176199638550707
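
A minimal sketch of how such a search space might look in Optuna. The sampling ranges, trial budget, and per-trial training length here are illustrative guesses shaped like the tuned values above, not the exact settings from unit1-notebook-tuned.ipynb:

```python
import optuna
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

def objective(trial: optuna.Trial) -> float:
    # Illustrative search space; the actual bounds in the notebook may differ.
    gamma = 1.0 - trial.suggest_float("one_minus_gamma", 1e-4, 1e-1, log=True)
    max_grad_norm = trial.suggest_float("max_grad_norm", 0.3, 5.0)
    n_steps = 2 ** trial.suggest_int("exponent_n_steps", 5, 11)
    learning_rate = trial.suggest_float("learning_rate", 1e-5, 1e-2, log=True)

    model = PPO(
        "MlpPolicy",
        "LunarLander-v2",
        gamma=gamma,
        max_grad_norm=max_grad_norm,
        n_steps=n_steps,
        learning_rate=learning_rate,
        verbose=0,
    )
    model.learn(total_timesteps=100_000)  # short budget per trial
    mean_reward, _ = evaluate_policy(model, model.get_env(), n_eval_episodes=10)
    return mean_reward

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```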
## Usage (with Stable-baselines3)
A minimal loading sketch. The checkpoint filename is an assumption based on the course's `package_to_hub` naming convention; adjust it if the file in this repo is named differently:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the course's naming convention; adjust if needed.
checkpoint = load_from_hub("humnrdble/DeepRL-unit1-optuna", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
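
To sanity-check the loaded model, you can evaluate it for a few episodes. The environment id comes from this card; the rest is standard Stable-Baselines3 usage (assuming a gymnasium version that still registers LunarLander-v2):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```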