---
library_name: stable-baselines3
tags:
  - Walker2DBulletEnv-v0
  - deep-reinforcement-learning
  - reinforcement-learning
  - stable-baselines3
model-index:
  - name: TD3
    results:
      - task:
          type: reinforcement-learning
          name: reinforcement-learning
        dataset:
          name: Walker2DBulletEnv-v0
          type: Walker2DBulletEnv-v0
        metrics:
          - type: mean_reward
            value: 2280.24 +/- 566.59
            name: mean_reward
            verified: false
---

# **TD3** Agent playing **Walker2DBulletEnv-v0**

This is a trained model of a TD3 agent playing Walker2DBulletEnv-v0 using the stable-baselines3 library and the RL Zoo.

The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

## Usage (with SB3 RL Zoo)

- RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo
- SB3: https://github.com/DLR-RM/stable-baselines3
- SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib

Install the RL Zoo (with SB3 and SB3-Contrib):

```bash
pip install rl_zoo3
```

```bash
# Download the model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2DBulletEnv-v0 -orga Emperor-WS -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2DBulletEnv-v0 -f logs/
```

If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:

```bash
python -m rl_zoo3.load_from_hub --algo td3 --env Walker2DBulletEnv-v0 -orga Emperor-WS -f logs/
python -m rl_zoo3.enjoy --algo td3 --env Walker2DBulletEnv-v0 -f logs/
```
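
You can also load the downloaded agent directly with stable-baselines3, without the zoo's CLI. Below is a minimal sketch (not the zoo's exact code), assuming an SB3 1.x / gym setup where importing `pybullet_envs` registers the BulletEnv ids; the zip path is a typical RL Zoo layout and is an assumption, so adjust it to wherever `load_from_hub` actually saved the archive:

```python
import gym
import pybullet_envs  # noqa: F401 -- registers Walker2DBulletEnv-v0 with gym

from stable_baselines3 import TD3
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("Walker2DBulletEnv-v0")

# Hypothetical path -- adjust to where load_from_hub saved the model.
model = TD3.load("logs/td3/Walker2DBulletEnv-v0_1/Walker2DBulletEnv-v0.zip", env=env)

# Reproduces a mean_reward-style metric like the one reported in the metadata above.
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```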

## Training (with the RL Zoo)

```bash
python -m rl_zoo3.train --algo td3 --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo td3 --env Walker2DBulletEnv-v0 -f logs/ -orga Emperor-WS
```
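
If you want to continue training the published agent outside the zoo, here is a hedged sketch using plain stable-baselines3; the path and timestep budget are illustrative:

```python
import gym
import pybullet_envs  # noqa: F401 -- registers Walker2DBulletEnv-v0 with gym

from stable_baselines3 import TD3

env = gym.make("Walker2DBulletEnv-v0")

# Hypothetical path -- point this at the downloaded or locally trained zip.
model = TD3.load("logs/td3/Walker2DBulletEnv-v0_1/Walker2DBulletEnv-v0.zip", env=env)

# reset_num_timesteps=False keeps the step counter going instead of restarting it.
model.learn(total_timesteps=100_000, reset_num_timesteps=False)
model.save("logs/td3/Walker2DBulletEnv-v0_continued")
```

Note that the replay buffer is not stored inside the model zip, so resumed off-policy training starts with an empty buffer unless you saved it separately.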

## Hyperparameters

```python
OrderedDict([('batch_size', 256),
             ('buffer_size', 200000),
             ('gamma', 0.98),
             ('gradient_steps', 1),
             ('learning_rate', 0.0007),
             ('learning_starts', 10000),
             ('n_timesteps', 1000000.0),
             ('noise_std', 0.1),
             ('noise_type', 'normal'),
             ('policy', 'MlpPolicy'),
             ('policy_kwargs', 'dict(net_arch=[400, 300])'),
             ('train_freq', 1),
             ('normalize', False)])
```
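
For reference, here is a sketch of how these zoo hyperparameters map onto a direct stable-baselines3 `TD3` constructor call; the zoo assembles this automatically, `n_timesteps` becomes the `learn()` budget, and `noise_type: 'normal'` with `noise_std: 0.1` becomes a `NormalActionNoise`:

```python
import gym
import numpy as np
import pybullet_envs  # noqa: F401 -- registers Walker2DBulletEnv-v0 with gym

from stable_baselines3 import TD3
from stable_baselines3.common.noise import NormalActionNoise

env = gym.make("Walker2DBulletEnv-v0")
n_actions = env.action_space.shape[0]

model = TD3(
    "MlpPolicy",
    env,
    batch_size=256,
    buffer_size=200_000,
    gamma=0.98,
    gradient_steps=1,
    learning_rate=7e-4,
    learning_starts=10_000,
    train_freq=1,
    # noise_type 'normal' + noise_std 0.1 -> Gaussian exploration noise
    action_noise=NormalActionNoise(mean=np.zeros(n_actions), sigma=0.1 * np.ones(n_actions)),
    policy_kwargs=dict(net_arch=[400, 300]),
)
# normalize=False above means no VecNormalize wrapper is needed.
model.learn(total_timesteps=1_000_000)
```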

## Environment Arguments

```python
{'render_mode': 'rgb_array'}
```
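
The `render_mode: 'rgb_array'` argument is what enables frame capture for the replay video. Below is a hedged sketch of recording a rollout yourself with SB3's `VecVideoRecorder`, assuming your installed `pybullet_envs` build accepts `render_mode` at construction time; the model path, output folder, and rollout length are illustrative:

```python
import gym
import pybullet_envs  # noqa: F401 -- registers Walker2DBulletEnv-v0 with gym

from stable_baselines3 import TD3
from stable_baselines3.common.vec_env import DummyVecEnv, VecVideoRecorder

# render_mode="rgb_array" asks the env for RGB frames instead of a GUI window.
env = DummyVecEnv([lambda: gym.make("Walker2DBulletEnv-v0", render_mode="rgb_array")])
env = VecVideoRecorder(env, "videos/", record_video_trigger=lambda step: step == 0, video_length=1000)

# Hypothetical path -- adjust to where the model zip actually lives.
model = TD3.load("logs/td3/Walker2DBulletEnv-v0_1/Walker2DBulletEnv-v0.zip")

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
env.close()  # finalizes and writes the video file
```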