This repository contains model weights for reinforcement-learning agents operating in RoadEnv.
## Models
- Recurrent Soft Actor-Critic (RSAC/SAC-LSTM) [Agent] [Training] [Test]
- Recurrent Soft Actor-Critic Share (RSAC-Share) [Paper] [Agent] [Training]
- Soft Actor-Critic (SAC) [Agent] [Training] [Test]
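The weights are meant to be restored into the corresponding agent implementations linked above. A minimal loading sketch, assuming the checkpoints are PyTorch files (the file name `sac_urban_road.pt` and the key layout are illustrative assumptions, not this repository's actual format):

```python
# Hedged sketch: inspect a checkpoint before wiring it into an agent.
# The file name is hypothetical; use the actual weight file from this repository.
import torch

checkpoint = torch.load("sac_urban_road.pt", map_location="cpu")

# A checkpoint is typically either a raw state_dict or a dict of components
# (e.g. actor/critic networks); print the keys to see what it contains.
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
```

See the linked Agent and Training code for how each model's networks are constructed and restored.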
## Usage
```python
# Register environment
from road_env import register_road_envs
register_road_envs()

# Make environment
import gymnasium as gym
env = gym.make('urban-road-v0', render_mode='rgb_array')

# Configure parameters (example)
env.configure({
    "random_seed": None,
    "duration": 60,
})
obs, info = env.reset()

# Graphic display
import matplotlib.pyplot as plt
plt.imshow(env.render())

# Execution
terminated = truncated = False
while not (terminated or truncated):
    action = ...  # Your agent code here
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()  # Update graphic
```
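To evaluate one of the pretrained models in this loop, the `action = ...` placeholder is where the agent's policy is queried. A hedged sketch, assuming a hypothetical `SACAgent` class with `load` and `select_action` helpers (the real class and method names are in the linked Agent code):

```python
# Illustrative rollout with a pretrained agent. SACAgent, load(), and
# select_action() are assumed names, not this repository's actual API.
agent = SACAgent.load("sac_urban_road.pt")

obs, info = env.reset()
terminated = truncated = False
episode_return = 0.0
while not (terminated or truncated):
    action = agent.select_action(obs)  # typically deterministic at test time
    obs, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
print(f"Episode return: {episode_return:.2f}")
```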
## Evaluation results
- mean-reward on urban-road-v0 (self-reported): 0.53 - 0.72
- mean-reward on urban-road-v0 (self-reported): 0.62 - 0.76