antmaze/namespace_metadata.json
{"display_name": "Ant Maze", "description": "The [Ant Maze](https://robotics.farama.org/envs/maze/ant_maze/) datasets present a navigation domain that replaces the 2D ball from <a href=\"../pointmaze\" title=\"poitnmaze\">pointmaze</a> with the more complex 8-DoF <a href=\"https://gymnasium.farama.org/environments/mujoco/ant/\" title=\"ant\">Ant</a> quadruped robot. This dataset was introduced in [D4RL](https://github.com/Farama-Foundation/D4RL/wiki/Tasks#antmaze)[1] to test the stitching challenge using a\nmorphologically complex robot that could mimic real-world robotic navigation tasks. Additionally, for this task the reward is sparse 0-1 which is activated upon reaching the goal.\n\nTo collect the data, a goal reaching expert policy is previously trained with the [SAC](https://stable-baselines3.readthedocs.io/en/master/modules/sac.html#stable_baselines3.sac.SAC) algorithm provided in Stable Baselines 3[2]. This goal reaching policy is then used by the Ant agent to follow a set of waypoints generated by a planner ([QIteration](https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a))[3] to the final goal location. Because the controllers memorize the reached waypoints, the data collection policy is non-Markovian.\n\n## References\n\n[1] Fu, Justin, et al. \u2018D4RL: Datasets for Deep Data-Driven Reinforcement Learning\u2019. CoRR, vol. abs/2004.07219, 2020, https://arxiv.org/abs/2004.07219.\n\n[2] Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, & Noah Dormann (2021). Stable-Baselines3: Reliable Reinforcement Learning Implementations. Journal of Machine Learning Research, 22(268), 1-8.\n\n[3] Lambert, Nathan. \u2018Fundamental Iterative Methods of Reinforcement Learnin\u2019. Apr 8, 2020, https://towardsdatascience.com/fundamental-iterative-methods-of-reinforcement-learning-df8ff078652a"}