Upload abstract/2307.02276.txt with huggingface_hub
abstract/2307.02276.txt +5 -0
abstract/2307.02276.txt
ADDED
@@ -0,0 +1,5 @@
+Standard reinforcement learning (RL) agents never explore intelligently the way a human does, for example by taking into account complex domain priors and previous exploration. Even basic intelligent exploration strategies, such as exhaustive search, are only poorly or inefficiently approximated by approaches such as novelty search or intrinsic motivation. More complicated strategies, such as learning new skills, climbing stairs, opening doors, or conducting experiments, are even more difficult. This lack of intelligent exploration limits sample efficiency and prevents solving hard-exploration domains. We argue that a core barrier prohibiting many RL approaches from learning intelligent exploration is that these methods attempt to explore and exploit simultaneously, even though the two goals often conflict.
+
+We propose a novel meta-RL framework called First-Explore, which consists of two policies: one policy that learns to explore and one policy that learns to exploit. Once trained, we can explore with the explore policy for as long as desired, and then exploit with the exploit policy, conditioned on all the information gained during exploration. This approach avoids the conflict of trying to explore and exploit at once.
+
+We demonstrate that First-Explore can learn intelligent exploration strategies such as exhaustive search, and that it outperforms dominant standard RL and meta-RL approaches on domains where exploration requires sacrificing reward. First-Explore is a significant step towards creating meta-RL algorithms capable of learning human-level exploration, which is essential for solving challenging unseen hard-exploration domains.
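
The second added paragraph describes the framework's structure at a high level: a dedicated explore policy gathers information for as long as desired, and a separate exploit policy then acts on everything that was gathered. The sketch below is a minimal illustration of that explore-then-exploit split on a toy k-armed bandit with hand-coded policies; it is not the authors' implementation (in First-Explore both policies are learned, history-conditioned meta-RL policies), and all names and parameters here (explore_policy, exploit_policy, run_trial, n_explore, n_exploit) are illustrative assumptions.

import random


def explore_policy(step: int, n_arms: int) -> int:
    """Exhaustive-search-style exploration: cycle through every arm in turn."""
    return step % n_arms


def exploit_policy(pull_counts, reward_sums) -> int:
    """Greedy exploitation conditioned on everything gathered while exploring."""
    means = [s / c if c else float("-inf") for s, c in zip(reward_sums, pull_counts)]
    return max(range(len(means)), key=means.__getitem__)


def run_trial(arm_means, n_explore=30, n_exploit=100, seed=0):
    rng = random.Random(seed)
    n_arms = len(arm_means)
    pull_counts = [0] * n_arms
    reward_sums = [0.0] * n_arms

    # Explore phase: sacrifice reward to gather information about every arm.
    for t in range(n_explore):
        arm = explore_policy(t, n_arms)
        pull_counts[arm] += 1
        reward_sums[arm] += rng.gauss(arm_means[arm], 1.0)

    # Exploit phase: act greedily on the exploration context for the rest of the trial.
    total = 0.0
    for _ in range(n_exploit):
        arm = exploit_policy(pull_counts, reward_sums)
        total += rng.gauss(arm_means[arm], 1.0)
    return total / n_exploit


if __name__ == "__main__":
    # Average exploit-phase reward should approach the best arm's mean (0.9).
    print(run_trial(arm_means=[0.1, 0.5, 0.9, 0.3]))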