DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

Domain adaptation is an important open problem in deep reinforcement learning (RL). In many scenarios of interest, data is hard to obtain, so agents may learn a source policy in a setting where data is readily available, with the hope that it generalises well to the target domain. We propose a new multi-stage RL agent, DARLA (DisentAngled Representation Learning Agent), which learns to see before learning to act. DARLA's vision is based on learning a disentangled representation of the observed environment. Once DARLA can see, it is able to acquire source policies that are robust to many domain shifts, even with no access to the target domain. DARLA significantly outperforms conventional baselines in zero-shot domain adaptation scenarios, an effect that holds across a variety of RL environments (Jaco arm, DeepMind Lab) and base RL algorithms (DQN, A3C and EC).
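To make the multi-stage idea concrete, below is a minimal, hypothetical PyTorch sketch of the "learn to see, then learn to act" pipeline: a beta-VAE is first trained on source-domain observations to obtain a disentangled latent representation, the encoder is then frozen, and a small policy head (here a DQN-style Q head) is trained on the latent codes only. All names, dimensions, and the plain pixel reconstruction loss are illustrative assumptions, not the authors' implementation; the paper's vision module uses a richer reconstruction target and the policy is trained with standard RL losses rather than the dummy data shown here.

# Hypothetical sketch of DARLA's two-stage pipeline (not the authors' code):
# stage 1 learns a disentangled vision module with a beta-VAE on source-domain
# observations; stage 2 freezes that encoder and trains a policy head on the
# latent codes.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 32          # size of the disentangled latent z (assumed)
BETA = 4.0               # beta > 1 encourages disentanglement (beta-VAE)
N_ACTIONS = 8            # hypothetical discrete action space

class BetaVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 2 * LATENT_DIM),               # mu, logvar
        )
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        return mu, logvar

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.dec(z), mu, logvar

def beta_vae_loss(x, recon, mu, logvar, beta=BETA):
    # Simplified pixel MSE reconstruction; the KL term is weighted by beta.
    recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon_loss + beta * kl

# ---- Stage 1: "learning to see" on source-domain observations ----
vae = BetaVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-4)
for _ in range(5):                                  # a few illustrative steps
    obs = torch.rand(16, 3, 64, 64)                 # stand-in for env frames
    recon, mu, logvar = vae(obs)
    loss = beta_vae_loss(obs, recon, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

# ---- Stage 2: "learning to act" on the frozen disentangled features ----
for p in vae.parameters():
    p.requires_grad_(False)                         # vision module stays fixed

policy_head = nn.Sequential(                        # e.g. a DQN-style Q head
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),
)
with torch.no_grad():
    z, _ = vae.encode(torch.rand(1, 3, 64, 64))     # observation -> latent
q_values = policy_head(z)                           # the policy only sees z
action = q_values.argmax(dim=1)

The intuition behind the zero-shot result is that the policy never sees raw pixels: domain shifts that change surface appearance while preserving the underlying generative factors leave the policy's input distribution largely intact, so the source policy can transfer without any target-domain data.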