Research Abstract
Imitation learning (IL) frameworks in robotics typically assume that a domain expert's demonstrations always contain a correct way of doing the task. Despite its theoretical convenience, this assumption has limited practical value for an IL-powered robot in the real world. There are many reasons why a real-world expert may provide demonstrations that contain incorrect or potentially unsafe ways of performing a task. For IL-powered robots to work in the real world, IL frameworks need to detect such adversarial demonstrations and avoid learning from them. This paper proposes an IL framework that autonomously detects and removes adversarial demonstrations, if they exist in the demonstration set, while directly learning a task policy from the expert. The proposed framework, which we term Robust Maximum Entropy behavior cloning (R-MaxEnt), learns a stochastic model that maps states to actions. In doing so, R-MaxEnt …
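The abstract describes a stochastic state-to-action policy learned by maximum-entropy behavior cloning. The paper's actual method is not reproduced here; as a rough illustration only, the sketch below shows a generic tabular maximum-entropy behavior-cloning setup (a hypothetical construction, not the authors' R-MaxEnt algorithm): a softmax policy is fit to demonstration pairs by gradient ascent on log-likelihood plus an entropy bonus, and each demonstration is then scored by its average log-likelihood under the learned policy, so that unusually low-scoring (potentially adversarial) demonstrations can be flagged.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fit_policy(pairs, n_states, n_actions, lr=0.5, epochs=300, ent_weight=0.05):
    """Fit a tabular stochastic policy pi(a|s) = softmax(logits[s]) by
    gradient ascent on demonstration log-likelihood plus an entropy bonus
    (a generic MaxEnt-style behavior-cloning objective, not the paper's)."""
    logits = np.zeros((n_states, n_actions))
    for _ in range(epochs):
        pi = softmax(logits)
        grad = np.zeros_like(logits)
        for s, a in pairs:
            grad[s] -= pi[s]      # gradient of the log-partition term
            grad[s, a] += 1.0     # indicator for the demonstrated action
        # Entropy-bonus gradient: dH/dz_k = -pi_k (log pi_k - E_pi[log pi])
        logp = np.log(pi + 1e-12)
        ent_grad = -pi * (logp - (pi * logp).sum(axis=1, keepdims=True))
        logits += lr * (grad / len(pairs) + ent_weight * ent_grad)
    return softmax(logits)

def demo_score(pi, pairs):
    """Average log-likelihood of one demonstration under the learned policy;
    unusually low scores flag potentially adversarial demonstrations."""
    return float(np.mean([np.log(pi[s, a] + 1e-12) for s, a in pairs]))
```

For example, if most demonstrations take action `s % 2` in state `s` while one demonstration takes the opposite action, the flipped demonstration receives a markedly lower `demo_score` and can be removed before final policy training.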
Research Date
Research Department
Research Journal
2021 International Conference on Intelligent Robots and Systems (IROS)
Research Member
Research Pages
7835-7841
Research Publisher
IEEE/RSJ
Research Website
https://ieeexplore.ieee.org/abstract/document/9636203/
Research Year
2021