03:15 - 03:30 AEST | 12:15 - 12:30 CDT | 19:15 - 19:30 CEST | 17:15 - 17:30 UTC
Paper #3 – Query-based Targeted Action-Space Adversarial Policies on Deep Reinforcement Learning Agents
- Xian Yeow Lee
- Yasaman Esfandiari
- Kai Liang Tan
- Soumik Sarkar
Advances in computing resources have resulted in increasingly complex cyber-physical systems (CPS). As CPS have grown more complex, the focus has shifted to deep reinforcement learning (DRL)-based methods for controlling these systems. This is in part due to: 1) the difficulty of obtaining accurate models of complex CPS for traditional control, and 2) the capability of DRL algorithms to learn control policies from data, which can be adapted and scaled to real, complex CPS. To securely deploy DRL in production, it is essential to examine the weaknesses of DRL-based controllers (policies) towards malicious attacks from all angles. This work investigates targeted attacks in the action-space domain (actuation attacks), which perturb the outputs of a controller. We show that a black-box attack model that generates perturbations with respect to an adversarial goal can be formulated as another reinforcement learning problem. Thus, an adversarial policy can be trained using conventional DRL methods. Experimental results showed that adversarial policies which observe only the nominal policy's output generate stronger attacks than adversarial policies that observe both the nominal policy's input and output. Further analysis revealed that nominal policies whose outputs are frequently at the boundaries of the action space are naturally more robust towards adversarial policies. Lastly, we propose the use of adversarial training with transfer learning to induce robust behaviors in the nominal policy, which decreases the rate of successful targeted attacks by approximately 50%.
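The core formulation described above — casting the black-box actuation attack as a second RL problem whose agent observes the nominal policy's output and emits a bounded action perturbation — can be sketched as an environment wrapper. Everything below (`ToyEnv`, `AdversarialActionWrapper`, the distance-to-target adversarial reward, the per-step perturbation budget) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

class ToyEnv:
    """1-D integrator toy environment: state += action (a stand-in for a CPS)."""
    action_low, action_high = -1.0, 1.0

    def reset(self):
        self.state = np.array([5.0])
        return self.state

    def step(self, action):
        self.state = self.state + action
        return self.state, 0.0, False  # (state, reward, done)

def nominal_policy(state):
    # Nominal controller regulates the state toward 0 (hypothetical policy).
    return np.clip(-0.5 * state, -1.0, 1.0)

class AdversarialActionWrapper:
    """Exposes the actuation attack as an RL problem for the adversary.

    The adversary's observation is only the nominal policy's output (the
    stronger attack variant reported in the abstract); its action is a
    budget-limited perturbation added to that output.
    """
    def __init__(self, env, policy, target_state, budget=0.1):
        self.env = env
        self.policy = policy
        self.target = np.asarray(target_state, dtype=float)
        self.budget = budget  # max per-step perturbation magnitude (assumed)
        self._state = None

    def reset(self):
        self._state = self.env.reset()
        return self.policy(self._state)

    def step(self, perturbation):
        nominal_action = self.policy(self._state)
        delta = np.clip(perturbation, -self.budget, self.budget)
        attacked = np.clip(nominal_action + delta,
                           self.env.action_low, self.env.action_high)
        self._state, _, done = self.env.step(attacked)
        # Targeted attack: reward the adversary for driving the system
        # toward an adversarial goal state (negative distance, assumed form).
        adv_reward = -np.linalg.norm(self._state - self.target)
        return self.policy(self._state), adv_reward, done
```

With this wrapper, any conventional DRL algorithm can be trained as the adversarial policy, since from the adversary's perspective it is an ordinary environment with observations, actions, and rewards.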