Vulnerability Assessment of Reinforcement Learning in Stochastic Security Games

Published in ---, 2024

Reinforcement learning (RL) offers a promising approach for adaptable security systems, but the vulnerability of RL algorithms remains a concern. This paper investigates the vulnerability of RL algorithms (specifically, independent Q-learning) used for defense in stochastic security games with linear influence networks. As a benchmark for worst-case scenarios, we consider an advanced attacker possessing complete knowledge of the game and of the defender’s learning rule. Leveraging this knowledge, the attacker faces a Markov decision process (MDP) with a high-dimensional continuous state space. We present a neural network-based approximation scheme to solve this MDP and demonstrate its effectiveness through extensive simulations of stochastic security games. Our work identifies potential weaknesses in RL algorithms and highlights the importance of vulnerability assessment in developing robust RL-based security solutions.
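
To make the setting concrete, here is a minimal sketch (not the paper’s implementation) of the two interacting decision problems described above: a defender running independent Q-learning over a small security game on a linear influence network, and an attacker who, knowing the defender’s learning rule, treats the defender’s Q-values themselves as part of its MDP state, a continuous vector it scores with a small neural network. All names, dimensions, and the reward model below are hypothetical assumptions chosen for illustration.

```python
# Illustrative sketch only: a stateless independent Q-learner as the defender,
# and a neural Q-function for the attacker whose "state" is the defender's
# Q-table. Network size, rewards, and hyperparameters are hypothetical.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

N_NODES = 3            # hypothetical: nodes in the linear influence network
N_ACTIONS = N_NODES    # each player targets one node per step
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Hypothetical linear influence matrix: the value of node i is an
# influence-weighted function of its neighbors.
W = rng.uniform(0.0, 1.0, size=(N_NODES, N_NODES))

def defender_reward(defend, attack):
    """Hypothetical reward: the influence-weighted value of the attacked
    node is kept if it was defended, lost otherwise."""
    value = W[attack].sum()
    return value if defend == attack else -value

# Defender: independent Q-learning, here reduced to a single stateless
# Q-table that ignores the attacker's strategy.
q_def = np.zeros(N_ACTIONS)

def defender_step(attack):
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(q_def.argmax())
    r = defender_reward(a, attack)
    q_def[a] += ALPHA * (r - q_def[a])   # stateless TD(0) update
    return a, r

# Attacker: knows the update rule above, so its MDP state is the defender's
# Q-table itself -- continuous and, in realistic games, high-dimensional.
# A small neural network stands in for the paper's approximation scheme.
class AttackerQNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_ACTIONS, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))

    def forward(self, q_table):
        return self.net(q_table)

attacker_q = AttackerQNet()
scores = attacker_q(torch.tensor(q_def, dtype=torch.float32))
attack = int(scores.argmax())            # greedy attack under predicted values
defend, r = defender_step(attack)
print(f"attack={attack} defend={defend} defender_reward={r:+.3f}")
```

The sketch is meant only to show why the attacker’s problem is hard: even in this toy version, the attacker’s state space is the continuum of possible defender Q-tables, which is what motivates a function-approximation approach rather than exact dynamic programming.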