Attacks on Learning in Multi-agent Systems

Many learning algorithms have been proposed for the design of control policies in cooperative and competitive multi-agent systems. We explore the robustness of some such algorithms to the presence of strategic agents. First, we show that some recently proposed multi-agent reinforcement learning algorithms are vulnerable to being hijacked by even a single agent that prioritizes its individual utility over the team utility, and we propose a way to make these algorithms robust to such attacks. Then we consider a game setting in which agents employ a fictitious-play-based learning algorithm, and we show that an agent can steer the game to a more favorable equilibrium by deviating from the prescribed algorithm.
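The second phenomenon can be illustrated with a minimal sketch. The game below is a hypothetical 2x2 coordination game (battle-of-the-sexes style, not taken from the paper) with two pure equilibria, (A,A) preferred by the row player and (B,B) preferred by the column player. Under standard fictitious play, each player best-responds to the empirical frequency of the opponent's past actions; if the column player deviates by stubbornly playing B, the row player's beliefs shift and play moves to the column player's preferred equilibrium.

```python
import numpy as np

# Hypothetical 2x2 coordination game: two pure equilibria (A,A) and (B,B);
# the row player prefers (A,A), the column player prefers (B,B).
ROW = np.array([[2.0, 0.0],
                [0.0, 1.0]])   # row player's payoffs
COL = np.array([[1.0, 0.0],
                [0.0, 2.0]])   # column player's payoffs

def best_response(payoff, opp_counts):
    """Best response to the empirical distribution of the opponent's play."""
    beliefs = opp_counts / opp_counts.sum()
    return int(np.argmax(payoff @ beliefs))

def play(col_deviates, rounds=200):
    # Each player starts with uniform (1, 1) pseudo-counts over the
    # opponent's two actions.
    row_counts = np.ones(2)   # row's counts of the column player's actions
    col_counts = np.ones(2)   # column's counts of the row player's actions
    for _ in range(rounds):
        a_row = best_response(ROW, row_counts)
        if col_deviates:
            a_col = 1         # stubbornly play B, ignoring fictitious play
        else:
            a_col = best_response(COL.T, col_counts)
        row_counts[a_col] += 1
        col_counts[a_row] += 1
    return a_row, a_col       # actions played in the final round

print(play(col_deviates=False))  # both follow fictitious play -> (0, 0), i.e. (A, A)
print(play(col_deviates=True))   # column deviates -> (1, 1), i.e. (B, B)
```

With both players following fictitious play, the dynamics settle on (A,A); when the column player deviates, the row player's beliefs concentrate on B and play converges to (B,B), the deviator's preferred equilibrium.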