Recent years have witnessed significant interest in multi-agent and networked control systems, with applications ranging from teams of autonomous vehicles to communication networks to smart-grid energy systems. The setup is a collection of decision-making components, each with local information and limited communication, interacting to balance a collective objective against local incentives. While game theory is well known for its traditional role as a modeling framework in the social sciences, it is seeing growing interest as a design approach for distributed control. Of particular interest is game theoretic learning, in which the focus shifts away from equilibrium solution concepts and towards the dynamics of how decision makers reach equilibrium. This talk presents a tutorial overview of game theoretic learning, from its origins as a "descriptive" tool for social systems to its "prescriptive" role as an approach to designing online learning algorithms for distributed-architecture control. The talk presents a s
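As a minimal sketch of the kind of learning dynamics the abstract refers to (not taken from the talk itself), consider fictitious play in a simple 2x2 coordination game: each player best-responds to the empirical frequency of the other player's past actions, and play settles on a Nash equilibrium. The game, payoffs, and parameters here are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: fictitious play in a 2x2 coordination game.
# Both players receive payoff A[i, j] when they play actions i and j;
# with A = I, they earn 1 for matching and 0 otherwise.
A = np.eye(2)

def fictitious_play(steps=200):
    # counts[i][a] tracks how often player i has played action a
    # (initialized uniformly so beliefs start at 50/50).
    counts = [np.ones(2), np.ones(2)]
    actions = [0, 0]
    for _ in range(steps):
        for i in (0, 1):
            j = 1 - i
            belief = counts[j] / counts[j].sum()      # empirical frequency of opponent
            actions[i] = int(np.argmax(A @ belief))   # best response to that belief
        for i in (0, 1):
            counts[i][actions[i]] += 1
    return actions

print(fictitious_play())  # the players coordinate on a common action
```

The dynamics are decentralized in the spirit of the abstract: each player uses only its own observations of the opponent's play, yet joint behavior converges to an equilibrium of the game.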