Jeff S. Shamma


Jeff S. Shamma is a Professor and Chair of Electrical Engineering at the King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia, where he is also the director of the Robotics, Intelligent Systems & Control laboratory (RISC). He is the former Julian T. Hightower Chair in Systems & Control in the School of Electrical and Computer Engineering at Georgia Tech, and he has also held faculty positions at the University of Minnesota, The University of Texas at Austin, and the University of California, Los Angeles. Shamma received a Ph.D. in systems science and engineering from MIT in 1988. He is the recipient of an NSF Young Investigator Award, the American Automatic Control Council Donald P. Eckman Award, and the Mohammed Dahleh Award, and he has been an IEEE Fellow since 2006. He is currently the deputy editor-in-chief of the IEEE Transactions on Control of Network Systems. Shamma's research is in the general area of feedback control and systems theory. His most recent research has been in decision and control for distributed multiagent systems and the related topics of game theory and network science, with applications to cyberphysical and societal network systems.

Contact Information
Affiliation: 
King Abdullah University of Science and Technology (KAUST), KSA
Position: 
Distinguished Lecturer; Deputy Editor-in-Chief - IEEE Transactions on Control of Network Systems; TCNS Senior Editor

Distinguished Lecture Program

Talk Title: Game theory and multi-agent control

Recent years have witnessed significant interest in the area of multi-agent or networked control systems, with applications ranging from autonomous vehicle teams to communication networks to smart grid energy systems. The setup is a collection of decision-making components with local information and limited communication interacting to balance a collective objective with local incentives. While game theory is well known for its traditional role as a modeling framework in the social sciences, it is seeing growing interest as a design approach for distributed control. Of particular interest is game theoretic learning, in which the focus shifts away from equilibrium solution concepts and towards the dynamics of how decision makers reach equilibrium. This talk presents a tutorial overview of game theoretic learning, from its origins as a "descriptive" tool for social systems to its "prescriptive" role as an approach to designing online learning algorithms for distributed architecture control. The talk presents a sampling of prior and recent results in these areas along with several illustrative examples of distributed coordination.
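A classic instance of game theoretic learning is fictitious play, in which each agent best-responds to the empirical frequency of the other agents' past actions. The following minimal sketch (not an algorithm from the talk itself; the payoff matrix is a hypothetical two-action coordination game) shows how such a rule drives decentralized players toward a coordinated equilibrium:

```python
import numpy as np

def fictitious_play(A, steps=200):
    """Discrete-time fictitious play for a two-player identical-interest game.

    A[i, j] is the common payoff when the row player picks i and the
    column player picks j. Each player tracks the opponent's empirical
    action frequencies and plays a best response to them.
    """
    counts = [np.ones(A.shape[0]), np.ones(A.shape[1])]  # smoothed action counts
    actions = [0, 0]
    for _ in range(steps):
        freq = [c / c.sum() for c in counts]
        # best response to the opponent's empirical mixed strategy
        actions[0] = int(np.argmax(A @ freq[1]))
        actions[1] = int(np.argmax(A.T @ freq[0]))
        counts[0][actions[0]] += 1
        counts[1][actions[1]] += 1
    return actions

# Hypothetical coordination game: matching on action 0 pays 2, on action 1 pays 1.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
print(fictitious_play(A))  # both players settle on action 0: [0, 0]
```

Each player needs only its own payoffs and observations of the other's actions, which is what makes rules of this kind attractive for distributed-architecture control.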

Talk Title: Exploring Bounded Rationality in Game Theory

Solution concepts in game theory, such as Nash equilibrium, traditionally ignore the processes and associated computational costs of how agents go about deriving strategies. The notion of bounded rationality seeks to address such issues through a variety of alternative formulations. This talk presents two settings motivated by bounded rationality. First, we consider incomplete information dynamic games. A Nash equilibrium in this setting requires each agent to solve a partially observed Markov decision problem, which in turn demands knowledge of a possibly extensive environment as well as the strategies of other agents. We introduce an alternative notion, called “empirical evidence equilibria”, in which agents form naive models with available measurements. These models reflect an agent’s limited awareness of its surroundings, and the level of naivety or sophistication can be different for each agent. We show that such equilibria are guaranteed to exist for any profile of agent rationality and compare the concept to mean field equilibria. Second, we investigate learning in evolutionary games, where the focus is on dynamic behaviors away from equilibrium rather than characterizations of equilibrium. A lingering issue in this framework is what constitutes “natural” versus “concocted” learning rules. Building on prior work on so-called “stable games”, we introduce a class of dynamics motivated by control theoretic passivity theory. We show how passivity theory both captures and extends selected prior work on evolutionary games and offers a candidate for what constitutes natural learning.
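The canonical example of an evolutionary game dynamic of the kind discussed above is the replicator dynamic, in which the share of a population playing a strategy grows in proportion to how much that strategy's fitness exceeds the population average. The sketch below (an illustrative textbook example, not a result from the talk; the Hawk-Dove payoffs are hypothetical) integrates the replicator dynamic and shows convergence to a mixed equilibrium:

```python
import numpy as np

def replicator(A, x0, dt=0.01, steps=5000):
    """Euler integration of the replicator dynamic x_i' = x_i (f_i - f_bar),

    where f = A x is the fitness of each pure strategy and f_bar = x . f
    is the population-average fitness.
    """
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        f = A @ x                      # fitness of each pure strategy
        x = x + dt * x * (f - x @ f)   # replicator vector field
        x = np.clip(x, 0.0, 1.0)
        x = x / x.sum()                # keep x on the simplex
    return x

# Hypothetical Hawk-Dove game (resource V=2, fight cost C=4):
# the interior equilibrium has a Hawk share of V/C = 0.5.
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])
x = replicator(A, [0.2, 0.8])
print(np.round(x, 3))  # approaches the mixed equilibrium [0.5, 0.5]
```

The Hawk-Dove game is a stable game in the sense referenced above, and the convergence seen here is the sort of away-from-equilibrium behavior that the passivity-based analysis generalizes.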