Twenty years ago I delivered a plenary lecture with the same title at the ACC in Boston. I will go back and reflect on the successes and failures, on what we have learned and which problems remain open. The focus will be on robust and constrained control and the real-time implementation of control algorithms. I will comment on the progress we have made on the control of hybrid systems and how our vastly more powerful computational resources have affected the design tools we have at our disposal. Throughout the lecture, industrial examples from the automotive and power electronics domains and the industrial energy sector will illustrate the arguments.
Hybrid systems combine continuous-time dynamics with discrete modes of operation. The state of such a system usually has two distinct components: one that evolves continuously, typically according to a differential equation, and another that changes only through instantaneous jumps.
We present a model for Stochastic Hybrid Systems (SHSs) where transitions between discrete modes are triggered by stochastic events, much like transitions between states of a continuous-time Markov chain. However, in SHSs the rate at which transitions occur depends on both the continuous and the discrete states of the hybrid system. The combination of continuous dynamics, discrete events, and stochasticity results in a modeling framework with tremendous expressive power, making SHSs appropriate for describing the dynamics of a wide variety of systems. This observation has been the driving force behind several recent research efforts aimed at developing tools to analyze these systems.
In this talk, we use several examples to illustrate the use of SHSs as a versatile modeling tool to describe dynamical systems that arise in distributed control and estimation, networked control systems, molecular biology, and ecology. In parallel, we will also discuss several mathematical tools that can be used to analyze such systems, including the use of the extended generator, Lyapunov-based arguments, infinite-dimensional moment dynamics, and finite-dimensional truncations.
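As a toy illustration of these ingredients, here is a minimal sketch of a two-mode SHS in which the continuous state flows according to a mode-dependent differential equation and the discrete mode jumps at a state-dependent rate. All dynamics, rates, and constants below are invented for illustration and are not taken from the talk.

```python
import random

def simulate_shs(T=10.0, dt=1e-3, seed=0):
    """Euler simulation of a two-mode stochastic hybrid system.

    Mode 0: dx/dt = -x + 2 (drives x up toward 2)
    Mode 1: dx/dt = -x     (drives x down toward 0)
    The jump rate out of each mode depends on the continuous state x,
    mimicking SHS transitions whose intensity is state dependent.
    """
    rng = random.Random(seed)
    x, q, t = 0.5, 0, 0.0
    trajectory = [(t, x, q)]
    while t < T:
        # continuous flow in the current mode
        dxdt = -x + 2.0 if q == 0 else -x
        x += dxdt * dt
        # state-dependent transition rate (illustrative choice):
        # mode 0 is more likely to switch when x is high, mode 1 when x is low
        lam = 5.0 * x if q == 0 else 5.0 * max(2.0 - x, 0.0)
        if rng.random() < lam * dt:   # thinning approximation over one step
            q = 1 - q                 # instantaneous discrete jump
        t += dt
        trajectory.append((t, x, q))
    return trajectory

traj = simulate_shs()
xs = [x for _, x, _ in traj]
print(min(xs), max(xs))  # x remains confined near the switching region
```

Because the switching intensity rises whenever the flow carries x away from the band between the two equilibria, the sample paths hover there, which is exactly the kind of qualitative behavior the extended generator and moment-dynamics tools are used to quantify.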
Central banks and funds managers work with mathematical models. In recent years, a new class of model has come into prominence—generalized dynamic factor models. These are characterized by having a modest number of inputs, corresponding to key economic variables and industry-sector-wide variables for central banks and funds managers respectively, and a large number of outputs, economic time series data or individual stock price movements for example. It is common to postulate that the input variables are linked to the output variables by a finite-dimensional linear time-invariant discrete-time dynamic model, the outputs of which are corrupted by noise to yield the measured data. The key problems faced by central banks or funds managers are model fitting given the output data (but not the input data), and then using the model for prediction purposes. These are essentially tasks usually considered by those practicing identification and time series modelling. Nevertheless there is considerable underlying linear system theory. This flows from the fact that the underlying transfer function matrix is tall. This presentation will describe a number of consequences of this seemingly trivial fact, and then go on to indicate how to cope with time series with different periodicities, e.g. monthly and quarterly, where multirate signal processing and control concepts are of relevance.
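To make the setting concrete, the following sketch simulates a static, single-factor analogue of such a model (many noisy outputs driven by one latent variable) and recovers the loading direction from output data alone via the leading eigenvector of the sample covariance. The single-factor static model, the noise level, and the power-iteration estimator are illustrative assumptions, not the generalized dynamic factor machinery of the talk.

```python
import math
import random

def power_iteration(M, iters=200):
    # leading eigenvector of a symmetric matrix via power iteration
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

rng = random.Random(7)
n_out, T = 10, 5000
c = [rng.uniform(-1, 1) for _ in range(n_out)]   # unknown factor loadings
# many noisy outputs driven by one latent factor (input data never observed)
Y = []
for _ in range(T):
    f = rng.gauss(0, 1)
    Y.append([c[i] * f + rng.gauss(0, 0.3) for i in range(n_out)])

# sample covariance; its top eigenvector estimates the loading direction
S = [[sum(y[i] * y[j] for y in Y) / T for j in range(n_out)]
     for i in range(n_out)]
v = power_iteration(S)
align = abs(sum(v[i] * c[i] for i in range(n_out))) \
        / math.sqrt(sum(x * x for x in c))
print(align)  # near 1: the factor direction is recovered from outputs alone
```

The point of the toy example is the same asymmetry the abstract highlights: with many more outputs than latent inputs (a "tall" system), output data alone pins down the unobserved factor structure.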
Moore's Law describes an important trend in the history of computer hardware: that the number of transistors that can be inexpensively placed on an integrated circuit is increasing exponentially, doubling approximately every two years. The self-fulfilling prophecy of Moore's Law is under threat. The new bottleneck comes not from hardware and technology capabilities, but from control, computation, and algorithmic constraints in various steps in the design flow such as verification and optical proximity correction.
In the first part of the talk, we describe our efforts in developing a new class of wireless sensors for use in semiconductor manufacturing. These sensors are fully self-contained with on-board power, communications, and signal processing electronics. The sensors offer unprecedented spatial and temporal resolution, making them suitable for equipment diagnostics and design, and for process optimization and control. We will illustrate the applications of these sensors in IC processing, and describe our efforts at commercializing this technology.
In the second part of the talk, we will outline three very large scale computational problems that are critical to next generation ICs. These are: inverse lithography, design verification, and design-for-manufacturability. We formulate these mathematically using the common language of clip calculus and show possible solutions. We will conclude by arguing for the vital importance of computation and modeling in this economically important field.
In economic networks, behaviors that seem intelligently conceived (for the most part) emerge without a coordinating entity, but as a result of multiple agents pursuing self-interested strategies. Such behaviors have traditionally been interpreted using paradigms of non-cooperative game theory. However, standard game theory usually assumes that the players know the models of their own payoff functions and the actions of the other players. Relying on the method of extremum seeking, we design algorithms that do not need any modeling information and where, instead, the players employ only the measurements of their own payoff values. The extremum seeking algorithms are proved to converge to the Nash equilibria of the underlying non-cooperative games. In other words, extremum seeking allows the player to learn its Nash strategy. Extremum seeking algorithms are not restricted to games with a limited number of players but are, in fact, applicable to games with uncountably many players. While in finite games each player employing extremum seeking must employ a distinct probing frequency, a remarkable situation arises in games with uncountably many players - we show that, as long as any frequency is employed by countably many players, convergence to the Nash equilibrium is guaranteed. Such large (for all practical purposes uncountable) games arise in future energy trading markets involving households that own plug-in hybrid electric vehicles, whose battery capacity is used for the storage of energy during periods of excess production from wind and solar sources and for selling energy back to the grid, at a price, or in quantity, determined by a controller pursuing profit maximization for the household with the help of an extremum seeking algorithm.
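The following sketch illustrates the basic mechanism in a two-player quadratic game: each player perturbs its action with a distinct probing frequency, as the finite-game case requires, and updates using only measurements of its own payoff. The particular payoff functions, gains, and frequencies are invented for illustration; this toy game has a unique Nash equilibrium at (1, 1).

```python
import math

def payoffs(u1, u2):
    # Illustrative quadratic game with unique Nash equilibrium (1, 1):
    # best responses u1 = (u2+1)/2 and u2 = (u1+1)/2 intersect at (1, 1).
    J1 = -u1 * u1 + u1 * u2 + u1
    J2 = -u2 * u2 + u1 * u2 + u2
    return J1, J2

def extremum_seeking(T=2000.0, dt=0.01, k=0.4, a=0.1):
    # Each player measures only its own payoff value and perturbs its
    # action with a distinct probing frequency (w1 != w2).
    w1, w2 = 5.0, 7.3
    h1 = h2 = 0.0   # players' running estimates of their Nash strategies
    t = 0.0
    while t < T:
        s1, s2 = math.sin(w1 * t), math.sin(w2 * t)
        u1, u2 = h1 + a * s1, h2 + a * s2
        J1, J2 = payoffs(u1, u2)
        # gradient estimate: demodulate own payoff with own probing signal
        h1 += dt * k * s1 * J1
        h2 += dt * k * s2 * J2
        t += dt
    return h1, h2

h1, h2 = extremum_seeking()
print(h1, h2)  # both estimates settle near the Nash equilibrium (1, 1)
```

No player here uses any model of the payoff functions or observes the other's action; the averaged dynamics of the demodulated updates follow the pseudo-gradient of the game, which is what drives convergence to the Nash equilibrium.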
Extremely rare events occur in several areas of science and engineering. Some examples are huge movements in the prices of assets and commodities, and weather phenomena such as high winds and hurricanes. Attempts to model these kinds of events using the law of large numbers often fail because "rare" events seem to occur *far more frequently* than a Gaussian distribution would suggest. In some areas, such as clinical trials of drug candidates, a "law of large numbers" will not apply because the available data are far too limited to permit the drawing of confident conclusions. For instance, one may have to estimate a few dozen parameters on the basis of a few dozen samples (whereas prudence would suggest at least a few thousand samples -- but these are simply not available).
In this talk we give a *very elementary introduction* to some theoretical methods for modeling and coping with extremely rare or adverse events. To handle rare events, we suggest the use of "heavy tailed" random variables, which have a finite first moment but infinite variance. The form of the "law of large numbers" for such variables, and how it affects the design process, will be discussed. To model adverse events with limited data, we suggest the use of "worst case probability distributions," whose computation can be formulated as a linear programming problem and often leads to surprisingly good insights.
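A quick numerical illustration of the heavy-tailed regime: a Pareto law with tail index between 1 and 2 has a finite mean but infinite variance, and a tiny fraction of samples carries a disproportionate share of the total. The tail index, sample size, and seed below are arbitrary choices made for the sketch.

```python
import random

def pareto_heavy(alpha=1.5, n=200_000, seed=1):
    """Samples from a Pareto(alpha) law with x_min = 1.

    For 1 < alpha < 2 the mean is finite (alpha/(alpha-1)) but the
    variance is infinite, so "rare" large values dominate far more
    than a Gaussian model would suggest.
    """
    rng = random.Random(seed)
    # inverse-CDF sampling: X = (1 - U)^(-1/alpha) for U uniform on [0, 1)
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

xs = pareto_heavy()
mean = sum(xs) / len(xs)
xs.sort(reverse=True)
share = sum(xs[: len(xs) // 1000]) / sum(xs)
print(mean)   # hovers near the true mean alpha/(alpha-1) = 3
print(share)  # the top 0.1% of samples carry roughly 10% of the total
```

The second number is the striking one: for this tail index, the Lorenz-curve share of the top fraction p of samples scales like p^(1/3), so the largest 0.1% of observations account for about a tenth of the entire sum, a concentration a Gaussian model would essentially never produce.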
There has been remarkable progress in sampled-data control theory in the last two decades. The main achievement here is that there exists a digital (discrete-time) control law that takes the intersample behavior into account and makes the overall analog (continuous-time) performance optimal, in the sense of the H-infinity norm. This naturally suggests its application to digital signal processing, where the same hybrid nature of analog and digital is always prevalent. A crucial observation here is that the perfect band-limiting hypothesis, widely accepted in signal processing, is often inadequate for many practical situations. In practice, the original analog signals (sounds, images, etc.) are neither fully band-limited nor even close to being band-limited under current processing standards. The problem is to interpolate high-frequency components beyond the so-called Nyquist frequency, and these are precisely the intersample signals discarded through sampling. Assuming a natural signal generator model, sampled-data control theory provides an optimal platform for such problems. This new method has been implemented in custom LSI chips by the SANYO Corporation, and has been a success, with over 12 million chips produced. This talk provides a new problem formulation, a design procedure, and various applications in sound processing/compression and image processing.
This talk presents the Mean Field (or Nash Certainty Equivalence (NCE)) methodology initiated with Minyi Huang and Roland Malhamé for the analysis and control of large population stochastic dynamic systems. Optimal control problems for multi-agent stochastic systems, in particular those with non-classical information patterns and those with intrinsic competitive behavior, are in general intractable. Inspired by mean field approximations in statistical mechanics, we analyse the common situation where the dynamics and rewards of any given agent are influenced by certain averages of the mass multi-agent behavior. The basic result is that classes of such systems possess game theoretic (Nash) equilibria wherein each agent employs feedback control laws depending upon both its local state and the collectively generated mass effect. In the infinite population limit the agents become statistically independent, a phenomenon related to the propagation of chaos in mathematical physics. Explicit solutions in the linear quadratic Gaussian (LQG) - NCE case generalize classical LQG control to the massive multi-agent situation, while extensions of the Mean Field notion enable one to analyze a range of problems in systems and control. Specifically, generalizations to nonlinear problems may be expressed in terms of controlled McKean-Vlasov Markov processes, while localized (or weighted) mean field extensions, the effect of possible major players and adaptive control generalizations permit applications to microeconomics, biology and communications; furthermore, the standard equations of consensus theory, which are of relevance to flocking behavior in artificial and biological systems, have been shown to be derivable from the basic LQG - NCE equations. In the distinct point process setting, the Mean Field formulation yields call admission control laws which realize competitive equilibria for complex communication networks.
In this talk we shall motivate the Mean Field approach to stochastic control, survey the current results in the area by various research groups and make connections to physics, biology and economics. This talk presents joint work with Minyi Huang and Roland Malhamé, and Arman Kizilkale, Arthur Lazarte, Zhongjing Ma and Mojtaba Nourian.
Decoherence, caused by the interaction of a quantum system with its environment, plagues all quantum systems and leads to the loss of quantum properties that are vital for quantum computation and quantum information processing. Superficially, this problem resembles the disturbance decoupling problem of classical control theory. In this talk we first briefly review recent advances in quantum control. Then we propose a novel strategy using techniques from geometric systems theory to completely eliminate decoherence, and provide conditions under which this can be done. A novel construction employing an auxiliary system, the bait, which is instrumental in decoupling the system from the environment, is presented. This corresponds to an Internal Model Principle for quantum mechanical systems, which is quite different from the classical theory due to the quantum nature of the system. Almost all earlier work on decoherence control employs density matrices and stochastic master equations to analyze the problem. Our approach to decoherence control involves the bilinear input-affine model of a quantum control system, which lends itself to various techniques from classical control theory, with non-trivial modifications for the quantum regime. This approach yields interesting results on open-loop decouplability and Decoherence Free Subspaces (DFSs). The results are also shown to be superior to those obtained via master equations. Finally, a methodology to synthesize the feedback parameters themselves is given that, technology permitting, could be implemented for practical 2-qubit systems performing decoherence-free quantum computing. Open problems and future directions in quantum control will also be discussed.
We address several issues that are important for developing a comprehensive understanding of the problems of control over networks. Proceeding from bottom to top, we describe theoretical frameworks to study the following issues, and present some answers: (i) Network information theory: Are there limits to information transfer over wireless networks? How should nodes in a network cooperate to achieve information transfer? (ii) In-network information processing: How should data from distributed sensors be fused over a wireless network? Can one classify functions of sensor data vis-a-vis how difficult they are to compute over a wireless network? (iii) Real-time scheduling over wireless networks: How should packets with hard deadlines be scheduled for transmission over unreliable nodes? What QoS guarantees can be provided with respect to latencies and throughputs? (iv) Clock synchronization over wireless networks: What are the ultimate limits to synchronization error? How should clocks be synchronized? (v) System level guarantees in networked control: How can one provide overall guarantees for networked control systems that take into account hybrid behavior, real-time interactions, and distributed aspects? (vi) Abstractions and architecture: What are appropriate abstractions, and what is an appropriate architecture, to simplify networked control system design and deployment?
It is widely recognized that many of the most important challenges faced by control engineers involve the development of methods to design and analyze systems having components most naturally described by differential equations interacting with components best modeled using sequential logic. This situation can arise both in the development of high-volume, cost-sensitive consumer products and in the design and certification of one-of-a-kind, complex and expensive systems. The response of the control community to this challenge includes work on limited communication control, learning control, control languages, and various efforts on hybrid systems. This work has led to important new ideas but progress has been modest and the more interesting results seem to lack the kind of unity that would lead to a broadly inclusive theory. In this talk we describe an approach to problems of this type based on sample path descriptions of finite state Markov processes and suitable adaptations of known results about linear systems. The result is an insightful design technique yielding finite state controllers for systems governed by differential equations. We illustrate with concrete examples.
Networked embedded sensing and control systems are increasingly becoming ubiquitous in applications from manufacturing, chemical processes and autonomous robotic space, air and ground vehicles, to medicine and biology. They offer significant advantages, but present serious challenges to information processing, communication and decision-making. This area, called cyber-physical systems, which has been brought to the forefront primarily because of advances in technology that make it possible to place computational intelligence out of the control room and in the field, is the latest challenge in systems and control, where our quest for higher degrees of autonomy has brought us, over the centuries, from the ancient water clock to autonomous spacecraft. Our quest for autonomy leads to consideration of increasingly complex systems with ever more demanding performance specifications, and to mathematical representations beyond time-driven continuous linear and nonlinear systems, to event-driven and to hybrid systems; and to interdisciplinary research in areas at the intersection of control, computer science, and networking, driven by application needs in physics, chemistry, biology, and finance. After an introduction to some of the main research and education issues we need to address and a brief description of lessons learned in hybrid systems research, we shall discuss recent methodologies we are currently working on to meet stability and performance specifications in networked control systems, which use passivity, model-based control and intermittent feedback control.
A gray-box model is one that has a known structure (generally constrained to a strict subset of the class of models it is drawn from) but has unknown parameters. Such models typically embody or reflect the underlying physical or mechanistic understanding we have about the system, as well as structural features such as the delineation of subsystems and their interconnections. The unknown parameters in the gray-box model then become the focus of our system identification efforts.
In a variety of application domains, ranging from biology and medicine to power systems, the gray-box models that practitioners accept --- as plausible representations of the reality they deal with every day --- have been built up over decades of study, and are large, detailed and complex. In addition to being difficult to simulate or compute or design with, a significant feature of these models is the uncertainty associated with many or most of the parameters in the model. The data that one collects from the associated system is rarely rich enough to allow reliable identification of all these parameters, yet there are good reasons to not be satisfied with direct black-box identification of a reduced-order model. The challenge then is to develop meaningful reduced-order gray-box models that reflect the detailed, hard-won knowledge one has about the system, while being better suited to identification and simulation and control design than the original large model.
Practitioners generally seem to have an intuitive understanding of what aspects of the original model structure, and which variables and parameters, should be retained in a physically or mechanistically meaningful reduced-order model for whatever aspect of the system behavior they are dealing with at a particular time. Can we capture and perhaps improve on what they are doing when they develop their (often informal) reduced models?
This talk will illustrate and elaborate on the above themes. Examples will be presented of approaches and tools that might be used to explore and expose structure in a detailed gray-box model, to guide gray-box reduction.
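As a minimal caricature of gray-box identification, the sketch below assumes a known first-order structure x[k+1] = a x[k] + b u[k] with unknown parameters (a, b), and fits them by least squares from noisy output data. The model, noise level, and data sizes are illustrative assumptions, far simpler than the large physiological or power-system models discussed above.

```python
import random

def simulate(a, b, u, x0=0.0, noise=0.02, seed=3):
    # Known first-order structure x[k+1] = a x[k] + b u[k]; the
    # parameters (a, b) are the gray-box unknowns to be identified.
    rng = random.Random(seed)
    x, ys = x0, []
    for uk in u:
        ys.append(x + rng.gauss(0.0, noise))  # noisy output measurement
        x = a * x + b * uk
    return ys

def identify(y, u):
    # Least-squares fit of (a, b) via the 2x2 normal equations of the
    # regression y[k+1] ~ a y[k] + b u[k].
    Saa = Sab = Sbb = Sya = Syu = 0.0
    for k in range(len(y) - 1):
        Saa += y[k] * y[k]
        Sab += y[k] * u[k]
        Sbb += u[k] * u[k]
        Sya += y[k + 1] * y[k]
        Syu += y[k + 1] * u[k]
    det = Saa * Sbb - Sab * Sab
    a_hat = (Sya * Sbb - Syu * Sab) / det
    b_hat = (Syu * Saa - Sya * Sab) / det
    return a_hat, b_hat

rng = random.Random(4)
u = [rng.uniform(-1, 1) for _ in range(2000)]
y = simulate(a=0.9, b=0.5, u=u)
a_hat, b_hat = identify(y, u)
print(a_hat, b_hat)  # close to the true parameters (0.9, 0.5)
```

The gray-box point is that the structure (first-order, linear) is fixed by prior knowledge and only the physically meaningful parameters are estimated; the reduction question in the talk is which structure and which parameters deserve to survive into a smaller model.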
Most individuals form their opinions about the quality of products, social trends and political issues via their interactions in social and economic networks. While the role of social networks as a conduit for information is as old as humanity, recent social and technological developments, such as Facebook, blogs and Twitter, have added further to the complexity of network interactions. Despite the ubiquity of social networks and their importance in communication, we know relatively little about how opinions form and information is transmitted in such networks. For example, does a large social network of individuals holding dispersed information aggregate it efficiently? Can falsehoods, misinformation and rumors spread over networks? Do social networks, empowered by our modern communication means, support the wisdom of crowds or their ignorance? Systematic analysis of these questions necessitates a combination of tools and insights from game theory, the study of multiagent systems, and control theory. Game theory is central for studying both the selfish decisions and actions of individuals and the information that they reveal or communicate. Control theory is essential for a holistic study of networks and developing the tools for optimization over networks. In this talk, I report recent work on combining game theoretic and control theoretic approaches to the analysis of social learning over networks.
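One of the simplest formal models of opinion formation in this vein is DeGroot-style repeated averaging over a weighted network. The sketch below, with an arbitrary three-agent weight matrix chosen for illustration, shows opinions converging to a consensus value.

```python
def degroot(W, x, steps=100):
    """DeGroot opinion dynamics: each agent repeatedly replaces its
    opinion with a weighted average of its neighbors' opinions
    (rows of W sum to 1). Under mild connectivity conditions all
    opinions converge to a common consensus value."""
    n = len(x)
    for _ in range(steps):
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# three agents: agent 0 listens mostly to itself, agents 1 and 2 mix more
W = [[0.8, 0.1, 0.1],
     [0.3, 0.4, 0.3],
     [0.1, 0.3, 0.6]]
x = degroot(W, [1.0, 0.0, 0.5])
print(x)  # all entries (nearly) equal: a consensus has formed
```

Whether such a consensus aggregates dispersed information *correctly* is exactly the wisdom-versus-ignorance question raised above: the consensus value weights each agent by its network influence, not by the quality of its information.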
Pursuit phenomena in nature have a vital role in the survival of species. In addition to prey-capture and mating behavior, pursuit phenomena appear to underlie territorial battles in certain species of insects. In this talk we discuss the geometric patterns in certain pursuit and prey capture phenomena in nature, and suggest sensorimotor feedback laws that explain such patterns. Our interest in this problem first arose from the study of a motion camouflage (stealth) hypothesis due to Srinivasan and his collaborators, and an investigation of insect capture behavior in the FM bat Eptesicus fuscus, initiated by Cynthia Moss. Models of interacting particles, developed in collaboration with Eric Justh, prove effective in formulating and deriving biologically plausible steering laws that lead to observed patterns.
The echolocating bat E. fuscus perceives the world around it in the dark, primarily through the information it gathers rapidly and dependably by probing the environment through controlled streams of pulses of frequency modulated ultrasound. The returning echoes from scatterers such as obstacles (cave walls, trees), predators (barn owls) and prey (insects), are captured and transduced into neuronal spike trains by the highly sensitive auditory system of the bat, and processed in the sensorimotor pathways of the brain to steer the bat’s flight in purposeful behavior. In joint work with Kaushik Ghose, Timothy Horiuchi, Eric Justh, Cynthia Moss, and Viswanadha Reddy, we have begun to understand the control systems guiding the flight. The effectiveness of the bat in coping with attenuation and noise, uncertainty of the environment, and sensorimotor delay makes it a most interesting model system for engineers concerned with goal-directed and resource-constrained information processing in robotics. The bat’s neural realizations of auditory-motor feedback loops may serve as models for implementations of algorithms in robot designs. While the primary focus of this talk is on pursuit, the results suggest ways to synthesize interaction laws that yield cohesion in collections of particles, treating pursuit as a building block in mechanisms for flocking in nature and in engineered systems.
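A minimal particle-model sketch of pursuit: a unit-speed pursuer steers its heading toward the line of sight to a slower, straight-moving target. This classical pure-pursuit law, and all gains and speeds below, are illustrative stand-ins for the biologically derived steering laws discussed in the talk.

```python
import math

def pursuit(T=30.0, dt=0.01):
    """Planar pure pursuit: the pursuer turns at a rate proportional
    to the angle between its heading and the line of sight to the
    target (an illustrative steering law, not the talk's models)."""
    px, py, ph = 0.0, 0.0, 0.0   # pursuer position and heading (rad)
    ex, ey = 5.0, 5.0            # evader starts to the northeast
    gain, ve = 4.0, 0.5          # steering gain; evader speed < pursuer's 1
    t, dists = 0.0, []
    while t < T:
        bearing = math.atan2(ey - py, ex - px)
        # wrap the heading error into (-pi, pi]
        err = math.atan2(math.sin(bearing - ph), math.cos(bearing - ph))
        ph += gain * err * dt     # steer toward the line of sight
        px += math.cos(ph) * dt   # unit-speed pursuer
        py += math.sin(ph) * dt
        ex += ve * dt             # evader flees east at constant speed
        dists.append(math.hypot(ex - px, ey - py))
        t += dt
    return dists

d = pursuit()
print(d[0], d[-1])  # the range closes over the course of the pursuit
```

Even this crude law reproduces the qualitative geometry of prey capture: because the pursuer is faster and continuously realigns with the line of sight, the range shrinks until the pursuer is effectively riding on the target.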
Over the past decade, game theorists have made substantial progress in identifying simple learning heuristics that lead to equilibrium behavior without making unrealistic demands on agents' information or computational abilities, as is the case in the perfect rationality approach to game theory. Recent research shows that very complex, interactive systems can equilibrate even when agents have virtually no knowledge of the environment in which they are embedded. This talk will survey different approaches to the problem of learning in games, show the various senses in which learning rules converge to equilibrium, and sketch the theoretical limits to what is achievable.
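Fictitious play is one of the classic learning heuristics of this kind: each player best-responds to the empirical frequencies of the opponent's past play, with no knowledge of the opponent's payoffs. The sketch below runs it on a prisoner's dilemma (payoff numbers chosen for illustration), where strict dominance makes convergence to the unique equilibrium immediate.

```python
def fictitious_play(A, B, steps=500):
    """Fictitious play in a 2x2 bimatrix game: each round, every
    player best-responds to the empirical frequency of the opponent's
    past actions. A and B are the row and column players' payoffs."""
    counts = [[0, 0], [0, 0]]   # counts[player][action]
    a, b = 0, 0                 # arbitrary initial actions
    for _ in range(steps):
        counts[0][a] += 1
        counts[1][b] += 1
        # best response of each player to the other's empirical mix
        a = max((0, 1), key=lambda i: sum(counts[1][j] * A[i][j] for j in (0, 1)))
        b = max((0, 1), key=lambda j: sum(counts[0][i] * B[i][j] for i in (0, 1)))
    return a, b

# Prisoner's dilemma: action 1 ("defect") strictly dominates for both
# players, so play settles on the unique equilibrium (1, 1).
A = [[3, 0], [5, 1]]            # row player's payoffs
B = [[3, 5], [0, 1]]            # column player's payoffs
print(fictitious_play(A, B))    # -> (1, 1)
```

In games without dominant strategies the empirical frequencies, rather than the actions themselves, are what converge (when they converge at all), which is one of the distinctions between senses of convergence the talk surveys.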
The analysis of signals into constituent harmonics and the estimation of their power distribution are considered fundamental to systems engineering. Due to its significance in modeling and identification, spectral analysis is in fact a "hidden technology" in a wide range of application areas, and a variety of sensor technologies, ranging from radar to medical imaging, rely critically upon efficient ways to estimate the power distribution from recorded signals. Robustness and accuracy are of utmost importance, yet there is no universal agreement on how these are to be quantified. Thus, in this talk, we will motivate the need for ways to compare power spectral distributions.
Metrics, in any field of scientific endeavor, must relate to physically meaningful properties of the objects under consideration. In this spirit, we will discuss certain natural notions of distance between power spectral densities. These will be motivated by problems in prediction theory and related properties of time-series. Analogies will be drawn with an old subject of a similar vein, that of quantifying distances between probability distributions, which has given rise to information geometry. The contrast and similarities between metrics will be highlighted by analyzing mechanical vibrations, speech, and visual tracking.
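As one concrete, textbook example of comparing power spectra, the sketch below computes the Itakura-Saito divergence between two sampled AR(1)-type spectra. This classical (non-symmetric) divergence is only an illustration; the distances motivated by prediction theory in the talk may differ from this choice.

```python
import math

def itakura_saito(S1, S2):
    """Itakura-Saito divergence between two sampled power spectra:
    (1/n) * sum over the grid of S1/S2 - log(S1/S2) - 1.
    It is nonnegative and vanishes exactly when the spectra agree."""
    n = len(S1)
    return sum(s1 / s2 - math.log(s1 / s2) - 1.0
               for s1, s2 in zip(S1, S2)) / n

def ar1_spectrum(r, n=512):
    # AR(1)-type spectrum S(w) = 1 / |1 - r e^{-jw}|^2 on a uniform grid
    return [1.0 / (1.0 - 2.0 * r * math.cos(w) + r * r)
            for w in (2 * math.pi * k / n for k in range(n))]

S_a, S_b = ar1_spectrum(0.5), ar1_spectrum(0.6)
print(itakura_saito(S_a, S_a))  # 0.0 for identical spectra
print(itakura_saito(S_a, S_b))  # strictly positive for distinct spectra
```

The Itakura-Saito divergence arose in speech processing precisely because it tracks perceptually and predictively meaningful spectral differences better than a plain L2 distance, which is the kind of "physically meaningful metric" criterion the talk emphasizes.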
Fifty years ago, when control was emerging as a scientific discipline fueled by developments in dynamic, recursive decision making, dynamical systems and stability theory, a separate discipline, differential games, was being born in response to the need to develop a framework and associated solution tools for strategic dynamic decision making in adversarial environments, as an outgrowth of game theory. The evolution of the two disciplines-control theory, and particularly optimal control, and the theory of differential games-initially followed somewhat different paths, but soon a healthy interaction between the two developed. Differential games, in both zero-sum and nonzero-sum settings, enabled the embedding of control into a broader picture and framework, and enriched the set of conceptual tools available to it. One of its essential ingredients-information structures (who knows what and when)-became a mainstay in control research, particularly in the context of large-scale systems. In the other direction, the rich set of mathematical tools developed for control, such as viscosity solutions, directly impacted progress in solvability of differential games. Such interactions between the two disciplines reached a climax when robustness became a prevalent issue in control, leading among others to a comprehensive treatment of H∞ control of both linear and nonlinear uncertain systems.
This Bode Lecture will dwell on the parallel developments in the two fields to the extent they influenced each other over the past half century, talk about the present, and embark on a journey into the future.
The input to state stability (ISS) paradigm is motivated as a generalization of classical linear systems concepts under coordinate changes. A summary is provided of the main theoretical results concerning ISS and related notions of input/output stability and detectability. A bibliography is also included, listing extensions, applications, and other current work.
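For reference, the defining estimate of ISS can be stated compactly (standard notation: β is a class-KL function and γ a class-K function):

```latex
% ISS for \dot{x} = f(x,u): there exist \beta \in \mathcal{KL} and
% \gamma \in \mathcal{K} such that, for all t \ge 0,
\[
  |x(t)| \;\le\; \beta\bigl(|x(0)|,\, t\bigr) + \gamma\bigl(\|u\|_{\infty}\bigr).
\]
```

With u ≡ 0 this reduces to global asymptotic stability, which is the sense in which ISS generalizes the classical linear notions mentioned above.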