Control theory is hardly alone among scientific communities experiencing some obsolescence anxiety in the face of machine learning, where decades, or centuries, of building first-principles models and designs are supplanted by data. While ML real-time feedback is unlikely to attain adaptive control's closed-loop guarantees for unstable plants that lack persistency of excitation, our community, adept at harnessing new ideas, has generated in a few years many other adroit ways to incorporate ML, from lightening methodological complexities to circumventing difficult constructions.
Rather than walking away from certificate-bearing control tools built by generations of control researchers, in this lecture I seek game-changing supporting roles for ML in control implementation. I present the emerging subject of employing the latest breakthroughs in deep learning, which approximate not functions but function-to-function mappings (nonlinear operators), in the complex field of PDE control. With neural operators, entire PDE control methodologies are encoded into what amounts to a function evaluation, leading to a thousandfold speedup and enabling PDE control implementations. Deep neural operators, such as DeepONet, are mathematically guaranteed to compute control inputs rapidly and to arbitrarily close accuracy, and they preserve the stabilization guarantees of the existing PDE backstepping controllers. Applications range from traffic and epidemiology to manufacturing, energy generation, and supply chains.
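To make the operator-learning idea concrete, here is a minimal sketch of the DeepONet architecture (a branch network encoding the sampled input function and a trunk network encoding the query point, combined by an inner product). The network sizes, the untrained random weights, and the example input function are illustrative assumptions, not the constructions or guarantees from the lecture.

```python
import numpy as np

# Minimal DeepONet-style forward pass (illustrative sketch, untrained weights).
# A neural operator maps a sampled input function (e.g., a spatially varying
# plant coefficient) to an output function (e.g., a gain kernel) at query points y.

rng = np.random.default_rng(0)

m, p = 64, 32                                 # sensor points, latent width (assumed)
W_b = rng.normal(size=(p, m)) / np.sqrt(m)    # branch net: one linear + tanh layer
b_b = np.zeros(p)
W_t = rng.normal(size=(p, 1))                 # trunk net: one linear + tanh layer
b_t = np.zeros(p)

def deeponet(u_samples, y_query):
    """Approximate (G u)(y): branch encodes the function, trunk encodes the point."""
    branch = np.tanh(W_b @ u_samples + b_b)                  # shape (p,)
    trunk = np.tanh(W_t @ y_query[None, :] + b_t[:, None])   # shape (p, len(y))
    return branch @ trunk                                    # inner product -> values at y

# Example: a sampled coefficient function on [0, 1] and a few query points.
x = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * x)                     # placeholder input function
y = np.linspace(0.0, 1.0, 10)
print(deeponet(u, y))                         # values of the (untrained) operator output
```

Once trained on pairs of input functions and the corresponding control kernels, evaluating such a network replaces the expensive kernel computation with a single forward pass, which is the source of the speedup described above.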
Snake robots are motivated by the slender and flexible body of biological snakes, which allows them to move in virtually any environment on land and in water. Since the snake robot is essentially a manipulator arm that can move by itself, it has a number of interesting applications, including firefighting and search-and-rescue operations. In water, the robot is a highly flexible and dexterous manipulator arm that can swim by itself like a sea snake. This highly flexible snake-like mechanism has excellent accessibility properties: not only can the snake robot access narrow openings and confined areas, it can also carry out highly complex manipulation tasks once there, since manipulation is an inherent capability of the system. This talk presents research results on modelling, analysis and control of snake robots, including both theoretical and experimental results. Ongoing efforts to bring the results from university research towards industrial use are also described.
The notion of what constitutes a robot has evolved considerably over the past five decades, from simple manipulator arms to large networks of interconnected autonomous and semi-autonomous agents. A constant in this evolutionary development has been the central nature of control theory in robotics to enable a vast array of applications in manufacturing automation, field and service robotics, medical robotics and other areas. In this talk we will present an historical perspective of control in robotics together with specific results in passivity-based control and control of underactuated robots. Finally, we will speculate about the future role of control theory in robotics in the era of human-robot interaction, machine learning, and big data analytics.
Feedback is as ubiquitous in nature as it is in design. So control theory can help us understand both natural and designed systems. Even better, generalized models abstracted from nature give us a mathematical means to connect control theoretic explanations of nature with opportunities in control design. Control theory is enriched by the language, questions, and perspectives of fields as diverse as animal behavior, cognitive science, and dance. I will present a model for multi-agent dynamics that is informed by these fields. The model derives from principles of symmetry and bifurcation, which exploit instability to recover the remarkable capacity of natural groups to trade off flexibility and stability.
The field of control provides the principles and methods used to design physical, biological and information systems that maintain desirable performance by sensing and automatically adapting to changes in the environment. The opportunities to apply control principles and methods are exploding. In this talk I will briefly review some of the past predictions for future directions in control (including some of my own) and provide some thoughts on how well the field is doing in terms of living up to its past promises of future success. The ultimate goal of the talk is to help inspire the next generation of controls researchers, balance theory with application, provide a view into the possible futures of control, give credit where it is due, and let the guard down and talk about personal stuff a bit.
High-gain observers play an important role in the design of feedback control for nonlinear systems. This lecture overviews the essentials of the technique. After a brief historical background, a motivating example is used to illustrate the main features of high-gain observers, with emphasis on the peaking phenomenon and the role of control saturation in dealing with it. The use of the observer in feedback control is discussed and a nonlinear separation principle is presented. The use of an extended high-gain observer as a disturbance estimator is covered. Challenges in implementing high-gain observers are discussed, with the effect of measurement noise being the most serious one. Techniques to cope with measurement noise are presented. The lecture ends by recounting the speaker's experience with experimental testing of high-gain observers.
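The following is a minimal simulation sketch of the idea: a second-order plant with output feedback through a high-gain observer, where the control is saturated so that the observer's peaking transient is not transmitted to the plant. The plant, gains, and saturation level are illustrative assumptions, not the lecture's specific examples.

```python
import numpy as np

# High-gain observer with saturated output feedback (illustrative sketch).
# Plant: x1' = x2, x2' = -sin(x1) + u, measured output y = x1.

eps = 0.01                 # high-gain parameter (smaller -> faster estimates, more peaking)
a1, a2 = 2.0, 1.0          # observer gains (assumed; roots of s^2 + a1 s + a2 in the LHP)
k1, k2 = 2.0, 2.0          # state-feedback gains (assumed)
u_max = 5.0                # control saturation level

def plant_f(x, u):
    return np.array([x[1], -np.sin(x[0]) + u])

def observer_f(xh, y, u):
    e = y - xh[0]
    return np.array([xh[1] + (a1 / eps) * e,
                     -np.sin(xh[0]) + u + (a2 / eps**2) * e])

dt, T = 1e-4, 5.0
x = np.array([1.0, 0.0])      # true initial state
xh = np.array([0.0, 0.0])     # observer starts with an estimation error
peak = 0.0
for _ in range(int(T / dt)):
    # Saturation keeps the observer's peaking from reaching the plant input.
    u = np.clip(-k1 * xh[0] - k2 * xh[1], -u_max, u_max)
    y = x[0]                                  # measured output
    x = x + dt * plant_f(x, u)                # Euler step of the plant
    xh = xh + dt * observer_f(xh, y, u)       # Euler step of the observer
    peak = max(peak, abs(xh[1]))

print("final state:", x, "estimate:", xh, "peak |xh2|:", peak)
```

The printed peak of the velocity estimate shows the peaking phenomenon; shrinking eps makes the estimates converge faster while the saturation keeps the closed loop well behaved.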
Distributed robotics refers to the control of, and design methods for, a system of mobile robots that 1) are autonomous, that is, have only sensory inputs---no outside direct commands, 2) have no leader, and 3) are under decentralized control. The subject of distributed robotics burst onto the scene in the late twentieth century and became very popular very quickly. The first problems studied were flocking and rendezvous. The most highly cited IEEE TAC paper in the subject is by Jadbabaie, Lin, and Morse (2003). This lecture gives a classroom-style presentation of the rendezvous problem. It is the most basic coordination task for a network of mobile robots. The robots in the rendezvous problem in the literature are most frequently kinematic points, modeled as simple integrators, dx/dt = u. Of course, a real wheeled robot has dynamics and is nonholonomic, and the first part of the lecture looks at this discrepancy. The second part reviews the solution to the rendezvous problem. The final part of the lecture concerns infinitely many robots. The lecture is aimed at non-experts.
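A minimal sketch of the kinematic-point setup mentioned above, with each robot obeying dx/dt = u: here every robot steers toward the average position of the neighbors within a fixed sensing radius. The radius, gains, and neighbor rule are one common illustrative choice, not necessarily the specific protocol analyzed in the lecture.

```python
import numpy as np

# Rendezvous sketch for kinematic points dx/dt = u: each robot moves toward the
# centroid of the neighbors it currently sees (fixed sensing radius, assumed values).

rng = np.random.default_rng(1)
N, radius, dt, steps = 20, 0.6, 0.05, 400
x = rng.uniform(0.0, 1.0, size=(N, 2))          # planar positions

for _ in range(steps):
    u = np.zeros_like(x)
    for i in range(N):
        d = np.linalg.norm(x - x[i], axis=1)
        nbrs = (d > 0) & (d <= radius)          # who robot i currently sees
        if nbrs.any():
            u[i] = x[nbrs].mean(axis=0) - x[i]  # steer toward neighbors' centroid
    x = x + dt * u                              # integrator dynamics dx/dt = u

print("spread after convergence:", np.ptp(x, axis=0))
```

As the lecture discusses, guaranteeing that all robots actually meet hinges on the visibility graph staying connected as the robots move, which is the delicate part of the analysis.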
My answer is "yes." In this lecture, I will make the case that there are some important open problems in finance which are ideally suited for researchers who are well versed in control theory. To this end, I will begin the presentation by quickly explaining what is meant by the notion of "technical analysis" in the stock market. Then I will address, from a control-theoretic point of view, a longstanding conundrum in finance: Why is it that so many asset managers, hedge funds and individual investors trade stock using technical analysis techniques despite the existence of a significant body of literature claiming that such methods are of questionable worth with little or no theoretical rationale? In fact, detractors describe such stock trading methods as "voodoo" and an "anathema to the academic world." To date, in the finance literature, the case for "efficacy" of such stock-trading strategies is based on statistics and empirical back-testing using historical data. With these issues providing the backdrop, my main objective in this lecture is to describe a new theoretical framework for stock trading - based on technical analysis and involving some simple ideas from robust and adaptive control. In contrast to the finance literature, where conclusions are drawn based on statistical evidence from the past, our control-theoretic point of view leads to robust certification theorems describing various aspects of performance. To illustrate how such a formal theory can be developed, I will describe results obtained to date on trend following, one of the most well-known technical analysis strategies in use. Finally, it should be noted that the main point of this talk is not to demonstrate that control-theoretic considerations lead to new "market beating" algorithms. It is to argue that strategies which have heretofore been analyzed via statistical processing of empirical data can actually be studied in a formal theoretical framework.
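As a toy illustration of treating a trading rule as a feedback law, the sketch below sizes a position as a simple feedback on a moving-average trend signal applied to synthetic prices. The price model, gains, and signal are assumptions for illustration only; this is a stand-in for, not the certified robust/adaptive strategies developed in, the lecture.

```python
import numpy as np

# Illustrative trend-following rule viewed as feedback: position ~ trend signal.
# Synthetic returns, window lengths, gain K, and position cap are assumed values.

rng = np.random.default_rng(2)
T = 1000
returns = 0.0004 + 0.01 * rng.standard_normal(T)   # synthetic daily returns
price = 100.0 * np.cumprod(1.0 + returns)

fast, slow, K, cap = 10, 50, 5.0, 1.0
equity = 1.0
for t in range(slow, T):
    # Trend signal from past prices only (no lookahead).
    trend = price[t - fast:t].mean() / price[t - slow:t].mean() - 1.0
    position = np.clip(K * trend, -cap, cap)       # feedback: investment level ~ trend
    equity *= 1.0 + position * returns[t]          # mark to market on the next return

print("final equity multiple:", equity)
```

The point of the lecture is precisely that rules of this flavor, usually judged only by back-tests, can instead be analyzed with robust-control-style certification arguments.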
Dynamic models for bipedal robots contain both continuous and discrete elements, with switching events that are spatially driven by unilateral constraints at ground contact and impulse-like forces that occur at foot touchdown. The complexity of the models has led to a host of ad hoc trial-and-error feedback designs. This presentation will show how nonlinear feedback control methods are providing model-based solutions that also enhance the ability of bipedal robots to walk, run, and recover from stumbles. The talk addresses both theoretical and experimental aspects of bipedal locomotion. Videos of some of the experiments have been covered in the popular press, bringing feedback control to the attention of the general public.
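The hybrid structure described above (continuous flow, a unilateral-constraint guard, and an impulse-like reset at touchdown) can be illustrated with a much simpler system. The sketch below uses a bouncing ball as a stand-in; it only shows the switching structure, not a biped model, and all parameter values are assumptions.

```python
# Minimal hybrid-system loop: flow until a guard is hit, then apply a reset map.
g, e = 9.81, 0.8            # gravity, coefficient of restitution (assumed values)
dt, T = 1e-3, 5.0
h, v = 1.0, 0.0             # height above ground and vertical velocity
impacts, t = 0, 0.0

while t < T:
    h += dt * v             # continuous flow between events
    v -= dt * g
    if h <= 0.0 and v < 0.0:    # guard: ground contact with downward velocity
        h = 0.0
        v = -e * v              # reset map: impulsive velocity change at touchdown
        impacts += 1
        if v < 0.05:            # stop once the bounces become tiny
            break
    t += dt

print("impacts before coming to rest:", impacts, "at time", round(t, 3))
```

Biped models follow the same pattern, with the continuous phase given by multi-link Lagrangian dynamics and the reset given by a rigid impact map at foot touchdown.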
The year 1948 was auspicious for information science and technology. Norbert Wiener's book Cybernetics was published by Wiley, the transistor was invented (and given its name), and Shannon's seminal paper "A Mathematical Theory of Communication" was published in the Bell System Technical Journal. In the years that followed, important ideas of Shannon, Wiener, Von Neumann, Turing and many others changed the way people thought about the basic concepts of control systems. Hendrik Bode himself was a Shannon collaborator in a paper on smoothing and prediction published in the Proceedings of the IRE in 1950. It is thus not surprising that by the time the earliest direct predecessor of CDC (the Discrete Adaptive Processes Symposium) was held in New York in June 1962, concepts from machine intelligence and information theory were not at all foreign to the control community. This talk will examine the interwoven evolution of control and information over the past fifty years, during which time the IEEE Conference on Decision and Control went from infancy to maturity. The talk will also discuss two new areas in information-based control. In collaboration with W.S. Wong, some recent work on control communication complexity has been aimed at a new class of optimal control problems in which distributed agents communicate through the dynamics of a control system in such a way that the control cost is minimized over many messages. Applications of the theory to robot communication through relative motions (e.g. robot dancing and team sports) and to distributed control of semi-classical models of quantum systems will be discussed. The talk will also discuss some recently discovered links between information and the differential topology of smooth random fields (joint work with D. Baronov). The latter work has been applied to a problem of rapid information acquisition in robotic reconnaissance, and it has suggested metrics by which to assess the tradeoff between speed and accuracy.
Twenty years ago I delivered a plenary lecture with the same title at the ACC in Boston. I will go back and reflect on the successes and failures, on what we have learned and which problems remain open. The focus will be on robust and constrained control and the real time implementation of control algorithms. I will comment on the progress we have made on the control of hybrid systems and how our vastly more powerful computational resources have affected the design tools we have at our disposal. Throughout the lecture, industrial examples from the automotive and power electronics domains and the industrial energy sector will illustrate the arguments.
This talk presents the Mean Field (or Nash Certainty Equivalence (NCE)) methodology initiated with Minyi Huang and Roland Malhamé for the analysis and control of large population stochastic dynamic systems. Optimal control problems for multi-agent stochastic systems, in particular those with non-classical information patterns and those with intrinsic competitive behavior, are in general intractable. Inspired by mean field approximations in statistical mechanics, we analyse the common situation where the dynamics and rewards of any given agent are influenced by certain averages of the mass multi-agent behavior. The basic result is that classes of such systems possess game theoretic (Nash) equilibria wherein each agent employs feedback control laws depending upon both its local state and the collectively generated mass effect. In the infinite population limit the agents become statistically independent, a phenomenon related to the propagation of chaos in mathematical physics. Explicit solutions in the linear quadratic Gaussian (LQG) NCE case generalize classical LQG control to the massive multi-agent situation, while extensions of the Mean Field notion enable one to analyze a range of problems in systems and control. Specifically, generalizations to nonlinear problems may be expressed in terms of controlled McKean-Vlasov Markov processes, while localized (or weighted) mean field extensions, the effect of possible major players and adaptive control generalizations permit applications to microeconomics, biology and communications; furthermore, the standard equations of consensus theory, which are of relevance to flocking behavior in artificial and biological systems, have been shown to be derivable from the basic LQG-NCE equations. In the distinct point process setting, the Mean Field formulation yields call admission control laws which realize competitive equilibria for complex communication networks. In this talk we shall motivate the Mean Field approach to stochastic control, survey the current results in the area by various research groups and make connections to physics, biology and economics. This talk presents joint work with Minyi Huang and Roland Malhamé, as well as Arman Kizilkale, Arthur Lazarte, Zhongjing Ma and Mojtaba Nourian.
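The consistency idea behind the NCE scheme (each agent best-responds to a posited mass trajectory, and the mass trajectory must in turn be the one the population generates) can be sketched in a deliberately simplified scalar, deterministic, finite-horizon setting. The dynamics, cost weights, coupling strength, and fixed-point iteration below are illustrative assumptions, not the LQG-NCE equations from the talk.

```python
import numpy as np

# Toy mean-field fixed point: agents x_{t+1} = a x_t + b u_t, each minimizing
# sum_t q (x_t - gamma * z_t)^2 + r u_t^2, where z is the population-average trajectory.

rng = np.random.default_rng(3)
a, b, q, r, gamma = 1.0, 1.0, 1.0, 1.0, 0.6      # assumed parameters
N, T = 200, 30
x0 = rng.normal(1.0, 0.5, size=N)                # heterogeneous initial conditions

# State trajectory is linear in the controls: x_{1..T} = Phi @ u + phi0 * x0.
Phi = np.zeros((T, T))
for t in range(1, T + 1):
    for s in range(t):
        Phi[t - 1, s] = a ** (t - 1 - s) * b
phi0 = np.array([a ** t for t in range(1, T + 1)])

def best_response(x_init, z):
    """Least-squares solution of the LQ tracking problem against the mass trajectory z."""
    A = np.vstack([np.sqrt(q) * Phi, np.sqrt(r) * np.eye(T)])
    y = np.concatenate([np.sqrt(q) * (gamma * z - phi0 * x_init), np.zeros(T)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return phi0 * x_init + Phi @ u               # resulting state trajectory x_{1..T}

z = np.zeros(T)                                  # initial guess for the mass trajectory
for _ in range(50):                              # NCE-style consistency iteration
    trajs = np.array([best_response(xi, z) for xi in x0])
    z_new = trajs.mean(axis=0)
    if np.max(np.abs(z_new - z)) < 1e-8:
        break
    z = z_new

print("converged mass trajectory (first 5 steps):", np.round(z[:5], 3))
```

At the fixed point each agent's feedback depends only on its own state and the precomputed mass trajectory, which is the decentralization property emphasized in the talk.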
Pursuit phenomena in nature have a vital role in survival of species. In addition to prey-capture and mating behavior, pursuit phenomena appear to underlie territorial battles in certain species of insects. In this talk we discuss the geometric patterns in certain pursuit and prey capture phenomena in nature, and suggest sensorimotor feedback laws that explain such patterns. Our interest in this problem first arose from the study of a motion camouflage (stealth) hypothesis due to Srinivasan and his collaborators, and an investigation of insect capture behavior in the FM bat Eptesicus fuscus, initiated by Cynthia Moss. Models of interacting particles, developed in collaboration with Eric Justh, prove effective in formulating and deriving biologically plausible steering laws that lead to observed patterns.
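For orientation, the sketch below simulates a generic unicycle-type pursuit steering law in which the pursuer turns at a rate proportional to the angle between its heading and the line of sight to the target ("pure pursuit"). The speeds, gain, and capture radius are assumed; the motion-camouflage steering laws derived in the work described here are different and more subtle.

```python
import numpy as np

# Pure-pursuit sketch: a constant-speed pursuer steers toward the line of sight
# to a constant-velocity target. All numerical values are illustrative assumptions.

dt, steps = 0.01, 5000
k, v_p, v_t = 4.0, 1.5, 1.0               # turning gain, pursuer and target speeds

p = np.array([0.0, 0.0]); theta = 0.0     # pursuer position and heading
tgt = np.array([5.0, 5.0]); phi = 2.0     # target position and (fixed) heading

for i in range(steps):
    los = tgt - p
    if np.linalg.norm(los) < 0.05:        # capture condition
        print("capture at step", i)
        break
    bearing = np.arctan2(los[1], los[0])
    err = np.arctan2(np.sin(bearing - theta), np.cos(bearing - theta))  # wrap to (-pi, pi]
    theta += dt * k * err                                  # steer toward the line of sight
    p = p + dt * v_p * np.array([np.cos(theta), np.sin(theta)])
    tgt = tgt + dt * v_t * np.array([np.cos(phi), np.sin(phi)])
else:
    print("no capture; final separation:", np.linalg.norm(tgt - p))
```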
The echolocating bat E. fuscus perceives the world around it in the dark, primarily through the information it gathers rapidly and dependably by probing the environment through controlled streams of pulses of frequency modulated ultrasound. The returning echoes from scatterers such as obstacles (cave walls, trees), predators (barn owls) and prey (insects) are captured and transduced into neuronal spike trains by the highly sensitive auditory system of the bat, and processed in the sensorimotor pathways of the brain to steer the bat's flight in purposeful behavior. In joint work with Kaushik Ghose, Timothy Horiuchi, Eric Justh, Cynthia Moss, and Viswanadha Reddy, we have begun to understand the control systems guiding the flight. The effectiveness of the bat in coping with attenuation and noise, uncertainty of the environment, and sensorimotor delay makes it a most interesting model system for engineers concerned with goal-directed and resource-constrained information processing in robotics. The bat's neural realizations of auditory-motor feedback loops may serve as models for implementations of algorithms in robot designs. While the primary focus of this talk is on pursuit, the results suggest ways to synthesize interaction laws that yield cohesion in collections of particles, treating pursuit as a building block in mechanisms for flocking in nature and in engineered systems.
Fifty years ago, when control was emerging as a scientific discipline fueled by developments in dynamic, recursive decision making, dynamical systems and stability theory, a separate discipline, differential games, was being born in response to the need to develop a framework and associated solution tools for strategic dynamic decision making in adversarial environments, as an outgrowth of game theory. The evolution of the two disciplines, control theory (particularly optimal control) on the one hand and the theory of differential games on the other, initially followed somewhat different paths, but soon a healthy interaction between the two developed. Differential games, in both zero-sum and nonzero-sum settings, enabled the embedding of control into a broader picture and framework, and enriched the set of conceptual tools available to it. One of its essential ingredients, information structures (who knows what and when), became a mainstay in control research, particularly in the context of large-scale systems. In the other direction, the rich set of mathematical tools developed for control, such as viscosity solutions, directly impacted progress in the solvability of differential games. Such interactions between the two disciplines reached a climax when robustness became a prevalent issue in control, leading among others to a comprehensive treatment of H∞ control of both linear and nonlinear uncertain systems.
This Bode Lecture will dwell on the parallel developments in the two fields to the extent they influenced each other over the past half century, talk about the present, and embark on a journey into the future.
The input to state stability (ISS) paradigm is motivated as a generalization of classical linear systems concepts under coordinate changes. A summary is provided of the main theoretical results concerning ISS and related notions of input/output stability and detectability. A bibliography is also included, listing extensions, applications, and other current work.
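For reference, the ISS property requires the trajectories of a system $\dot{x} = f(x,u)$ to satisfy an estimate of the following standard form, for some class-$\mathcal{KL}$ function $\beta$ and class-$\mathcal{K}$ function $\gamma$:

```latex
\[
  |x(t)| \;\le\; \beta\bigl(|x(0)|,\, t\bigr)
  \;+\; \gamma\Bigl(\sup_{0 \le s \le t} |u(s)|\Bigr),
  \qquad t \ge 0 .
\]
```

The first term captures the decaying effect of the initial condition, and the second bounds the asymptotic effect of the input; with $u \equiv 0$ the estimate reduces to global asymptotic stability.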
Modern controller design packages often fall short of offering what is truly practical: low-order controllers, discrete-time controllers operating in a sampled-data loop, and finite word length (FWL) realizations of controllers whose FWL effects minimally impact closed-loop performance. The lecture describes how to achieve these objectives.
An understanding of fundamental limitations is an essential element in all engineering. Shannon’s early results on channel capacity have always had center court in signal processing. Strangely, the early results of Bode were not accorded the same attention in control. It was therefore highly appropriate that the IEEE Control Systems Society created the Bode Lecture Award, an honor which also came with the duty of delivering a lecture. Gunter Stein gave the first Hendrik W. Bode Lecture at the IEEE Conference on Decision and Control in Tampa, Florida, in December 1989. In his lecture he focused on Bode’s important observation that there are fundamental limitations on the achievable sensitivity function expressed by Bode’s integral. Gunter has a unique position in the controls community because he combines the insight derived from a large number of industrial applications at Honeywell with long experience as an influential adjunct professor at the Massachusetts Institute of Technology from 1977 to 1996. In his lecture, Gunter also emphasized the importance of the interaction between instability and saturating actuators and the consequences of the fact that control is becoming increasingly mission critical. After more than 13 years I still remember Gunter’s superb lecture. I also remember comments from young control scientists who had been brought up on state-space theory who said: “I believed that controllability and observability were the only things that mattered.” At Lund University we made Gunter’s lecture a key part of all courses in control system design. Gunter was brought into the classroom via videotapes published by the IEEE Control Systems Society and the written lecture notes. It was a real drawback that the lecture was not available in more archival form. I am therefore delighted that IEEE Control Systems Magazine is publishing this article. I sincerely hope that this will be followed by a DVD version of the videotape. The lecture is like really good wine; it ages superbly.
—Karl J. Åström, Professor Emeritus, Lund University, Lund, Sweden (2003)
Support Files: An article based on Gunter Stein's Bode Lecture was published in Control Systems Magazine in August 2003 and is available on IEEE Xplore at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1213600&isnumber=27285.
(This introduction is from the article cited above.)