Modern controller design packages often fall short of offering what is truly practical: low-order controllers, discrete-time controllers operating in a sampled-data loop, and finite word length (FWL) realizations of controllers whose quantization effects have minimal impact on closed-loop performance. The lecture describes how to achieve these objectives.
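As a rough illustration of the FWL issue the abstract raises (not taken from the lecture itself), the sketch below discretizes a simple lead controller and rounds its coefficients to a fixed word length, then compares the frequency responses of the ideal and quantized realizations. The controller, sample period, and 8-bit word length are illustrative assumptions.

```python
import numpy as np
from scipy import signal

# Illustrative lead controller C(s) = 10(s + 1)/(s + 10), sampled at 100 Hz.
C_num, C_den = [10.0, 10.0], [1.0, 10.0]
Ts = 0.01

# Discrete-time controller via the Tustin (bilinear) transform.
Cd_num, Cd_den, _ = signal.cont2discrete((C_num, C_den), Ts, method='bilinear')
Cd_num = Cd_num.flatten()

def quantize(coeffs, bits=8):
    # Fixed-point rounding: scale by the largest magnitude, keep 2^(bits-1) levels.
    scale = np.max(np.abs(coeffs))
    return np.round(coeffs / scale * 2**(bits - 1)) / 2**(bits - 1) * scale

Cq_num, Cq_den = quantize(Cd_num), quantize(Cd_den)

# Compare the ideal and finite-word-length frequency responses.
w, H = signal.freqz(Cd_num, Cd_den, worN=512)
_, Hq = signal.freqz(Cq_num, Cq_den, worN=512)
print("max response error from quantization:", np.max(np.abs(H - Hq)))
```

In a full design one would evaluate the quantized controller in the closed loop rather than open loop, and search over realizations (state-space coordinates) that minimize FWL sensitivity; this sketch only shows the mechanics of coefficient quantization.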
An understanding of fundamental limitations is an essential element in all engineering. Shannon’s early results on channel capacity have always had center court in signal processing. Strangely, the early results of Bode were not accorded the same attention in control. It was therefore highly appropriate that the IEEE Control Systems Society created the Bode Lecture Award, an honor which also came with the duty of delivering a lecture.

Gunter Stein gave the first Hendrik W. Bode Lecture at the IEEE Conference on Decision and Control in Tampa, Florida, in December 1989. In his lecture he focused on Bode’s important observation that there are fundamental limitations on the achievable sensitivity function expressed by Bode’s integral. Gunter has a unique position in the controls community because he combines the insight derived from a large number of industrial applications at Honeywell with long experience as an influential adjunct professor at the Massachusetts Institute of Technology from 1977 to 1996. In his lecture, Gunter also emphasized the importance of the interaction between instability and saturating actuators and the consequences of the fact that control is becoming increasingly mission critical.

After more than 13 years I still remember Gunter’s superb lecture. I also remember comments from young control scientists who had been brought up on state-space theory and said: “I believed that controllability and observability were the only things that mattered.” At Lund University we made Gunter’s lecture a key part of all courses in control system design. Gunter was brought into the classroom via videotapes published by the IEEE Control Systems Society and the written lecture notes. It was a real drawback that the lecture was not available in a more archival form. I am therefore delighted that IEEE Control Systems Magazine is publishing this article. I sincerely hope that this will be followed by a DVD version of the videotape. The lecture is like really good wine; it ages superbly.
—Karl J. Åström, Professor Emeritus, Lund University, Lund, Sweden (2003)

Support Files: An article based on Gunter Stein’s Bode Lecture was published in Control Systems Magazine in August 2003 and is available on IEEE Xplore at http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1213600&isnumber=27285.
(This introduction is from the article cited above.)
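For readers unfamiliar with the limitation the lecture centers on, Bode's sensitivity integral states that for an open loop L(s) of relative degree at least two and a stable closed loop, the integral of ln|S(jω)| over all frequencies equals π times the sum of the real parts of the unstable open-loop poles; sensitivity pushed down in one band must pop up elsewhere. The small numerical check below (an illustration assembled for this page, not from the lecture) verifies this for an open loop with one right-half-plane pole at s = 1.

```python
import numpy as np
from scipy.integrate import quad

# Open loop L(s) = 4 / ((s - 1)(s + 2)): one RHP pole at s = 1,
# relative degree 2, and the closed loop 1/(1 + L) is stable
# (its characteristic polynomial is s^2 + s + 2).
num = [4.0]
den = np.polymul([1.0, -1.0], [1.0, 2.0])

def log_abs_S(w):
    # ln |S(jw)| where S = 1 / (1 + L) is the sensitivity function.
    s = 1j * w
    L = np.polyval(num, s) / np.polyval(den, s)
    return np.log(np.abs(1.0 / (1.0 + L)))

# Bode's integral predicts pi * sum(Re(RHP poles)) = pi * 1.
val, _ = quad(log_abs_S, 0.0, np.inf, limit=500)
print(val, np.pi)  # both approximately 3.1416
```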
As humans look to explore the solar system beyond low Earth orbit, the required technology advancements point heavily towards autonomy. The operation of complex human spacecraft has thus far relied on heavy human involvement: full ground control rooms and nearly constantly inhabited spacecraft. As the goal of space exploration moves beyond the International Space Station, the physical and budgetary constraints of business as usual become overwhelming. A new paradigm of delivering spacecraft and other assets capable of self-maintenance and self-operation prior to launching crew solves many problems, and at the same time it opens up an array of interesting control problems. This talk will focus on robotic and autonomous vehicle systems.
Since 1987 I have highlighted how attempts to deploy autonomous capabilities into complex, risky worlds of practice have been hampered by brittleness: descriptively, a sudden collapse in performance when events challenge system boundaries. This constraint has been downplayed on the grounds that the next advance in AI, algorithms, or control theory will lead to the deployment of systems that escape from brittle limits. However, the world keeps providing examples of brittle collapse, such as the 2003 Columbia Space Shuttle accident or this year's Texas energy collapse. Resilience Engineering, drawing on multiple sources including safety of complex systems, biological systems, and joint human-autonomy systems, discovered that (a) brittleness is a fundamental risk and (b) all adaptive systems develop means to mitigate that risk through sources of resilient performance.
The fundamental discovery, covering biological, cognitive, and human systems, is that all adaptive systems at all scales have to possess the capacity for graceful extensibility. Viability of a system, in the long run, requires the ability to gracefully extend or stretch at the boundaries as challenges occur. To put the constraint simply, viability requires extensibility, because all systems have limits and regularly experience surprise at those boundaries due to finite resources and continuous change (Woods, 2015; 2018; 2019).
The problem is that development of automata consistently ignores this constraint. As a result, we see repeated demonstrations of the empirical finding: systems-as-designed are more brittle than stakeholders realize, but fail less often as people in various roles adapt to fill shortfalls and stretch system performance in the face of smaller and larger surprises. (Some) people in some roles are the ad hoc source of the necessary graceful extensibility.
The promise comes from the science behind Resilience Engineering, which highlights paths to build systems with graceful extensibility, especially systems that utilize new autonomous capabilities. Even better, designing systems with graceful extensibility draws on basic concepts in control engineering, though these are reframed substantially when combined with findings on adaptive systems from biology, cognitive work, organized complexity, and sociology.
In this talk, I will discuss the problem of interactive learning, beginning with how we can actively learn objective functions from human feedback that captures people's preferences. I will then talk about how the value alignment and reward design problem can have solutions beyond active preference-based learning by tapping into the rich context available from large language models. In the second part of the talk, I will discuss more generally the role of large pretrained models in today's robotics and control systems. Specifically, I will present two viewpoints: 1) pretraining large models for downstream robotics tasks, and 2) finding creative ways of tapping into the rich context of large models to enable more aligned embodied AI agents. For pretraining, I will introduce Voltron, a language-informed visual representation learning approach that leverages language to ground pretrained visual representations for robotics. For leveraging large models, I will present a few vignettes on how we can use LLMs and VLMs to learn human preferences, allow for grounded social reasoning, or enable teaching humans using corrective feedback. Finally, I will conclude the talk by discussing some preliminary results on how large models can be effective pattern machines that identify patterns in a token-invariant fashion and enable pattern transformation, extrapolation, and even show some evidence of pattern optimization for solving control problems.
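To make the idea of actively learning objective functions from preference feedback concrete, here is a minimal sketch in the spirit of preference-based reward learning, assuming a linear reward r = w · φ over trajectory features and a Bradley-Terry choice model. It is an illustration of the general technique, not the speaker's actual method; all names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.8, -0.6])            # hidden preference weights (simulated human)
W = rng.normal(size=(5000, 2))            # particle approximation of the posterior p(w)
W /= np.linalg.norm(W, axis=1, keepdims=True)

def pref_prob(W, phi_a, phi_b):
    # P(a preferred over b | w) under a Bradley-Terry / logistic choice model.
    return 1.0 / (1.0 + np.exp(-(W @ (phi_a - phi_b))))

for step in range(30):
    # Active query selection: among random candidate pairs, ask about the one
    # whose answer the current posterior is most uncertain of (max-entropy heuristic).
    candidates = rng.normal(size=(100, 2, 2))
    p = np.array([pref_prob(W, a, b).mean() for a, b in candidates])
    a, b = candidates[np.argmin(np.abs(p - 0.5))]

    # Simulated human answer drawn from the hidden weights.
    answer = rng.random() < 1.0 / (1.0 + np.exp(-(true_w @ (a - b))))

    # Bayesian update: importance-reweight the particles, resample, and jitter.
    lik = pref_prob(W, a, b) if answer else 1.0 - pref_prob(W, a, b)
    W = W[rng.choice(len(W), size=len(W), p=lik / lik.sum())]
    W += 0.02 * rng.normal(size=W.shape)   # jitter keeps particle diversity
    W /= np.linalg.norm(W, axis=1, keepdims=True)

print("estimated direction:", W.mean(axis=0), "true:", true_w)
```

After a few dozen queries the particle mean aligns with the hidden weight vector; the talk's later vignettes replace this hand-built query selection with context drawn from large language models.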