Reachability analysis, which concerns computing or approximating the set of future states attainable by a dynamical system over a time horizon, is receiving increased attention, motivated by new challenges in, e.g., learning-enabled systems, assured and safe autonomy, and formal methods in control systems. Such challenges require new approaches that scale well with system size, accommodate uncertainties, and can be computed efficiently for in-the-loop or frequent computation. In this talk, we present and demonstrate a suite of tools for efficiently over-approximating reachable sets of nonlinear systems based on the theory of mixed monotone dynamical systems. A system is mixed monotone if its vector field or update map is decomposable into an increasing component and a decreasing component. This decomposition allows for constructing an embedding system with twice the states such that a single trajectory of the embedding system provides hyperrectangular over-approximations of reachable sets for the original dynamics. This efficiency can be harnessed, for example, to compute finite abstractions for tractable formal control verification and synthesis, or to embed reachable set computations in the control loop for runtime safety assurance. We demonstrate these ideas on several examples, including an application to safe quadrotor flight that combines runtime reachable set computations with control barrier functions implemented on embedded hardware.
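To make the embedding-system construction concrete, here is a minimal Python sketch for a made-up scalar map with a bounded disturbance; the system, the decomposition function d, and all numbers are illustrative assumptions, not code or examples from the talk.

    import numpy as np

    # Illustrative discrete-time scalar system (an assumption for this sketch):
    #   x+ = F(x, w) = 0.5*x + 0.1*sin(x) + w,  with disturbance w in [w_lo, w_hi].
    # d(x, w, xhat) is a decomposition function: nondecreasing in (x, w),
    # nonincreasing in xhat, and d(x, w, x) = F(x, w), so F is mixed monotone.
    def d(x, w, xhat):
        return 0.5 * x + 0.1 * (np.sin(x) + x) - 0.1 * xhat + w

    def embedding_step(x_lo, x_hi, w_lo, w_hi):
        # One step of the embedding system (twice the state dimension); a single
        # trajectory yields hyperrectangular over-approximations of reachable sets.
        return d(x_lo, w_lo, x_hi), d(x_hi, w_hi, x_lo)

    x_lo, x_hi = -0.2, 0.2     # initial set of states (an interval)
    w_lo, w_hi = -0.05, 0.05   # disturbance bounds
    for k in range(10):
        x_lo, x_hi = embedding_step(x_lo, x_hi, w_lo, w_hi)
        print(f"step {k+1}: reachable states contained in [{x_lo:.4f}, {x_hi:.4f}]")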
The fields of adaptive control and machine learning have evolved in parallel over the past few decades, with a significant overlap in goals, problem statements, and tools. Machine learning as a field has focused on computer-based systems that improve and learn through experience. Oftentimes the process of learning is encapsulated in the form of a parameterized model such as a neural network, whose weights are trained in order to approximate a function. The field of adaptive control, on the other hand, has focused on the process of controlling engineering systems in order to accomplish regulation and tracking of critical variables of interest. Learning is embedded in this process via online estimation of the underlying parameters. In comparison to machine learning, adaptive control often focuses on limited-data problems where fast, on-line performance is critical. Whether in machine learning or adaptive control, this learning occurs through the use of input-output data. In both cases, the approach used for updating the parameters is often based on gradient descent-like and other iterative algorithms. Related tools of analysis, convergence, and robustness in both fields have a tremendous amount of similarity. As the scope of problems in both topics increases, the associated complexity and challenges increase as well. In order to address learning and decision-making in real time, it is essential to understand these similarities and connections to develop new methods, tools, and algorithms.
This talk will examine the similarities and interconnections between adaptive control and optimization methods commonly employed in machine learning. Concepts in stability, performance, and learning that are common to both fields will be discussed. Building on the similarities in update laws and common concepts, new intersections and opportunities for improved algorithm analysis will be explored. High-order tuners and time-varying learning rates have been employed in adaptive control, leading to very interesting results in dynamic systems with delays. We will explore how these methods can be leveraged to obtain provably correct methods for learning in real time with guaranteed fast convergence. Examples will be drawn from a range of engineering applications.
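As a minimal illustration of the shared structure of the update laws (a sketch only: the linearly parameterized model, gains, and data below are assumptions, not from the talk), the normalized gradient law used in adaptive control has the same form as a stochastic gradient descent step on the squared prediction error:

    import numpy as np

    rng = np.random.default_rng(0)
    theta_true = np.array([1.0, -2.0])   # unknown parameters (illustrative)
    theta = np.zeros(2)                  # parameter estimate / model weights
    gamma = 0.5                          # adaptation gain / learning rate

    for t in range(200):
        phi = rng.normal(size=2)         # regressor (input features)
        y = theta_true @ phi             # measured output
        e = theta @ phi - y              # prediction error
        # Normalized gradient update: identical in form to an SGD step on the
        # squared error; the normalization keeps each step bounded.
        theta -= gamma * e * phi / (1.0 + phi @ phi)

    print("estimate:", theta)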
Control policies that involve the real-time solution of one or more convex optimization problems include model predictive (or receding horizon) control, approximate dynamic programming, and optimization-based actuator allocation systems. They have been widely used in applications with slower dynamics, such as chemical process control, supply chain systems, and quantitative trading, and are now starting to appear in systems with faster dynamics. In this talk I will describe a number of advances over the last decade or so that make such policies easier to design, tune, and deploy. We describe solution algorithms that are extremely robust, even in some cases division free, and code generation systems that transform a problem description expressed in a high-level domain-specific language into source code for a real-time solver suitable for control. The recent development of systems for automatically differentiating through a convex optimization problem can be used to efficiently tune or design control policies that include embedded convex optimization.
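As a minimal sketch of such a policy expressed in a high-level domain-specific language (here CVXPY, one such language; the double-integrator model, horizon, weights, and limits are placeholder assumptions, and this is not the code-generation or differentiation toolchain described in the talk):

    import cvxpy as cp
    import numpy as np

    # Placeholder double-integrator model and horizon (assumptions for this sketch).
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    T = 20

    x0 = cp.Parameter(2)                 # measured state, updated every period
    x = cp.Variable((2, T + 1))
    u = cp.Variable((1, T))

    cost = 0
    constr = [x[:, 0] == x0]
    for t in range(T):
        cost += cp.sum_squares(x[:, t + 1]) + 0.1 * cp.sum_squares(u[:, t])
        constr += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                   cp.norm(u[:, t], "inf") <= 1.0]
    prob = cp.Problem(cp.Minimize(cost), constr)

    # Receding-horizon policy: re-solve each period from the current state
    # and apply only the first input.
    x0.value = np.array([1.0, 0.0])
    prob.solve()
    print("first input:", u.value[:, 0])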
The recent radical evolution in distributed sensing, computation, communication, and actuation has fostered the emergence of cyber-physical network systems. Examples cut across a broad spectrum of engineering and societal fields. Regardless of the specific application, one central goal is to shape the network's collective behavior through the design of admissible local decision-making algorithms. This is nontrivial due to various challenges such as local connectivity, imperfect communication, model and environment uncertainty, and the complex intertwined physics and human interactions. In this talk, I will present our recent progress in formally advancing the systematic design of distributed coordination in network systems. We investigate the fundamental performance limits imposed by these various challenges, design fast, efficient, and scalable algorithms to achieve (or approximate) those limits, and test and implement the algorithms in real-world applications.
Electrification of mobility and transport is a global megatrend that has been underway for decades. The mobility sector encompasses cars, trucks, buses, and aircraft. These systems exhibit complex interactions among multiple modes of power flow, which can be thermal, fluid, electrical, or mechanical. A key challenge in working across these various modes of power flow is the widely varying time scales of the subsystems, which makes centralized control difficult. This talk will present a particular distributed controller architecture for managing the flow of power based on on-line optimization. A hierarchical approach allows systems operating on different time scales to be coordinated in a controllable manner. It also allows different dynamic decision-making tools to be used at different levels of the hierarchy, based on the needs of the physical systems under control. Additional advantages include the modularity and scalability inherent in the hierarchy: additional modules can be added or removed without changing the basic approach.
In addition to the hierarchical control, a particularly useful graph-based approach will be introduced for modeling the system interactions and performing early-stage design optimization. The graph approach, like the hierarchy, has the benefits of modularity and scalability, along with being an efficient framework for representing systems with different time scales. The graph representation also allows design optimization tools to be applied to optimize the physical system design for the purpose of control. Recent results will be presented for both generic interconnected complex systems and specific examples from the aerospace and automotive application domains.
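A minimal sketch of what such a graph-based energy/power-flow model can look like (the topology, capacitances, and flows below are illustrative assumptions, not the modeling toolbox from the talk): vertices carry stored-energy states, edges carry power flows, and the dynamics follow from an incidence matrix.

    import numpy as np

    # Three storage nodes connected in a line by two power-flow edges.
    M = np.array([[ 1,  0],     # incidence matrix: rows = nodes, columns = edges
                  [-1,  1],     # +1 = edge leaves the node, -1 = edge enters it
                  [ 0, -1]])
    C = np.array([100.0, 1.0, 0.05])   # nodal capacitances (widely varying time scales)
    x = np.array([1.0, 0.5, 0.2])      # stored-energy states at the nodes
    u = np.array([0.3, 0.1])           # commanded edge power flows (control inputs)
    dt = 0.001

    for _ in range(1000):
        x = x + dt * (-M @ u) / C      # energy balance at each node
    print("node states:", x)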
To ensure safety, reliability, and productivity of industrial processes, artificial intelligence (AI) and machine learning techniques have been widely used in the process industries for decades. The benefits of process monitoring and control are well documented, and these techniques are employed routinely in manufacturing. This talk will give a historical perspective and review recent AI and machine learning successes in the areas of real-time analytics, deep learning, reinforcement learning, visualization, and feature engineering. The complex interaction between human decisions and automated control will be discussed. Humans grow expertise by quickly adapting to abnormal conditions and using domain knowledge to generate creative solutions. However, reproducing human decisions across the enterprise is a challenge. A common misconception is that AI is meant to replace human decision-making. The talk will emphasize how AI and control systems must be complementary, making human decisions as efficient and consistent as possible. Human decision-making will remain a centerpiece of how industrial processes are operated in a safe, reliable, and productive manner.
Networked and robotic systems in emerging applications are required to operate safely and adaptively, and to degrade gracefully, while coordinating a large number of nodes. Distributed algorithms have become established as a means for robust coordination, overcoming the challenges imposed by the limited capabilities of each agent. However, many problems remain in breaking down the barriers to fast computation, making effective use of measured data, and understanding large-scale limit effects. In this talk, I will present ongoing work on the control of infrastructure networks and large-swarm coordination, along with a discussion of modeling approaches, analysis tools, and architectural trade-offs in going from small to large robotic networks.
In September 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) initiated the era of gravitational wave astronomy (a new window on the universe) with the first direct detection of gravitational waves (ripples in the fabric of space-time) resulting from the merger of a pair of black holes into a single larger black hole. In August 2017, the LIGO and Virgo collaborations announced the first direct detection of gravitational waves associated with a gamma ray burst and the electromagnetic emission (visible, infrared, radio) of the afterglow of a kilonova — the spectacular collision of two neutron stars. This marks the beginning of multi-messenger astronomy. The kilonova discovery was made using the U.S.-based LIGO, the Europe-based Virgo detector, and 70 ground- and space-based observatories.
The Advanced LIGO gravitational wave detectors are second-generation instruments designed and built for the two LIGO observatories in Hanford, WA and Livingston, LA. These two identically designed instruments employ coupled optical cavities in a specialized version of a Michelson interferometer with 4-kilometer-long arms. Resonant optical cavities are used in the arms to increase the interaction time with a gravitational wave, power recycling is used to increase the effective laser power, and signal recycling is used to improve the frequency response. In the most sensitive frequency region, around 100 Hz, the displacement sensitivity is 10^-19 meters rms, or about 10 thousand times smaller than a proton. In order to achieve this unsurpassed measurement sensitivity, Advanced LIGO employs a wide range of cutting-edge, high-performance technologies, including an ultra-high vacuum system; an extremely stable laser source; multiple stages of active vibration isolation; super-polished and ion-milled optics; high-performance multi-layer dielectric coatings; wavefront sensing; active thermal compensation; very low noise analog and digital electronics; complex, nonlinear multi-input, multi-output control systems; a custom, scalable, and easily re-configurable data acquisition and state control system; and squeezed light. The principles of operation, the numerous control challenges, and future directions in control will be discussed. More information is available at https://www.ligo.caltech.edu/.
Optimal controllers for linear or nonlinear dynamic systems with known dynamics can be designed by solving the Riccati or Hamilton-Jacobi-Bellman (HJB) equation, respectively. However, optimal control of uncertain linear or nonlinear dynamic systems is a major challenge. Moreover, controllers designed in discrete time have the important advantage that they can be directly implemented in digital form using modern-day embedded hardware. Unfortunately, discrete-time design using Lyapunov stability analysis is far more complex than its continuous-time counterpart, since the first difference of the Lyapunov function is quadratic in the states, not linear as in the continuous-time case. By incorporating learning features into the feedback controller design, optimal adaptive control of such uncertain dynamical systems in discrete time can be achieved.
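For reference, the discrete-time optimal control problem behind this line of work is characterized by the Bellman (discrete-time HJB) equation, which for linear dynamics and quadratic cost reduces to the discrete-time algebraic Riccati equation; the standard forms are recalled below (the control-affine dynamics used here are a common modeling assumption, not a statement about the talk's specific systems):

    V^*(x_k) = \min_{u_k}\big[\, r(x_k,u_k) + V^*(x_{k+1}) \,\big], \qquad x_{k+1} = f(x_k) + g(x_k)\,u_k,

    P = A^\top P A + Q - A^\top P B\,(R + B^\top P B)^{-1} B^\top P A.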
In this talk, an overview of first- and second-generation feedback controllers with a learning component in discrete time will be given. Subsequently, the discrete-time learning-based optimal adaptive control of uncertain nonlinear dynamic systems will be presented in a systematic manner using a forward-in-time approach based on reinforcement learning (RL)/approximate dynamic programming (ADP). Challenges in developing and implementing the three generations of learning controllers will be addressed using practical examples such as automotive engine emission control, robotics, and others. We will argue that discrete-time controller development is preferred for transitioning the developed theory to practice. Today, applications of learning controllers can be found in areas as diverse as process control, energy or smart grids, civil infrastructure, healthcare, manufacturing, automotive, transportation, entertainment, and consumer appliances. The talk will conclude with a short discussion of open research problems in the area of learning control.
Security and privacy are of growing concern in many control applications. Cyber attacks are frequently reported for a variety of industrial and infrastructure systems. For more than a decade, the control community has been developing techniques for designing control systems that are resilient to cyber-physical attacks. In this talk, we will review some of these results. In particular, because the cyber and physical components of networked control systems are tightly interconnected, it is argued that traditional IT security focusing only on the cyber part does not provide appropriate solutions. Modeling the objectives and resources of the adversary together with the plant and control dynamics is shown to be essential. The consequences of common attack scenarios, such as denial-of-service, replay, and bias injection attacks, can be analyzed using the framework presented. It is also shown how to strengthen the control loops by deriving security- and privacy-aware estimation and control schemes. Applications in building automation, power networks, and automotive systems will be used to motivate and illustrate the results. The presentation is based on joint work with several students and colleagues at KTH and elsewhere.
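As a toy illustration of why the physical dynamics matter for analyzing such attacks (a sketch under assumed numbers: the scalar plant, observer gain, noise levels, and the ramped bias are all made up, not the talk's framework), a residual-based detector sees a slowly ramped bias injection on a sensor only after the bias has grown well beyond the noise floor:

    import numpy as np

    rng = np.random.default_rng(1)
    a, c = 0.9, 1.0              # scalar plant x+ = a*x + w, measurement y = c*x + v
    L = 0.5                      # observer gain (placeholder)
    noise_std, k_sigma = 0.05, 3.0
    x, xhat = 0.0, 0.0

    for k in range(250):
        x = a * x + 0.01 * rng.normal()
        bias = 0.02 * (k - 100) if k > 100 else 0.0   # slowly ramped bias injection
        y = c * x + noise_std * rng.normal() + bias
        r = y - c * xhat                              # innovation / detection residual
        if abs(r) > k_sigma * noise_std:              # simple threshold detector
            print(f"k={k}: alarm, residual = {r:.3f}")
        xhat = a * xhat + L * r                       # observer update (attack leaks in)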
Advances in computing and networking technologies have connected manufacturing systems from the lowest levels of sensors and actuators, across the factory, through the supply chain, and beyond. Large amounts of data have always been available to these systems, with currents and velocities sampled at regular intervals and used to make control decisions, and throughputs tracked hourly or daily. The ability to collect and save this detailed low-level data, send it to a central repository, and store it for days, months, and years, enables better insight into the behavior – and misbehavior – of complex manufacturing systems. The output from high-fidelity models and/or reams of historical data can be compared with streams of data coming off the plant floor to identify anomalies. Early identification of anomalies, before they lead to poor quality products or machine failure, can result in significant productivity improvements. We will discuss multiple approaches for harnessing this data, leveraging both physics-based and data-driven models, and how automation can enable timely responses. Both simulation and experimental results will be presented.
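A minimal sketch of the model-versus-data comparison described here (the model, noise level, threshold, and injected fault are illustrative assumptions, not results from the talk): residuals between streaming measurements and a model are checked against statistics learned during normal operation.

    import numpy as np

    rng = np.random.default_rng(2)

    def model_prediction(t):
        # Stand-in for a physics-based or data-driven model of a process variable.
        return 10.0 + 2.0 * np.sin(0.1 * t)

    # Learn the normal residual statistics from an initial fault-free period.
    baseline = np.array([0.1 * rng.normal() for _ in range(200)])
    mu, sigma = baseline.mean(), baseline.std()
    k_sigma = 4.0                                    # alarm threshold (illustrative)

    for t in range(500):
        measured = model_prediction(t) + 0.1 * rng.normal()
        if t > 400:
            measured += 0.8                          # injected offset standing in for a fault
        r = measured - model_prediction(t)           # plant-floor data vs. model output
        if abs(r - mu) > k_sigma * sigma:
            print(f"t={t}: anomaly flagged, residual = {r:.3f}")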
Cyber-physical systems are the basis for the new industrial revolution. Growing energy demand and environmental concerns require a large number of renewable energy resources, efficient energy consumption, energy storage devices, and demand response. Cyber-physical energy systems (CPES), in a broader sense, provide a desirable infrastructure for efficient energy production and consumption with uncertain energy resources. This talk will focus on the structure of CPES and on the problem of security-constrained planning and scheduling of CPES, including new renewable energy sources with high levels of uncertainty. Newly developed analytical conditions are discussed for quickly identifying the security bottlenecks in a complex power grid when new renewable energy sources are coordinated with storable energy sources such as hydro and pumped storage. A new method is introduced to solve the well-known N-k contingency security assessment problem with a computational complexity reduced by two to three orders of magnitude. Production, storage and transportation, and utilization of hydrogen as the main energy source are also introduced. It is shown that the hydrogen-based CPES will provide an ideal infrastructure for energy supply and consumption with almost no pollution, and it will likely lead to the energy revolution anticipated in the new century.
In the six decades of conventional TUNING-BASED adaptive control, the unattained fundamental goals, in the absence of detrimental artificial excitation, have been (1) exponential regulation, as with robust controllers, and (2) perfect learning of the plant model. More than a quarter-century since I started my career by extending conventional adaptive controllers from linear to nonlinear systems, I reach those decades-old goals with a new non-tuning paradigm: regulation-triggered batch identification. The parameter estimate in the controller is held constant and, only once the regulation error grows “too large,” a parameter estimate update, based on the data collected since the last update, is “triggered.” Such a simple parameter estimator provably, and remarkably, terminates its updates after a number of state-growth-triggered updates that is no greater than the number of unknown parameters. This yields exponential regulation and perfect identification from all but a measure-zero set of initial conditions. I present a design for a more general class of nonlinear systems than ever before, an extension to adaptive PDE control, a flight control example (the “wing rock” instability), and, time permitting, a simple robotics example. This is joint work with Iasson Karafyllis from the Mathematics Department of the National Technical University of Athens.
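A toy sketch of the regulation-triggered batch-identification idea for a single unknown parameter (the scalar plant, trigger threshold, and certainty-equivalence control law below are illustrative assumptions, not the designs from the talk): the estimate is held constant, a batch least-squares update fires only when the state has grown too much, and here a single update suffices.

    import numpy as np

    # Illustrative scalar plant x+ = theta*x + u with one unknown parameter theta.
    theta_true, theta_hat = 2.0, 0.0
    x, growth_factor = 1.0, 2.0
    xs, us = [x], []
    x_at_last_update = abs(x)

    for k in range(20):
        u = -theta_hat * x                      # certainty-equivalence control
        x = theta_true * x + u
        xs.append(x); us.append(u)
        # Event trigger: update only when the regulation error has grown "too large".
        if abs(x) > growth_factor * x_at_last_update and abs(xs[-2]) > 1e-9:
            # Batch least-squares fit of theta over the recorded data.
            X, U, Xp = np.array(xs[:-1]), np.array(us), np.array(xs[1:])
            theta_hat = float(X @ (Xp - U) / (X @ X))
            x_at_last_update = abs(x)
            print(f"k={k}: triggered batch update, theta_hat = {theta_hat:.3f}")

    print("final state:", x, "final estimate:", theta_hat)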
Probability theory has had a significant impact on systems and control. In this talk, we will visit three developments in control theory that have close connections to, and have been impacted by, results in probability theory. We discuss the Perron-Frobenius theorem, its relation to results in distributed computation and optimization, and its generalization to time-varying chains. We will discuss a result on the controllability of random networks and show how recent developments in random matrix theory and inverse Littlewood-Offord theory shed light on such problems. Lastly, we discuss controllability of safety-critical stochastic systems and how martingale theory leads to the design and analysis of control policies for stochastic systems.
Recent years have seen great progress in the area of robotics. Communication signals are also ubiquitous these days. In this talk, I will explore the opportunities and challenges at this intersection, for robotic sensing and communication. In the first part of the talk, I will focus on robotic sensing and ask the following question: "Can everyday communication signals, such as WiFi signals, give new sensing capabilities to unmanned vehicles?" For instance, imagine two unmanned vehicles arriving behind thick concrete walls. Can they image every square inch of the invisible area through the walls with only WiFi signals? I will show that this is indeed possible, and discuss how our methodology for the co-optimization of path planning and communication has enabled the first demonstration of 3D imaging through walls with only drones and WiFi. I will also discuss other new sensing capabilities that have emerged from our approach, such as occupancy estimation and crowd analytics with only WiFi signals. In the second part of the talk, I will focus on communication-aware robotics, a term coined to refer to robotic systems that explicitly take communication issues into account in their decision making. This is an emerging area of research that not only allows a team of unmanned vehicles to attain the desired connectivity during their operation, but can also extend the connectivity of existing communication systems through the use of mobility. I will then discuss our latest theoretical and experimental results along these lines. I will show how each robot can go beyond the over-simplified but commonly-used disk model for connectivity and realistically model the impact of channel uncertainty for the purpose of path planning. I will then show how the unmanned vehicles can properly co-optimize their communication, sensing, and navigation objectives under resource constraints. As we shall see, this co-optimized approach can result in significant performance improvements and resource savings. I will also discuss the role of human collaboration in these networks.
The goal of reinforcement learning is at the core of the CSS mission: computation of policies that are approximately optimal, subject to information constraints. From the beginning, control foundations have lurked behind the RL curtain: Watkins’ Q-function looks suspiciously like the Hamiltonian in Pontryagin’s minimum principle, and (since Van Roy’s thesis) it has been known that our beloved adjoint operators are the key to understanding what is going on with TD-learning. This talk will briefly survey the goals and foundations of RL, and present new work showing how to dramatically accelerate convergence based on a combination of control techniques. The talk will include a wish-list of open problems in both deterministic and stochastic control settings.
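For readers who want the Q-function made concrete, here is a minimal tabular Watkins Q-learning sketch on a made-up two-state MDP (the transition probabilities, rewards, and gains are assumptions for illustration, not the accelerated algorithms of the talk):

    import numpy as np

    rng = np.random.default_rng(3)

    # Tiny 2-state, 2-action MDP (illustrative).
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']: transition probabilities
                  [[0.8, 0.2], [0.1, 0.9]]])
    R = np.array([[1.0, 0.0], [0.0, 2.0]])    # R[s, a]: one-step rewards
    gamma, alpha = 0.9, 0.1

    Q = np.zeros((2, 2))
    s = 0
    for k in range(20000):
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[s].argmax())  # eps-greedy
        s_next = rng.choice(2, p=P[s, a])
        # Watkins' Q-learning: a stochastic-approximation step toward the
        # fixed point of the Bellman (dynamic programming) operator.
        Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

    print("learned Q:\n", Q)
    print("greedy policy:", Q.argmax(axis=1))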
Computer and communication technologies are rapidly developing leading to an increasingly networked and wireless world. This raises new challenging questions in the context of networked control systems, especially when the computation, communication and energy resources for control are limited. To efficiently use the available resources it is desirable to limit the control actions to instances when the system really needs attention. Unfortunately, classical time-triggered control schemes are based on performing sensing and actuation actions periodically in time (irrespective of the state of the system) rather than when the system actually needs attention. This points towards the consideration of event-triggered control as an alternative and (more) resource-aware control paradigm, as it seems natural to trigger control actions by well-designed events involving the system's state, output or any other locally available information: "To act or not to act, that is the question in event-triggered control." The objectives of this talk are to introduce the basics in the field of resource-aware control for distributed and multi-agent systems and to discuss recent advances and open questions. The focus will be on event-triggered control, although we will also touch upon self-triggered control as an alternative paradigm for resource-aware feedback control. We will show that various forms of hybrid systems, combining continuous and discrete dynamics, play instrumental roles in the analysis and the design of event-triggered and self-triggered controllers. The main developments will be illustrated in the context of cooperative driving exploiting wireless communication. The effects of delays, packet losses and (denial-of-service) attacks on the event-triggered cooperative adaptive cruise control (CACC) strategies for vehicle platooning will be discussed and experimental results will be presented.
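As a minimal sketch of the "to act or not to act" rule (the scalar plant, gains, and the relative-threshold triggering condition below are illustrative textbook-style assumptions, not the CACC designs discussed in the talk): the control input is recomputed only when the gap between the last sampled state and the current state grows too large relative to the state.

    # Illustrative scalar plant dx/dt = a*x + u with event-triggered state feedback
    # u = -k_gain * x(t_i): the input is updated only at event times.
    a, k_gain, sigma, dt = 1.0, 3.0, 0.3, 1e-3
    x, x_held = 1.0, 1.0          # current state and last transmitted sample
    u = -k_gain * x_held
    events = 0

    for step in range(10000):
        x += dt * (a * x + u)     # forward-Euler simulation of the plant
        gap = x_held - x          # deviation since the last sampling event
        # Trigger rule: act only when the gap is too large relative to the state.
        if abs(gap) > sigma * abs(x):
            x_held = x
            u = -k_gain * x_held
            events += 1

    print(f"final state {x:.5f} reached with only {events} control updates in 10000 steps")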
Feedback is a key element of regulation, as it shapes the sensitivity of a process to its environment. Positive feedback up-regulates, negative feedback down-regulates. Many regulatory processes involve a mixture of both, whether in nature or in engineering. This paper revisits the mixed feedback paradigm, with the aim of investigating control across scales. We propose that mixed feedback regulates excitability and that excitability plays a central role in multi-scale signalling. We analyse this role in a multi-scale network architecture inspired by neurophysiology. The nodal behavior defines a meso-scale that connects actuation at the micro-scale to measurements at the macro-scale. We show that mixed-feedback control at the nodal scale provides regulatory principles at the network scale, with nodal resolution. In this sense, the mixed feedback paradigm is a control principle across scales.
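As a minimal numerical illustration of mixed feedback producing excitability (a textbook FitzHugh-Nagumo-type node with standard parameter values, used here only as an example and not as the model of this paper): fast positive feedback from the cubic term and slow negative feedback from the recovery variable let a brief input pulse elicit a large excursion before the node returns to rest.

    # FitzHugh-Nagumo-type excitable node: fast positive feedback (v - v^3/3)
    # and slow negative feedback through the recovery variable w.
    eps, a, b, dt = 0.08, 0.7, 0.8, 0.01
    v, w = -1.2, -0.6

    for step in range(5000):
        I = 0.5 if 1000 <= step < 1100 else 0.0   # brief external input pulse
        dv = v - v**3 / 3 - w + I                 # fast excitable voltage-like variable
        dw = eps * (v + a - b * w)                # slow adaptation (negative feedback)
        v, w = v + dt * dv, w + dt * dw
        if step % 500 == 0:
            print(f"t = {step * dt:.1f}: v = {v:.2f}, w = {w:.2f}")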