Recent results in deep learning have left no doubt that it is amongst the most powerful modeling tools that we possess. The real question is how we can utilize deep learning for control without losing stability and performance guarantees. Even though recent successes in deep reinforcement learning (DRL) have shown that deep learning can be a powerful value function approximator, several key questions must be answered before deep learning enables a new frontier in robotics. DRL methods have proven difficult to apply to real-world robotic systems where stability matters and safety is critical. In this talk, I will present our recent work on bringing deep learning-based methods to provably stable adaptive control and expand upon possibilities of using concepts from adaptive control to create safe and stable reinforcement learning algorithms. I will put our theoretical work in context by discussing several applications in flight control and agricultural robotics. I will also highlight our recent work on understanding how the octopus brain works and how it can inspire future learning and distributed control tools.
Swarm robotics, a subfield of both robotics and artificial swarm intelligence, focuses on the development of teams composed of large numbers of autonomous robotic agents. Like swarm intelligence, swarm robotics arises from the study of the phenomenology of biological systems in which large numbers of individuals collaborate in joint collective actions for the benefit of the community as a whole. However, whereas swarm intelligence often utilizes the means and mechanisms of bio-inspired swarms for numerical optimization, the goals of bio-inspired robot swarms are generally concerned with the use of large numbers of low-cost physically embodied agents, acting together in a real-world environment, to achieve a common purpose. This talk will discuss key methods and bio-inspired algorithms for use in programming and controlling robotic swarms, and potential applications of these swarms.
We will illustrate the essential intuition behind the so-called "Model Recovery Anti-windup" scheme for handling input saturation in control systems design. The talk will mostly focus on the qualitative aspects of the core feature of the scheme: storage and recovery of the unconstrained response that would have occurred without saturation. This goal and the ensuing (model recovery) anti-windup solutions will be discussed and clarified by way of a number of simulated and experimental application studies, ranging from vibration isolation and open water channels to flight control systems, robotic arms, and brake-by-wire systems for motorcycles.
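The storage-and-recovery mechanism can be sketched on a scalar discrete-time example (all values illustrative, not from the talk): an anti-windup state integrates the input mismatch sat(u) - u, so that subtracting it from the plant state recovers exactly the response that would have occurred without saturation.

```python
# Minimal sketch of the model-recovery anti-windup (MRA) idea on a
# scalar discrete-time plant; all gains and values are illustrative.
def sat(u, limit=1.0):
    return max(-limit, min(limit, u))

a, b, K, r = 0.9, 0.5, 2.0, 4.0   # plant x+ = a*x + b*u, feedback gain, step reference

x, x_aw, x_un = 0.0, 0.0, 0.0     # plant, anti-windup copy, unconstrained model
for _ in range(60):
    # Controller sees the "recovered" state x - x_aw, i.e. the response
    # that would have occurred without saturation.
    u = K * (r - (x - x_aw))
    u_un = K * (r - x_un)         # reference: same controller on the unconstrained model
    x = a * x + b * sat(u)                # saturated plant
    x_aw = a * x_aw + b * (sat(u) - u)    # AW filter stores the input mismatch
    x_un = a * x_un + b * u_un            # ideal unconstrained response

recovered = x - x_aw              # tracks the unconstrained trajectory exactly
```

Once saturation ends, the anti-windup state decays with the plant pole, so the true plant state also rejoins the unconstrained response.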
Model reference adaptive control is a powerful tool that has the capability to suppress the effect of system uncertainties in order to achieve a desired level of closed-loop system performance. Yet, for a wide array of applications involving unmodeled dynamics, such as coupled rigid body systems with flexible interconnection links, airplanes with high-aspect-ratio wings, and high-speed vehicles with strong coupling between rigid body and flexible dynamics, the closed-loop stability of model reference adaptive control laws can be challenged. In this seminar, we will focus on the stability interplay between a class of unmodeled dynamics and system uncertainties for model reference adaptive control laws, and propose a robustifying term to relax the resulting interplay. The presented system-theoretic findings will also be supported by experimental results in order to bridge the theory-practice gap, using a benchmark mechanical setup involving an inverted pendulum on a cart coupled to another cart through a spring, in the presence of unknown friction.
Reachability analysis is the problem of evaluating the set of all states that can be reached by a system starting from a given set of initial states. Since the reachable set can rarely be computed exactly, a standard approach is to over-approximate this set as tightly as possible. Various set representations and methods have been proposed for finding over-approximations; however, they are computationally expensive and do not scale well to high-dimensional systems. This is a particularly important shortcoming for “symbolic control,” where the designer must first generate a finite state transition system from a continuous state model with repeated reachability computations. In this talk we present a suite of methods that offer computational efficiency using a simpler set representation in the form of multi-dimensional intervals. These methods leverage nonlinear systems concepts, such as monotonicity and its variants, sensitivity of trajectories to initial conditions and parameters, and contraction properties. We further introduce data-driven approaches for problems where probabilistic guarantees are appropriate. As we demonstrate with examples, the interval representation and the associated methods are particularly well suited to symbolic control, but they are also of independent interest.
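The monotonicity-based interval idea can be sketched in a few lines: for a monotone map f, propagating the two corners of a box gives an interval that contains the image of the whole box. A toy discrete-time example (the map is illustrative, not from the talk):

```python
# Interval over-approximation of the reachable set for a monotone
# discrete-time system: for monotone f, the box [f(lo), f(hi)] contains
# the image of [lo, hi]. Toy positive linear map; values are illustrative.
import random

def f(x):
    x1, x2 = x
    return (0.5 * x1 + 0.1 * x2, 0.2 * x1 + 0.6 * x2 + 0.1)

lo, hi = (0.0, 0.0), (1.0, 1.0)       # initial box of states
for _ in range(5):                    # 5-step reachable-set over-approximation
    lo, hi = f(lo), f(hi)

# Sanity check: sampled trajectories from the initial box stay inside.
random.seed(0)
ok = True
for _ in range(100):
    x = (random.random(), random.random())
    for _ in range(5):
        x = f(x)
    ok &= all(lo[i] <= x[i] <= hi[i] for i in range(2))
```

Only two corner evaluations per step are needed, which is what makes the interval representation scale to the repeated computations symbolic control requires.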
Urban mobility is witnessing a transformation due to the emergence of new concepts in mobility on demand, in which modes of transportation other than private individual cars and public mass transit are being investigated. With a projected total of 2 billion vehicles on roads by the year 2050, such innovations in transportation are urgently needed. One such paradigm is the notion of shared mobility on demand, which consists of customized dynamic routing for multi-passenger transport. A solution to this problem entails a host of challenges, ranging over distributed optimization, behavioral modeling of passengers, traffic flow modeling, and distributed control. Recent efforts in our group have made some inroads into this problem and form the focus of this talk. A socio-technical model that combines behavioral models of passengers based on Cumulative Prospect Theory with traffic models will be discussed. The solution to dynamic routing is presented in the form of an optimization problem solved via an alternating-minimization-based approach. The model, together with the optimization framework, is then used to propose a dynamic tariff that can be viewed as a model-based control strategy based on Transactive Control, a methodology that is being explored in power grids for incentivizing flexible consumption.
The proper functioning of modern society rests on the reliable and uninterrupted operation of large-scale complex infrastructures, which increasingly exhibit a network structure with a high number of interacting components/agents. Energy and transportation systems, communication and social networks are a few, yet prominent, examples of such large-scale multi-agent networked systems. Depending on the specific case, agents may act cooperatively to optimize the overall system performance or compete for shared resources. Based on the underlying communication architecture, and on the presence or absence of a central regulation authority, either decentralized or distributed decision-making paradigms are adopted. In this seminar, we address the interacting and distributed nature of cooperative multi-agent systems arising in the energy application domain. More specifically, we present our recent results on the development of a unifying distributed optimization framework to cope with the main complexity features that are prominent in such systems, namely: heterogeneity, as we allow the agents to have different objectives and physical/technological constraints; privacy, as we do not require agents to disclose their local information; uncertainty, as we take into account uncertainty affecting the agents locally and/or globally; and combinatorial complexity, as we address the case of discrete decision variables. (This is joint work with Alessandro Falsone, Simone Garatti, and Kostas Margellos.)
Automated and connected road vehicles enable large-scale control and optimization of the transport system, with the potential to radically improve energy efficiency, decrease the environmental footprint, and enhance safety. Freight transportation accounts for a significant share of all energy consumption and greenhouse gas emissions. In this talk, we will discuss the potential future of road goods transportation and how it can be made more robust and efficient, from the automation of individual long-haulage trucks to the optimization of fleet management and logistics. Such an integrated cyber-physical transportation system benefits from having trucks traveling together in vehicle platoons. Thanks to the reduced air drag, trucks platooning close together can save more than 10% of their fuel consumption. In addition, by automating the driving, it is possible to change driver regulations and thereby increase efficiency even further. Control and optimization problems at various levels of this transportation system will be presented. It will be argued that a system architecture utilizing vehicle-to-vehicle and vehicle-to-infrastructure communication enables robust and safe control of individual trucks as well as optimized vehicle fleet collaborations and new market opportunities. Furthermore, feedback control of individual platoons utilizing the cellular communication infrastructure can be used to improve the overall traffic conditions by reducing the variation of traffic density. Extensive experiments done on European highways will illustrate system performance and safety requirements. The presentation is based on joint work over the last ten years with collaborators at KTH and at the truck manufacturers Scania and Volvo.
Autonomous systems use closed-loop feedback of sensed or communicated information to meet desired objectives. Meeting such objectives is more challenging when autonomous systems are tasked with operating in uncertain complex environments with intermittent feedback. This presentation explores different analysis methods that quantify the effects of intermittent feedback with respect to stability and performance of the autonomous agent. Various scenarios are considered where the intermittency results from natural phenomena or adversarial actors, including purposeful intermittency to enable new capabilities. Specific examples include intermittency due to occlusions in image-based feedback and intermittency resulting from various network control problems.
In cooperative multi-robot systems, there is a group of robots that seek to achieve a collective task as a team. Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots. The challenge is to design local protocols that result in desired global outcomes. In contrast to a traditional centralized control paradigm, both measurements and decisions are distributed among multiple actors. This talk surveys various results for cooperative robotics based on methods drawn from game theory and distributed optimization, with applications to area coverage, cooperative pursuit, and self-assembly.
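A classic instance of a local protocol producing a desired global outcome is consensus averaging, sketched below for robots on a ring (a toy illustration with made-up values, not a result from the talk): each robot updates using only its two neighbors, yet every value converges to the network average.

```python
# A minimal local protocol with a global outcome: discrete-time consensus
# averaging on a ring of robots. Each robot only uses its neighbors'
# values, yet all converge to the network average (toy illustration).
n = 6
vals = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # e.g. local position estimates
avg = sum(vals) / n                       # the global quantity no robot knows
eps = 0.3                                 # step size below 1/deg keeps it stable
for _ in range(200):
    new = []
    for i in range(n):
        left, right = vals[(i - 1) % n], vals[(i + 1) % n]
        new.append(vals[i] + eps * ((left - vals[i]) + (right - vals[i])))
    vals = new
```

The symmetric update preserves the sum, so the common limit is exactly the initial average, even though measurements and decisions remain fully distributed.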
This seminar presents a survey of some of the main results in the theory of negative imaginary systems. The seminar also presents some applications of negative imaginary systems theory in the design of robust controllers. In particular, the seminar concentrates on the application of negative imaginary systems theory in the area of control of atomic force microscopes.
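For a stable SISO system, the negative imaginary property can be checked on the frequency axis: Im G(jω) ≤ 0 for all ω > 0. The sketch below (illustrative values, not from the seminar) verifies this numerically for a lightly damped flexible mode, the archetypal negative imaginary system arising, e.g., in atomic force microscope flexures.

```python
# Numerical negative imaginary (NI) check for a SISO transfer function:
# a stable G(s) is NI iff Im G(jw) <= 0 for all w > 0. The lightly damped
# mode G(s) = 1/(s^2 + 2*z*wn*s + wn^2) satisfies this for any z, wn > 0.
z, wn = 0.05, 2.0                       # illustrative damping ratio and frequency

def G(s):
    return 1.0 / (s * s + 2 * z * wn * s + wn * wn)

# Sample the imaginary part of G(jw) over a frequency grid.
ni = all(G(1j * w).imag <= 1e-12 for w in (0.01 * k for k in range(1, 2000)))
```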
This talk will present models for the evolution of opinions, interpersonal influences, and social power in a group of individuals. I will present empirical data and mathematical models for the opinion formation process in deliberative groups, including concepts of self-weight and social power. I will then focus on groups who discuss and form opinions along sequences of judgmental, intellective, and resource allocation issues. I will show how the natural dynamical evolution of interpersonal influence structures is shaped by the psychological phenomenon of reflected appraisal. Multi-agent models and analysis results are grounded in influence networks from mathematical sociology, replicator dynamics from evolutionary games, and transactive memory systems from organization science. (Joint work with: Noah E. Friedkin, Peng Jia, and Ge Chen)
The interactions of dynamical systems communicating over a networked environment lead to intriguing synchronization behaviors with applications in the Internet of Things, formations, satellite control, and human societal behaviors. This talk studies the relation between local controls design and communication graph restrictions. The distinctions between stability and optimality on graphs are explored. An optimal design method for local feedback controllers is given that decouples the control design from the graph structural properties. In the case of continuous-time systems, the optimal design method guarantees synchronization on any graph with suitable connectedness properties. In the case of discrete-time systems, a condition for synchronization is that the Mahler measure of unstable eigenvalues of the local systems be restricted by the condition number of the graph. Thus, graphs with better topologies can tolerate a higher degree of inherent instability in the individual node dynamics. A theory of duality between controllers and observers on communication graphs is given, including methods for cooperative output feedback control based on cooperative regulator designs. In the second part of the talk, we discuss graphical games. Standard differential multi-agent game theory assumes centralized dynamics affected by the control policies of multiple agent players. We give a new formulation for games on communication graphs. Standard definitions of Nash equilibrium are not useful for graphical games since, even in Nash equilibrium, not all agents may achieve synchronization. A strengthened definition of Interactive Nash equilibrium is given that guarantees that all agents are participants in the same game, and that all agents achieve synchronization while optimizing their own value functions.
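The Mahler measure condition mentioned above can be made concrete: M(A) is the product of the magnitudes of the unstable eigenvalues of the node dynamics, and synchronization over a given graph requires M(A) to be small enough relative to the graph's condition number. A toy computation (illustrative 2x2 matrix, not from the talk):

```python
# The Mahler measure M(A) = product of |unstable eigenvalues| of the node
# dynamics; better-conditioned graphs tolerate a larger M(A). Toy 2x2
# computation via the characteristic polynomial (values illustrative).
import cmath

def eig2(a, b, c, d):
    # Eigenvalues of [[a, b], [c, d]] from trace and determinant.
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

A = (1.2, 1.0, 0.0, 0.5)                # one unstable mode at 1.2, one stable at 0.5
mahler = 1.0
for lam in eig2(*A):
    if abs(lam) > 1.0:                  # only unstable eigenvalues contribute
        mahler *= abs(lam)
```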
This tutorial will describe the design of stable observers for nonlinear systems. The design methodology utilizes tools that include Lyapunov analysis, the Circle Criterion and the S-procedure Lemma. The observer stability conditions are typically obtained as linear or bilinear matrix inequalities from which the observer gains can be computed. The tutorial will start with a dynamic system in which the process dynamics has Lipschitz nonlinearities. This will later be generalized to allow for either Lipschitz, bounded Jacobian or sector bounded nonlinearities in both the process dynamics and the measurement equations. Simple programs to solve LMIs in Matlab and obtain the observer gains will also be presented. The lecture will conclude with the application of the developed methodology to automotive slip angle estimation in the presence of nonlinear tire force models.
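As a complement to the LMI machinery, the observer structure itself is simple. Below is a minimal pure-Python simulation sketch for a plant with a Lipschitz nonlinearity in the process dynamics; the system is illustrative and the gain is hand-tuned here, whereas the tutorial computes gains from the matrix inequalities.

```python
# Observer for a plant with a Lipschitz nonlinearity (minimal simulation
# sketch). Plant: x1' = x2, x2' = -x1 - x2 + 0.1*sin(x1), output y = x1.
# Observer copies the dynamics and adds output injection L*(y - xh1).
import math

def fplant(x):
    return [x[1], -x[0] - x[1] + 0.1 * math.sin(x[0])]

L = [2.0, 1.0]                          # observer gain (hand-tuned, not via LMI)
dt = 0.001
x, xh = [1.0, -0.5], [0.0, 0.0]         # true state and estimate
for _ in range(20000):                  # 20 s of Euler integration
    y = x[0]
    inj = y - xh[0]                     # output injection term
    dx, dxh = fplant(x), fplant(xh)
    x = [x[i] + dt * dx[i] for i in range(2)]
    xh = [xh[i] + dt * (dxh[i] + L[i] * inj) for i in range(2)]

err = max(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
```

Because the Lipschitz constant of the nonlinearity (0.1) is small relative to the stability margin of A - LC, the estimation error converges despite the model nonlinearity.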
With the increasing trend towards system downsizing and the growing stringency of requirements, constraint handling and limit protection are becoming increasingly important for engineered systems. Constraints can reflect actuator limits, safety requirements (e.g., process temperatures and pressures must not exceed safe values) or obstacle avoidance requirements. Reference governors are control schemes that can be added to existing control systems in order to provide constraint-handling/limit-protection capabilities. These add-on schemes exploit prediction and optimization or invariance/strong returnability properties to supervise and minimally modify operator (e.g., pilot or driver) commands, or other closed-loop signals, whenever there is a danger of future constraint violations. The presentation will introduce the basic reference governor schemes along with the existing theory. Several recent extensions and new variants of these schemes will be highlighted. Selected aerospace and automotive applications will be described. Opportunities for future research will be mentioned.
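A minimal scalar sketch of the prediction-and-optimization idea (values illustrative, not from the presentation): at each step the governor picks the largest kappa in [0, 1] such that holding the adjusted command constant keeps the predicted output within the constraint.

```python
# Scalar reference governor sketch: the governor minimally modifies the
# command v toward the reference r, subject to the predicted output of the
# stable closed-loop model x+ = a*x + b*v staying within x <= ymax.
a, b = 0.9, 0.1                         # illustrative closed-loop model
ymax = 1.0                              # constraint on the output
N = 100                                 # prediction horizon

def safe(x, v):
    for _ in range(N):                  # check the constraint along the prediction
        if x > ymax:
            return False
        x = a * x + b * v
    return x <= ymax

x, v, r = 0.0, 0.0, 3.0                 # the raw reference r would violate ymax
for _ in range(300):
    kappa = 1.0
    while kappa > 0 and not safe(x, v + kappa * (r - v)):
        kappa -= 0.05                   # crude line search on kappa
    kappa = max(kappa, 0.0)
    v = v + kappa * (r - v)             # minimally modified command
    x = a * x + b * v
```

The operator's command is passed through unchanged whenever it is safe, and is scaled back only when the prediction flags a future violation.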
High-gain observers play an important role in the design of feedback control for nonlinear systems. This lecture overviews the essentials of this technique. A motivating example is used to illustrate the main features of high-gain observers, with emphasis on the peaking phenomenon and the role of control saturation in dealing with it. The use of the observer in feedback control is discussed and a nonlinear separation principle is presented. The use of an extended high-gain observer as a disturbance estimator is covered. Challenges in implementing high-gain observers are discussed, with the effect of measurement noise as the most serious one. Techniques to cope with measurement noise are presented. The lecture ends by listing examples of experimental testing of high-gain observers.
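The peaking phenomenon is easy to exhibit numerically. The sketch below (illustrative system and gains, not from the lecture) implements a high-gain observer for a simple oscillator and records the transient peak of the velocity estimate, which grows as the gain parameter eps shrinks:

```python
# High-gain observer sketch for x1' = x2, x2' = -x1, measured output y = x1.
# Observer gains a1/eps and a2/eps^2 place both error poles near -1/eps:
# fast error decay, but a large "peaking" transient in the velocity estimate.
eps = 0.05
a1, a2 = 2.0, 1.0                       # illustrative gain coefficients
dt, T = 1e-4, 5.0

x, xh = [1.0, 0.0], [0.0, 0.0]          # true state and estimate
peak = 0.0
for _ in range(int(T / dt)):
    y = x[0]
    e1 = y - xh[0]
    x = [x[0] + dt * x[1], x[1] + dt * (-x[0])]
    xh = [xh[0] + dt * (xh[1] + (a1 / eps) * e1),
          xh[1] + dt * (-xh[0] + (a2 / eps ** 2) * e1)]
    peak = max(peak, abs(xh[1]))        # track the peaking transient

err = max(abs(x[0] - xh[0]), abs(x[1] - xh[1]))
```

Here the true velocity never exceeds 1, yet its estimate transiently peaks several times higher; saturating the control that consumes the estimate is the standard remedy discussed in the lecture.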
In this talk we address the problem of designing nonlinear observers that possess robustness to output measurement errors. To this end, we introduce a novel concept of quasi-Disturbance-to-Error Stable (qDES) observer. In essence, an observer is qDES if its error dynamics are input-to-state stable (ISS) with respect to the disturbance as long as the plant's input and state remain bounded. We develop Lyapunov-based sufficient conditions for checking the qDES property for both full-order and reduced-order observers. This relates to a novel "asymptotic ratio" characterization of ISS which is of interest in its own right. When combined with a state feedback law robust to state estimation errors in the ISS sense, a qDES observer can be used to achieve output feedback control design with robustness to measurement disturbances. As an application of this idea, we treat a problem of stabilization by quantized output feedback. Applications to synchronization of electric power generators and of chaotic systems in the presence of measurement errors will also be discussed.
During the past decades, model predictive control (MPC) has become a preferred strategy for the control of a large number of industrial processes. Computational issues, application aspects, and systems-theoretic properties of MPC (such as stability and robustness) are rather well understood by now. For many application disciplines, however, a significant shift in the typical control tasks to be solved can be witnessed at present, for example in robot control, autonomous mobility, and industrial production processes. This will be discussed using the vision of the smart factory of the future, often termed Industry 4.0, where the involved control tasks are undergoing a fundamental reorientation. In particular, the stabilization of predetermined setpoints does not play the same role as it has in the past. In this talk we will first give an introduction to and an overview of the field of model predictive control. Then new challenges and opportunities for the field of control are discussed, with Industry 4.0 as an example. We will in particular investigate the potential impact of model predictive control for the fourth industrial revolution and argue that some recent developments in MPC, especially those connected to distributed and economic model predictive control, appear ideally suited for addressing some of the new challenges.
Geometric mechanics is useful in developing a compact description of the motion of a rigid body in three-dimensional space which is singularity-free, unique, does not limit the motion to small angles, and enables a single control law to be obtained even in the presence of translational/rotational coupling. Such a description, which is based on the Lie group SE(3) and its corresponding "exponential coordinates", is especially useful for spacecraft and other types of autonomous vehicles undergoing fast rotations and tumbling motions. This talk will explore various coordinates for rigid body attitude along with their pros and cons (including the phenomenon of unwinding when using a quaternion attitude description) as well as the use of the SE(3) framework in multi-vehicle consensus control design in which it is desired to achieve leader-follower formations along with attitude synchronization. The case of four formation flying spacecraft in a Molniya orbit will serve as an illustrative example.
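The exponential coordinates mentioned above can be made concrete for the rotation part: an axis-angle vector maps to a rotation matrix through the Rodrigues formula, with no singularity and no quaternion-style unwinding. A small pure-Python sketch (helper names are ours, purely illustrative):

```python
# Exponential coordinates on SO(3): R = exp(theta * [w]_x) computed via the
# Rodrigues formula R = I + sin(t)*K + (1 - cos(t))*K^2 with K = hat(axis).
import math

def hat(w):
    wx, wy, wz = w
    return [[0, -wz, wy], [wz, 0, -wx], [-wy, wx, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def expso3(w, theta):
    # w is a unit axis; theta the rotation angle in radians.
    K = hat(w)
    K2 = matmul(K, K)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    s, c = math.sin(theta), 1 - math.cos(theta)
    return [[I[i][j] + s * K[i][j] + c * K2[i][j] for j in range(3)]
            for i in range(3)]

R = expso3((0.0, 0.0, 1.0), math.pi / 2)   # 90-degree yaw about the z-axis
```

The same construction extends to SE(3) by exponentiating a twist, which is what allows a single control law to handle coupled translation and rotation.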
What is model reference adaptive control? Why does one prefer using a model reference adaptive controller? How can we design and analyze a model reference adaptive controller? In this FoRCE video, we answer these fundamental questions related to model reference adaptive control theory and beyond.
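As a taste of the design question, here is a minimal scalar sketch (illustrative numbers, not from the video): the plant pole a is unknown, a reference model specifies the desired response, and a Lyapunov-based adaptation law updates the feedback gain so the tracking error converges.

```python
# Scalar model reference adaptive control sketch: plant xdot = a*x + u with
# unknown a; reference model xmdot = -am*xm + am*r defines the desired
# behavior; the Lyapunov-based law khdot = gamma*e*x drives e = x - xm to 0.
a, am, gamma = 1.0, 2.0, 5.0            # unknown (unstable) plant pole, model pole, rate
dt, T, r = 1e-3, 30.0, 1.0              # Euler step, horizon, constant reference

x, xm, kh = 0.0, 0.0, 0.0               # plant, reference model, adaptive gain
for _ in range(int(T / dt)):
    u = -kh * x + am * r                # certainty-equivalence control law
    e = x - xm                          # tracking error
    x += dt * (a * x + u)
    xm += dt * (-am * xm + am * r)
    kh += dt * gamma * e * x            # adaptation law
e = x - xm
```

Note that the tracking error converges even though the adaptive gain need not converge to its ideal value a + am without a persistently exciting reference.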