A multi-agent system should be capable of fast and flexible decision-making if it is to successfully manage the uncertainty, variability, and dynamic change encountered when operating in the real world. Decision-making is fast if it breaks indecision as quickly as indecision becomes costly. This requires fast divergence away from indecision in addition to fast convergence to a decision. Decision-making is flexible if it adapts to signals important to successful operations, even if they are weak or rare. This requires tunable sensitivity to input, so that the system can be modulated between regimes in which it is ultra-sensitive and regimes in which it is robust. Nonlinearity and feedback in the multi-agent decision-making dynamics are necessary to meet these requirements.
I will present theoretical principles, analytical results, and applications of a general model of decentralized, multi-agent, and multi-option, nonlinear opinion dynamics that enables fast and flexible decision-making. I will explain how the critical features of fast and flexible multi-agent decision-making depend on nonlinearity, feedback, and the structure of the inter-agent communication network and a belief system network. And I will show how the theory and results provide a principled and systematic means for designing and analyzing multi-agent decision-making in systems ranging from multi-robot teams to social networks.
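As a rough illustration of the kind of dynamics described above, here is a minimal sketch of saturating, attention-modulated opinion dynamics on a communication graph. It is not the speaker's exact model: the tanh nonlinearity, the gains, the ring network, and the scalar (two-option) opinion are assumptions made for the example.

```python
import numpy as np

def simulate_opinions(A, u=1.5, d=1.0, b=None, dt=0.01, steps=2000, seed=0):
    """Euler simulation of saturating opinion dynamics on a communication graph.

    dz_i/dt = -d*z_i + u*tanh(sum_j A_ij * z_j) + b_i
    A : (N, N) adjacency matrix of the communication network
    u : attention/feedback gain; above a critical value (which depends on the
        graph), the indecision state z = 0 destabilizes and agents commit quickly
    b : (N,) external input (the weak signals the system should stay sensitive to)
    """
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    b = np.zeros(N) if b is None else b
    z = 1e-3 * rng.standard_normal(N)   # start near indecision
    for _ in range(steps):
        z = z + dt * (-d * z + u * np.tanh(A @ z) + b)
    return z

# Example: 5 agents on a ring; a small input on one agent biases the collective decision.
A = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
print(simulate_opinions(A, b=np.array([0.05, 0, 0, 0, 0])))
```

Raising the gain u above its critical value makes the group break indecision quickly, while lowering it keeps the group robust to small inputs, which is one way to read the fast-and-flexible trade-off described above.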
A typical multi-agent system is composed of a follower system consisting of multiple subsystems called followers and a leader system whose output is to be tracked by the followers. What makes the control of a multi-agent system challenging is that the control law needs to be distributed in the sense that it must satisfy time-varying communication constraints. A special case of distributed control arises when all the followers can access the information of the leader. For this special case, one can design, for each follower, a conventional control law based on the leader's information. The collection of these conventional control laws constitutes the so-called purely decentralized control law for the multi-agent system. Nevertheless, the purely decentralized control law is not feasible under the communication constraints. In this talk, we will introduce a framework for designing a distributed control law by cascading a purely decentralized control law and a so-called distributed observer for the leader system, which is a dynamic compensator that estimates the leader's information and transmits it to each follower over a communication network. Such a control law is called a distributed observer-based control law and has found applications in such problems as consensus, synchronization, flocking, formation, and distributed Nash equilibrium seeking. The core of this design framework is the distributed observer for a linear leader system, which was initiated in 2010 to deal with the cooperative output regulation problem and has since gone through three phases of development. In the first phase, the distributed observer is only capable of estimating and transmitting the leader's state to every follower, assuming every follower knows the dynamics of the leader. In the second phase, which started in 2015, the distributed observer gains the capability of estimating and transmitting not only the leader's state but also the leader's dynamics to every follower, provided that the leader's children have access to the leader's information. Such a dynamic compensator is called an adaptive distributed observer for a known leader system. The distributed observer was further developed in 2017 for linear leader systems containing unknown parameters, thus entering the third phase of its development. Such a dynamic compensator is called an adaptive distributed observer for an unknown leader, as it estimates not only the state but also the unknown parameters of the leader. We will start with an overview of the development of the distributed observer and then highlight recent results on establishing an output-based adaptive distributed observer for an unknown leader system over jointly connected communication networks. Extensions, variants, and applications of the distributed observer will also be touched upon.
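To make the cascade concrete, the following is a minimal sketch, under simplifying assumptions, of the first-phase distributed observer for a known linear leader: each follower propagates a copy of the leader dynamics and corrects it with neighbors' estimates, while the leader's children also use the leader's state. The gain, graph, and leader dynamics are illustrative, not those of the talk, and the purely decentralized follower control laws are omitted.

```python
import numpy as np

def distributed_observer_step(etas, x0, S, A, a0, mu=5.0, dt=0.01):
    """One Euler step of a distributed observer for a known linear leader x0' = S x0.

    etas : (N, n) follower estimates of the leader state
    x0   : (n,)   leader state
    S    : (n, n) leader dynamics (assumed known to every follower here)
    A    : (N, N) adjacency of the follower communication graph
    a0   : (N,)   a0[i] = 1 if follower i is a child of the leader
    """
    N = etas.shape[0]
    new = np.empty_like(etas)
    for i in range(N):
        consensus = sum(A[i, j] * (etas[j] - etas[i]) for j in range(N))
        leader_term = a0[i] * (x0 - etas[i])
        new[i] = etas[i] + dt * (S @ etas[i] + mu * (consensus + leader_term))
    return new

# Harmonic-oscillator leader; only follower 0 sees it, the rest rely on the network.
S = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
a0 = np.array([1.0, 0.0, 0.0])
x0 = np.array([1.0, 0.0])
etas = np.zeros((3, 2))
for _ in range(2000):
    etas = distributed_observer_step(etas, x0, S, A, a0)
    x0 = x0 + 0.01 * (S @ x0)
print(etas)  # each row should approach the leader state
```

A purely decentralized control law for follower i would then simply use eta_i wherever it would have used the leader's state, which is the cascade described in the abstract.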
Wind farms comprise a network of dynamical systems that operate within a continuous space, i.e., the turbulent atmospheric boundary layer (ABL). Viewing the turbines as actuators that adjust the flow field to collectively produce a desired overall power output, wind farms are an excellent prototype for flow control in which the actuators are well-defined and located in the region of interest. In this talk, we introduce models and control strategies that adopt this viewpoint. We first demonstrate that taking into account both the challenges and opportunities arising through interactions with the ABL can enable wind farms to participate in markets that support the grid with improved efficiency. We then focus on the dynamic interconnections within the farm, which we formulate in terms of a graph with time-varying edge connectivity that accounts for changes in the incoming wind direction and turbine yaw angles. An example implementation of this simplified graph model within a combined pitch and yaw controller demonstrates the potential and limitations of yaw for augmenting pitch control in power tracking applications. In the final part of the talk, we discuss new approaches for developing similar types of control-oriented models that focus on the critical flow features in other types of wall-bounded shear flows.
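As a rough sketch of the kind of time-varying graph mentioned above, the snippet below connects turbine j to turbine i whenever i falls inside a simple wake cone aligned with the current wind direction. The cone test, angles, and distances are assumptions for illustration, not the model used in the talk (which also accounts for yaw angles).

```python
import numpy as np

def wake_graph(positions, wind_dir_deg, half_angle_deg=15.0, max_dist=2000.0):
    """positions: (N, 2) turbine coordinates [m]; returns an (N, N) adjacency matrix
    with A[j, i] = 1 when turbine i lies in turbine j's (simplified) wake cone."""
    N = len(positions)
    wd = np.deg2rad(wind_dir_deg)
    downwind = np.array([np.cos(wd), np.sin(wd)])   # unit vector along the wind
    A = np.zeros((N, N))
    for j in range(N):
        for i in range(N):
            if i == j:
                continue
            r = positions[i] - positions[j]
            d = np.linalg.norm(r)
            if d > max_dist:
                continue
            cos_angle = r @ downwind / d            # angle between wind and j -> i
            if cos_angle > np.cos(np.deg2rad(half_angle_deg)):
                A[j, i] = 1.0                       # j's wake influences i
    return A

positions = np.array([[0, 0], [500, 0], [1000, 0], [500, 400]], float)
print(wake_graph(positions, wind_dir_deg=0.0))
```

Recomputing the adjacency as the wind direction changes is what makes the edge connectivity time-varying in the sense described above.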
Diffusion processes refer to a class of stochastic processes driven by Brownian motion. They have been widely used in applications ranging from engineering to science to finance. In this talk, I will discuss my experiences with diffusion processes and how this powerful tool has shaped our research programs. I will go over several research projects in the areas of control, inference, and machine learning, where we have extensively utilized tools from diffusion processes. In particular, I will present our research on four topics: i) covariance control, in which we aim to regulate the uncertainties of a dynamic system; ii) distribution control, where we seek to herd population dynamics; iii) Markov chain Monte Carlo sampling for general inference tasks; and iv) diffusion models for generative modeling in machine learning.
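For readers unfamiliar with the basic object, here is a minimal Euler-Maruyama sketch of a linear diffusion process and its empirical state covariance. The matrices and horizon are illustrative and the code is not tied to any of the four projects above.

```python
import numpy as np

def euler_maruyama(A, B, x0, T=1.0, dt=1e-3, n_paths=5000, seed=0):
    """Sample paths of the linear diffusion dX = A X dt + B dW by Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    X = np.tile(np.asarray(x0, float), (n_paths, 1))
    for _ in range(steps):
        dW = rng.standard_normal(X.shape) * np.sqrt(dt)   # Brownian increments
        X = X + (X @ A.T) * dt + dW @ B.T
    return X

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # stable drift
B = 0.3 * np.eye(2)                        # diffusion (noise) matrix
samples = euler_maruyama(A, B, x0=[1.0, -1.0])
print(np.cov(samples.T))                   # empirical state covariance at time T
```

Covariance control, in the sense sketched in topic i), would add a feedback term to the drift and choose it so that this terminal covariance matches a prescribed target.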
In everyday driving, many traffic maneuvers, such as merges, lane changes, and passing through an intersection, require negotiation between independent actors/agents. The same is true for mobile robots operating autonomously in a space open to other agents (humans, robots, etc.). Negotiation is an inherently difficult concept to encode in a software algorithm. It has been observed in computer simulations that some "decentralized" algorithms produce gridlocks while others never do. It turns out that gridlocking algorithms create locally stable equilibria in the joint inter-agent space, while for those that do not gridlock the equilibria are unstable; hence the title of the talk.
We use Control Barrier Function (CBF) based methods to provide collision avoidance guarantees. The main advantage of CBFs is that they yield easier-to-solve convex programs even for nonlinear systems and inherently non-convex obstacle avoidance problems. Six different CBF-based control policies were compared for collision avoidance and liveness (fluidity of motion, absence of gridlocks) on a 5-agent, holonomic-robot system. The outcome was then correlated with a stability analysis of a simpler, yet representative problem. The results are illustrated by extensive simulations, including an intersection example where the (in)stability insights are used to explain otherwise difficult-to-understand vehicle behaviors.
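As a minimal illustration of how a CBF turns obstacle avoidance into an easy-to-solve program, the sketch below filters a nominal go-to-goal input for a single-integrator robot through one barrier constraint; with a single linear constraint, the resulting QP has a closed-form projection solution. The barrier, gains, and scenario are assumptions, not one of the six policies compared in the talk.

```python
import numpy as np

def cbf_filter(x, u_des, x_obs, radius, alpha=1.0):
    """Return the input closest to u_des satisfying  dh/dt + alpha*h >= 0,
    where h(x) = ||x - x_obs||^2 - radius^2 and, for x' = u, dh/dt = 2*(x - x_obs)^T u."""
    h = np.sum((x - x_obs) ** 2) - radius ** 2
    a = 2.0 * (x - x_obs)                 # gradient of h (constraint row)
    if a @ u_des + alpha * h >= 0.0:      # desired input is already safe
        return u_des
    # project u_des onto the half-space {u : a^T u >= -alpha*h} (closed-form QP solution)
    return u_des + (-alpha * h - a @ u_des) / (a @ a) * a

x = np.array([0.0, 0.0])
goal = np.array([4.0, 0.0])
x_obs, radius = np.array([2.0, 0.1]), 0.5
for _ in range(400):
    u_des = 1.0 * (goal - x)              # nominal go-to-goal controller
    x = x + 0.01 * cbf_filter(x, u_des, x_obs, radius)
print(x)  # should approach the goal while skirting the obstacle
```

The liveness questions studied in the talk arise when several such filtered agents interact and their constraints create equilibria away from the goals; the stability of those equilibria is what separates gridlocking from non-gridlocking policies.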
Bob Behnken's journey from science and engineering student to Ph.D. candidate, test pilot school student, and NASA astronaut culminated with the opportunity to be a part of the team that recreated a capability to transport humans to and from low Earth orbit. He'll share his experience, insight, and perspective on being a part of the NASA / SpaceX team's endeavor to accomplish that mission in 2020 and take questions on his experience flying into space and living and working aboard the International Space Station.
Control theory and control technology have received renewed interest from applications involving service robots over the last two decades. In many scenarios, service robots are employed as networked mobile sensing platforms to collect data, sometimes in extreme environments and in unprecedented ways. These applications pose autonomy goals that have never been achieved before, triggering new developments toward the convergence of sensing, control, and communication.
Identifying mathematical models of spatial-temporal processes from data collected along the trajectories of mobile sensors is a baseline goal for active perception in complex environments. The controlled motion of mobile sensors induces information dynamics in the measurements taken of the underlying spatial-temporal processes, which are typically represented by models with two major components: a trend model and a variation model. The trend model is often described by deterministic partial differential equations, and the variation model is often described by stochastic processes. Hence, the information dynamics are constrained by these representations. Based on the information dynamics and these constraints, learning algorithms can be developed to identify the parameters of spatial-temporal models.
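One way to write the two-component representation described above, as an illustrative form rather than the exact formulation used in this work (the Gaussian process is just one common choice of variation model):

```latex
% Spatial-temporal field = deterministic trend + stochastic variation:
\[
  z(s,t) \;=\; \mu(s,t) \;+\; \eta(s,t),
\]
\[
  \frac{\partial \mu}{\partial t} \;=\; \mathcal{L}_{\theta}\,\mu(s,t)
  \quad \text{(trend: a PDE with parameters } \theta\text{)},
  \qquad
  \eta \;\sim\; \mathcal{GP}\bigl(0,\, k_{\phi}\bigr)
  \quad \text{(variation: a stochastic process with parameters } \phi\text{)}.
\]
% The learning problem is then to identify (\theta, \phi) from measurements
% y_k = z(s_k, t_k) + \text{noise} taken along the sensors' trajectories.
```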
Certain designs of active sensing algorithms are inspired by animal and human behaviors. Our research developed the speed-up and slow-down (SUSD) strategy, inspired by the extraordinary phototaxis capabilities of schooling fish. SUSD is a distributed active sensing strategy that reduces the need for information sharing among agents. Furthermore, SUSD leads to a generic derivative-free optimization algorithm that has been applied to solve optimization problems where gradients are not well defined, including mixed-integer programming problems.
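A toy sketch of the SUSD idea, under simplifying assumptions (two agents, rigid spacing, speeds set directly from point measurements; not the exact algorithm from this work): the pair moves along a common direction perpendicular to the line joining the agents, and the agent measuring a lower field value moves more slowly, so the direction rotates toward decreasing values without any gradient computation.

```python
import numpy as np

def field(p):
    """Scalar field to be minimized; only point measurements are used."""
    return np.sum((p - np.array([3.0, -2.0])) ** 2)

p0, p1 = np.array([0.0, 0.0]), np.array([0.0, 0.5])   # the two agents
d0, dt, k = 0.5, 0.01, 0.5
for _ in range(5000):
    q = p1 - p0
    q = d0 * q / np.linalg.norm(q)        # keep spacing (stands in for a formation controller)
    p1 = p0 + q
    n = np.array([-q[1], q[0]]) / d0      # common search direction, perpendicular to q
    v0, v1 = k * field(p0), k * field(p1) # slow down where the measured value is low
    p0 = p0 + dt * v0 * n
    p1 = p1 + dt * v1 * n
print(p0, p1)   # both agents should end up near the minimizer [3, -2]
```

Because only the agents' own measurements set their speeds, no gradient estimates or measurement values need to be exchanged, which is the sense in which SUSD reduces information sharing.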
A perceivable trend in the control community is the rapid transition of fundamental discoveries into swarm-robot applications. This is enabled by a collection of software, platforms, and testbeds shared across research groups. Such a transition will generate significant impact in addressing the growing need for robot swarms in applications including scientific data collection, search and rescue, aquaculture, intelligent traffic management, and human-robot teaming.
This work describes how machine learning may be used to develop accurate and efficient nonlinear dynamical systems models for complex natural and engineered systems. We explore the sparse identification of nonlinear dynamics (SINDy) algorithm, which identifies a minimal dynamical system model that balances model complexity with accuracy, avoiding overfitting. This approach tends to promote models that are interpretable and generalizable, capturing the essential “physics” of the system. We also discuss the importance of learning effective coordinate systems in which the dynamics may be expected to be sparse. This sparse modeling approach will be demonstrated on a range of challenging modeling problems, for example in fluid dynamics, and we will discuss how to incorporate these models into existing model-based control efforts.
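A minimal sketch of the SINDy idea follows: sequentially thresholded least squares over a candidate library of functions. The library, threshold, and toy system are assumptions made for illustration (a full-featured implementation is available in the PySINDy package).

```python
import numpy as np

def library(X):
    """Candidate terms for the dynamics (polynomials up to degree 2)."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])

names = ["1", "x", "y", "x^2", "xy", "y^2"]

# Simulate a toy system, x' = -0.1 x + 2 y, y' = -2 x - 0.1 y, to get data
# and finite-difference derivative estimates.
dt, T = 0.001, 25.0
t = np.arange(0, T, dt)
X = np.zeros((len(t), 2)); X[0] = [2.0, 0.0]
A_true = np.array([[-0.1, 2.0], [-2.0, -0.1]])
for k in range(len(t) - 1):
    X[k + 1] = X[k] + dt * (A_true @ X[k])
dXdt = np.gradient(X, dt, axis=0)

# Sequentially thresholded least squares: the core of SINDy.
Theta, thresh = library(X), 0.05
Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < thresh] = 0.0
    for j in range(Xi.shape[1]):                       # refit only the active terms
        active = np.abs(Xi[:, j]) > 0
        Xi[active, j] = np.linalg.lstsq(Theta[:, active], dXdt[:, j], rcond=None)[0]

for j, var in enumerate(["x'", "y'"]):
    terms = [f"{Xi[i, j]:+.2f} {names[i]}" for i in range(len(names)) if Xi[i, j] != 0]
    print(var, "=", " ".join(terms))   # should approximately recover the sparse true model
```

The thresholding step is what balances complexity against accuracy: quadratic terms that fit only noise fall below the threshold and are pruned, leaving an interpretable model.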
In this seminar, Dr. L'Afflitto will present two recent advances in the state of the art of model reference control system design. The first of these results concerns the design of an adaptive control system that allows the user to impose both the rate of convergence of the closed-loop system during its transient stage and constraints on the trajectory tracking error and the control input at all times, despite parametric and modeling uncertainties. Subsequently, our speaker will present the first extension of the model reference adaptive control architecture to switched dynamical systems within the Carathéodory and Filippov frameworks. The applicability of these theoretical formulations will be shown through the results of numerical simulations and flight tests involving multi-rotor unmanned aerial systems such as tilt-rotor quadcopters and tailsitter UAVs.
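For orientation, here is a textbook-style scalar MRAC sketch (Lyapunov-based adaptation with a known sign of the input gain). It shows only the baseline architecture that the seminar builds on and does not include the prescribed convergence rate, constraint enforcement, or switched-system extensions discussed above; all numbers are illustrative.

```python
import numpy as np

a, b = 1.0, 3.0             # unknown unstable plant:   x'  = a x + b u
am, bm = -4.0, 4.0          # reference model:          xm' = am xm + bm r
gamma, dt = 10.0, 1e-3
x, xm, kx, kr = 0.0, 0.0, 0.0, 0.0
for k in range(20000):
    t = k * dt
    r = 1.0 if (t % 10) < 5 else -1.0         # square-wave reference command
    u = kx * x + kr * r                       # adaptive control law
    e = x - xm                                # trajectory tracking error
    # adaptation laws derived from a quadratic Lyapunov function (sign of b assumed known)
    kx -= dt * gamma * e * x * np.sign(b)
    kr -= dt * gamma * e * r * np.sign(b)
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
print(f"error {e:+.4f}, kx {kx:+.3f} (ideal {(am - a) / b:+.3f}), "
      f"kr {kr:+.3f} (ideal {bm / b:+.3f})")
```

The tracking error converges, but nothing in this baseline lets the user prescribe how fast, or bound the error and input during the transient; those are exactly the gaps the first result in the seminar addresses.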
The human hand is the pinnacle of dexterity – it has the ability to powerfully grasp a wide range of object sizes and shapes as well as delicately manipulate objects held within the fingertips. Current robotic and prosthetic systems, however, have only a fraction of that manual dexterity. My group attempts to address this gap in three main ways: examining the mechanics and design of effective hands, studying biological hand function as inspiration and performance benchmarking, and developing novel control approaches that accommodate task uncertainty. In terms of hand design, we strongly prioritize passive mechanics, including incorporating adaptive underactuated transmissions and carefully tuned compliance, and seek to maximize open-loop performance while minimizing complexity. In this talk, I will discuss how constraints imparted by external contacts in robotic manipulation and legged locomotion affect the mobility and control of the mechanism, introduce ways that these can be redressed through novel design approaches, and demonstrate how our group has been able to apply these concepts to produce simple and robust grasping and dexterous manipulation for tasks that are difficult or impossible to perform using traditional approaches.
Integrated systems are ubiquitous as more heterogeneous physical entities are combined to form functional platforms. New and "invisible" feedback loops and couplings are introduced with increased connectivity, leading to emerging dynamics and making the integrated systems more control-intensive. The multi-physics, multi-time-scale, and distributed-actuation nature of integrated systems presents new challenges for modeling and control. Understanding their operating environments, achieving sustained high performance, and incorporating rich but incomplete data also motivate the development of novel design tools and frameworks.
In this talk, I will use the integrated thermal and power management of connected and automated vehicles (CAVs) as an example to illustrate the challenges in the prediction, estimation, and control of integrated systems in the era of rapid advances in AI and data-driven control. While first-principle-based modeling is still essential in understanding and exploiting the underlying physics of the integrated systems, model-based control and optimization have to be used in a much richer context to deal with the emerging dynamics and inevitable uncertainties. For CAVs, we will show how model-based design, complemented by data-driven approaches, can lead to control and optimization solutions with a significant impact on energy efficiency and operational reliability, in addition to safety and accessibility.
I have thoroughly enjoyed teaching and research in the field of mechanical systems control over the past fifty years. This field has been full of new theory, new mechanical hardware, and new tools for real-time control, and it is, in essence, the world of mechatronics. In this talk, I would like to give a brief review of how this field has developed over the past fifty years, what my personal involvement in it has been, and what my current involvements are. Overall, the talk is a chronicle of my journey of exploration with my students in the forest of mechanical systems control.
Ensemble control deals with the problem of using a common control input to simultaneously steer a large population (in the limit, a continuum) of individual control systems. In this talk, we address a fundamental problem in ensemble control theory, namely, system controllability. A key factor in determining the controllability of an ensemble system is its underlying parameterization space. Roughly speaking, the larger the parameterization space, the more difficult it is to control the ensemble. Over the past two decades, significant progress has been made in understanding the controllability of ensemble systems over one-dimensional parameterization spaces, yet little is known when the dimension is greater than one. A major focus of this talk is to present recent advances in the controllability of ensemble systems whose parameterization spaces are multi-dimensional. We will consider two classes of ensemble systems, namely, ensembles of linear control systems and ensembles of control-affine systems. We will first show that linear ensemble systems are problematic when the dimension of their parameterization space is greater than one, and then show how to resolve this controllability issue by using a special class of control-affine ensembles whose control vector fields are equipped with a fine structure.
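For concreteness, one standard way to state the setting, given as an illustrative formulation rather than verbatim from the talk:

```latex
% An ensemble of linear systems indexed by a parameter \sigma in a compact set K,
% all driven by one common, parameter-independent input u(t):
\[
  \dot{x}(t,\sigma) \;=\; A(\sigma)\,x(t,\sigma) + B(\sigma)\,u(t),
  \qquad \sigma \in K \subset \mathbb{R}^{d}.
\]
% Uniform (approximate) ensemble controllability: for every continuous target
% profile x_f and every \varepsilon > 0, there exist a time T and an input u such that
\[
  \sup_{\sigma \in K}\,\bigl\| x(T,\sigma) - x_f(\sigma) \bigr\| \;<\; \varepsilon .
\]
```

The one-dimensional case corresponds to d = 1; the talk concerns d > 1, where, as stated above, linear ensembles become problematic and structured control-affine ensembles are used instead.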
Control systems with learning abilities could cost-effectively address societal issues like energy reliability, decarbonization, and climate security, and enable autonomous scientific discovery. Recent investigations focus on longstanding challenges such as robustness, uncertainty, and safety of complex engineered systems. Most importantly, innovation in deep learning methods, tools, and technology offers an unprecedented opportunity to transform control engineering practice and bring much excitement to control systems theory research. In this talk, I will introduce recent results in modeling dynamic systems with deep learning representations that embed domain knowledge. I will also discuss differentiable predictive control, a data-driven approach that uses physics-informed deep learning representations to synthesize predictive control policies. I'll illustrate the concepts with examples from various engineering applications. I'll close by considering the implications of differentiable programming for the broader control systems context.
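A minimal sketch of the differentiable-predictive-control idea: a neural policy is trained offline by rolling a differentiable system model forward and backpropagating an MPC-like loss with a soft input constraint. The sketch assumes a known linear model for the rollout, whereas the talk emphasizes learned, physics-informed model representations; the model, horizon, network size, and weights are illustrative.

```python
import torch

A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])     # double-integrator-like model
B = torch.tensor([[0.005], [0.1]])
policy = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
H, u_max = 30, 1.0                              # prediction horizon and input bound

for epoch in range(300):
    x = 4.0 * torch.rand(256, 2) - 2.0          # batch of sampled initial states
    loss = 0.0
    for _ in range(H):
        u = policy(x)
        loss = loss + (x ** 2).sum(dim=1).mean()                 # drive the state to zero
        loss = loss + 10.0 * torch.relu(u.abs() - u_max).mean()  # penalize |u| > u_max
        x = x @ A.T + u @ B.T                    # differentiable model rollout
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))   # should decrease as the policy learns to regulate the state
```

Because the whole rollout is differentiable, policy optimization here is just gradient descent through the model, which is one concrete instance of the broader differentiable-programming perspective mentioned at the end of the abstract.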