In many problems in control, optimal and robust control, one has to solve global optimization problems of the form: P : f* = min_x { f(x) : x ∈ K }, or, equivalently, f* = max{ λ : f - λ ≥ 0 on K }, where f is a polynomial (or even a semi-algebraic function) and K is a basic semi-algebraic set. One may even need to solve the "robust" version min{ f(x) : x ∈ K; h(x, u) ≥ 0, ∀u ∈ U }, where U is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set K is defined by a polynomial matrix inequality (PMI). And robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs), where the parameters u account for uncertainties and the (decision) variables x are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set K ⊂ R^n. The first type of characterization applies when knowledge of K comes through its defining polynomials, i.e., K = {x : gj(x) ≥ 0, j = 1, . . . , m}, in which case some powerful certificates of positivity can be stated in terms of sums of squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to f* (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where knowledge of K comes through the moments of a measure whose support is K. In this case, checking whether a polynomial is nonnegative on K reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to P, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on K.
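As a concrete illustration of the first characterization, here is a minimal sketch (assuming cvxpy with an SDP solver; the univariate instance and monomial basis are chosen purely for illustration and are not from the talk) of the SOS bound f* = max{ λ : f - λ is SOS } for f(x) = x^4 - 3x^2 + x:

```python
import cvxpy as cp

# Unconstrained toy instance (K = R): find the largest lam such that
# f(x) - lam = z^T Q z with z = (1, x, x^2) and Q PSD (i.e., f - lam is SOS).
Q = cp.Variable((3, 3), symmetric=True)
lam = cp.Variable()

constraints = [
    Q >> 0,                       # Gram matrix must be positive semidefinite
    Q[0, 0] == -lam,              # constant term of f - lam
    2 * Q[0, 1] == 1,             # coefficient of x
    2 * Q[0, 2] + Q[1, 1] == -3,  # coefficient of x^2
    2 * Q[1, 2] == 0,             # coefficient of x^3
    Q[2, 2] == 1,                 # coefficient of x^4
]

prob = cp.Problem(cp.Maximize(lam), constraints)
prob.solve()
print("SOS lower bound on f*:", lam.value)
```

For univariate polynomials, nonnegativity on R coincides with being SOS, so this bound is tight; on a general K the program would instead carry SOS-weighted multipliers for the defining polynomials gj, giving the hierarchy of semidefinite relaxations described above.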
There has been remarkable progress in sampled-data control theory in the last two decades. The main achievement here is that there exists a digital (discrete-time) control law that takes the intersample behavior into account and makes the overall analog (continuous-time) performance optimal, in the sense of the H-infinity norm. This naturally suggests its application to digital signal processing, where the same hybrid nature of analog and digital is always prevalent. A crucial observation here is that the perfect band-limiting hypothesis, widely accepted in signal processing, is often inadequate for many practical situations. In practice, the original analog signals (sounds, images, etc.) are neither fully band-limited nor even close to being band-limited under current processing standards.
The present talk describes how sampled-data control theory can be applied to reconstruct the lost high-frequency components beyond the so-called Nyquist frequency, and how this new method can surpass the existing signal processing paradigm. We will review some concrete examples for sound processing, recovery of high-frequency components for MP3/AAC compressed audio signals, and super-resolution for (still/moving) image processing. We will also describe some crucial steps in leading this technology to the commercial success of 40 million sound processing chips.
Advanced motion systems like pick-and-place machines used in the semiconductor industry challenge the frontiers of systems and control theory and practice. In the design phase, control-oriented design of the electro-mechanics is necessary in order to achieve the tight performance specifications. Once realized, and since experimentation is fast, a machine-in-the-loop procedure can be explored to close the design loop from experiment, using experimental model building, model-based control design, implementation, and performance evaluation. Extension of linear modeling techniques towards some classes of nonlinear systems is relevant for improved control of specific motion systems, such as those with friction. In the application field of medical robotics the experiences from high-tech motion systems can be used successfully, and an eye surgical robot with haptics will be shown as an example.
The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Game theory is beginning to emerge as a valuable set of tools for achieving this goal, as many popular multiagent systems can be modeled as games, e.g., sensor coverage, consensus, and task allocation, among others. Game theory is a well-established discipline in the social sciences that is primarily used for modeling social behavior. Traditionally, the preferences of the individual agents are modeled as utility functions, and the resulting behavior is assumed to be an equilibrium concept associated with these modeled utility functions, e.g., a Nash equilibrium. This is in stark contrast to the role of game theory in engineering systems, where the goal is to design both the agents' utility functions and an adaptation rule such that the resulting global behavior is desirable. The transition of game theory from a modeling tool for social systems to a design tool for engineering promotes several new research directions that we will discuss in this talk. In particular, we will focus on the question of how to design admissible agent utility functions such that the resulting game possesses desirable properties, e.g., the existence and efficiency of pure Nash equilibria. Our motivation for considering pure Nash equilibria stems from the fact that adaptation rules can frequently be derived which guarantee that the collective behavior will converge to such pure Nash equilibria. Our first result focuses on ensuring the existence of pure Nash equilibria for a class of separable resource allocation problems that can model a wide array of applications including facility location, routing, network formation, and coverage problems. Within this class, we prove that weighted Shapley values completely characterize the space of local utility functions that guarantee the existence of a pure Nash equilibrium. That is, if a utility design cannot be represented as a weighted Shapley value, then there exists a game for which a pure Nash equilibrium does not exist. Another concern is distributed versus centralized efficiency. Once distributed agents have settled on an equilibrium, the resulting performance need not be the same as that of a centralized design (cf. the so-called "price of anarchy"). We compare different utility design methods and their resulting effect on efficiency. Finally, we briefly discuss online adaptation rules leading to equilibrium.
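To make the convergence claim concrete, here is a hedged toy sketch (resource values and agent count are invented): a resource-selection game in which each agent receives the equal-division share of its resource's value, which is the (unweighted) Shapley-value share in this anonymous setting; since equal-division sharing yields a potential game, sequential best-response dynamics terminate at a pure Nash equilibrium.

```python
# Hedged toy: agents each pick one resource; a resource's value v[r] is
# split equally among the agents covering it (the Shapley share here).
v = {"a": 6.0, "b": 4.0, "c": 3.0}
n_agents = 4
choice = ["a"] * n_agents          # all agents start on resource "a"

def utility(i, choice):
    r = choice[i]
    n_r = sum(1 for c in choice if c == r)   # agents sharing r
    return v[r] / n_r                        # equal-division share

changed = True
while changed:                      # best-response rounds
    changed = False
    for i in range(n_agents):
        best_r, best_u = choice[i], utility(i, choice)
        for r in v:
            trial = choice[:i] + [r] + choice[i + 1:]
            if utility(i, trial) > best_u + 1e-12:
                best_r, best_u = r, utility(i, trial)
        if best_r != choice[i]:
            choice[i] = best_r
            changed = True

print("pure Nash equilibrium:", choice)   # all three resources get covered
```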
Healthcare in the U.S. is at a transition point. Costs for care have risen to unsustainable levels and are among the highest in the world without commensurate benefits. At the same time, new care models and the digitization of healthcare offer tremendous opportunities to improve health and health care while reducing costs. In this context, intelligent technologies are playing a growing role in providing better understanding and decision support in healthcare systems. Solutions range from population health management tools for organizations to predictive clinical decision support applications for individuals. Advanced technologies are also applied in administrative tasks such as insurance eligibility determination and fraud detection. Looking ahead, the advent of personalized medicine will bring the promise and need for intelligent technologies into even sharper focus. This talk will review current trends, discuss representative approaches, and show examples that demonstrate the value of intelligent monitoring and decision support in healthcare systems.
In many practical systems, such as engineering, social, and financial systems, control decisions are made only when certain events happen. This is either because of the discrete nature of sensor detection and digital computing equipment, or because of limitations on computing power, which make state-based control infeasible due to the huge state spaces involved. The performance optimization of such systems is generally different from traditional optimization approaches, such as Markov decision processes or dynamic programming. In this talk, we introduce, in an intuitive manner, a new optimization framework called event-based optimization. This framework is widely applicable to the aforementioned systems. With performance potentials as building blocks, we develop optimization algorithms for event-based optimization problems. The optimization algorithms are first proposed based on intuition, and theoretical justifications are then given with a performance-sensitivity-based approach. Finally, we provide a few practical examples to demonstrate the effectiveness of the event-based optimization framework. We hope this framework may provide a new perspective on the optimization of the performance of event-triggered dynamic systems.
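As a hedged illustration of the performance potentials mentioned above (a standard computation for ergodic Markov chains, not the speaker's code; the transition matrix and rewards are invented placeholders), the sketch below solves the Poisson equation (I - P) g = f - η·1, where η = π^T f is the long-run average reward:

```python
import numpy as np

# Toy ergodic chain: transition matrix P and one-step reward f.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
f = np.array([1.0, 2.0, 0.5])

# Stationary distribution pi:  pi P = pi, sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]
eta = pi @ f                       # long-run average reward

# Performance potentials g: (I - P) g = f - eta*1, normalized pi^T g = 0.
M = np.vstack([np.eye(3) - P, pi])
rhs = np.append(f - eta, 0.0)
g = np.linalg.lstsq(M, rhs, rcond=None)[0]
print("average reward eta:", eta)
print("potentials g:", g)
```

Differences of these potentials quantify how perturbing the policy at an event changes long-run performance, which is the sensitivity information the event-based algorithms build on.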
Advanced motion systems like pick-and-place machines used in the semiconductor industry challenge the frontiers of mechatronic design and systems and control theory and practice. In the design phase, control-oriented design of the electro-mechanics is necessary in order to achieve tight performance specifications. Once realized, a machine-in-the-loop procedure can be explored to close the design loop during experiments as well as for experimental model building, model-based control design, and implementation and performance evaluation. Nevertheless, reliable numerical tools are required to meet the challenges posed with respect to dimensionality and model complexity. Extension of linear modeling techniques towards some classes of nonlinear systems is relevant for improved control of specific motion systems, such as those with friction. Further, medical robotics can greatly benefit from the experiences of the high tech motion systems industry, and an eye surgical robot with haptics will be shown as an example. Other challenging applications in need of advanced design and modeling and control are fuel-efficient vehicles (including ultra-clean engines), vehicle electric and hybrid power trains, and plasma fusion processes. Finally, the 2012 World Champion Soccer Robots (midsize) will also be discussed as an example of advanced motion control for high tech systems.
The past few years have witnessed a revolution in data collection capabilities: the development of low-cost, ultra-low-power sensors capable of harvesting energy from the environment has rendered ubiquitous sensing feasible. When coupled with a parallel growth in actuation capabilities, these developments open up the possibility of new control applications that can profoundly impact society, ranging from zero-emissions buildings to "smart" grids and managed aquifers that achieve long-term sustainable use of scarce resources. A major roadblock to realizing this vision stems from the curse of dimensionality. To successfully operate in these scenarios, controllers will need to extract, in a timely manner, relevant, actionable information from the very large data streams generated by the ubiquitous sensors. However, existing techniques are ill-equipped to deal with this "data avalanche."
This talk discusses the central role that systems theory can play in developing computationally tractable, scalable methods for extracting actionable information that is very sparsely encoded in high-dimensional data streams. The key insight is the realization that actionable information can often be represented with a small number of invariants associated with an underlying dynamical system. Thus, in this context, the problem of actionable information extraction can be reformulated as identifying these invariants from (high-dimensional) noisy data, and thought of as a generalization of sparse signal recovery problems to a dynamical systems framework. While in principle this approach leads to generically nonconvex, hard-to-solve problems, computationally tractable relaxations (and in some cases exact solutions) can be obtained by exploiting a combination of elements from convex analysis and the classical theory of moments. These ideas will be illustrated using examples from several application domains, including autonomous vehicles, computer vision, systems biology, and economics. We will conclude the talk by exploring the connection between hybrid systems identification, information extraction, and machine learning, and point to new research directions in systems theory motivated by these problems.
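The classical building block being generalized here is sparse recovery by convex relaxation. Here is a minimal sketch (assuming numpy and cvxpy; dimensions and data are synthetic) of basis pursuit, the l1 relaxation of the combinatorial sparse-recovery problem:

```python
import numpy as np
import cvxpy as cp

# Synthetic instance: recover a k-sparse vector from m << n linear measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 30, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n))
y = A @ x_true

# Basis pursuit: minimize the l1 norm subject to the measurements.
x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == y]).solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

In the dynamical-systems generalization the talk describes, the unknown is not a sparse vector but a small set of system invariants, and the relaxation machinery draws on moments rather than the plain l1 norm.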
A hallmark of living cells is their inherent stochasticity. Stochastic molecular noise in individual cells manifests as cell-to-cell variability within a population of genetically identical cells. While experimental tools have enabled the measurement and quantification of variability in populations consisting of millions of cells, new modeling and analysis tools have led to a substantial improvement in our understanding of the stochastic nature of living cell populations and its biological role. More recently, these developments came together to pave the way for the real-time control of living cells.
In this presentation, we describe novel analytical and experimental work that demonstrates how a computer can be interfaced with living cells and used to control their behavior. We discuss how computer-controlled light pulses, in combination with a genetically encoded light-responsive module and a flow cytometer, can be configured to achieve precise and robust set-point regulation of gene expression in the noisy environment of the cell. We then address the theoretical, computational, and practical issues concerning the feedback control of single cells as well as cell populations. Aside from its potential applications in biotechnology and therapeutics, this approach opens up exciting opportunities for the development of new control-theoretic methods aimed at confronting the unique challenges of manipulating the dynamic behavior of living cells.
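A hedged toy of such a set-point regulation loop (all rates and the bang-bang light controller are invented placeholders, not the experimental system): a Gillespie simulation of a birth-death gene-expression model where the light input switches the production rate.

```python
import random

# Birth-death model: protein produced at rate k_on (light on) or k_off
# (light off), degraded at rate gamma per molecule. The "controller"
# turns the light on whenever the count drops below the set point.
k_on, k_off, gamma = 10.0, 0.5, 0.1
setpoint, x, t, T = 50, 0, 0.0, 200.0
random.seed(1)

while t < T:
    light = x < setpoint                 # bang-bang feedback law
    birth = k_on if light else k_off
    death = gamma * x
    total = birth + death
    t += random.expovariate(total)       # time to next reaction (SSA)
    if random.random() < birth / total:
        x += 1                           # production event
    else:
        x -= 1                           # degradation event

print("protein count at t=%.0f: %d (setpoint %d)" % (T, x, setpoint))
```

Despite the intrinsic noise, the feedback holds the copy number near the set point, which is the qualitative behavior the light-based experiments achieve in real cells.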
The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and in this talk, we summarize some of these developments. In particular, we will discuss how to go from local rules to global behaviors in a systematic manner in order to achieve distributed geometric objectives, such as achieving and maintaining formations, area coverage, and swarming behaviors. We will also investigate how users can interact with networks of mobile robots in order to inject new information and objectives. The efficacy of these interactions depends directly on the interaction dynamics and the structure of the underlying information-exchange network. We will relate these network-level characteristics to controllability and manipulability notions in order to produce effective human-swarm interaction strategies.
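The canonical example of local rules producing a global behavior is the linear consensus protocol, sketched below (the graph and initial states are illustrative only): each agent moves toward the average of its neighbors, and all states converge to a common value.

```python
import numpy as np

# 4-agent ring graph; agents only see their two neighbors.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A            # graph Laplacian
x = np.array([0.0, 1.0, 3.0, 6.0])   # initial scalar states
dt = 0.05                            # small step for stability

for _ in range(400):
    x = x - dt * (L @ x)             # discretized x_dot = -L x
print("consensus state:", x)         # approx the initial average, 2.5
```

The same Laplacian structure underlies the controllability and manipulability questions raised for human-swarm interaction: which nodes a user influences, and through which network, determines how much of the state space can be steered.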
Pressing environmental problems, energy supply security issues, and nuclear power safety concerns drive the worldwide interest in renewable energy. Renewable energy sources such as wind and solar exhibit variability: they are not dispatchable, exhibit large fluctuations, and are uncertain. Variability is the most important obstacle to deep integration of renewable generation. The current approach is to absorb this variability in operating reserves. This works at today's modest penetration levels. But it will not scale. At deep penetration levels (>30%), the levels of necessary reserves are economically untenable and defeat the net carbon benefit.
So how can we economically enable deep penetration of renewable generation? The emerging consensus is that much of this new generation must be placed at hundreds of thousands of locations in the distribution system, and that the attendant variability can be absorbed by the coordinated aggregation and control of distributed resources such as storage, programmable loads, and smart appliances. Tomorrow's grid will have an intelligent periphery.
We will explore the architectural and algorithmic components for managing this intelligent periphery. Clusters of distributed energy resources are coordinated to efficiently and reliably offer services (e.g., bulk power, regulation) in the face of uncertainty (e.g., renewables, consumers). We begin by formulating a general class of stochastic dynamic programming problems that arise in the context of coordinated aggregation. We then consider specific real-time scheduling problems for allocating power to resources. We show that no causal optimal policy exists that respects rate constraints (e.g., maximum EV charging rates). Next, we explore the benefits of coordinated aggregation in the metric of operating-reserve savings. We close by suggesting several challenging problems in monetizing and incentivizing resource participation.
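As a hedged illustration of the rate-constrained scheduling setting (job data, the power budget, and the earliest-deadline-first heuristic are all invented for illustration; the talk's negative result concerns optimal causal policies, which this simple heuristic is not), the sketch below allocates a shared power budget to EV charging jobs hour by hour:

```python
# Each job: (name, energy still needed in kWh, deadline in hours, max rate kW).
jobs = [
    ("ev1", 10.0, 4, 7.0),
    ("ev2",  6.0, 2, 3.3),
    ("ev3",  8.0, 6, 7.0),
]
budget = 10.0  # shared feeder capacity, kW

for hour in range(6):
    jobs.sort(key=lambda j: j[2])          # earliest deadline first
    remaining, schedule, new_jobs = budget, [], []
    for name, e, d, r in jobs:
        p = min(r, remaining, e)           # respect rate cap, budget, need
        remaining -= p
        if e - p > 1e-9:
            new_jobs.append((name, e - p, d, r))
        schedule.append((name, p))
    print("hour", hour, schedule)
    jobs = new_jobs
    if not jobs:
        break
```

Even this simple example shows the tension the talk formalizes: a causal scheduler must commit power now, without knowing future arrivals, while each job's maximum charging rate limits how much catching up is possible later.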
Cells are chemical reactors where reactions among molecules such as DNA, RNA, and proteins implement the sophisticated programs that support life. Biological molecules undergo thermal motion, and even when they have the propensity to react together, they only do so probabilistically. Therefore, biological processes are fundamentally stochastic because of the very nature of these probabilistic biochemical reactions. This stochasticity, which manifests as cell-to-cell variability in mRNA and protein levels even in clonal populations of genetically identical cells, is accurately quantifiable with modern experimental techniques. In this talk, I illustrate, using a number of examples, how an iterative cycle of rigorous computational modeling and quantitative experimentation can generate profound insights into the control mechanisms used by cells to dampen or exploit their stochastic fluctuations. I also discuss the challenges inherent in connecting measurements of stochastic biochemical circuits to their modeling and analysis, and highlight many exciting opportunities in this field.
Nanometer-length-scale analogues of most traditional control elements, such as sensors, actuators, and feedback controllers, have been enabled by recent advancements in device manufacturing and fundamental materials research. However, combining these new control elements in classical systems frameworks remains elusive. The methods needed to address the new generation of systems issues particular to nanoscale systems are termed here systems nanotechnology. This presentation discusses some promising control strategies and theories that have been developed to address the challenges that arise in systems nanotechnology. A selection of novel nanoscale devices is reviewed, chosen for their potential for broad application in nanoscale systems. Many of these devices use single-walled carbon nanotubes, which demonstrate the diversity of potential applications for a single type of nanoscale material. All of the elements necessary for the design of advanced control systems are available, including sensors to rapidly assess the physical characteristics of a system and support estimation of its states, actuators to affect the system states, and feedback controllers to use the state estimates to determine the signals to send to the actuators to satisfy control objectives. Specific examples are provided where the identification, estimation, and control of complex nanoscale systems have been demonstrated in experimental implementations or in high-fidelity simulations. Some control theory problems are also described that, if resolved, would facilitate further applications. Finally, some recent developments are described for addressing a major challenge that must be resolved for commercial manufacturing: improving the integration of nanoscale devices.
This talk will describe a control-enabled framework for the deployment of new hardware technologies (e.g., wind power plants, solar panels, responsive demand, smart wires) into power systems. We explain how the proposed control framework could evolve in synchrony with the existing utility control centers and their supervisory control and data acquisition (SCADA) systems. Much greater intelligence gets embedded into the new hardware technologies themselves for managing temporal complexities and uncertainties in a distributed way. Today's automation and control structure gets transformed into an interactive multi-layered system with information and intelligence distributed within and among the newly deployed hardware and SCADA applications. We describe how difficult spatial complexities (such as voltage support of the grid) can be coordinated by the SCADA system, provided that it is not overwhelmed with managing inter-temporal correlations in distributed resources. We discuss how such a control-enabled approach could improve the performance of different evolving power grid architectures. In particular, we show how carefully architected automation enables electricity service at value and according to choice. This is done while maintaining continuity of services defined according to terms between service providers and users. We illustrate dynamic deployment of wind and solar power, responsive demand, and plug-in hybrid electric vehicles, according to the value they bring to those needing them. Applications to regulated and restructured bulk power systems and to micro-grids are also outlined. These examples will demonstrate how the overall operations and planning process becomes much more manageable and simpler when enabled by the right control and communication systems.
Traditionally, the control of Earth satellites has relied, and still relies, on human intelligence at the ground station instead of computer intelligence on board the spacecraft. Recent developments in powerful space-qualified microcomputers, computer-aided software engineering tools, and failure-detection-and-identification techniques have displaced the equilibrium point in this trade-off toward more autonomy. In this context, the European Space Agency (ESA) initiated in the 1990s the PRoject for On-Board Autonomy (PROBA) with the objective of demonstrating the benefits of on-board autonomy. The program also prepared the way for missions where autonomy is essential, such as planetary exploration missions. This presentation will describe recent developments in the autonomous guidance, navigation and control (GNC) of space vehicles achieved through the PROBA program. It will review the innovative work that enabled the migration of the GNC functions from the ground control station to the spacecraft on-board computer, leading to the concept of the 'satellite for the non-expert'. In addition to the PROBA-1 and PROBA-2 accomplishments, which have accumulated more than 12 years of flight experience, the latest innovations in autonomous spacecraft control developed for PROBA-V and PROBA-3, both under development, will be briefly reviewed. The presentation will proceed with the recent extensions of the PROBA technology for application to robotic planetary exploration missions, where autonomy is an enabling technology for Orbiter, Lander, and Rover operations. The presentation will conclude with a review of the benefits and drawbacks of spacecraft autonomy observed so far via the PROBA program, and with an outlook on the remaining challenges to be addressed.
There are many success stories and major achievements in control, something remarkable for a field as young as 50 years. In this talk we will sample these accomplishments and speculate about the future prospects of control. While there are examples of feedback from ancient times, extensive use of feedback paralleled industrialization: steam, electricity, communication, transportation, etc. Control was established as a field in the period 1940-1960, when the similarities of control in widely different fields were recognized. Control constituted a paradigm shift in engineering that cut across the traditional engineering disciplines: mechanical, electrical, chemical, and aerospace. A holistic view of control systems with a unified theory emerged in the 1950s, triggered by military efforts during the Second World War. The International Federation of Automatic Control (IFAC) was formed in 1956. Education in control spread rapidly to practically all engineering disciplines, and conferences and journals also appeared. A second phase, driven by the space race and the emergence of computers, started around 1960. Theory developed dramatically, as did industrial applications. A large number of sub-specialties appeared and, perhaps due to this, the holistic view of the field was lost. In my opinion we are now entering a third phase driven by the ubiquitous use of control and a strong interest in feedback and control among fellow scientists in physics and biology. There are also new areas driven by networked systems, autonomy, and the safe design of large complex systems. What will happen next depends largely on how we respond to the new challenges and on how we manage to recapture the holistic view of systems. Key issues will be education and interaction with other disciplines.
This talk will focus on progress towards a more "unified" theory for complex networks involving several elements: hard limits on achievable robust performance ("laws"), the organizing principles that succeed or fail in achieving them (architectures and protocols), the resulting high-variability data and "robust yet fragile" behavior observed in real systems and case studies (behavior, data), and the processes by which systems evolve (variation, selection, design). Insights into what the potential universal laws, architectures, and organizational principles are can be drawn from three converging research themes. First, through detailed descriptions of components and a growing attention to systems in biology and neuroscience, the organizational principles of organisms and evolution are becoming increasingly clear. Biologists are articulating richly detailed explanations of biological complexity, robustness, and evolvability that point to universal principles and architectures. Second, while the components differ and the system processes are far less integrated, advanced technology's complexity is now approaching biology's, and there are striking convergences at the level of organization and architecture, and in the role of layering, protocols, and feedback control in structuring complex multiscale modularity. Third, new mathematical frameworks for the study of complex networks suggest that this apparent network-level evolutionary convergence within and between biology and technology is not accidental, but follows necessarily from their universal system requirements to be fast, efficient, adaptive, evolvable, and, most importantly, robust to perturbations in their environment and component parts. We have the beginnings of the underlying mathematical framework and also a series of case studies in classical problems in complexity from statistical mechanics, turbulence, cell biology, human physiology and medicine, neuroscience, wildfire, earthquakes, economics, the Internet, and the smart grid. A workshop preceding CDC will explore this in more detail (with Pablo Parrilo).
Selected references:
[1] Chiang M, Low SH, Calderbank AR, Doyle JC (2007) Layering as optimization decomposition: A mathematical theory of network architectures. Proceedings of the IEEE 95(1).
[2] Alderson DL, Doyle JC (2010) Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Trans Systems Man Cybernetics, Part A: Syst Humans 40:839-852.
[3] Sandberg H, Delvenne JC, Doyle JC (2011) On lossless approximations, the fluctuation-dissipation theorem, and limitations of measurements. IEEE Trans Auto Control, Feb 2011.
[4] Chandra F, Buzi G, Doyle JC (2011) Glycolytic oscillations and limits on robust efficiency. Science 333:187-192.
[5] Doyle JC, Csete ME (2011) Architecture, constraints, and behavior. P Natl Acad Sci USA, in press, available online.
[6] Gayme DF, McKeon BJ, Bamieh B, Papachristodoulou A, Doyle JC (2011) Amplification and nonlinear mechanisms in plane Couette flow. Physics of Fluids, in press (published online 17 June 2011).
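As a hedged illustration of reference [1]'s theme, the sketch below runs dual decomposition on a tiny network utility maximization problem: sources update their rates from link prices, and links update prices from congestion (the topology, capacities, and step size are invented for illustration):

```python
import numpy as np

# Two links, three flows, log utilities: maximize sum(log x) s.t. R x <= c.
R = np.array([[1, 1, 0],    # link 0 carries flows 0 and 1
              [1, 0, 1]])   # link 1 carries flows 0 and 2
c = np.array([1.0, 2.0])    # link capacities
lam = np.ones(2)            # link prices (dual variables)
step = 0.05

for _ in range(2000):
    price = R.T @ lam               # path price seen by each flow
    x = 1.0 / price                 # source layer: argmax log(x) - price*x
    lam = np.maximum(0.0, lam + step * (R @ x - c))  # link layer: price update

print("rates:", x)
print("link loads:", R @ x, "capacities:", c)
```

The point of the decomposition view is architectural: the source update and the link update are separate "layers" coupled only through prices, mirroring how congestion control protocols implicitly solve a global optimization.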
Cyber-physical systems combine a cyber side (computing and networking) with a physical side (mechanical, electrical, and chemical processes). Such systems present the biggest challenges as well as the biggest opportunities in several large industries, including electronics, energy, automotive, defense and aerospace, telecommunications, instrumentation, and industrial automation. Engineers today do successfully design cyber-physical systems in a variety of industries. Unfortunately, the development of these systems is costly, and development schedules are difficult to keep. The complexity of cyber-physical systems, and particularly the increased performance offered by interconnecting what in the past have been separate systems, increases the design and verification challenges. As the complexity of these systems increases, our inability to rigorously model the interactions between the physical and the cyber sides creates serious vulnerabilities. Systems become unsafe, with disastrous, inexplicable failures that could not have been predicted. Distributed control of multi-scale complex systems is largely an unsolved problem. A common view emerging in research programs in Europe and the US is "enabling contract-based design (CBD)," which formulates a broad and aggressive scope to address urgent needs in the systems industry. We present a design methodology and a few examples in controller design whereby contract-based design can be merged with platform-based design to formulate the design process as a meet-in-the-middle approach, where design requirements are implemented in a subsequent refinement process using as much as possible elements from a library of available components. Contracts are formalizations of the conditions for correctness of element integration (horizontal contracts), for lower levels of abstraction to be consistent with the higher ones, and for abstractions of available components to be faithful representations of the actual parts (vertical contracts).
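A hedged toy of the assume/guarantee machinery behind such contracts (the finite alphabet, the predicates, and the saturated refinement check are illustrative simplifications, not the methodology's actual formal language): a component contract refines a specification contract when it assumes less and guarantees more.

```python
from itertools import product

# Behaviors are (input, output) pairs over a tiny boolean alphabet.
ALPHABET = list(product([0, 1], repeat=2))

def contract(assume, guarantee):
    """A contract is a pair of behavior sets (assumptions A, guarantees G)."""
    return {"A": {w for w in ALPHABET if assume(w)},
            "G": {w for w in ALPHABET if guarantee(w)}}

def refines(c2, c1):
    # c2 refines c1 iff it accepts at least c1's assumptions (A1 <= A2)
    # and, within those assumptions, guarantees no more behaviors (G2 <= G1).
    return c1["A"] <= c2["A"] and (c2["G"] & c1["A"]) <= c1["G"]

spec      = contract(lambda w: w[0] == 1, lambda w: w[1] == 1)     # if input 1, output 1
component = contract(lambda w: True,      lambda w: w[1] == w[0])  # echoes its input

print("component refines spec:", refines(component, spec))  # True
```

The same subset checks, applied between peer components (horizontal) and between abstraction levels (vertical), are what make contract-based integration and refinement mechanically verifiable.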