Pressing environmental problems, energy supply security issues, and nuclear power safety concerns drive the worldwide interest in renewable energy. Renewable energy sources such as wind and solar are variable: they are not dispatchable, they fluctuate widely, and they are uncertain. This variability is the most important obstacle to deep integration of renewable generation. The current approach is to absorb this variability in operating reserves. This works at today's modest penetration levels, but it will not scale. At deep penetration levels (>30%), the necessary reserves become economically untenable and defeat the net carbon benefit.
So how can we economically enable deep penetration of renewable generation? The emerging consensus is that much of this new generation must be placed at hundreds of thousands of locations in the distribution system, and that the attendant variability can be absorbed by the coordinated aggregation and control of distributed resources such as storage, programmable loads, and smart appliances. Tomorrow's grid will have an intelligent periphery.
We will explore the architectural and algorithmic components for managing this intelligent periphery. Clusters of distributed energy resources are coordinated to efficiently and reliably offer services (e.g., bulk power, regulation) in the face of uncertainty (e.g., renewables, consumers). We begin by formulating a general class of stochastic dynamic programming problems that arise in the context of coordinated aggregation. We then consider specific real-time scheduling problems for allocating power to resources. We show that no causal optimal policy exists that respects rate constraints (e.g., maximum EV charging rates). Next, we explore the benefits of coordinated aggregation in the metric of operating reserve savings. We close by suggesting several challenging problems in monetizing and incentivizing resource participation.
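As a rough illustration of the kind of problem meant here (our sketch, not the speakers' formulation), a finite-horizon stochastic dynamic program for allocating power to a single rate-constrained resource might read:

\begin{align*}
\min_{\pi}\ & \mathbb{E}\left[\sum_{t=0}^{T-1} c_t(u_t, w_t)\right] \\
\text{s.t.}\ & x_{t+1} = x_t + u_t, \qquad 0 \le u_t \le \bar{u} \quad \text{(rate constraint, e.g., a maximum EV charging rate)}, \\
& x_T \ge d \quad \text{(energy delivered by the deadline)}, \\
& u_t = \pi_t(x_t, w_0, \dots, w_t) \quad \text{(causal policy)},
\end{align*}

where $w_t$ captures renewable and consumer uncertainty and $c_t$ is a per-stage cost; real-time scheduling problems of this type underlie the results described above.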
Cells are chemical reactors where reactions among molecules such as DNA, RNA, and proteins implement the sophisticated programs that support life. Biological molecules undergo thermal motion, and even when they have the propensity to react together, they only do so probabilistically. Biological processes are therefore fundamentally stochastic because of the very nature of these probabilistic biochemical reactions. This stochasticity, which manifests as cell-to-cell variability in mRNA and protein levels even in clonal populations of genetically identical cells, is accurately quantifiable with modern experimental techniques. In this talk, I illustrate, through a number of examples, how an iterative cycle of rigorous computational modeling and quantitative experimentation can generate profound insights into the control mechanisms used by cells to dampen or exploit their stochastic fluctuations. I also discuss the challenges inherent in connecting measurements of stochastic biochemical circuits to their modeling and analysis, and highlight many exciting opportunities in this field.
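For readers new to this kind of modeling, the sketch below simulates a minimal mRNA birth-death process with Gillespie's stochastic simulation algorithm; the model and its rate constants are illustrative assumptions, not taken from the talk.

import random

def gillespie_mrna(k_tx=2.0, k_deg=0.1, t_end=100.0, m0=0):
    """Simulate a birth-death process for mRNA copy number.

    k_tx  : transcription (birth) rate, molecules per unit time
    k_deg : degradation rate per molecule
    Returns event times and copy numbers.
    """
    t, m = 0.0, m0
    times, counts = [t], [m]
    while t < t_end:
        birth, death = k_tx, k_deg * m
        total = birth + death
        t += random.expovariate(total)        # waiting time to the next reaction
        if random.random() < birth / total:   # pick the reaction by its propensity
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return times, counts

# Two runs from the same initial condition illustrate cell-to-cell variability.
for run in range(2):
    _, counts = gillespie_mrna()
    print(f"run {run}: final mRNA count = {counts[-1]}")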
Nanometer-scale analogues of most traditional control elements, such as sensors, actuators, and feedback controllers, have been enabled by recent advancements in device manufacturing and fundamental materials research. However, combining these new control elements in classical systems frameworks remains elusive. Methods to address the new generation of systems issues particular to nanoscale systems are termed here systems nanotechnology. This presentation discusses some promising control strategies and theories that have been developed to address the challenges that arise in systems nanotechnology. A selection of novel nanoscale devices is reviewed, chosen for their potential for broad application in nanoscale systems. Many of these devices use single-walled carbon nanotubes, demonstrating the diversity of potential applications for a single type of nanoscale material. All of the elements necessary for the design of advanced control systems are available: sensors that rapidly assess physical characteristics and enable estimation of the states of a system, actuators that affect the system states, and feedback controllers that use the state estimates to determine the signals sent to the actuators to satisfy control objectives. Specific examples are provided where the identification, estimation, and control of complex nanoscale systems have been demonstrated in experimental implementations or in high-fidelity simulations. Some control theory problems are also described that, if resolved, would facilitate further applications. Finally, recent developments are described for addressing a major challenge that must be resolved for commercial manufacturing: improving the integration of nanoscale devices.
This talk will describe a control-enabled framework for the deployment of new hardware technologies (e.g., wind power plants, solar panels, responsive demand, smart wires) into power systems. We explain how the proposed control framework could evolve in synchrony with the existing utility control centers and their supervisory control and data acquisition (SCADA) systems. Much greater intelligence is embedded into the new hardware technologies themselves for managing temporal complexities and uncertainties in a distributed way. Today's automation and control structure is transformed into an interactive multi-layered system with information and intelligence distributed within and among the newly deployed hardware and SCADA applications. We describe how difficult spatial complexities (such as voltage support of the grid) can be coordinated by the SCADA system, provided that it is not overwhelmed with managing inter-temporal correlations in distributed resources. We discuss how such a control-enabled approach could improve the performance of different evolving power grid architectures. In particular, we show how carefully architected automation enables electricity service at value and according to choice, while maintaining continuity of services defined according to terms agreed between service providers and users. We illustrate the dynamic deployment of wind and solar power, responsive demand, and plug-in hybrid electric vehicles according to the value they bring to those needing them. Applications to regulated and restructured bulk power systems and to micro-grids are also outlined. These examples will demonstrate how the overall operations and planning process becomes much simpler and more manageable when enabled by the right control and communication systems.
Traditionally, the control of Earth satellites has relied, and still relies, on human intelligence at the ground station instead of computer intelligence on board the spacecraft. Recent developments in powerful space-qualified microcomputers, computer-aided software engineering tools, and failure-detection-and-identification techniques have shifted the equilibrium point in this trade-off toward more autonomy. In this context, the European Space Agency (ESA) initiated in the 1990s the PRoject for On-Board Autonomy (PROBA), with the objective of demonstrating the benefits of on-board autonomy. The program also prepared the way for missions where autonomy is essential, such as planetary exploration missions. This presentation will describe recent developments in the autonomous guidance, navigation and control (GNC) of space vehicles achieved through the PROBA program. It will review the innovative work that enabled the migration of the GNC functions from the ground control station to the spacecraft on-board computer, leading to the concept of the "satellite for the non-expert". In addition to the accomplishments of PROBA-1 and PROBA-2, which have accumulated more than 12 years of flight experience, the latest innovations in autonomous spacecraft control developed for PROBA-V and PROBA-3, both under development, will be briefly reviewed. The presentation will then turn to recent extensions of the PROBA technology to robotic planetary exploration missions, where autonomy is an enabling technology for Orbiter, Lander, and Rover operations. The presentation will conclude with a review of the benefits and drawbacks of spacecraft autonomy observed so far via the PROBA program, and with an outlook on the remaining challenges to be addressed.
There are many success stories and major achievements in control, something remarkable for a field as young as 50 years. In this talk we will sample these accomplishments and speculate about the future prospects of control. While there are examples of feedback from ancient times, extensive use of feedback paralleled industrialization: steam, electricity, communication, transportation, etc. Control was established as a field in the period 1940-1960, when the similarities of control problems in widely different fields were recognized. Control constituted a paradigm shift in engineering that cut across the traditional engineering disciplines: mechanical, electrical, chemical, aerospace. A holistic view of control systems with a unified theory emerged in the 1950s, triggered by military efforts during the Second World War. The International Federation of Automatic Control (IFAC) was formed in 1956. Education in control spread rapidly to practically all engineering disciplines, and conferences and journals appeared. A second phase, driven by the space race and the emergence of computers, started around 1960. Theory developed dramatically, as did industrial applications. A large number of sub-specialties appeared and, perhaps due to this, the holistic view of the field was lost. In my opinion we are now entering a third phase, driven by the ubiquitous use of control and a strong interest in feedback and control among fellow scientists in physics and biology. There are also new areas driven by networked systems, autonomy, and the safe design of large complex systems. What happens next depends largely on how we respond to the new challenges and on whether we manage to recapture the holistic view of systems. Key issues will be education and interaction with other disciplines.
The year 1948 was auspicious for information science and technology. Norbert Wiener's book Cybernetics was published by Wiley, the transistor was invented (and given its name), and Shannon's seminal paper "A Mathematical Theory of Communication" was published in the Bell System Technical Journal. In the years that followed, important ideas of Shannon, Wiener, von Neumann, Turing, and many others changed the way people thought about the basic concepts of control systems. Hendrik Bode himself was a Shannon collaborator on a paper on smoothing and prediction published in the Proceedings of the IRE in 1950. It is thus not surprising that by the time the earliest direct predecessor of the CDC (the Discrete Adaptive Processes Symposium) was held in New York in June 1962, concepts from machine intelligence and information theory were not at all foreign to the control community. This talk will examine the interwoven evolution of control and information over the past fifty years, during which time the IEEE Conference on Decision and Control went from infancy to maturity. The talk will also discuss two new areas in information-based control. In collaboration with W. S. Wong, some recent work on control communication complexity has been aimed at a new class of optimal control problems in which distributed agents communicate through the dynamics of a control system in such a way that the control cost is minimized over many messages. Applications of the theory to robot communication through relative motions (e.g., robot dancing and team sports) and to distributed control of semi-classical models of quantum systems will be discussed. The talk will also discuss some recently discovered links between information and the differential topology of smooth random fields (joint work with D. Baronov). The latter work has been applied to a problem of rapid information acquisition in robotic reconnaissance, and it has suggested metrics for assessing the tradeoff between speed and accuracy.
This talk will focus on progress toward a more "unified" theory for complex networks involving several elements: hard limits on achievable robust performance ("laws"), the organizing principles that succeed or fail in achieving them (architectures and protocols), the resulting high-variability data and "robust yet fragile" behavior observed in real systems and case studies (behavior, data), and the processes by which systems evolve (variation, selection, design). Insights into what the potential universal laws, architectures, and organizational principles are can be drawn from three converging research themes. First, with detailed descriptions of components and a growing attention to systems in biology and neuroscience, the organizational principles of organisms and evolution are becoming increasingly clear. Biologists are articulating richly detailed explanations of biological complexity, robustness, and evolvability that point to universal principles and architectures. Second, while the components differ and the system processes are far less integrated, advanced technology's complexity is now approaching biology's, and there are striking convergences at the level of organization and architecture, and in the role of layering, protocols, and feedback control in structuring complex multiscale modularity. Third, new mathematical frameworks for the study of complex networks suggest that this apparent network-level evolutionary convergence within and between biology and technology is not accidental, but follows necessarily from their universal system requirements to be fast, efficient, adaptive, evolvable, and, most importantly, robust to perturbations in their environment and component parts. We have the beginnings of the underlying mathematical framework and also a series of case studies in classical problems in complexity from statistical mechanics, turbulence, cell biology, human physiology and medicine, neuroscience, wildfire, earthquakes, economics, the Internet, and the smart grid. A workshop preceding CDC (with Pablo Parrilo) will explore this in more detail. Selected references: [1] M. Chiang, S. H. Low, A. R. Calderbank, J. C. Doyle (2007) Layering as optimization decomposition. Proceedings of the IEEE 95(1), Jan 2007. [2] D. L. Alderson, J. C. Doyle (2010) Contrasting views of complexity and their implications for network-centric infrastructures. IEEE Trans Systems Man Cybernetics—Part A: Syst Humans 40:839-852. [3] H. Sandberg, J. C. Delvenne, J. C. Doyle (2011) On lossless approximations, the fluctuation-dissipation theorem, and limitations of measurements. IEEE Trans Auto Control, Feb 2011. [4] F. Chandra, G. Buzi, J. C. Doyle (2011) Glycolytic oscillations and limits on robust efficiency. Science 333:187-192. [5] J. C. Doyle, M. E. Csete (2011) Architecture, constraints, and behavior. P Natl Acad Sci USA, in press, available online. [6] D. F. Gayme, B. J. McKeon, B. Bamieh, A. Papachristodoulou, J. C. Doyle (2011) Amplification and nonlinear mechanisms in plane Couette flow. Physics of Fluids, in press (published online 17 June 2011).
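As one concrete instance of the "layering as optimization decomposition" theme of reference [1], the basic network utility maximization problem and its dual decomposition can be sketched (in simplified form, our notation) as:

\[
\max_{x \ge 0} \ \sum_{s} U_s(x_s) \quad \text{s.t.} \quad Rx \le c,
\qquad
L(x,\lambda) = \sum_{s} U_s(x_s) - \lambda^{\mathsf T}(Rx - c),
\]

where $R$ is the routing matrix and $c$ the vector of link capacities. For a given price vector $\lambda$, each source solves $\max_{x_s \ge 0}\, U_s(x_s) - x_s \sum_{l \in \mathcal{L}(s)} \lambda_l$ using only the prices on its own path, and each link updates $\lambda_l$ from its local load; this separation into source-level and link-level computations is what the layering interpretation refers to.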
Cyber-physical systems combine a cyber side (computing and networking) with a physical side (mechanical, electrical, and chemical processes). Such systems present the biggest challenges as well as the biggest opportunities in several large industries, including electronics, energy, automotive, defense and aerospace, telecommunications, instrumentation, and industrial automation. Engineers today do successfully design cyber-physical systems in a variety of industries. Unfortunately, development is costly and schedules are difficult to meet. The complexity of cyber-physical systems, and particularly the increased performance offered by interconnecting what in the past have been separate systems, increases the design and verification challenges. As the complexity of these systems grows, our inability to rigorously model the interactions between the physical and the cyber sides creates serious vulnerabilities: systems become unsafe, with disastrous, inexplicable failures that could not have been predicted. Distributed control of multi-scale complex systems is largely an unsolved problem. A common view emerging in research programs in Europe and the US is "enabling contract-based design (CBD)", which formulates a broad and aggressive scope to address urgent needs in the systems industry. We present a design methodology, and a few examples in controller design, whereby contract-based design is merged with platform-based design to formulate the design process as a meet-in-the-middle approach, in which design requirements are implemented in a subsequent refinement process using as much as possible elements from a library of available components. Contracts are formalizations of the conditions for correctness of element integration (horizontal contracts), for lower levels of abstraction to be consistent with the higher ones, and for abstractions of available components to be faithful representations of the actual parts (vertical contracts).
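A minimal formal sketch of the assume-guarantee contracts referred to here (the notation is ours and deliberately simplified):

\[
C = (A, G), \qquad M \models C \iff M \cap A \subseteq G, \qquad
C_1 \preceq C_2 \iff A_2 \subseteq A_1 \ \text{and} \ G_1 \subseteq G_2,
\]

where $A$ and $G$ are sets of behaviors (assumptions on the environment and guarantees of the component), $M \models C$ says that a component implements the contract, and the refinement relation $\preceq$ underlies vertical contracts: any implementation of $C_1$ may replace an implementation of $C_2$ in an environment satisfying $A_2$. Horizontal contracts arise when components are composed and each component's assumptions are discharged against the guarantees of the others.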
Fifty years ago, control and computing were part of a broader system science. After a long period of intra-disciplinary development, which left control and computing distant from each other, embedded and hybrid systems have challenged us to unite the now-developed theories of continuous control and discrete computing on a broader system-theoretic basis. In this talk, we will present a notion of system approximation that applies to both discrete and continuous systems by developing notions of approximate language inclusion, approximate simulation, and approximate bisimulation relations. We define a hierarchy of approximation pseudo-metrics between two systems that quantify the quality of the approximation and capture the established notions in computer science as zero sections. Algorithms are developed for computing the proposed pseudo-metrics, both exactly and approximately. The exact algorithms require the generalization of the fixed-point algorithms for computing simulation and bisimulation relations or, dually, the solution of a static game whose cost is the so-called branching distance between the systems. Approximations of the pseudo-metrics can be obtained by considering Lyapunov-like functions called simulation and bisimulation functions. Our approximation framework will be illustrated in problems such as safety verification for continuous systems, approximation of nonlinear systems by discrete systems, and hierarchical control design.
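To make the Lyapunov-like notion concrete, one common form of a simulation function between two systems with states $x_1, x_2$ and outputs $y_1, y_2$ (stated loosely here, as background rather than as the talk's exact definitions) is a nonnegative function $V$ satisfying

\[
V(x_1, x_2) \ \ge\ \|y_1 - y_2\|, \qquad
\frac{d}{dt} V\big(x_1(t), x_2(t)\big) \ \le\ 0 \ \text{along suitably matched trajectories},
\]

so that $\sup_{t \ge 0} \|y_1(t) - y_2(t)\| \le V(x_1(0), x_2(0))$; minimizing such bounds over matched initial states yields a pseudo-metric that vanishes, roughly speaking, exactly when one system (bi)simulates the other in the classical, exact sense.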
A distributed system consists of an interconnection of two or more subsystems. Control of such systems is structured around two or more controllers, each receiving an observation stream from a local subsystem and providing an input to that local subsystem. The control objectives mostly refer to the interaction of the subsystems in the global system. Examples of distributed control systems include: the control of autonomous underwater vehicles, with the problem of coordinating the activities of the vehicles; the control of road networks, with a hierarchical-distributed system for coordinating different control measures; the control of automated guided vehicles on a container terminal, for safety and for efficiency; and the control of large, complex machines, with the problem of controlling parallel operations using several actuators and sensors. Control synthesis of distributed systems will be described for the following control architectures: distributed control, often leading to a game-theoretic approach; distributed control with communication between controllers, in which the emphasis is on what, when, and to whom to communicate; coordination control, with attention to the coordination aspects between subsystems; and hierarchical control of a hierarchically structured system. A research program will be described for control of distributed systems and of hierarchical systems. The lecture is based on the project Control for Coordination of Distributed Systems (C4C; sponsored by the European Commission, INFSO-ICT-223844).
We consider the NP-hard problem of finding a minimum-norm vector in $n$-dimensional real or complex Euclidean space, subject to $m$ concave homogeneous quadratic constraints. We show that the semidefinite programming (SDP) relaxation for this nonconvex quadratically constrained quadratic program provides an $O(m^2)$ approximation in the real case and an $O(m)$ approximation in the complex case. Moreover, we show that these bounds are tight up to a constant factor. When the Hessian of each constraint function is of rank one (namely, outer products of some given so-called {\it steering} vectors) and the phase spread of the entries of these steering vectors is bounded away from $\pi/2$, we establish a certain "constant factor" approximation (depending on the phase spread but independent of $m$ and $n$) for both the SDP relaxation and a convex QP restriction of the original NP-hard problem. When the homogeneous quadratic constraints are separable and $m=n$, we show that the SDP relaxation is actually tight. All theoretical results will be illustrated through a transmit beamforming application in wireless communication.
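In one standard form (our paraphrase of the setup, written for the complex case with $H_i \succeq 0$), the problem and its SDP relaxation read:

\begin{align*}
&\min_{z \in \mathbb{C}^n} \ \|z\|^2 \quad \text{s.t.} \quad z^{\mathsf H} H_i z \ge 1, \quad i = 1, \dots, m, \\
&\min_{Z \succeq 0} \ \operatorname{tr}(Z) \quad \text{s.t.} \quad \operatorname{tr}(H_i Z) \ge 1, \quad i = 1, \dots, m,
\end{align*}

where the relaxation replaces $Z = z z^{\mathsf H}$ by $Z \succeq 0$, i.e. drops the rank-one constraint; the approximation ratios quoted above bound how far the optimal value of the original problem can lie above that of its relaxation.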
Recent advances in experimental techniques have made it possible to generate an enormous amount of `raw' biological data, with cancer biology being no exception. The main challenge faced by cancer biologists now is the generation of plausible hypotheses that can be evaluated against available data and/or validated through further experimentation. For persons trained in control theory, there is now a significant opportunity to work with biologists to create a virtuous cycle of hypothesis generation and experimentation. In this talk, we discuss four specific problems in cancer biology that are amenable to study using probabilistic methods. These are: reverse engineering gene regulatory networks, constructing context-specific gene regulatory networks, analyzing the significance of expression levels for collections of genes, and discriminating between drivers (mutations that cause cancer) and passengers (mutations that are caused by cancer or have no impact). Some research problems that merit the attention of the controls community are also suggested.
Recent policies, combined with the potential for technological innovation and business opportunities, have attracted a high level of interest in smart grids. The prospect of a highly distributed system with a high penetration of intermittent sources poses both opportunities and challenges. Any complex dynamic infrastructure network typically has many layers and decision-making units, and is vulnerable to various types of disturbances. Effective, intelligent, distributed control is required to enable parts of the network to remain operational and even automatically reconfigure in the event of local failures or threats of failure. A major challenge is posed by the lack of a unified mathematical framework with robust tools for modeling, simulation, control, and optimization of time-critical operations in complex multicomponent and multiscale networks. Mathematical models of such complex systems are typically vague or may not even exist; moreover, existing and classical methods of solution are either not available or not sufficiently powerful. From a strategic R&D viewpoint, how do we retrofit and engineer a stable, secure, resilient grid with large numbers of such unpredictable power sources? What roles will asset optimization, increased efficiency, energy storage, advanced power electronics, power quality, electrification of transportation, novel control algorithms, communications, and cyber and infrastructure security play in the grid of the future? What are the emerging technologies that will enable new products, services, and markets? In this presentation, we will give an overview of smart grids and recent advances in distributed sensing, modeling, and control, both at the high-voltage power grid and at the consumer level. Such advances may contribute to the development of effective, intelligent, distributed control of power system networks, with a focus on the distributed sensing, computation, estimation, control, and dynamical systems challenges and opportunities ahead.
We live in a "distributed world" made of countless "nodes" (cities, computers, people, and so on) connected by a dense web of transportation, communication, and social ties. The term "network", describing such a collection of nodes and links, has nowadays become common thanks to our extensive reliance on "connections of interdependent systems" in everyday life, in building complex technical systems and infrastructures, and so on. On an increasingly "smarter" planet, systems are expected to be safe, reliable, available 24/7, and, if possible, cheap to maintain. In this connection, monitoring and fault diagnosis are of paramount importance to ensure high levels of safety, performance, reliability, dependability, and availability. Indeed, faults and malfunctions can result, in industrial plants alone, in off-specification production, increased operating costs, line shutdowns, danger for humans, detrimental environmental impact, and so on. Faults and malfunctions should be detected promptly, and their source and severity should be diagnosed so that corrective actions can be taken. This lecture deals with an on-line, approximation-based, distributed fault diagnosis approach for large-scale nonlinear systems that exploits a "divide and conquer" strategy: the overall diagnosis problem is decomposed into smaller subproblems, simple enough to be solved within the existing computation and communication architectures. The distributed detection, isolation, and identification task is broken down and assigned to a network of "Local Diagnostic Units", each having a different, local view of the system: they are allowed to communicate with each other and to cooperate on the diagnosis of system components that may be shared, thus yielding a global diagnosis decision. The lecture will also address issues and perspectives in the paradigmatic industrial context of safety-critical process control systems.
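A deliberately simplified sketch of one such Local Diagnostic Unit, using a generic observer-based residual with a threshold test (the function names, the assumption of full-state measurement, and the constant scalar gain and threshold are our illustrative choices, not the lecture's actual scheme):

import numpy as np

def local_diagnostic_unit(y_meas, u_seq, x_hat0, f_nominal, L, threshold):
    """Residual-based fault detection for one subsystem of a larger network.

    y_meas    : sequence of local measurements (assumed to measure the full local state)
    u_seq     : sequence of local inputs, including estimates of interconnection signals
    f_nominal : nominal local dynamics, x_next = f_nominal(x, u)
    L         : scalar observer gain (kept scalar for simplicity)
    threshold : detection threshold on the residual norm
    Returns the first time index at which a fault is flagged, or None.
    """
    x_hat = np.asarray(x_hat0, dtype=float)
    for k, (y, u) in enumerate(zip(y_meas, u_seq)):
        residual = np.asarray(y, dtype=float) - x_hat
        if np.linalg.norm(residual) > threshold:
            return k                                   # detection; isolation/identification would follow
        x_hat = f_nominal(x_hat, u) + L * residual     # observer update with output injection
    return None

In the distributed setting described above, several such units would also exchange estimates of shared interconnection variables and cooperate to turn their local decisions into a global diagnosis.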
Concept abstraction is an important component of intelligence. Scientists today still do not know how the brain accomplishes it. In this talk we compare some recent mathematical results about random walks on manifolds and graphs with the features of concept abstraction processes to seek understanding of the algorithms involved.
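For background only (standard definitions, not the specific results discussed in the talk): a random walk on a weighted graph with adjacency matrix $A$ and diagonal degree matrix $D$ moves according to the transition matrix

\[
P = D^{-1} A, \qquad \pi_i = \frac{d_i}{\sum_j d_j},
\]

where $\pi$ is the stationary distribution of the walk and the spectral gap of $P$ controls its mixing time; analogous diffusion processes can be defined on manifolds, which is the setting the talk compares with concept abstraction.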
Switched systems with positivity constraints arise in various areas. They have been fruitfully employed to model consensus problems, biological systems dynamics, and recently, viral mutation dynamics under drug treatment. The theory of "positive switched systems" is rather challenging and offers quite a number of interesting open problems. In the talk, we will illustrate the main results available as far as stability, stabilizability, and controllability issues are concerned. Some open problems will be proposed, and some applications of this general theory in the area of biological systems will be illustrated.
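As a concrete illustration of the objects involved (standard definitions, not the talk's results), a continuous-time positive switched system and one classical stability certificate can be written as

\[
\dot{x}(t) = A_{\sigma(t)} x(t), \qquad \sigma(t) \in \{1, \dots, M\},
\]

where each $A_i$ is Metzler (nonnegative off-diagonal entries), so that the nonnegative orthant is invariant. One well-known sufficient condition for stability under arbitrary switching is the existence of a vector $v \succ 0$ with $v^{\mathsf T} A_i \prec 0$ for all $i$ (entrywise), i.e. a common linear copositive Lyapunov function $V(x) = v^{\mathsf T} x$.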
Designs in systems and control are traditionally carried out through deterministic algorithms consisting of a sequence of steps set by deterministic rules. This approach, however, can be generalized by the introduction of randomization: a randomized algorithm is an algorithm where one or more steps are based on a random rule, that is – among many deterministic rules – one rule is selected according to a random scheme. Randomization has turned out to be a powerful tool for solving a number of problems deemed unsolvable with deterministic methods.
A crucial fact is that randomization permits one to introduce the notion of "probabilistically successful algorithm". In many cases, when deterministic successfulness cannot be achieved, probabilistic successfulness offers a valid alternative.
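As a minimal sketch of what a probabilistically successful algorithm can look like in this setting (the Hoeffding-type sample bound is standard; the performance check and uncertainty set below are toy placeholders, not an example from the talk):

import math
import random

def estimate_violation_probability(performance_ok, sample_uncertainty,
                                   epsilon=0.02, delta=1e-3):
    """Monte Carlo estimate of the probability that a design violates its spec.

    performance_ok     : function mapping an uncertainty sample to True if the spec holds
    sample_uncertainty : function returning one random uncertainty sample
    epsilon, delta     : accuracy and confidence of the estimate
    The additive Hoeffding bound gives a sample size N such that the estimate is
    within epsilon of the true probability with confidence at least 1 - delta.
    """
    n_samples = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
    violations = sum(1 for _ in range(n_samples)
                     if not performance_ok(sample_uncertainty()))
    return violations / n_samples

# Toy usage: uncertain gain g uniform in [0.5, 1.5]; the spec is g < 1.4.
p_hat = estimate_violation_probability(
    performance_ok=lambda g: g < 1.4,
    sample_uncertainty=lambda: random.uniform(0.5, 1.5))
print(f"estimated violation probability: {p_hat:.3f}")

The point of the bound is that the required number of samples depends only on epsilon and delta, not on the dimension of the uncertainty, which is what makes the randomized approach successful in a probabilistic sense.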
In the talk, the use of randomized algorithms will be discussed in relation to several problems: