General Electric (GE) has embarked on a journey of merging controls and big data to create brilliant machines and systems that will unlock unprecedented performance through self-improvement and self-diagnosis. In this talk we will review the evolution of industrial controls, their growing complexity and scale, ultimately leading to the systems of systems that GE refers to as the Industrial Internet. We will also consider challenges and opportunities related to interoperability, security, stability, and system resilience, and discuss specific development cases around infrastructure optimization, including grid power, rail networks, and flight efficiency.
The number of devices connected to the Internet exceeded the number of people on the Internet in 2008 and is projected to reach 50 billion in 2020; worldwide smart power meter deployment is expected to grow from 130 million in 2011 to 1.5 billion in 2020; and 90% of new vehicles sold in 2020 will have on-board connectivity platforms, compared with 10% in 2012. The Industrial Internet will deliver new efficiency gains, accelerating productivity growth much the same way that the Industrial Revolution and the Internet Revolution did. Controls are at the heart of this new revolution.
Powerful model-based control tools have enabled the realization of modern clean and efficient automobiles. Our effort to minimize automotive pollution and fuel consumption at low cost is pushing us to control powertrain systems at their high-efficiency limit, where poorly understood phenomena occur. This semi-plenary talk will highlight a few of these frontiers.
In the realm of internal combustion engines, a highly efficient gasoline engine with seemingly chaotic behavior will be controlled at the lean combustion stability limit. In the electrification realm, stressed-out batteries and dead-ended fuel cells will highlight the challenges and successes in understanding, modeling, and controlling highly efficient power conversion on board a vehicle. With these highlights it will become clear that, as we race to improve mileage by 50% over the next decade, powertrain control engineers will take the driver's seat!
In the evolution of systems and control there has been an interesting interplay between the demands of new applications initiating the development of new methodologies and theories, and, conversely, theoretical or hardware developments enabling new applications. This talk will attempt to describe progress and interactions in this relationship. The role of design methodologies and theoretical results in the design of practical control systems will also be discussed. Examples will be taken from flight control and automotive engine management, together with some emerging areas.
My answer is "yes." In this lecture, I will make the case that there are some important open problems in finance which are ideally suited for researchers who are well versed in control theory. To this end, I will begin the presentation by quickly explaining what is meant by the notion of "technical analysis" in the stock market. Then I will address, from a control-theoretic point of view, a longstanding conundrum in finance: Why is it that so many asset managers, hedge funds and individual investors trade stock using technical analysis techniques despite the existence of a significant body of literature claiming that such methods are of questionable worth with little or no theoretical rationale? In fact, detractors describe such stock trading methods as "voodoo" and an "anathema to the academic world." To date, in the finance literature, the case for "efficacy" of such stock-trading strategies is based on statistics and empirical back-testing using historical data. With these issues providing the backdrop, my main objective in this lecture is to describe a new theoretical framework for stock trading - based on technical analysis and involving some simple ideas from robust and adaptive control. In contrast to the finance literature, where conclusions are drawn based on statistical evidence from the past, our control-theoretic point of view leads to robust certification theorems describing various aspects of performance. To illustrate how such a formal theory can be developed, I will describe results obtained to date on trend following, one of the most well-known technical analysis strategies in use. Finally, it should be noted that the main point of this talk is not to demonstrate that control-theoretic considerations lead to new "market beating" algorithms. It is to argue that strategies which have heretofore been analyzed via statistical processing of empirical data can actually be studied in a formal theoretical framework.
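To give a concrete flavor of such feedback-based trading, here is a minimal Python sketch of a linear trend-following rule in which the invested amount is itself a feedback function of the cumulative gain. The gain K, the baseline investment I0, and the simulated price path are purely illustrative assumptions; none of the certification theorems mentioned above are reproduced here.

```python
# A toy linear-feedback trend-following rule (hypothetical illustration):
# the dollar amount invested is a feedback function of the cumulative gain,
# so exposure compounds during a sustained trend and shrinks after losses.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trend_follower(prices, K=4.0, I0=100.0):
    """Track the gain g(t) under the feedback law I(t) = I0 + K * g(t)."""
    g = 0.0
    for p_prev, p_next in zip(prices[:-1], prices[1:]):
        I = I0 + K * g                       # feedback law: invest more as gains grow
        g += I * (p_next - p_prev) / p_prev  # mark-to-market gain update
    return g

# Geometric-Brownian-motion-style price path with a small upward drift (a "trend").
steps = 250
returns = rng.normal(loc=0.0005, scale=0.01, size=steps)
prices = 100.0 * np.cumprod(1.0 + returns)

print(f"final gain: {simulate_trend_follower(prices):.2f}")
```

The feedback structure, rather than any statistical forecast of the price, is what a control-theoretic analysis can certify properties about; this toy only shows the mechanics of the rule.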
Recent years have witnessed significant interest in the area of distributed-architecture control systems, with applications ranging from autonomous vehicle teams to communication networks to the smart grid. The general setup is a collection of multiple decision-making components interacting locally to achieve a common collective objective. While such architectures readily suggest game theory as a relevant formalism, game theory is better known for its traditional role as a "descriptive" modeling framework in the social sciences rather than a "prescriptive" design tool for engineered systems. This talk begins with an overview of how game theory can be used as an effective design approach for distributed-architecture control systems, with illustrative examples of distributed coordination. Inspired by newfound connections, the talk continues with a discussion of how methods from systems and control can shed new light on more traditional questions in game theory, specifically regarding evolutionary games and agent-based modeling.
Wind energy is recognized worldwide as cost-effective and environmentally friendly and is among the world's fastest-growing sources of electrical energy. Despite the amazing growth in global wind power installations in recent years, science and engineering challenges still exist. It is commonly reported that the variability of wind is a major obstacle to integrating large amounts of wind energy into the utility grid. Wind's variability creates challenges because power supply and demand must match in order to maintain a constant grid frequency. As wind energy penetration increases to higher levels in many countries, however, systems and control techniques can be used to actively control the power generated by wind turbines and wind farms to help regulate the grid frequency. In this talk, we will first provide an overview of wind energy systems by introducing the primary structural components and operating regions of wind turbines. The operation of the utility grid will be outlined by discussing the electrical system, explaining the importance of preserving grid reliability through controlling the grid frequency (which is a measure of the balance between electrical generation and load), and describing the methods of providing ancillary services for frequency support using conventional generation utilities. We will then present a vision for how wind turbines and wind farms can be controlled to help stabilize and balance the frequency of the utility grid, and we will highlight control methods being developed in industry, national laboratories, and academia for providing active power ancillary services with wind energy. Results of simulation studies as well as experimental field tests will be presented to show the promise of the techniques being developed. We will close by discussing future research avenues to enable widespread adoption of active power control services provided by wind farms, and how advanced distributed capabilities can reduce the integration cost of wind energy and enable much higher wind energy penetrations while simultaneously maintaining and possibly increasing the reliability of the utility grid.
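As a hedged illustration of the frequency-support idea, the sketch below simulates a one-bus swing-equation model in which a derated wind plant responds to frequency deviations through a proportional (droop) characteristic, one of the standard mechanisms for active power ancillary services. All parameter values are hypothetical, and the model is far simpler than those used in the studies described above.

```python
# Minimal sketch of droop-based active power control from a derated wind plant.
# Hypothetical per-unit parameters; not taken from the talk.
import numpy as np

H = 5.0          # system inertia constant (s)
D = 1.0          # load damping (pu power per pu frequency)
R = 0.05         # droop: 5% frequency change <-> 100% power change
P_reserve = 0.1  # headroom held by the derated wind plant (pu)

dt, T = 0.01, 20.0
f_dev = 0.0      # frequency deviation from nominal (pu)
dP_load = 0.05   # step load increase at t = 0 (pu)

for t in np.arange(0.0, T, dt):
    # Wind plant droop response, limited by its reserve headroom.
    dP_wind = np.clip(-f_dev / R, -P_reserve, P_reserve)
    # Swing equation: 2H * d(f_dev)/dt = P_gen - P_load - D * f_dev
    f_dev += dt * (dP_wind - dP_load - D * f_dev) / (2.0 * H)

print(f"steady-state frequency deviation: {f_dev:.4f} pu")
```

Without the wind plant's droop term the deviation settles at -dP_load / D; with it, the steady-state error shrinks by roughly a factor of 1 + 1/(R * D), which is the sense in which controlled wind power "helps regulate the grid frequency."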
In many problems in control, such as optimal and robust control, one has to solve global optimization problems of the form P: f* = min_x { f(x) : x ∈ K }, or, equivalently, f* = max{ λ : f - λ ≥ 0 on K }, where f is a polynomial (or even a semi-algebraic function) and K is a basic semi-algebraic set. One may even need to solve the "robust" version min{ f(x) : x ∈ K; h(x, u) ≥ 0, ∀u ∈ U }, where U is a set of parameters. For instance, some static output feedback problems can be cast as polynomial optimization problems whose feasible set K is defined by a polynomial matrix inequality (PMI). And robust stability regions of linear systems can be modeled as parametrized polynomial matrix inequalities (PMIs), where the parameters u account for uncertainties and the (decision) variables x are the controller coefficients.
Therefore, to solve such problems one needs tractable characterizations of polynomials (and even semi-algebraic functions) which are nonnegative on a set, a topic of independent interest and of primary importance because it also has implications in many other areas.
We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set K ⊂ R^n. The first type of characterization applies when knowledge of K is through its defining polynomials, i.e., K = {x : gj(x) ≥ 0, j = 1, . . . , m}, in which case some powerful certificates of positivity can be stated in terms of sums-of-squares (SOS)-weighted representations. For instance, this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to f* (and in fact, finite convergence is generic). There is also another way of looking at nonnegativity, where knowledge of K is now through the moments of a measure whose support is K. In this case, checking whether a polynomial is nonnegative on K reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied to P, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on K.
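As a minimal, self-contained illustration of the first (SOS) characterization, the sketch below solves the lowest level of the hierarchy for a univariate polynomial using the cvxpy modeling package (assuming it and an SDP solver are installed); in the univariate case the SOS certificate is exact, so the computed bound equals the global minimum.

```python
# Lowest level of the SOS hierarchy for f(x) = x^4 - 3x^2 + 1 on K = R:
# maximize lambda such that f - lambda = z'Qz with z = (1, x, x^2) and Q >= 0.
# For univariate polynomials nonnegativity equals SOS, so the bound is
# exactly the global minimum f* = -1.25 (attained at x^2 = 3/2).
import cvxpy as cp

Q = cp.Variable((3, 3), symmetric=True)  # Gram matrix of the SOS certificate
lam = cp.Variable()

# Match coefficients of f(x) - lambda with those of z'Qz, degree by degree.
constraints = [
    Q >> 0,                       # positive semidefiniteness
    Q[0, 0] == 1 - lam,           # constant term
    2 * Q[0, 1] == 0,             # x
    2 * Q[0, 2] + Q[1, 1] == -3,  # x^2
    2 * Q[1, 2] == 0,             # x^3
    Q[2, 2] == 1,                 # x^4
]
cp.Problem(cp.Maximize(lam), constraints).solve()
print(f"certified lower bound: {lam.value:.4f}")  # about -1.25
```

For multivariate problems on a set K with constraints gj, the same coefficient-matching idea is applied to an SOS-weighted combination of the gj, and increasing the degree of the certificate produces the monotone sequence of lower bounds mentioned above.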
There has been remarkable progress in sampled-data control theory in the last two decades. The main achievement is the existence of a digital (discrete-time) control law that takes the intersample behavior into account and makes the overall analog (continuous-time) performance optimal in the sense of the H-infinity norm. This naturally suggests its application to digital signal processing, where the same hybrid nature of analog and digital is always prevalent. A crucial observation here is that the perfect band-limiting hypothesis, widely accepted in signal processing, is often inadequate for many practical situations. In practice, the original analog signals (sounds, images, etc.) are neither fully band-limited nor even close to being band-limited under current processing standards.
The present talk describes how sampled-data control theory can be applied to reconstruct the lost high-frequency components beyond the so-called Nyquist frequency, and how this new method can surpass the existing signal processing paradigm. We will review some concrete examples in sound processing, recovery of high-frequency components in MP3/AAC-compressed audio signals, and super-resolution for (still and moving) image processing. We will also review some crucial steps in bringing this technology to commercial success in the form of 40 million sound processing chips.
Advanced motion systems like pick-and-place machines used in the semiconductor industry challenge the frontiers of systems and control theory and practice. In the design phase, control-oriented design of the electro-mechanics is necessary in order to achieve the tight performance specifications. Once realized, and since experimentation is fast, a machine-in-the-loop procedure can be explored to close the design loop from experiment, using experimental model building, model-based control design, implementation, and performance evaluation. Extension of linear modeling techniques towards some classes of nonlinear systems is relevant for improved control of specific motion systems, such as those with friction. In the application field of medical robotics, the experiences from high-tech motion systems can be used successfully, and an eye surgical robot with haptics will be shown as an example.
The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Game theory is beginning to emerge as a valuable set of tools for achieving this goal, as many popular multiagent systems can be modeled as games, e.g., sensor coverage, consensus, and task allocation, among others. Game theory is a well-established discipline in the social sciences that is primarily used for modeling social behavior. Traditionally, the preferences of the individual agents are modeled as utility functions, and the resulting behavior is assumed to be an equilibrium concept associated with these modeled utility functions, e.g., a Nash equilibrium. This is in stark contrast to the role of game theory in engineering systems, where the goal is to design both the agents' utility functions and an adaptation rule such that the resulting global behavior is desirable. The transition of game theory from a modeling tool for social systems to a design tool for engineering promotes several new research directions that we will discuss in this talk. In particular, we will focus on the question of how to design admissible agent utility functions such that the resulting game possesses desirable properties, e.g., the existence and efficiency of pure Nash equilibria. Our motivation for considering pure Nash equilibria stems from the fact that adaptation rules can frequently be derived which guarantee that the collective behavior will converge to such pure Nash equilibria. Our first result focuses on ensuring the existence of pure Nash equilibria for a class of separable resource allocation problems that can model a wide array of applications including facility location, routing, network formation, and coverage problems. Within this class, we prove that weighted Shapley values completely characterize the space of local utility functions that guarantee the existence of a pure Nash equilibrium. That is, if a utility design cannot be represented as a weighted Shapley value, then there exists a game for which a pure Nash equilibrium does not exist. Another concern is distributed versus centralized efficiency. Once distributed agents have settled on an equilibrium, the resulting performance need not be the same as from a centralized design (cf. the so-called "price of anarchy"). We compare different utility design methods and their resulting effect on efficiency. Finally, we briefly discuss online adaptation rules leading to equilibrium.
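A toy instance of this utility-design philosophy, using only standard facts about potential games: in the coverage game sketched below, each agent's utility is its marginal contribution to the global welfare, which makes the welfare an exact potential, so round-robin best responses converge to a pure Nash equilibrium. The target values and problem size are made up for illustration.

```python
# Toy coverage game: each agent picks one target; global welfare is the
# total value of covered targets. Assigning each agent its marginal
# contribution to welfare makes W an exact potential function, so
# round-robin best responses terminate at a pure Nash equilibrium.

values = [10.0, 6.0, 3.0, 1.0]  # target values (assumed for illustration)
n_agents = 3

def welfare(action):
    """Total value of the set of covered targets."""
    return sum(values[t] for t in set(action))

def utility(i, action):
    """Agent i's marginal contribution to the welfare."""
    without_i = action[:i] + action[i + 1:]
    return welfare(action) - welfare(without_i)

action = [0, 0, 0]  # everyone starts on target 0
changed = True
while changed:
    changed = False
    for i in range(n_agents):
        best = max(range(len(values)),
                   key=lambda t: utility(i, action[:i] + [t] + action[i + 1:]))
        if utility(i, action[:i] + [best] + action[i + 1:]) > utility(i, action):
            action[i], changed = best, True

print("pure Nash equilibrium:", action, "welfare:", welfare(action))
```

Here the agents spread out over the three most valuable targets, and the equilibrium welfare coincides with the centralized optimum; in general the gap between the two is exactly the efficiency (price-of-anarchy) question discussed above.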
Healthcare in the U.S. is at a transition point. Costs for care have risen to unsustainable levels and are among the highest in the world without commensurate benefits. At the same time, new care models and the digitization of healthcare offer tremendous opportunities to improve health and health care while reducing costs. In this context, intelligent technologies are playing a growing role in providing better understanding and decision support in healthcare systems. Solutions range from population health management tools for organizations to predictive clinical decision support applications for individuals. Advanced technologies are also applied in administrative tasks such as insurance eligibility determination and fraud detection. Looking ahead, the advent of personalized medicine will bring the promise and need for intelligent technologies into even sharper focus. This talk will review current trends, discuss representative approaches, and show examples that demonstrate the value of intelligent monitoring and decision support in healthcare systems.
In many practical systems, such as engineering, social, and financial systems, control decisions are made only when certain events happen. This is due either to the discrete nature of sensor detection and digital computing equipment, or to limitations of computing power, which make state-based control infeasible given the huge state spaces involved. The performance optimization of such systems generally differs from traditional optimization approaches such as Markov decision processes or dynamic programming. In this talk, we introduce, in an intuitive manner, a new optimization framework called event-based optimization. This framework is widely applicable to the aforementioned systems. With performance potentials as building blocks, we develop optimization algorithms for event-based optimization problems. The optimization algorithms are first proposed based on intuition, and theoretical justifications are then given with a performance-sensitivity-based approach. Finally, we provide a few practical examples to demonstrate the effectiveness of the event-based optimization framework. We hope this framework may provide a new perspective on optimizing the performance of event-triggered dynamic systems.
Advanced motion systems like pick-and-place machines used in the semiconductor industry challenge the frontiers of mechatronic design and systems and control theory and practice. In the design phase, control-oriented design of the electro-mechanics is necessary in order to achieve tight performance specifications. Once realized, a machine-in-the-loop procedure can be explored to close the design loop during experiments as well as for experimental model building, model-based control design, and implementation and performance evaluation. Nevertheless, reliable numerical tools are required to meet the challenges posed with respect to dimensionality and model complexity. Extension of linear modeling techniques towards some classes of nonlinear systems is relevant for improved control of specific motion systems, such as those with friction. Further, medical robotics can greatly benefit from the experiences of the high-tech motion systems industry, and an eye surgical robot with haptics will be shown as an example. Other challenging applications in need of advanced design and modeling and control are fuel-efficient vehicles (including ultra-clean engines), vehicle electric and hybrid power trains, and plasma fusion processes. Finally, the 2012 World Champion Soccer Robots (midsize) will also be discussed as an example of advanced motion control for high-tech systems.
Dynamic models for bipedal robots contain both continuous and discrete elements, with switching events that are spatially driven by unilateral constraints at ground contact and impulse-like forces that occur at foot touchdown. The complexity of the models has led to a host of ad hoc trial-and-error feedback designs. This presentation will show how nonlinear feedback control methods are providing model-based solutions that also enhance the ability of bipedal robots to walk, run, and recover from stumbles. The talk addresses both theoretical and experimental aspects of bipedal locomotion. Videos of some of the experiments have been covered in the popular press, bringing feedback control to the attention of the general public.
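For readers unfamiliar with such hybrid models, the toy sketch below (a bouncing ball, far simpler than any biped model) exhibits the same three ingredients named above: a continuous flow, a switching surface defined by a unilateral ground constraint, and an impulse-like reset at touchdown.

```python
# A bouncing ball as a minimal hybrid system: continuous flow between
# impacts, a unilateral constraint (height >= 0) defining the switching
# surface, and an impulse-like velocity reset at touchdown. A biped model
# has the same structure with far richer dynamics; this is only a toy.
import numpy as np
from scipy.integrate import solve_ivp

g, e = 9.81, 0.8  # gravity and coefficient of restitution (assumed values)

def flow(t, x):   # continuous dynamics; x = [height, velocity]
    return [x[1], -g]

def touchdown(t, x):  # guard function: zero at ground contact
    return x[0]
touchdown.terminal = True
touchdown.direction = -1  # trigger only on downward crossings

x, t0 = [1.0, 0.0], 0.0
for bounce in range(5):
    sol = solve_ivp(flow, (t0, t0 + 10.0), x, events=touchdown, max_step=0.01)
    t0 = sol.t_events[0][0]
    v_minus = sol.y_events[0][0][1]
    x = [0.0, -e * v_minus]  # impact map: instantaneous velocity reset
    print(f"impact {bounce + 1} at t = {t0:.3f} s, v- = {v_minus:.3f} m/s")
```

The feedback designs discussed in the talk operate on exactly this kind of model: continuous-phase control laws combined with an analysis of what the impact map does to the closed-loop state.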
The past few years have witnessed a revolution in data collection capabilities: the development of low-cost, ultra-low-power sensors capable of harvesting energy from the environment has rendered ubiquitous sensing feasible. When coupled with a parallel growth in actuation capabilities, these developments open up the possibility of new control applications that can profoundly impact society, ranging from zero-emissions buildings to "smart" grids and managed aquifers that achieve long-term sustainable use of scarce resources. A major roadblock to realizing this vision stems from the curse of dimensionality. To successfully operate in these scenarios, controllers will need to extract relevant, actionable information in a timely fashion from the very large data streams generated by ubiquitous sensors. However, existing techniques are ill-equipped to deal with this "data avalanche."
This talk discusses the central role that systems theory can play in developing computationally tractable, scalable methods for extracting actionable information that is very sparsely encoded in high-dimensional data streams. The key insight is the realization that actionable information can often be represented with a small number of invariants associated with an underlying dynamical system. Thus, in this context, the problem of actionable information extraction can be reformulated as identifying these invariants from (high-dimensional) noisy data, and thought of as a generalization of sparse signal recovery problems to a dynamical systems framework. While in principle this approach leads to generically nonconvex, hard-to-solve problems, computationally tractable relaxations (and in some cases exact solutions) can be obtained by exploiting a combination of elements from convex analysis and the classical theory of moments. These ideas will be illustrated using examples from several application domains, including autonomous vehicles, computer vision, systems biology, and economics. We will conclude the talk by exploring the connection between hybrid systems identification, information extraction, and machine learning, and point to new research directions in systems theory motivated by these problems.
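A classical special case of this "few invariants" viewpoint, included purely as an illustration: the Hankel matrix built from the output stream of a linear system has rank equal to the system order, however long the data stream, and it is this rank (relaxed to a nuclear norm in convex formulations) that plays the role of the sparsity measure. The second-order system below is an assumed example.

```python
# The "small number of invariants" idea in its simplest classical form:
# the Hankel matrix of a linear system's output has rank equal to the
# system order, no matter how long (high dimensional) the data stream is.
# Convex relaxations replace this rank with the nuclear norm.
import numpy as np
from scipy.linalg import hankel

# Impulse response of an assumed second-order system:
# y[k] = 1.5*y[k-1] - 0.7*y[k-2], driven by a unit impulse.
y = np.zeros(100)
y[0], y[1] = 1.0, 1.5
for k in range(2, len(y)):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2]

H = hankel(y[:50], y[49:])  # 50 x 51 Hankel matrix of the data stream
sv = np.linalg.svd(H, compute_uv=False)
print("leading singular values:", np.round(sv[:4], 4))
# Only two are (numerically) nonzero: the order-2 "invariant" of the data.
```

With noisy data the trailing singular values are no longer exactly zero, and recovering the underlying low-order invariant becomes the rank-minimization problem whose convex relaxations the talk describes.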
A hallmark of living cells is their inherent stochasticity. Stochastic molecular noise in individual cells manifests as cell-to-cell variability within a population of genetically identical cells. While experimental tools have enabled the measurement and quantification of variability in populations consisting of millions of cells, new modeling and analysis tools have led to a substantial improvement in our understanding of the stochastic nature of living cell populations and its biological role. More recently, these developments came together to pave the way for the real-time control of living cells.
In this presentation, we describe novel analytical and experimental work that demonstrates how a computer can be interfaced with living cells and used to control their behavior. We discuss how computer-controlled light pulses, in combination with a genetically encoded light-responsive module and a flow cytometer, can be configured to achieve precise and robust set-point regulation of gene expression in the noisy environment of the cell. We then address the theoretical, computational, and practical issues concerning the feedback control of single cells as well as cell populations. Aside from its potential applications in biotechnology and therapeutics, this approach opens up exciting opportunities for the development of new control-theoretic methods aimed at confronting the unique challenges of manipulating the dynamic behavior of living cells.
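The sketch below is a toy in-silico analogue of such a control loop, not the controller used in the actual experiments: protein copy numbers evolve as a stochastic birth-death process simulated with Gillespie's algorithm, and a simple bang-bang rule switches production ("light") on or off around a set-point. All rates are assumed values.

```python
# Toy analogue of light-based set-point control of gene expression:
# protein copy number follows a stochastic birth-death process (Gillespie
# simulation), and a bang-bang controller gates production at the target.
import numpy as np

rng = np.random.default_rng(1)

k_prod, gamma = 20.0, 0.1  # production and degradation rates (assumed)
target = 100               # set-point in protein copies
x, t, T = 0, 0.0, 200.0
samples = []

while t < T:
    light_on = x < target              # feedback decision ("light pulse")
    birth = k_prod if light_on else 0.0
    death = gamma * x
    total = birth + death
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)  # exponential time to next reaction
    if rng.random() < birth / total:   # which reaction fired?
        x += 1
    else:
        x -= 1
    samples.append(x)

print(f"mean copy number over run: {np.mean(samples):.1f} (target {target})")
```

Even this crude rule holds the noisy copy number near the target on average; the challenges mentioned above (measurement delay, actuation through slow gene expression, population-level feedback) are what make the real problem a rich control-theoretic one.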
The last few years have seen significant progress in our understanding of how one should structure multi-robot systems. New control, coordination, and communication strategies have emerged and in this talk, we summarize some of these developments. In particular, we will discuss how to go from local rules to global behaviors in a systematic manner in order to achieve distributed geometric objectives, such as achieving and maintaining formations, area coverage, and swarming behaviors. We will also investigate how users can interact with networks of mobile robots in order to inject new information and objectives. The efficacy of these interactions depends directly on the interaction dynamics and the structure of the underlying information-exchange network. We will relate these network-level characteristics to controllability and manipulability notions in order to produce effective human-swarm interaction strategies.
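As a minimal illustration of going from local rules to global behaviors, the sketch below implements the standard consensus protocol on a line graph: each robot moves toward its neighbors, and the team converges to a common rendezvous point whenever the graph is connected. The graph, step size, and initial positions are illustrative choices.

```python
# Local rules to global behavior in its simplest form: each robot moves
# toward its neighbors (the consensus protocol), and the team converges
# to a single rendezvous point whenever the interaction graph is connected.
import numpy as np

rng = np.random.default_rng(2)

# Line graph on 5 robots: robot i talks only to robots i-1 and i+1.
n = 5
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
x = rng.uniform(0, 10, size=(n, 2))  # random planar positions

dt = 0.1
for _ in range(500):
    dx = np.zeros_like(x)
    for i in range(n):
        for j in neighbors[i]:
            dx[i] += x[j] - x[i]     # local rule: head toward each neighbor
    x += dt * dx                     # Euler step of x' = -Lx

print("positions after consensus:\n", np.round(x, 3))
```

The information-exchange graph enters through the Laplacian L, and it is properties of this graph (connectivity, controllability from a leader node) that govern how effectively a human operator can steer the swarm, as discussed above.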