Wed, August 21, 2019
Optimal controllers for linear and nonlinear dynamic systems with known dynamics can be designed using the Riccati and Hamilton-Jacobi-Bellman (HJB) equations, respectively. However, optimal control of uncertain linear or nonlinear dynamic systems remains a major challenge. Moreover, controllers designed in discrete time have the important advantage that they can be implemented directly in digital form on modern embedded hardware. Unfortunately, discrete-time design using Lyapunov stability analysis is far more complex than its continuous-time counterpart, since the first difference of the Lyapunov function is quadratic in the states rather than linear, as in the continuous-time case. By incorporating learning features into the feedback controller design, optimal adaptive control of such uncertain dynamical systems can be solved in discrete time.
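To make the known-dynamics baseline concrete, here is a minimal sketch (not from the talk) for a hypothetical discretized double integrator: the optimal gain is obtained by iterating the discrete-time Riccati recursion, and the quadratic Lyapunov first difference mentioned above is checked along the closed loop. All plant matrices and cost weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical discretized double integrator (illustrative, not from the
# talk): known dynamics x_{k+1} = A x_k + B u_k with quadratic cost.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)            # state cost weight (assumed)
R = np.array([[1.0]])    # input cost weight (assumed)

# Iterate the discrete-time Riccati recursion to the stationary solution
# P of the discrete algebraic Riccati equation.
P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
    P_next = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# With V(x) = x^T P x, the Lyapunov first difference V(x_{k+1}) - V(x_k)
# is quadratic in the state, which is what makes discrete-time stability
# analysis harder than checking the sign of a derivative.
x = np.array([1.0, 0.0])
x_next = (A - B @ K) @ x
assert x_next @ P @ x_next - x @ P @ x < 0  # decrease along the closed loop
```

The same recursion run backward over a finite horizon gives the time-varying finite-horizon gains; iterating to stationarity, as here, gives the infinite-horizon controller.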
In this talk, an overview of first- and second-generation feedback controllers with a learning component in discrete time will be given. Subsequently, the discrete-time learning-based optimal adaptive control of uncertain nonlinear dynamic systems will be presented in a systematic manner using a forward-in-time approach based on reinforcement learning (RL)/approximate dynamic programming (ADP). Challenges in developing and implementing the three generations of learning controllers will be addressed using practical examples such as automotive engine emission control and robotics. We will argue that discrete-time controller development is preferred for transitioning the developed theory to practice. Today, applications of learning controllers can be found in areas as diverse as process control, energy and smart grids, civil infrastructure, healthcare, manufacturing, automotive systems, transportation, entertainment, and consumer appliances. The talk will conclude with a short discussion of open research problems in learning control.
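As one illustration of the forward-in-time RL/ADP idea for an uncertain system, the sketch below runs a Q-learning-style policy iteration on a hypothetical linear plant. The matrices A and B appear only to simulate measured transitions; the learner sees states, inputs, and costs, never the model. All plant values, noise levels, and iteration counts are assumptions for illustration, not the method as presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stable plant used only to generate measured transitions
# (x_k, u_k, x_{k+1}); the learning loop never reads A or B directly.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Qc = np.eye(2)          # state cost weight (assumed)
Rc = np.array([[1.0]])  # input cost weight (assumed)
n, m = 2, 1

def quad_features(z):
    # Independent entries of z z^T (upper triangle) as regression features;
    # off-diagonal terms appear twice in z^T H z, hence the factor of 2.
    outer = np.outer(z, z)
    i, j = np.triu_indices(len(z))
    return np.where(i == j, 1.0, 2.0) * outer[i, j]

def theta_to_H(theta):
    # Rebuild the symmetric Q-function matrix H from its upper triangle.
    H = np.zeros((n + m, n + m))
    i, j = np.triu_indices(n + m)
    H[i, j] = theta
    return H + H.T - np.diag(np.diag(H))

K = np.zeros((m, n))  # initial policy; the open-loop plant is stable

for _ in range(10):   # forward-in-time policy iteration
    rows, targets = [], []
    x = rng.standard_normal(n)
    for k in range(200):
        if k % 20 == 0:                       # occasional resets for excitation
            x = rng.standard_normal(n)
        u = -K @ x + 0.5 * rng.standard_normal(m)   # exploratory input
        x_next = A @ x + B @ u                      # "measured" transition
        u_next = -K @ x_next                        # on-policy successor input
        # Bellman equation Q(x,u) = cost + Q(x', -Kx') as a linear
        # constraint on the Q-function parameters theta.
        rows.append(quad_features(np.concatenate([x, u]))
                    - quad_features(np.concatenate([x_next, u_next])))
        targets.append(x @ Qc @ x + u @ Rc @ u)
        x = x_next
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    H = theta_to_H(theta)
    # Policy improvement: minimize the learned Q-function over u.
    K = np.linalg.solve(H[n:, n:], H[n:, :n])
```

Because the Q-function is estimated directly from data, the improved gain needs no model knowledge, which is the essence of the forward-in-time ADP approach for uncertain systems described in the abstract.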