Control and Its Applications

Control methods are used whenever some quantity, such as temperature, altitude or speed, must be made to behave in some desirable way over time. For example, control methods ensure that the temperature in our homes stays within acceptable levels in both winter and summer, that airplanes maintain desired heading, speed and altitude, and that automobile emissions meet specifications.

The thermostat that regulates the operation of the furnace in a typical home is an example of a device that controls the heating system so that the temperature is maintained at a specified level. The autopilot in a passenger aircraft that maintains speed, altitude and heading is an example of a more sophisticated automatic control system. The cruise control in a car, which maintains constant speed independently of road inclines, is yet another example of a control system. In biomedical applications, control methods make it possible to use electrical nerve signals to control prosthetic limbs, and to guide precision robots that cut cavities in bone for implanting artificial joints, resulting in much tighter fits than previously thought possible.

Control is All Around Us

Control is a common concept, since there are always variables and quantities that must be made to behave in some desirable way over time.

In addition to engineering systems, variables in biological systems, such as blood sugar and blood pressure in the human body, are regulated by processes that can be studied with automatic control methods. Similarly, in economic systems, variables such as unemployment and inflation, which are controlled by government fiscal decisions, can be studied using control methods.

Today's technological demands pose extremely challenging and widely varying control problems, ranging from aircraft and underwater vehicles to automobiles and space telescopes, and from chemical processes and the environment to manufacturing, robotics and communication networks.

The Practice of Control

A large fraction of engineering design involves automatic control features. Frequently, control operations are implemented in an embedded microprocessor that observes signals from sensors and provides command signals to electromechanical actuators. Applications may range from washing machines to high-performance jet engines. Designers frequently use computer-aided design (CAD) software that embodies theoretical design algorithms and permits tradeoff comparisons among various performance measures such as speed of response, operating efficiency and sensitivity to uncertainties in the model of the system. Proposed control designs, especially those for complex and expensive applications, are usually tested using computer-based simulations.

While control engineering experts keep up with the latest theoretical developments, most control systems are put together by practically minded engineers who have a thorough understanding of application areas such as automotive engines, factory automation, robot dynamics, and heating, ventilating and air conditioning.

Methodology

The first step in understanding the main ideas of control methodology is realizing that we apply control in our everyday life, for instance when we walk, lift a glass of water, or drive a car. The speed of a car can be maintained rather precisely by carefully observing the speedometer and appropriately increasing or decreasing the pressure on the gas pedal. Higher accuracy can perhaps be achieved by looking ahead to anticipate road inclines that affect the speed. This is the way the average driver actually controls speed.

If the speed is controlled by a machine instead of the driver, one speaks of an automatic speed control system, commonly referred to as a cruise control system. An automatic control system, such as the cruise control system in an automobile, implements in its controller a decision process, also called the control law, that dictates the control actions needed to keep the speed within acceptable tolerances. These decisions are based on the error, that is, how much the actual speed differs from the desired speed, and on knowledge of the car's response to fuel increases and decreases; this knowledge is typically captured in a mathematical model. Information about the actual speed is fed back to the controller by sensors, and the control decisions are implemented via a device, the actuator, that increases or decreases the fuel flow to the engine.
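
To make the decision process concrete, here is a minimal sketch in Python of a proportional control law regulating the speed of a very crude car model. Everything in it, the point-mass-with-drag model and the mass, drag and gain values, is an illustrative assumption chosen for readability, not a production design.

# Minimal cruise-control sketch: a proportional control law
# acting on a crude car model (all values are illustrative).

def simulate_cruise(v_desired=25.0, v_actual=20.0, dt=0.1, steps=200):
    mass = 1200.0  # vehicle mass in kg (assumed)
    drag = 50.0    # linear drag coefficient in N*s/m (assumed)
    kp = 800.0     # proportional gain of the control law (assumed)
    for _ in range(steps):
        error = v_desired - v_actual  # fed back from the speed sensor
        force = kp * error            # control law: action proportional to error
        # The actuator applies the force; the model is F = m*a with linear drag.
        v_actual += (force - drag * v_actual) / mass * dt
    return v_actual

print(f"speed after 20 s: {simulate_cruise():.1f} m/s")  # settles near 23.5 m/s

Note that a purely proportional law settles slightly below the desired 25 m/s; practical cruise controllers add an integral term (a PI controller) to drive this steady-state error to zero.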

Foundations and Methods

Central to the control systems area is the study of dynamical systems. In the control of dynamical systems, control decisions must be derived and implemented in real time. Feedback is used extensively to cope with uncertainties about the system and its environment.

Feedback is a key concept. The actual values of system variables are sensed, fed back and used to control the system. Hence the control law decision process is based not only on predictions about the plant behavior derived from the system model (as in open-loop control), but also on information about the actual system behavior (closed-loop feedback control).
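
The distinction shows up clearly when the model is wrong. The following sketch (a continuation of the earlier Python example, with the same assumed car model) adds a constant disturbance force, say a headwind, that the controller's model knows nothing about. The open-loop controller computes its action once from the model; the closed-loop controller keeps correcting based on the measured speed.

# Open-loop vs. closed-loop control of the same crude car model,
# now with an unmodeled constant disturbance (all values illustrative).

MASS, DRAG, KP, DT = 1200.0, 50.0, 800.0, 0.1
DISTURBANCE = -300.0  # headwind force in N, unknown to the controller

def step(v, force):
    # The true plant includes the disturbance the model omits.
    return v + (force + DISTURBANCE - DRAG * v) / MASS * DT

v_desired = 25.0
u_open = DRAG * v_desired  # open loop: the force that, per the disturbance-free
                           # model, balances drag at exactly 25 m/s
v_ol = v_cl = 20.0
for _ in range(600):  # simulate 60 seconds
    v_ol = step(v_ol, u_open)                   # never looks at the actual speed
    v_cl = step(v_cl, KP * (v_desired - v_cl))  # acts on the measured error

print(f"open-loop:   {v_ol:.1f} m/s")  # about 19 m/s: the error goes uncorrected
print(f"closed-loop: {v_cl:.1f} m/s")  # about 23 m/s: feedback absorbs most of it

The feedback loop never needs to know that the disturbance exists; it simply reacts to the disturbance's effect on the measured speed.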

The theory of control systems rests on firm mathematical foundations. The behavior of the system variables to be controlled is typically described by differential or difference equations in the time domain, and by Laplace, Z and Fourier transforms in the transform (frequency) domain. There are well-understood methods to study stability and optimality. Mathematical theories from partial differential equations, topology, differential geometry and abstract algebra are sometimes used to study particularly complex phenomena.
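
As a standard illustrative example (generic symbols, not tied to any particular system), a first-order model with time constant \tau and gain K is described in the time domain by a differential equation and, after a Laplace transform with zero initial conditions, by a transfer function in the transform domain:

\tau \frac{dy(t)}{dt} + y(t) = K\,u(t)
\qquad\Longrightarrow\qquad
G(s) = \frac{Y(s)}{U(s)} = \frac{K}{\tau s + 1}

Stability questions then become questions about the roots of the denominator: here a single pole at s = -1/\tau, which is stable whenever \tau > 0.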

Control system theory research also benefits other areas, such as Signal Processing, Communications, Biomedical Engineering and Economics.

Challenges in Control

The ever-increasing technological demands of society create needs for new, more accurate, less expensive and more efficient control solutions to existing and novel problems. Typical examples are the control demands of passenger aircraft and automobiles. At the same time, the systems to be controlled are often more complex, while less information may be available about their dynamical behavior, as is the case, for example, with large flexible space structures. The development of control methodologies to meet these challenges will require novel ideas and interdisciplinary approaches, in addition to further developing and refining existing methods.

Emerging Control Areas

The increasing availability of vast computing power at low cost, together with advances in computer science and engineering, is influencing developments in control. For instance, planning and expert systems can be seen as decision processes serving purposes analogous to those of control systems, which leads naturally to interdisciplinary research and to intelligent control methods. There is also significant interest in better understanding and controlling manufacturing processes traditionally studied in disciplines such as Operations Research. This has led to interdisciplinary research on the control of discrete-event systems (DES), which cannot be described by traditional differential or difference equations, and on hybrid control systems, in which systems with continuous dynamics are controlled by sequential machines. Fuzzy logic control and neural networks are other methodologies control engineers are examining to address the control of very complex systems.

Future Control Goals

What does the future hold? The future looks bright. We are moving toward control systems that can cope with significant unanticipated uncertainties and failures while maintaining acceptable performance levels, systems that exhibit considerable degrees of autonomy. We are moving toward autonomous underwater, land, air and space vehicles; highly automated manufacturing; intelligent robots; highly efficient and fault-tolerant voice and data networks; reliable electric power generation and distribution; seismically tolerant structures; and highly efficient fuel control for a cleaner environment.

Control systems are decision-making systems where the decisions are based on predictions of future behavior derived via models of the systems to be controlled, and on sensor-obtained observations of the actual behavior that are fed back. Control decisions are translated into control actions using control actuators. Developments in sensor and actuator technology influence control methodology, which is also influenced by the availability of low-cost computational resources.

Put Control in Your Future

The area of controls is challenging and rewarding, as our world faces increasingly complex control problems that need to be solved. Immediate needs include control of emissions for a cleaner environment, automation in factories, unmanned space and underwater exploration, and control of communication networks. Control is challenging because it requires strong foundations in engineering and mathematics, makes extensive use of computer software and hardware, and demands the ability to address and solve new problems in a variety of disciplines, ranging from aeronautical, electrical and chemical engineering to chemistry, biology and economics.

We are very proud to be in control. Join us, and together we will face future challenges.

Brief History of Control

Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the ancient water clock of Ktesibios in Alexandria, Egypt, around the third century B.C. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. This certainly was a successful design, as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 A.D. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply to entertain. The latter include the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop," automatic control devices include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used by James Watt in 1788 to regulate the speed of steam engines.

In his 1868 paper "On Governors", J. C. Maxwell (who discovered the Maxwell electromagnetic field equations) was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.

Control theory made significant strides over the next 150 years. New mathematical techniques made it possible to control significantly more complex dynamical systems than the original flyball governor, and to do so more accurately. These techniques include developments in internal state variable descriptions and in robust, adaptive, stochastic and optimal control methods for continuous, discrete-event and hybrid dynamical systems, followed by progress in networked and multi-agent systems. Applications of control methodology have helped make possible space travel and communication satellites, safer and more efficient aircraft, cleaner auto engines, and cleaner and more efficient chemical processes, to mention but a few. For more information on applications of control methods, see http://ieeecss.org/general/IoCT2-report.