Linear Feedback Controls: The Essentials

About this ebook

The design of control systems is at the very core of engineering. Feedback controls are ubiquitous, ranging from simple room thermostats to airplane engine control. Helping to make sense of this wide-ranging field, this book provides a new approach by keeping a tight focus on the essentials with a limited, yet consistent set of examples. Analysis and design methods are explained in terms of theory and practice. The book covers classical, linear feedback controls, and linear approximations are used when needed. In parallel, the book covers time-discrete (digital) control systems and juxtaposes time-continuous and time-discrete treatment when needed. One chapter covers the industry-standard PID control, and one chapter provides several design examples with proposed solutions to commonly encountered design problems.

The book is ideal for upper-level students in electrical engineering, mechanical engineering, biological/biomedical engineering, chemical engineering, and agricultural and environmental engineering, and it provides a helpful refresher or introduction for graduate students and professionals.

  • Focuses on the essentials of control fundamentals, system analysis, mathematical description and modeling, and control design to guide the reader
  • Illustrates the theory and practical application for each point using real-world examples
  • Example strands weave throughout the book, allowing the reader to clearly understand the use and limits of the different analysis and design tools
Language: English
Release date: Jul 25, 2013
ISBN: 9780124055131
Author

Mark A. Haidekker

Mark A. Haidekker is Professor in the College of Engineering at the University of Georgia, Athens, GA, USA.


    Book preview

    Linear Feedback Controls - Mark A. Haidekker

    1

    Introduction to Linear Feedback Controls

    Abstract

    Automation and controls date back thousands of years, and likely began with the desire to keep water levels for irrigation constant. Much later, the industrial revolution brought a need for methods and systems to regulate machinery, for example the speed of a steam engine. For about two centuries, engineers have been able to describe control systems mathematically, with the result that system behavior can be predicted more accurately and control systems can be designed more accurately. Feedback controls are control systems in which a sensor monitors the property of the system to be controlled, such as motor speed, pressure, position, voltage, or temperature. Common to all feedback control systems is the comparison of the sensor signal to a reference signal, and the existence of a controller that influences the system to minimize the deviation between the sensor and reference signals. Feedback control systems are designed to meet specific goals, such as keeping a temperature or speed constant, or accurately following the reference signal. In this chapter, some of the fundamental principles of feedback control systems are introduced, and some common terms are defined.

    Feedback control systems have a long history. The first engineered feedback control systems, invented nearly 2500 years before our time, were intended to keep fluid levels constant. One application was the early water clock. With constant hydrostatic pressure, the flow rate of water can be kept constant, and the fill time of a container can be used as a time reference. A combination of a floater and a valve served as the control unit to regulate the water level and thus the hydrostatic pressure.

    During the early industrial revolution, the wide adoption of steam engines was helped by the invention of the governor. A steam engine suffers from long delays (for example, when coal is added to the furnace), which make it hard to control manually. The governor is an automated system that controls the rotational speed of a steam engine in an unsupervised fashion. Float regulators were again important, this time to keep the water level in a boiler constant (Figure 1.1).

    Figure 1.1 Schematic of a water-level regulator with a floater. As steam is produced, the water level sinks, and so does the floater. A valve opens as a consequence, and new water is allowed to enter the boiler, causing the floater to rise and close the valve.

    About 200 years ago, engineers made the transition from intuitively designed control systems to mathematically defined systems. Only with the help of mathematics to formally describe feedback control systems did it become possible to accurately predict the response of a system. This new paradigm also allowed control systems to grow more complex. The first use of differential equations to describe a feedback control system is attributed to George Biddell Airy, who attempted to compensate a telescope’s position for the rotation of the earth with the help of a feedback control system. Airy also made the discovery that improperly designed controls may lead to large oscillatory responses, and thus described the concept of instability. The mathematical treatment of control systems—notably stability theory—was further advanced by James Clerk Maxwell, who discovered the characteristic equation and found that a system is stable if the characteristic equation has only roots with a negative real part.
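
    Maxwell's criterion lends itself to a quick numerical check: compute the roots of the characteristic equation and verify that every root has a negative real part. The following Python sketch (not from the book; it uses NumPy, and the example polynomials are arbitrary illustrations) demonstrates the idea:

```python
# Minimal sketch (not from the book): checking Maxwell's stability criterion
# numerically with NumPy. The example polynomials below are arbitrary.
import numpy as np

def is_stable(char_poly_coeffs):
    """Return True if all roots of the characteristic equation
    have a strictly negative real part."""
    roots = np.roots(char_poly_coeffs)
    return bool(np.all(roots.real < 0))

# Characteristic equation s^2 + 3s + 2 = 0 has roots at -1 and -2:
print(is_stable([1, 3, 2]))   # True  -> stable
# Characteristic equation s^2 - s + 1 = 0 has roots with positive real part:
print(is_stable([1, -1, 1]))  # False -> unstable
```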

    Another important step was the frequency-domain description of systems by Joseph Fourier and Pierre-Simon de Laplace. The frequency-response description of systems soon became a mainstream method, driven by progress in telecommunications. Around 1940, Hendrik Wade Bode introduced double-logarithmic plots of the frequency response (today known as Bode plots) and the notion of phase and gain margins as metrics of relative stability. Concurrently, progress in automation and controls was also driven by military needs, two examples being artillery aiming and torpedo and missile guidance. The PID controller was introduced in 1922 by Nicolas Minorsky to improve the steering of ships.

    With the advent of the digital computer came another revolutionary change. The development of digital filter theory found immediate application in control theory by allowing mechanical controls, or controls built with analog electronics, to be replaced by digital systems. The algorithmic treatment of control problems allowed a new level of flexibility, specifically for nonlinear systems. Modern controls, that is, both the numerical treatment of control problems and the theory of nonlinear control systems, complement classical control theory, which is limited to linear, time-invariant systems.

    In this book, we will attempt to lay the foundations for understanding classical control theory. We will assume all systems to be linear and time-invariant, and make linear approximations for nonlinear systems. Both analog and digital control systems are covered. The book progresses from the theoretical foundations to more and more practical aspects. The remainder of this chapter (Chapter 1) introduces the basic concepts and terminology of feedback controls, and it includes a first example: two-point control systems.

    A brief review of linear systems, their differential equations, and their treatment in the Laplace domain follows in the next chapters, together with the treatment of time-discrete systems and the z-transform.

    A simple first-order system is then introduced in detail, both in the time domain and in the s-domain, thus establishing the relationship between time-domain and Laplace-domain treatment. A second-order example introduced in Chapter 9 allows a closer examination of the dynamic response of a system and of how it can be influenced with feedback control.

    Subsequent chapters can be seen as introducing tools for the design engineer’s toolbox: the formal description of linear systems with block diagrams (Chapter 7), the treatment of nonlinear components (Chapter 8), stability analysis and design (Chapter 10), frequency-domain methods (Chapter 11), and finally the very powerful root locus design method (Chapter 12). A separate chapter (Chapter 13) covers the PID controller, which is one of the most commonly used control systems.

    Chapter 14 is entirely dedicated to providing practical examples of feedback controls, ranging from temperature and motor speed control to more specialized applications, such as oscillators and phase-locked loops. The importance of Chapter 14 lies in the translation of the theoretical concepts to representative practical applications. These applications demonstrate how the mathematical concepts of this book relate to practical design goals.

    The appendices provide Laplace- and z-domain correspondences, an introduction to operational amplifiers as control elements, and an overview of key commands for the simulation software Scilab. Scilab (www.scilab.org) is free, open-source software that any reader can download and install. Furthermore, Scilab is very similar to MATLAB, and readers can easily translate their knowledge to MATLAB if needed.

    1.1 What are Feedback Control Systems?

    A feedback control system continuously monitors a process and influences the process in such a manner that one or more process parameters (the output variables) stay within a prescribed range. Let us illustrate this definition with a simple example. Assume an incubator for cell culture as sketched in Figure 1.2. Its interior temperature needs to be kept at 37 °C. To heat the interior, an electric heating coil is provided. The incubator box with the heating coil can be seen as the process for which feedback control will be necessary, as will become evident soon. For now, let us connect the heating coil to a rheostat that allows us to control the heat dissipation of the heating coil. We can now turn up the heat and try to relate a position of the rheostat to the interior temperature of the incubator. After some experimentation, we’ll likely find a position where the interior temperature is approximately 37 °C. Unfortunately, after each change of the rheostat, we have to wait some time for the incubator temperature to equilibrate, and the adjustment process is quite tedious. Even worse, equilibrium depends on two factors: (1) the heat dissipation of the heater coil and (2) the heat losses to the environment. Therefore, a change in the room temperature will also change the incubator’s temperature unless we compensate by again adjusting the rheostat. If somebody opens the incubator’s door, some heat escapes and the temperature drops. Once again, it will take some time until equilibrium near 37 °C is reached.

    Figure 1.2 Schematic representation of an incubator. The interior of the chamber is supposed to be kept at a constant temperature. A rheostat can be used to adjust the heater power (a) and thus influence the temperature inside the chamber. The temperature inside the chamber equilibrates when energy introduced by the heater balances the energy losses to the environment. However, the energy losses change when the outside temperature changes, or when the door to the incubator is opened. This may require readjustment of the rheostat. The system in (a) is an open-loop control system. To keep the temperature inside the chamber within tighter tolerances, a sensor can be provided (b). By measuring the actual temperature and comparing it to a desired temperature, adjustment of the rheostat can be automated. The system in (b) is a feedback control system with feedback from the sensor to the heater.
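
    To make the equilibrium argument concrete, the following Python sketch simulates the open-loop incubator of Figure 1.2(a) as a first-order thermal model. The model and all parameter values are illustrative assumptions rather than values from the book: the steady-state temperature is set jointly by the heater power and by the heat loss to the environment, so a change in room temperature shifts the equilibrium away from 37 °C.

```python
# Minimal sketch (not from the book): first-order thermal model of the
# open-loop incubator, dT/dt = (P - (T - T_ambient)/R) / C, integrated
# with a simple Euler loop. All parameter values are made up.
def simulate_open_loop(heater_power, t_ambient, t_start=20.0,
                       thermal_resistance=2.0, thermal_capacity=500.0,
                       dt=1.0, steps=20000):
    """Return the near-equilibrium chamber temperature in degrees C."""
    temp = t_start
    for _ in range(steps):
        heat_loss = (temp - t_ambient) / thermal_resistance
        temp += (heater_power - heat_loss) / thermal_capacity * dt
    return temp

# A fixed rheostat setting that yields about 37 degrees C at 20 degrees C ambient ...
print(round(simulate_open_loop(heater_power=8.5, t_ambient=20.0), 1))  # ~37.0
# ... misses the target once the room temperature drops:
print(round(simulate_open_loop(heater_power=8.5, t_ambient=15.0), 1))  # ~32.0
```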

    Although the rheostat allows us to control the temperature, it is not feedback control. Feedback control implies that the controlled variable is continuously monitored and compared to the desired value (called the setpoint). From the difference between the controlled variable and the setpoint, a corrective action can be computed that drives the controlled variable rapidly toward the value predetermined by the setpoint. Therefore, a feedback control system requires at least the following components:

    • A process. The process is responsible for the output variable. Furthermore, the process provides a means to influence the output variable. In the example in Figure 1.2, the process is the chamber together with the heating coil. The output variable is the temperature inside the chamber, and the heating coil provides the means to influence the output variable.

    • A sensor. The sensor continuously measures the output variable and converts the value of the output variable into a signal that can be further processed, such as a voltage (in electric control systems), a position (in mechanical systems), or a pressure (in pneumatic systems).

    • A setpoint, or more precisely, a means to adjust a setpoint. The setpoint is related to the output variable, but it has the same units as the output of the sensor. For example, if the controlled variable is a temperature, and the sensor provides a voltage that is proportional to the temperature, then the setpoint will be a voltage as well.

    • A controller. The controller measures the deviation of the controlled variable from the setpoint and creates a corrective action. The corrective action is coupled to the input of the process and used to drive the output variable toward the setpoint.

    The act of computing a corrective action and feeding it back into the process is known as closing the loop and establishes the closed-loop feedback control system. A closed-loop feedback control system that follows our example is shown schematically in Figure 1.3. Most feedback control systems follow the example in Figure 1.3, and it is important to note that almost without exception the control action is determined by the controller from the difference between the setpoint and the measured output variable. This difference is referred to as control deviation.

    Figure 1.3 Block diagram schematic of a closed-loop feedback system. In the example of the incubator, the process would be the incubator itself with the heater coil, the sensor would be a temperature sensor that provides a voltage that is proportional to the temperature, and the controller is some sort of power driver for the heating coil that provides heating current when the temperature inside the incubator drops below the setpoint. In almost all cases, the measurement output is subtracted from the setpoint to provide the control deviation. This error signal is then used by the actual controller to generate the corrective action. The gray shaded rectangle highlights the subtraction operation.

    Not included in Figure 1.3 are disturbances. A disturbance is any influence other than the control input that causes the process to change its output variable. In the example of the incubator, opening the door constitutes a disturbance (transient heat energy loss through the open door), and a change of the room temperature also constitutes a disturbance, because it changes the heat loss from the incubator to the environment.
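
    The following Python sketch (again an illustration, not taken from the book) closes the loop around the same first-order thermal model with a two-point (on/off) controller, the first controller type mentioned earlier in this chapter: the sensor value is subtracted from the setpoint, the resulting control deviation selects the corrective action, and a disturbance (a drop in room temperature midway through the run) is rejected automatically.

```python
# Minimal sketch (not from the book): closed-loop control of the incubator
# with a two-point (on/off) controller. All parameter values are made up.
def simulate_two_point(setpoint=37.0, heater_power_on=25.0,
                       thermal_resistance=2.0, thermal_capacity=500.0,
                       dt=1.0, steps=30000):
    """Return the chamber temperature at the end of the simulation."""
    t_ambient = 20.0
    temp = t_ambient                          # start at room temperature
    for step in range(steps):
        if step == 15000:                     # disturbance: the room cools down
            t_ambient = 15.0
        measured = temp                       # ideal sensor, no noise
        deviation = setpoint - measured       # control deviation (error signal)
        heater_power = heater_power_on if deviation > 0 else 0.0  # on/off action
        heat_loss = (temp - t_ambient) / thermal_resistance
        temp += (heater_power - heat_loss) / thermal_capacity * dt
    return temp

# Despite the colder room, the closed loop holds the chamber near the setpoint:
print(round(simulate_two_point(), 1))  # ~37.0
```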

    1.2 Some Terminology

    We introduced the concept of closed-loop feedback control in Figure 1.3. We now need to define some terms. Figure 1.4 illustrates the relationship of these terms to a closed-loop feedback system.

    • Process: Also referred to as plant—the process is a system that has the controlled variable as its property. The process has some means to influence the controlled variable. Therefore, the process can be interpreted as a linear system with one output and one (or more) inputs.

    • Sensor: The sensor is an apparatus to measure the controlled variable and make the measurement result available to the controller. The sensor itself may have its own transfer function, such as a gain or delay function.

    • Controller: The controller is a device that evaluates the control deviation and computes an appropriate control action. In many cases, the controller is the only part of the feedback control system that can be freely designed by the design engineer to meet the design goals of the closed-loop system.

    • Disturbance: Any influence other than the control input that causes the process to change its output variable. A disturbance can often be modeled as an additive input to the process.

    • Noise: Random fluctuations that corrupt the measurement of the controlled variable. Noise can be modeled as an additive input to the sensor.

    • Setpoint: Also referred to as reference signal. This signal determines the operating point of the system and, together with the sensor output signal, directly influences the controlled variable.

    • Control deviation: The time-dependent difference between the setpoint and the sensor output. The control deviation is often also referred to as the error variable or error signal.

    • Control action: Also termed corrective action. This is the output signal of the controller and serves to actuate the process and therefore to move the controlled variable toward the desired value.

    • Controlled variable: This is the output of the process. The design engineer specifies the controlled variable in the initial stages of the design process.
