Sensitivity Methods in Control Theory: Proceedings of an International Symposium Held at Dubrovnik, August 31–September 5, 1964
Ebook · 749 pages · 5 hours


About this ebook

Sensitivity Methods in Control Theory is a collection of manuscripts presented at the Third International Symposium on Sensitivity Analysis, held at Dubrovnik, Yugoslavia, on August 31-September 5, 1964, and sponsored by the Theory Committee of the International Federation of Automatic Control. Sensitivity theory, or sensitivity analysis, concerns the solution of problems associated with parameter variations within the general scope of control theory. This book is organized into five parts encompassing 30 chapters. Part I presents some basic approaches of sensitivity analysis, such as Lyapunov's theory of stability, invariant imbedding, nonlinear sampled-data systems, and linear time-varying systems. This part also looks into the preliminary steps towards the development of game theory and some general applications of sensitivity analysis. Part II treats the problem of accuracy, reliability, self-adjustment, and optimization of sensitivity of automatic control systems, while Part III deals with the functional derivative technique of sensitivity analysis and its applications for designing self-adjusting control systems. Part IV describes the task of synthesizing control systems for linear plants with variable parameters satisfying specified performance criteria. Part V considers the association between sensitivity and optimality in various control systems. This book will prove useful to design engineers and to specialists in other fields of engineering.
Language: English
Release date: Jun 28, 2014
ISBN: 9781483223254
    Book preview

    Sensitivity Methods in Control Theory - L. Radanović, Belgrade

    Part I

    BASIC APPROACHES

    Outline

    Chapter 1: SENSITIVITY ANALYSIS AND LYAPUNOV STABILITY

    Chapter 2: SENSITIVITY ANALYSIS AND INVARIANT IMBEDDING

    Chapter 3: STABILITY AND SENSITIVITY OF NONLINEAR SAMPLED DATA SYSTEMS

    Chapter 4: SENSITIVITY OPERATORS FOR LINEAR TIME-VARYING SYSTEMS

    Chapter 5: OPTIMALITY, INSENSITIVITY, AND GAME THEORY

    Chapter 6: THE ROLE OF SENSITIVITY ANALYSIS IN ENGINEERING PROBLEMS

    SENSITIVITY ANALYSIS AND LYAPUNOV STABILITY

    I. Gumowski, Université de Toulouse, Toulouse, France and Université Laval, Quebec, Canada

    Publisher Summary

    This chapter focuses on sensitivity analysis and Lyapunov stability. Sensitivity analysis is an extension and development of a rather old idea, which became known in the theory of partial differential equations under the name of a correctly set problem. A correctly set problem is a problem admitting a solution y0 not only for an isolated set of parameters λ0, but also in at least a sufficiently small neighborhood of λ0. Furthermore, in the domain where the problem has a meaning, it is required that solutions y(λ), existing for parameter values λ other than λ0, be qualitatively of the same type as y0 and differ little from y0 when λ differs little from λ0. This chapter also discusses Lyapunov’s theory of stability. It highlights that Lyapunov stability is only meaningful for solutions of ordinary differential equations. However, an extension to solutions of difference equations and mixed differential-difference equations is readily obtainable. For solutions of algebraic equations Lyapunov stability has no direct meaning, but such a meaning can be provided indirectly by imbedding the algebraic equations in a system of differential equations.

    Introduction

    Sensitivity analysis is an extension and development of a rather old idea, which became known in the theory of partial differential equations under the name of a correctly set problem. A correctly set problem is a problem admitting a solution y0 not only for an isolated set of parameters λ0, but also in at least a sufficiently small neighbourhood of λ0. Furthermore, in the domain where the problem has a meaning, it is required that the solutions y(λ), existing for parameter values λ other than λ0, be qualitatively of the same type as y0 and differ little from y0 when λ differs little from λ0. In other words, a partial differential equation boundary-value problem is said to be correctly set if this problem admits a parametric family of solutions y(λ) in which the reference solution y0 is imbedded. This family must be such that any neighbouring solution y(λ1) approaches y0 as λ1 approaches λ0. Since by tradition the objective of the theory of partial differential equations was limited to the determination of representative solutions y0, the study of correctly set problems remained essentially qualitative.

    In sensitivity analysis a quantitative aspect is added by asking how fast the reference solution y0 varies when one or more parameters of the set λ0 are given slightly different values. Since this question remains legitimate for problems not necessarily associated with partial differential equations, the scope of sensitivity analysis appears to be larger than the scope of the theory of correctly set problems. However, the key to sensitivity analysis remains the imbedding of the reference solution y0 in an appropriate parametric family y(λ). Depending on the nature of the imbedding process, the resulting sensitivity coefficients will be valid in the large or only in the small. Since the nature of the original problem conditions the type of imbedding which will turn out to be successful, the limits of validity of a specific parametric family y(λ) shed light on the extent of the domain, in the parameter space, where a solution of a specific qualitative type can exist. The limits of validity of a specific y(λ) appear thus to be related to the singular or bifurcation parameter values of the original problem. A parameter set λ0 is said to be singular if y(λ) undergoes a qualitative change for λ=λ0. The nature of the detectable qualitative change depends of course on the nature of the imbedding process symbolized by y(λ).
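    The quantitative question above — how fast y0 varies under a small parameter change — can be made concrete numerically. The following is a minimal sketch (the decay equation, the Euler integrator, and all names are illustrative assumptions, not taken from the paper): the first sensitivity coefficient is estimated by a central finite difference of two neighbouring solutions.

```python
import math

# Illustrative sketch: "reference solution" y0 of dy/dx = -mu*y, y(0) = 1,
# and the rate at which it varies when the parameter mu is perturbed.
def solve(mu, x_end, steps=20000):
    """Forward-Euler integration of dy/dx = -mu * y, y(0) = 1."""
    h = x_end / steps
    y = 1.0
    for _ in range(steps):
        y += h * (-mu * y)
    return y

mu0, x_end, dmu = 0.5, 2.0, 1e-5
# First sensitivity coefficient alpha1 ~ dy/dmu at mu0, estimated by a
# central finite difference of the solutions at mu0 + dmu and mu0 - dmu.
alpha1 = (solve(mu0 + dmu, x_end) - solve(mu0 - dmu, x_end)) / (2 * dmu)
exact = -x_end * math.exp(-mu0 * x_end)  # analytic dy/dmu for this example
print(alpha1, exact)
```

    For this example the exact solution e^(-μx) is known, so the finite-difference estimate can be checked against the analytic derivative -x e^(-μx).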

    Parametric Imbedding

    To examine some possible types of y(λ) consider a system of ordinary differential equations, written for convenience in a vectorial form. Let

    dy/dx = f(x, y, λ, μ)     (1)

    be this system, defined in some domain F of the variables (x, y, λ, μ), where λ and μ are parameters, not necessarily distinct. Suppose that f satisfies some general conditions in F, such that equations (1) admit a unique solution in some specified function space G. Let

    y = y(x, λ, μ)     (2)

    be this solution. Having fixed λ and μ it is of interest to determine a family of solutions in G which is close in some sense to a particular solution of form (2). If μ = μ0 + Δμ, the desired family of solutions can be written in the form

    y(x, λ, μ0 + Δμ) = α0φ0(Δμ) + α1φ1(Δμ) + … + αnφn(Δμ) + εn     (3)

    where αi are the formal coefficients of the approximating set of functions φi(Δμ), i = 0, 1, …, n, and εn is the error of the approximation.

    Constructing solutions (3) by some more or less ingenious method appears to be the main objective of sensitivity analysis, the functions αi being the sensitivity coefficients. The domain of validity of expression (3) depends of course on the magnitude of |εn| for the range of μ considered. This domain depends considerably on the appropriate choice of φi and of n. It is natural to suppose φ0(0) = 1 and φi(0) = 0 for i > 0, because then

    y(x, λ, μ0) = α0     (4)

    and the first sensitivity coefficient of interest becomes α1.

    Two methods can be used to construct solutions (3):

    (a) Certain operations are carried out on equations (1) in order to deduce equations satisfied by the αi, or at least by their approximations. The form of the φi will thus be fixed by the choice of the operations on equations (1).

    (b) A particular form of expression (3) is chosen, and the αi are determined by what is essentially the method of undetermined coefficients.

    These two methods are not practically, or even theoretically, equivalent, because it is possible to consider as expansion parameter some parameter of solution (2), corresponding to a combination of λ and μ which does not occur explicitly in equations (1), as for instance the amplitude of a periodic solution.
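    Method (a) can be sketched on a simple assumed example (the equation dy/dx = μy and all names below are illustrative, not taken from the paper): differentiating the equation with respect to μ yields an auxiliary ODE for the first sensitivity coefficient, which is integrated alongside the reference solution.

```python
import math

# Method (a), sketched on dy/dx = mu*y, y(0) = 1 (an assumed example):
# differentiating the equation with respect to mu gives an ODE for the
# sensitivity coefficient a1 = dy/dmu:
#     da1/dx = mu*a1 + y,   a1(0) = 0,
# which is integrated together with y by the forward Euler method.
mu0, x_end, steps = 0.3, 1.5, 20000
h = x_end / steps
y, a1 = 1.0, 0.0
for _ in range(steps):
    y, a1 = y + h * mu0 * y, a1 + h * (mu0 * a1 + y)

exact_a1 = x_end * math.exp(mu0 * x_end)  # analytic dy/dmu = x * e^(mu0*x)
print(a1, exact_a1)
```

    Here the operation performed on the original equation (differentiation with respect to the parameter) fixes the form of the φi, as described for method (a).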

    The following particular expansions suggest themselves more or less naturally:

    (I) If f in equations (1) is regular in μ, then for φi(z) = z^i the error εn → 0 as n → ∞, and expression (3) becomes a Maclaurin series. The domain of validity of expression (3) is given by the radius of convergence R of this Maclaurin series. R can frequently be determined from the properties of f. However, for computational purposes the usefulness of the resulting expansion (3) is mainly determined by the rate of convergence of its first few terms and not by the value of R, because only a small number of these terms can be determined with a reasonable effort.

    (II) If f in equations (1) is not regular in μ, but admits one or more continuous derivatives, then φi(z) = z^i can still be used, provided expression (3) is considered as an asymptotic expansion valid for |Δμ| sufficiently small. As in case (I), the computational usefulness of expression (3) is determined by the rate of decrease of the first terms.

    (III) If f in equations (1) does not admit a continuous first derivative with respect to μ, but is otherwise well behaved, then expression (3) can be considered as a rearrangement of the result of n successive iterations of the form

    (5)

    If the successive iterations yi converge, expansion (III) can be considered as a generalization of expansion (I), and if they do not converge but max |yi - yi-1| passes through a minimum, they can be considered as a generalization of expansion (II).

    (IV) If in the range of interest of μ the solution of equations (1) is of integrable square in μ, then instead of φi(z) = z^i, one may choose as φi(z) any set of orthogonal functions.

    (V) If the initial convergence of expression (3) is too slow, or if for some other reason expansions (I)–(IV) are not convenient, a still more general expansion can be used. For example, for solutions which vary rapidly with μ exponentials or Bernstein polynomials may be found useful.
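    Expansion (I) can be illustrated on an assumed closed-form solution (the exponential example and all names below are illustrative, not from the paper): for y(x; μ) = exp(μx) the Maclaurin coefficients in Δμ are known explicitly, so the truncation error |εn| can be watched as terms are added.

```python
import math

# Expansion (I) sketched on the assumed closed-form solution
# y(x; mu) = exp(mu*x): with phi_i(z) = z**i the coefficients in
# dmu are alpha_i = (x**i / i!) * exp(mu0*x), and the truncation
# error |eps_n| shrinks as successive terms are included.
mu0, x, dmu = 0.4, 2.0, 0.3
exact = math.exp((mu0 + dmu) * x)

errors = []
partial = 0.0
for i in range(6):
    alpha_i = (x ** i / math.factorial(i)) * math.exp(mu0 * x)
    partial += alpha_i * dmu ** i
    errors.append(abs(exact - partial))
print(errors)  # the truncation error decreases as terms are added
```

    In this example the series converges for every Δμ (R is infinite), and the first few terms already give most of the accuracy, which is the practically relevant behaviour noted under case (I).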

    Most of the work done at present in the field of sensitivity analysis is based directly or indirectly on expansions (I) and (II). In spite of the fact that expansions (III) and (IV) can be used in the analysis of systems which involve parameter discontinuities, whereas expansions (I) and (II) cannot, except in a piece-by-piece manner, their use has been rather infrequent. Malkin has used expansion (III) in an essentially theoretical approach /1/. Expansion (IV) seems to appear only as a byproduct of the Ritz-Galerkin method /2, 3/. Expansions of type (V) were quite rare in the past /4/, but their potential is now beginning to be exploited /5/.

    The reason for the preference given to expansions (I) and (II) is easy enough to find. In fact, if the formal expansion (3) is substituted into equations (1), and the latter are rearranged so that the functions φi are made to occur explicitly, simple or multiple products of the φi will generally be encountered. If the resulting system of equations in the dependent variables αi is to be recursive, i.e. if the calculation of each αi is to be possible from the knowledge of the αj, j = 0, 1, …, i − 1, then the φi must satisfy a functional relationship of the form

    φi(z) φj(z) = Σm βm φm(z)     (5a)

    where βm are real constants. The simplest case of relationship (5a) is

    φi(z) φj(z) = φi+j(z)     (5b)

    which is obviously satisfied by the general term of a Maclaurin series

    φi(z) = z^i     (6)

    A more general solution of equation (5b) is given by the exponential

    φi(z) = e^(i ψ(z))     (6a)

    where ψ(z) is an arbitrary real-valued function. Solution (6a) reduces to form (6) when ψ(z) = ln z. As even a very casual examination will show, expansions of type (IV) are basically incompatible with relationship (5a). This fact accounts for the general lack of popularity of orthogonal functions in the theory of nonlinear systems.
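    The claim about the exponential family is easy to check numerically (the particular ψ below is an arbitrary assumption chosen only for the check): φi(z) = e^(i ψ(z)) satisfies relationship (5b), and recovers the Maclaurin term z^i when ψ(z) = ln z.

```python
import math

# Check (with an arbitrarily chosen psi) that phi_i(z) = exp(i*psi(z))
# satisfies the recursion-enabling relationship (5b):
#     phi_i(z) * phi_j(z) = phi_{i+j}(z),
# and that it reduces to the Maclaurin term z**i when psi(z) = ln z.
def psi(z):
    return math.sin(z) + 0.5 * z  # any real-valued function will do

def phi(i, z):
    return math.exp(i * psi(z))

z = 1.7
ok_5b = all(math.isclose(phi(i, z) * phi(j, z), phi(i + j, z))
            for i, j in [(1, 2), (3, 4), (2, 5)])
# with psi(z) = ln z the family is z**i:
ok_maclaurin = math.isclose(math.exp(3 * math.log(z)), z ** 3)
print(ok_5b, ok_maclaurin)
```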

    Some Definitions of Lyapunov’s Theory of Stability

    Lyapunov stability was inspired from mechanics and was defined originally for a system of k second-order ordinary differential equations in the following manner /6/.

    d²qj/dt² = Qj(t, q1, …, qk, dq1/dt, …, dqk/dt), j=1, 2, …, k. Let

    qj = fj(t), j = 1, 2, …, k     (7)

    designate a particular motion of the system. To stress the fact that motion (7) is compared with other possible motions of the system, it is called the unperturbed motion, whereas the other possible motions are called perturbed motions. Let t0 be the initial time, then equation (7) satisfies the initial conditions

    qj(t0) = fj(t0), q′j(t0) = f′j(t0), j = 1, 2, …, k     (8)

    The initial conditions satisfied by the perturbed motions are

    qj(t0) = fj(t0) + εj, q′j(t0) = f′j(t0) + ε′j     (8a)

    where εj, ε′j are some real constants describing the deviations from the unperturbed initial conditions. When these deviations are small, the perturbed and unperturbed motions will be close during some finite time interval (t0, t). However, when t > t0 is made sufficiently large, this closeness may no longer
