Optimal Control in Bioprocesses: Pontryagin's Maximum Principle in Practice
Ebook, 410 pages (3 hours)


About this ebook

Optimal control is a branch of applied mathematics that engineers need in order to optimize the operation of systems and production processes. Its application to concrete examples is often considered to be difficult because it requires a large investment to master its subtleties.

The purpose of Optimal Control in Bioprocesses is to provide a pedagogical perspective on the foundations of the theory and to support the reader in its application, first by using academic examples and then by using concrete examples in biotechnology. The book is thus divided into two parts, the first of which outlines the essential definitions and concepts necessary for the understanding of Pontryagin’s maximum principle – or PMP – while the second exposes applications specific to the world of bioprocesses.

This book is unique in that it focuses on the arguments and geometric interpretations of the trajectories provided by the application of PMP.

Language: English
Publisher: Wiley
Release date: Mar 1, 2019
ISBN: 9781119597230

    Book preview

    Optimal Control in Bioprocesses - Jérôme Harmand

    Introduction

    Applying optimal control theory to concrete examples is often considered a difficult task, as mastering the nuances of this theory requires considerable investment. In the literature in this field, there are many books that discuss optimal control theory (e.g. [LEE 67, VIN 00]) illustrated using examples (e.g. [BRY 75] or [TRÉ 05]), and books dedicated to families of applied problems (e.g. [LIM 13]). The objective of the current book is to present a pedagogic view of the fundamental tenets of this theory, a little in the style of Liberzon (see [LIB 12]), and to guide the reader in the application of the theory, first using academic examples (the swing problem, a driver in a hurry – also known as the double integrator – or the moon landing problem), and then moving on to concrete examples in biotechnology, which form the central part of the book. Special emphasis is placed on the geometric arguments and interpretations of the trajectories given by Pontryagin’s maximum principle (PMP).

    While this book studies optimal control, it is not, strictly speaking, a book on optimal control. It is, first and foremost, an introduction – and only an introduction – to PMP, which is one of the tools used in optimal control theory. Optimal control aims to determine a control signal (or action signal) that minimizes (or maximizes) an integral performance criterion involving the state of a dynamic system (with constraints if required), over a fixed time period or with a free terminal time. In many situations, when PMP is applied, we can comprehensively characterize the properties of this control, understand all the nuances of its synthesis and even obtain the value of the control to be applied at any point as a function of the system state.

    At a time when a basic computer makes it possible to envisage the use of optimization techniques that are said to be direct¹ for a large number of problems encountered in engineering, it is reasonable to wonder about the benefits to be gained by turning to a method that yields analytical optimal solutions. On the one hand, to do so would be to forget that using a numerical optimization procedure requires taking into account the specific initial conditions of the dynamic system under consideration, which limits how generic the computed control can be. On the other hand, when an optimal control is available analytically, it makes it possible to compute the minimal (or maximal) value of the optimization criterion, which is not possible with a numerical approach (except in some very particular cases). In this way, and independently of the practical constraints that may lead a user to apply a control that deviates, however minimally, from the theoretical optimal control, we have a means of quantifying the distance between the theoretically optimal trajectories and those observed in reality, based on experiments carried out on the real system.

    Over the past few years, the control of bioprocesses has seen a startling growth; this is, notably, due to the extraordinary development in the world of sensors. Until quite recently, only physical quantities (such as temperature, pressure or flow rates) could be precisely measured. Today, however, it is possible to take online measurements of system variables that can be called functional, such as the substrate concentration or the concentration of bacteria in the reaction environment. Consequently, many technologists will state that control in biological systems – which often consists of keeping certain values constant – no longer poses a major problem. However, in our opinion, this view tends to forget that control theory not only seeks to stabilize a system and reject disturbances, but also tries to calculate the set-point trajectory. In other words, it attempts to establish around what state the system must be operated, both in terms of optimality, as well as to effectively control it so that the values of the variables of interest can, as far as possible, stay close to this set-point.

    The title of the first part of the book, Learning to use Pontryagin’s Maximum Principle, indicates that it offers an approach based on learning how to use the principle (rather than focusing on a theoretical discussion of fundamental results, which are usually rather difficult to access in the existing literature). In Chapter 1, we revisit concepts as basic as the minimization of a function, which, by extension, leads to the minimization of a functional through the calculus of variations. After having presented the limitations that relate specifically to the function classes to which the control must belong, Chapter 2 presents the terminology used in optimal control and PMP. Chapter 3 presents several academic applications and problems that highlight some nuances of PMP, especially the importance that must be accorded to questions of the regularity of the control.

    The second part of the book, Applications in Process Engineering, comprises three distinct chapters that focus on problems specific to process engineering and biotechnology. In Chapter 4, we describe a problem of the optimal startup of a biological reactor. We will see that in order to maximize the performance of the bioreactor (that is, to minimize the time in which a resource – here, a pollutant – reaches a given threshold), the control is highly dependent on the type of growth function under consideration.

    In Chapter 5, we go on to examine the optimization of biogas production. More specifically, we propose – given particular initial conditions of the two-dimensional system – a solution to the problem of maximizing biogas production within a given time range. We show that the constraints on the control (typically its minimum and maximum acceptable limits) greatly constrain the proposed solution.

    Finally, in Chapter 6, we will discuss the optimization of a membrane filtration system. These systems are being used more and more in biotechnology. Filtering through a membrane consists of maintaining a pressure difference, called the transmembrane pressure (TMP) across a membrane immersed in a fluid. The force created results in the movement of fluid from the side where pressure is greater to the side where pressure is lower. As this happens, elements in the fluid that are larger than the pore size are retained by this membrane, allowing these elements to be filtered out of the fluid. Over time, these elements will then clog up the membrane. At this point, we must either increase the TMP to maintain a constant flow across the membrane or accept that there will be a continuous decrease in the flow across the membrane until such time as all pores are clogged. To limit this phenomenon, we can regularly unclog the membrane, for example using a backwash fluid. If the membrane performance is defined as the quantity of fluid filtered over time, the question may arise as to which backwash strategy would be most appropriate in order to maximize the quantity of fluid filtered over a given time period. In practice, this is the same as determining at what time instants, and how often the backwash must be applied, keeping in mind that clear water is used during the backwash; the volume of this water will then be subtracted from the performance criterion. We thus find ourselves faced with an inevitable compromise: unclogging is essential to keep the membrane as clean as possible, but this must be carried out with the lowest possible frequency so that the filtration performance is not affected. If there is no model of the installation, we have little choice but to proceed through trial and error. 
    We choose a grid of time instants and fix the duration of the washes; the backwashes are then carried out while proceeding with experiments, keeping in mind that the initial state of the membrane may play a large role here.
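    The trial-and-error grid search described above can be sketched numerically. The fouling model below (a flux that decays with the deposited mass, and a backwash that resets the deposit at a fixed clear-water cost) is an illustrative assumption, not the model studied in Chapter 6:

```python
import numpy as np

def net_filtered_volume(period, dt=0.01, horizon=10.0,
                        backwash_duration=0.1, backwash_cost=0.5):
    """Net volume of fluid filtered over the horizon for a given
    backwash period (toy fouling model, not the book's)."""
    t, deposit, volume = 0.0, 0.0, 0.0
    next_wash = period
    while t < horizon:
        if t >= next_wash:
            deposit = 0.0                # the backwash unclogs the membrane...
            volume -= backwash_cost      # ...at the price of clear water
            t += backwash_duration       # no filtration during the wash
            next_wash += period
            continue
        flux = 1.0 / (1.0 + deposit)     # flux drops as the cake builds up
        volume += flux * dt
        deposit += flux * dt             # deposit grows with the filtered flow
        t += dt
    return volume

# trial and error over a grid of backwash periods, as described above
periods = np.linspace(0.5, 5.0, 10)
volumes = [net_filtered_volume(p) for p in periods]
best = periods[int(np.argmax(volumes))]
print(f"best period on the grid: {best:.2f}")
```

    Even on this toy model, the compromise appears: washing too often wastes clear water, washing too rarely lets the flux collapse.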

    If we are able to obtain a model of the filtration membrane, we can then ask whether control tools may be used. It is important to note here that this type of model is generally nonlinear. With a direct approach we may – depending on the given initial conditions – obtain a specific filtration/backwash sequence to be applied in order to maximize the system’s performance. But how can we know whether the solution provided by the algorithm we are using is a global one? As the model is not linear, it is certainly possible that another policy would yield identical, or even better, performance. In this book, we will see that characterizing the optimal control using PMP makes it possible to completely resolve this same problem, even if applying the optimal solution may pose practical problems that must then be resolved. In effect, while the real solution requires the calculation of the time instants when the backwash must be applied, applying PMP requires the controls to belong to much larger sets in order to guarantee the existence of this optimal control. In reality, these controls may take values that make no physical sense. However, this is not the point to focus on here, as, in practice, several strategies may allow us to find approximations for these values (see, for instance, the theory proposed in [KAL 17]). The essential result that must be kept in mind is that the precise values of the control to be applied can only be found by using PMP.

    1 By this we mean a purely numerical approach, as distinguished from indirect optimization approaches in which we first write the optimality conditions, which are then solved analytically and/or numerically.

    PART 1

    Learning to use Pontryagin’s Maximum Principle

    1

    The Classical Calculus of Variations

    1.1. Introduction: notations

    The theory of optimal control has its roots in the calculus of variations.

    The calculus of variations is an ancient branch of mathematics and one of its earliest famous problems was that of Queen Dido. According to legend, in the 9th Century BC, Queen Dido marked the borders of the town of Carthage using the skin of a single bull. She is said to have cut this skin into fine strips to form a very long leather cord with which she formed a semicircle, the two ends of which touched the coast.

    Since the 18th Century, using the calculus of variations and the Euler–Lagrange equations, it has been known that if the coast is straight, then the semicircle with a fixed length is indeed the curve that maximizes an integral criterion (the area bordered by the curve and the side that is assumed to be straight, in the Dido problem), with an integral constraint (the length of the curve, in the Dido problem) and whose ends belong to a given set (the coast, assumed to be straight, in the Dido problem) or are fixed.
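    This result can be checked numerically, at least within the restricted family of circular arcs (a sketch under that assumption, not a proof of the general variational result): a cord of length L whose endpoints lie on a straight coast, bent into an arc subtending an angle alpha, encloses the area A(alpha) = (L²/2)(alpha − sin alpha)/alpha² between the arc and the coast.

```python
import numpy as np

# Among circular arcs of length L whose two endpoints lie on a straight
# coast, an arc subtending an angle alpha encloses the area
#     A(alpha) = (L**2 / 2) * (alpha - sin(alpha)) / alpha**2
# between the arc and the coast.  Maximizing numerically over alpha:
L = 1.0
alphas = np.linspace(0.1, 2 * np.pi, 100_000)
areas = (L**2 / 2) * (alphas - np.sin(alphas)) / alphas**2
alpha_star = alphas[np.argmax(areas)]
print(alpha_star)  # close to pi: the optimal arc is the semicircle
```

    The maximum is attained at alpha = π, i.e. the semicircle, in agreement with Queen Dido's solution.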

    The term calculus of variations refers to a mathematical technique that considers local modifications of a solution curve and characterizes its (local) optimality if every admissible variation (i.e. such that the perturbed solution passes through the same points A and B and satisfies the same integral constraints, if any) yields a criterion value that is no better. This is not exactly a control problem, but this very technique of variations is used to obtain the necessary conditions of optimality in the theory of optimal control, and would go on to give rise to the maximum principle, stated in the 1950s.

    In the 19th Century, Hamilton laid the foundations of mechanics through which it was possible to calculate the trajectories, over time, of mechanical systems with n degrees of freedom. This was based on the principle of least action, which is similar to a problem of the calculus of variations and which was reflected in the formalism that Hamilton introduced, i.e. the minimization of the Hamiltonian. This approach proved to be very powerful as it made it possible to characterize the movements of mechanical systems using a system of 2n first-order differential equations; the Hamiltonian was interpreted as the total energy of the system.

    The Hamiltonian considered in the maximum principle may be seen as a generalization of the Hamiltonian for mechanics for more general dynamic systems, which do not necessarily belong to mechanics and for which we also seek to characterize, over time, the trajectories that would minimize a criterion.
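    Hamilton's formalism can be illustrated on the simplest mechanical system. The sketch below integrates Hamilton's equations for a harmonic oscillator with H(q, p) = (p² + q²)/2 (unit mass and stiffness, an illustrative choice) and checks that the total energy is nearly conserved:

```python
# Hamilton's equations for the harmonic oscillator with Hamiltonian
# H(q, p) = (p**2 + q**2) / 2 (unit mass and stiffness):
#     dq/dt =  dH/dp = p,        dp/dt = -dH/dq = -q
# i.e. 2n first-order equations for n = 1 degree of freedom.
def step(q, p, dt):
    # symplectic Euler: update p first, then q, so that the
    # discrete flow nearly conserves the energy H
    p = p - q * dt
    q = q + p * dt
    return q, p

q, p, dt = 1.0, 0.0, 1e-3
H0 = 0.5 * (p**2 + q**2)
for _ in range(10_000):              # integrate over t in [0, 10]
    q, p = step(q, p, dt)
H = 0.5 * (p**2 + q**2)
print(abs(H - H0))                   # the energy drift stays small
```

    The symplectic integrator is chosen precisely because it respects the Hamiltonian structure; a naive explicit Euler scheme would let the energy grow steadily.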

    The aim of this chapter is to show how optimal control problems and the optimality conditions of the maximum principle follow from the calculus of variations (especially the Euler–Lagrange equation, which is also a necessary condition for optimality), and in what way they are more general and, therefore, more difficult.

    In order to do this, we will begin by going back to the well-known concept of the derivative of a function, to review in detail how the existence of a minimum can be related to the nullity of partial derivatives when the minimum is in the interior of the domain of interest, and what happens to this condition when the minimum is attained at a boundary point. We will then see that the classic Euler–Lagrange conditions may be seen as the necessary conditions for optimality when the optimum is attained at a point in the interior of the domain of interest, while the maximum principle (which will be briefly explained) corresponds to the case where the optimum is attained at a boundary point of the domain. We will use the usual writing conventions. The reader can refer to section A1.1 (Appendix 1) for more information on these.

    1.2. Minimizing a function

    1.2.1. Minimum of a function of one variable

    Let f : ℝ → ℝ be a differentiable function whose minimum is attained at a point x0. We then know that the derivative of f is null at this point:

    f′(x0) = 0 [1.1]

    where f′(x0) denotes the derivative of the function f, evaluated at the point x0. Let us review the demonstration of this fact. According to the definition of the derivative at x0, we have:

    f′(x0) = lim h→0 [f(x0 + h) − f(x0)]/h [1.2]

    Let us write:

    ε(h) = [f(x0 + h) − f(x0)]/h − f′(x0) [1.3]

    Upon multiplying by h, we arrive at the relationship:

    f(x0 + h) − f(x0) = h f′(x0) + h ε(h) [1.4]

    The left-hand side is either a positive value or 0, as x0 realizes the minimum of f, and thus:

    f′(x0) + ε(h) ≥ 0 for h > 0, f′(x0) + ε(h) ≤ 0 for h < 0.

    Consequently, since ε(h) tends to 0, f′(x0) = 0.

    In this demonstration, we must recall that it is essential that h can take positive or negative values. This is clearly possible as the function is defined over all of ℝ. This would still be possible even if f were only defined on an open interval.

    Let us assume that f is still defined and differentiable over ℝ, but that we are only interested in a closed interval [a, b]. We assume that:

    f(x0) = min {f(x) : x ∈ [a, b]} [1.5]

    The condition then becomes:

    f′(x0) ≥ 0 if x0 = a, f′(x0) ≤ 0 if x0 = b, f′(x0) = 0 if a < x0 < b [1.6]

    In effect, at the limits of the interval the increment h can no longer take values of both signs. Thus, there remains only a sign condition on the derivative (see Figure 1.1). Therefore, the necessary condition for optimality takes a different form depending on whether the minimum is attained at a point in the interior or on the boundary of the domain of definition.

    Figure 1.1. Minimum of a function. For a color version of this figure, see www.iste.co.uk/harmand/bioprocesses.zip

    One final remark: what we just saw remains true if x0 is a local minimum instead of a global minimum.
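    These necessary conditions are easy to check with finite differences; the quadratic function below is an arbitrary illustrative choice:

```python
def fprime(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: (x - 2.0) ** 2          # minimum of f at x0 = 2

# interior minimum of f over [0, 5]: the derivative is null there
assert abs(fprime(f, 2.0)) < 1e-6

# over [3, 5] the minimum is attained at the boundary point a = 3,
# where only the sign condition f'(a) >= 0 remains
print(fprime(f, 3.0))  # ≈ 2.0: positive, but not null
```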

    1.2.2. Minimum of a function of two variables

    Let D be a domain of ℝ² and (x, y) ↦ f(x, y) be a function of class C¹ over D. Let:

    ∇f(x, y) = (∂f/∂x (x, y), ∂f/∂y (x, y))

    be its gradient vector. The solutions of the differential equation:

    (ẋ(t), ẏ(t)) = −∇f(x(t), y(t)) [1.7]

    are the curves of steepest descent of f. A contour line of f:

    t ↦ (ξ(t), η(t))

    is a curve such that f(ξ(t), η(t)) is constant, and thus has a null derivative. Differentiating, we arrive at:

    ∂f/∂x (ξ(t), η(t)) ξ′(t) + ∂f/∂y (ξ(t), η(t)) η′(t) = 0 [1.8]

    As the vector:

    (ξ′(t), η′(t))

    is the vector tangent to the contour line, we can read from [1.8] that the level curves are orthogonal to the gradient vectors and, thus, to the curves of steepest descent.
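    This orthogonality is easy to verify numerically; the function f(x, y) = x² + y² below is an arbitrary illustrative choice, whose contour f = 1 is the unit circle:

```python
import numpy as np

# For f(x, y) = x**2 + y**2 the contour f = 1 is the unit circle,
# parametrized by (xi(t), eta(t)) = (cos t, sin t).
def grad_f(x, y):
    return np.array([2.0 * x, 2.0 * y])

t = 0.7                                        # an arbitrary point on the contour
point = np.array([np.cos(t), np.sin(t)])
tangent = np.array([-np.sin(t), np.cos(t)])    # (xi'(t), eta'(t))
dot = float(grad_f(*point) @ tangent)
print(abs(dot))  # 0: gradient and contour tangent are orthogonal
```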

    An IGN map represents the level curves, or contour lines, of the altitude function. In Figure 1.2, we have represented, on a portion of the map at 1:25,000, four lines of greatest slope. We see that, apart from the point marked 1397, which is a local peak, at all other points the gradient is non-null and, locally, the contour lines appear as a network of lines that are
