Computational Modeling for Fluid Flow and Interfacial Transport

About this ebook

Practical applications and examples highlight this treatment of computational modeling for handling complex flowfields. A reference for researchers and graduate students of many different backgrounds, it also functions as a text for learning essential computation elements.
Drawing upon his own research, the author addresses both macroscopic and microscopic features. He begins his three-part treatment with a survey of the basic concepts of finite difference schemes for solving parabolic, elliptic, and hyperbolic partial differential equations. The second part concerns issues related to computational modeling for fluid flow and transport phenomena. In addition to a focus on pressure-based methods, this section also discusses practical engineering applications. The third and final part explores the transport processes involving interfacial dynamics, particularly those influenced by phase change, gravity, and capillarity. Case studies, employing previously discussed methods, demonstrate the interplay between the fluid and thermal transport at macroscopic scales and their interaction with the interfacial transport.
Language: English
Release date: June 10, 2014
ISBN: 9780486150017

    Book preview

    Computational Modeling for Fluid Flow and Interfacial Transport - Wei Shyy


    PART I

    BASIC CONCEPTS OF FINITE DIFFERENCE METHODS

    This part summarizes the basic concepts of computational methods applicable to a single partial differential equation. It presents the theory and techniques useful for understanding finite difference equations and suitable methods for solving parabolic, elliptic, and hyperbolic equations. Basic error analysis, stability, and consistency properties of different numerical schemes, as well as the physical realizability of the solutions they yield, are emphasized. The emphasis of this part is on developing tools for analyzing and assessing finite difference schemes for different types of equations. This part provides the foundation for the more sophisticated techniques developed in Part II and Part III.

    CHAPTER I

    INTRODUCTION TO FINITE DIFFERENCE METHODS

    Due to the advanced capabilities of modern computers, the numerical solution of differential equations is now commonplace and, in fact, necessary for solving a variety of problems of practical interest that defy analytical treatment. Several techniques for solving differential equations numerically have become popular, such as finite difference methods, finite element methods, spectral methods, and boundary element methods. A wealth of literature is available in the form of books and journals on these methods. In this part we confine ourselves to the essentials of finite difference methods, starting with the basic concepts of finite difference schemes.

    1BASIC CONCEPTS OF FINITE DIFFERENCE SCHEMES

    A differential equation defines the variation of a function, say U, with respect to one or more continuous independent variables. In the following we shall restrict ourselves to two independent variables, namely x and t, where x represents, say, the space coordinate in one dimension and t denotes, say, the time coordinate. While the definitions of the independent variables depend on the physics of the problem encountered, the application of the finite difference methods is not dictated by them.

    1.1Finite Difference Operators

    We discretize the independent variables with uniform spacings, writing x_m = mh and t_n = nk, and denote by u_m^n the discrete approximation to U(x_m, t_n); when only the spatial variation matters we simply write u_m. Elementary difference operators can then be defined with respect to the grid index. For example,

    Δu_m = u_{m+1} − u_m   (forward difference),
    ∇u_m = u_m − u_{m−1}   (backward difference),
    δu_m = u_{m+1/2} − u_{m−1/2}   (central difference).

    1) Note that for the central difference operator given above, u_{m±1/2} need to be further defined since they are not located at the discrete grid points. This can be achieved via appropriate interpolations, e.g., u_{m+1/2} ≈ (u_m + u_{m+1})/2, so that δu_m ≈ (u_{m+1} − u_{m−1})/2.

    Figure 1. Geometric interpretation of the three operators.

    2) U_x|_m can be approximated as Δu_m/h, ∇u_m/h, or δu_m/h (a short numerical check of these approximations is sketched after this list).

    3) Similarly, U_t|_n can be approximated as Δu_n/k, ∇u_n/k, or δu_n/k.
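
    As a quick illustration of these operators, the sketch below (in Python, using an assumed test function U(x) = sin x that is not from the text) evaluates the forward, backward, and central approximations to U_x and shows how their errors shrink as h is reduced.

```python
# A minimal numerical check of the three operators; U(x) = sin(x) is an
# illustrative test function chosen here, not one from the text.
import numpy as np

def forward(U, x, h):   # Delta u_m / h
    return (U(x + h) - U(x)) / h

def backward(U, x, h):  # nabla u_m / h
    return (U(x) - U(x - h)) / h

def central(U, x, h):   # delta u_m / h, with u_{m +/- 1/2} taken at half points
    return (U(x + h / 2) - U(x - h / 2)) / h

x0, exact = 1.0, np.cos(1.0)          # U_x = cos(x)
for h in (0.1, 0.05, 0.025):
    errs = [abs(op(np.sin, x0, h) - exact) for op in (forward, backward, central)]
    print(f"h={h:6.3f}  forward={errs[0]:.2e}  backward={errs[1]:.2e}  central={errs[2]:.2e}")
# The forward/backward errors shrink roughly like h, the central error roughly like h^2.
```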

    In general, terms such as U_x and U_xx can be approximated using any number of the discrete u values at neighboring grid points. The formal order of accuracy (to be defined later) of these numerical approximations generally increases with the number of grid points involved. A natural question then arises: if we desire a highly accurate finite difference approximation to a differential equation, can we simply use more grid points? Involving more and more grid points in the approximation brings several difficulties:

    (i) The effort of obtaining the numerical solution usually increases with the number of grid points which leads to a need to optimize the numerical accuracy on a per (computer) operation count basis.

    (ii) Formulas involving more than two grid points in general need a starting procedure and special treatment near the boundary points.

    (iii) More critically, it often turns out that even in the sense of absolute accuracy, i.e., completely ignoring the consideration of computing cost, a larger number of grid points in the numerical approximation may NOT yield higher accuracy after all! Such situations can appear when the solution does not vary smoothly and modestly with respect to the independent variables. A case in point is a fluid dynamics problem with strong convective effects. In an extreme but frequently encountered case, the solution variable may exhibit a jump in its profile; under such a circumstance, a nominally high-order difference scheme is fundamentally unsuitable since only a weak form of the solution can be produced. By weak form we mean that instead of dealing directly with the equation containing the solution discontinuity, we first integrate the equation to eliminate the jump and then proceed with the integral form of the equation. The point is that in situations where discontinuities exist, since the derivatives at the discontinuous locations are not defined, those high-order schemes based on the notion of containing higher-order derivatives in the truncation error term are no longer effective.

    (iv) Another problem is related to numerical stability, i.e., if an initial value problem is solved by marching in time, then some schemes will simply fail because error grows with time and becomes unbounded, no matter what is done. For some other schemes, success depends on the way one conducts the computation, i.e., the choice of time and space steps.

    For the finite difference methods, there are many choices of schemes, many of them not too difficult to construct. However, one must determine

    (a) whether these schemes can be reduced to the original differential operators as, say, h, k →0,

    (b) whether they are good approximations to the original differential equation in the sense that the numerical solutions remain stable as t → ∞, where t designates either the physical time in initial value problems or the number of steps in iterative methods often used to solve boundary value problems, and

    c) whether they are accurate (with due consideration to computing cost).

    The above considerations lead to the following issues associated with any finite difference scheme:

    A) consistency and convergence, i.e., whether the difference operator reduces to the original differential operator, and whether the numerical solution approaches the exact solution, as h, k → 0,

    B) stability, i.e., whether the numerical solution remains bounded as n → ∞, and

    C) accuracy, i.e., how well u_m^n approximates U(x_m, t_n) as h and k vary.

    These issues are the central themes of the analysis of finite difference schemes and are all interrelated. The accuracy of a scheme is often characterized by the order of accuracy which is formally defined next.

    1.2Order of Accuracy

    The order of accuracy of a scheme is defined as the power of the mesh size (both in space and time) with which the truncation error of the scheme tends to zero. For example, consider the operator Δ applied to approximate U_x. A Taylor series expansion about x_m gives

    Δu_m/h = [U(x_m + h) − U(x_m)]/h = U_x|_m + (h/2) U_xx|_m + O(h²).

    The truncation error of the above scheme goes to zero like the first power of h; therefore, it is a first-order scheme. Similarly, the backward operator gives

    ∇u_m/h = U_x|_m − (h/2) U_xx|_m + O(h²),

    which is also first-order accurate, while the central operator δu_m/h has a leading truncation error proportional to h² and is therefore second-order accurate.

    A pertinent question at this point is what happens to the order of accuracy of the above schemes if the grid spacing h is not constant. This issue will be addressed later.

    As another example, the second derivative U_xx can be approximated by repeated applications of the schemes for the first derivative, as follows:

    Approach (i): apply the central operator δ twice,

    δ(δu_m)/h² = (u_{m+1} − 2u_m + u_{m−1})/h² = U_xx|_m + (h²/12) U_xxxx|_m + O(h⁴),

    where the grid spacing h is constant. Thus it is a second-order scheme.

    Approach (ii): apply the forward and backward operators in succession,

    Δ(∇u_m)/h² = (u_{m+1} − 2u_m + u_{m−1})/h²,

    which yields the same three-point formula. Thus, in this approach, by repeatedly applying two first-order schemes that are of opposite bias, the leading truncation error terms cancel out and the resulting scheme becomes second-order accurate. Again, this is possible here due to the constant grid spacing, h.
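
    A simple way to check the formal order of accuracy numerically is to halve h and observe how fast the error decreases. The sketch below does this for the forward first-difference and the three-point second-difference operators, using U(x) = e^x as an illustrative test function (an assumption, not from the text).

```python
# Estimating the observed order of accuracy by halving h.
import numpy as np

U, x0 = np.exp, 0.5
exact_first, exact_second = np.exp(0.5), np.exp(0.5)   # U_x = U_xx = exp(x)

def err_first(h):   # forward difference for U_x
    return abs((U(x0 + h) - U(x0)) / h - exact_first)

def err_second(h):  # three-point central difference for U_xx
    return abs((U(x0 + h) - 2 * U(x0) + U(x0 - h)) / h**2 - exact_second)

for name, err in (("Delta u/h", err_first), ("delta^2 u/h^2", err_second)):
    e1, e2 = err(0.1), err(0.05)
    print(f"{name:14s} observed order ~ {np.log2(e1 / e2):.2f}")
# Prints roughly 1 for the forward operator and roughly 2 for the second-difference operator.
```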

    It must be emphasized that the order of accuracy referred to here is called the local order of accuracy since it does not consider the propagation and accumulation of errors outside the stencil of the grid points directly utilized by the scheme applied at the grid point x_m. In other words, all the values of the discrete variables at the grid points are assumed to be exact; the only issue dealt with here is how good a job those finite difference operators can do under such a circumstance. As expected, the global order of accuracy is usually lower than the local one, since the inaccuracy of the finite difference operators at the upstream locations will propagate downstream. We will define and discuss the global order of accuracy later.

    So far we have discussed the basic issues related to the finite difference concept and looked at the idea of the local order of accuracy of finite difference schemes. Next we look at the finite difference equations obtained from the differential equations by applying finite difference operators to the derivatives, and show how to solve these difference equations.

    2SOLUTION OF FINITE DIFFERENCE EQUATIONS

    A finite difference equation of the form

    a_0 u_{n+k} + a_1 u_{n+k−1} + ... + a_k u_n = f_n,     a_0 ≠ 0,

    which can also be written in the following recursive form,

    u_{n+k} = −(a_1/a_0) u_{n+k−1} − ... − (a_k/a_0) u_n + f_n/a_0,

    is called a kth-order difference equation. The general solution of this equation contains k arbitrary constants. Hence, its solution is uniquely determined when k initial values, namely u_0, u_1, ..., u_{k−1}, are given, and these grid locations can be arbitrarily chosen. This, as one may recall, is similar to the case of a kth-order differential equation.

    Consider a homogeneous, linear difference equation of kth order with constant coefficients,

    c_0 u_{n+k} + c_1 u_{n+k−1} + ... + c_k u_n = 0.

    Its fundamental solutions are of the form u_n = λ^n, where λ is obtained by solving the characteristic equation

    c_0 λ^k + c_1 λ^{k−1} + ... + c_k = 0.

    The general solution of the difference equation is then given by

    u_n = α_1 λ_1^n + α_2 λ_2^n + ... + α_k λ_k^n,

    where λ_1, λ_2, ..., λ_k are the different roots of the characteristic equation. This is, again, analogous to the differential equation case, where a fundamental solution is e^{λx} instead of λ^n (note that x_n = nh).

    Example 1 Difference equation:

    u_{n+2} − 5 u_{n+1} + 6 u_n = 0,     u_0 = 0,  u_1 = 1.

    The characteristic equation is

    λ² − 5λ + 6 = 0,

    and its roots are λ1 = 3, λ2 = 2. Thus, the general solution of the difference equation is

    u_n = c_1 (3)^n + c_2 (2)^n.

    Applying the initial conditions, we get the following:

    c_1 + c_2 = u_0 = 0,     3 c_1 + 2 c_2 = u_1 = 1.

    Thus the constants are c_1 = 1, c_2 = −1, and the final solution to the difference equation is

    u_n = 3^n − 2^n.
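
    The worked example can be verified directly by iterating the recursion and comparing with the closed-form solution; the initial values u_0 = 0 and u_1 = 1 used below are the ones assumed in the reconstruction of Example 1.

```python
# Checking Example 1 numerically.
u = [0, 1]                                   # assumed initial values u0 = 0, u1 = 1
for n in range(2, 11):
    u.append(5 * u[-1] - 6 * u[-2])          # u_{n+2} = 5 u_{n+1} - 6 u_n
closed = [3**n - 2**n for n in range(11)]    # u_n = 3^n - 2^n
print(u == closed)                           # True: recursion matches the closed form
```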

    A pertinent question at this juncture is the following: what happens when the roots of the characteristic equation are identical? To illustrate this case, consider a second-order equation of the form

    c u_{n+2} + b u_{n+1} + a u_n = 0.     (2.8)

    Let the two roots of the characteristic equation, c λ² + b λ + a = 0, be denoted by λ1 and λ2. Then the solution is

    u_n = α λ1^n + β λ2^n,

    where α, β are determined by any two values of u_n, e.g., u_0 and u_1. Then we have

    α + β = u_0,     α λ1 + β λ2 = u_1.

    The above procedure breaks down if λ1 = λ2. Recall that for a differential equation of the form

    c U'' + b U' + a U = 0,

    with two roots λ1 and λ2 of the characteristic equation, the general solution is given as follows:

    U(x) = α e^{λ1 x} + β e^{λ2 x}   if λ1 ≠ λ2,     U(x) = (α + β x) e^{λ1 x}   if λ1 = λ2.

    Here, in the case of λ1 = λ2 for the difference equation, we write the first solution as

    (u_n)_1 = λ1^n,

    and the second solution as

    (u_n)_2 = y_n λ1^n,

    where y_n is yet to be determined. Substituting (u_n)_2 into the original difference equation and dividing through by λ1^n, we get

    c λ1² y_{n+2} + b λ1 y_{n+1} + a y_n = 0.     (2.10)

    Also, from the characteristic equation,

    a/c = λ1²,

    and b/c = −2λ1. Thus Eq. (2.10) becomes

    λ1² (y_{n+2} − 2 y_{n+1} + y_n) = 0,

    i.e., any arbitrary arithmetic progression is a solution for y_n, e.g., y_n = n (or y_n = n + 1, etc.). Hence the general solution is

    u_n = (α + β n) λ1^n.
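
    The repeated-root result can also be checked numerically: for any coefficients satisfying b/c = −2λ1 and a/c = λ1², the ansatz u_n = (α + βn)λ1^n makes the residual of the difference equation vanish to round-off. The numbers in the sketch below are purely illustrative.

```python
# Verifying u_n = (alpha + beta*n) * lam**n for c*u_{n+2} + b*u_{n+1} + a*u_n = 0
# when the characteristic equation has the double root lam (illustrative values).
lam, alpha, beta = 0.5, 2.0, -3.0
c = 1.0
b = -2 * lam * c          # repeated-root condition b/c = -2*lam
a = lam**2 * c            # repeated-root condition a/c = lam^2

u = lambda n: (alpha + beta * n) * lam**n
residual = max(abs(c * u(n + 2) + b * u(n + 1) + a * u(n)) for n in range(20))
print(residual)           # ~1e-16: the ansatz satisfies the difference equation
```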

    As already discussed, the first derivative U_x can be approximated by many different finite difference operators, each involving a different number of grid points. Hence the order of the finite difference equation may be greater than the order of the differential equation it approximates. Consequently, the requirement of the initial conditions and the boundary conditions for the difference equation may not be compatible with that of the differential equation. This means that some artificial measures must be taken, since there is not enough information available to obtain the general solution to the difference equation. Usually, one may need to resort to a lower-order scheme to start a computation, then use this lower-order solution to supplement the need for initial conditions for higher-order schemes.

    3ORDER OF ACCURACY: GLOBAL AND LOCAL

    In this section, we will consider a simple model problem to illustrate the concept of the order of accuracy of a difference scheme in both a local and a global sense. Consider the following differential equation along with an initial condition:

    U'(x) = −A U(x),     U(0) = 1,     (3.1)

    where A is a positive constant of order 1. The exact solution is given by

    U(x) = e^{−Ax}.     (3.2)

    Now, if we plug the finite difference operators discussed earlier into the above Equation (3.1), then we can obtain the exact solution of the difference equation as well.

    3.1Scheme I : Δ–operator

    For the Δ–operator, we have the following:

    Δu_n/h = (u_{n+1} − u_n)/h = −A u_n,   i.e.,   u_{n+1} = (1 − Ah) u_n.     (3.3)

    Solution:

    u_n = (1 − Ah)^n u_0 = (1 − Ah)^n.

    Hence

    u(x_n) = (1 − Ah)^{x_n/h},   since x_n = nh.

    The question is now to estimate the size of the error associated with this operator. We define the error as follows:

    E(x_n) = U(x_n) − u(x_n).

    Thus, for the Δ–operator, the error is

    E(x_n) = e^{−Ax_n} − (1 − Ah)^{x_n/h}.     (3.7)

    The main interest here is not only the value of the error E(x_n) but also the rate at which E(x_n) decreases as h decreases. Note that

    (1 − Ah)^{x_n/h} = exp[(x_n/h) ln(1 − Ah)].

    Also note that

    ln(1 − Ah) = −Ah − (Ah)²/2 − (Ah)³/3 − ...

    Thus Eq. (3.7) becomes

    E(x_n) = e^{−Ax_n} [1 − exp(−A²h x_n/2 + O(h²))] ≈ e^{−Ax_n} (A² x_n/2) h,

    i.e., the normalized error becomes

    E_norm(x_n) = E(x_n)/U(x_n) ≈ (A² x_n/2) h.     (3.8)

    Here Enorm(xn) tends to zero as h tends to zero; furthermore, its magnitude is of the order of the first power of h. Thus, Scheme I is first–order accurate globally. This error is global because we only use the initial condition and want to know Enorm(xn) in terms of its behavior at grid points many h distance away from the boundary.
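
    A short numerical experiment confirms this first-order global behavior of Scheme I: solving U′ = −AU to x = 1 with successively halved step sizes shows the normalized error dropping roughly in proportion to h. The value A = 1 below is an illustrative choice.

```python
# Global error of Scheme I (forward Delta operator) for U' = -A*U, U(0) = 1,
# measured at x = 1; A = 1 is an illustrative choice.
import numpy as np

A, x_end = 1.0, 1.0

def solve(h):
    n = int(round(x_end / h))
    u = 1.0
    for _ in range(n):
        u = (1.0 - A * h) * u          # u_{n+1} = (1 - A h) u_n
    return u

exact = np.exp(-A * x_end)
errs = [abs(solve(h) - exact) / exact for h in (0.1, 0.05, 0.025)]
print([f"{e:.3e}" for e in errs])
print("observed global order ~", np.log2(errs[0] / errs[1]))   # roughly 1
```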

    Now we come back to the issue of the local accuracy of Scheme I. Assume u(x_n) = U(x_n), i.e., u(x_n) is known exactly, and we want to know u(x_n + h) and compare it with U(x_n + h). Using a Taylor series expansion, we get

    U(x_n + h) = U(x_n) + h U'(x_n) + (h²/2) U''(x_n) + ... = u(x_n) [1 − Ah + (Ah)²/2 − ...].     (3.9a)

    Using the finite difference formula (3.3), we have

    u(x_n + h) = (1 − Ah) u(x_n).     (3.9b)

    Comparing (3.9a) and (3.9b) and using the facts that

    u(x_n) = U(x_n),   U' = −AU,   U'' = A²U,

    the first two terms in (3.9a) and (3.9b) cancel out and we get

    U(x_n + h) − u(x_n + h) = (A²h²/2) u(x_n) + O(h³),     (3.10)

    i.e., Scheme I is second–order accurate locally.

    Since the local error is really the error with xn only a distance h away from the boundary, i.e, n=1, the error obtained for the global behavior can be used here also, i.e,

    Recall Eq. (3.8). Here, x_n = nh. For estimating the local error, we have n = 1. Thus,

    E_norm(x_1) ≈ (A²/2) h x_1 = (A²/2) h²,

    which is consistent with the local estimate (3.10).

    In general we have the following relation between the local and global orders of accuracy:

    global order of accuracy = local order of accuracy − 1,

    provided that the starting procedure, if needed, does not introduce substantially more error into the solution. We will address this point next.

    3.2Scheme II : δ–operator

    Using the δ–operator, the finite difference scheme can be written as

    (u_{n+1} − u_{n−1})/(2h) = −A u_n,   i.e.,   u_{n+1} = u_{n−1} − 2Ah u_n.     (3.11)

    This is a second-order difference equation and hence needs two initial conditions. One of them is available from the problem statement, Eq. (3.1), namely u(0) = 1, i.e., u_0 = 1. The second one is yet to be determined and we will come to that issue in a moment. For now, we simply assume that both u_0 and u_1 are already known.

    The general solution of Eq. (3.11) is

    u_n = c_1 λ1^n + c_2 λ2^n,     (3.12)

    where

    λ1 = −Ah + sqrt(1 + A²h²)

    and

    λ2 = −Ah − sqrt(1 + A²h²).

    Note that

    sqrt(1 + A²h²) = 1 + (A²h²)/2 + O(h⁴).

    Thus

    λ1 = 1 − Ah + (A²h²)/2 + O(h³) ≈ e^{−Ah},   so that   λ1^n ≈ e^{−Ax_n}.     (3.13)

    Similarly

    λ2 = −[1 + Ah + (A²h²)/2 + O(h³)] ≈ −e^{Ah},   so that   λ2^n ≈ (−1)^n e^{Ax_n}.

    Hence, we have

    u_n ≈ c_1 e^{−Ax_n} + c_2 (−1)^n e^{Ax_n}.     (3.14)

    From the above, the following can be noted:

    1. Out of the two terms on the right-hand side of the finite difference solution (3.14), the first term tends to the analytical solution if

    c_1 → 1   as h → 0.     (3.15)

    2. However, the second term has the factor (−1)^n, indicating that it approaches different values depending on whether n is even or odd. Thus, the only acceptable result is that

    c_2 → 0   as h → 0.     (3.16)

    Furthermore, we require that exp(Ax_n) remain bounded, i.e., that the range of x be finite; otherwise, the second term will grow exponentially fast and become unbounded even if (3.16) holds, because the term in (3.16) approaches zero at the rate h^p (where p is the order of accuracy of the scheme) whereas the exp(Ax_n) term grows exponentially with x.

    3.3Boundary Treatment

    One unresolved problem is that u1 is not specified; we need to obtain it in some way. Here, we examine two possibilities:

    3.3.1Possibility (i): Taylor Series Method

    One way to obtain u_1 is from a truncated Taylor series: u_1 = u_0 + h U_x(0) + O(h²) = (1 − Ah) u_0 = 1 − Ah. This actually amounts to using the ∇–operator, which is globally a first-order operator. Thus, although we use a first-order operator to obtain u_1, it does not degrade the order of the overall global accuracy, as seen from above. The reason lies in the fact that we apply the boundary treatment only at the point x_1; hence, the local accuracy of the boundary scheme, which is second order, controls the performance of the overall scheme. Now, we need to check whether the two conditions, (3.15) and (3.16), are satisfied by this method. We see that, applying u_0 = 1 and u_1 = 1 − Ah to Eq. (3.12),

    c_1 = (u_1 − λ2 u_0)/(λ1 − λ2) = 1 + O(h²)   and   c_2 = (λ1 u_0 − u_1)/(λ1 − λ2) = O(h²).

    Thus, both the requirements given by (3.15) and (3.16) are satisfied. Finally, according to Eq. (3.14) we have

    E_norm(x_n) = O(h²).

    Thus, it is a second–order accurate solution in the global sense.

    Next, we examine the local error of this approach. Substituting n = 1 in Eq. (3.13), we get the error associated with each root:

    λ1 − e^{−Ah} = O(h³).

    Similarly

    λ2 − (−e^{Ah}) = O(h³).

    Thus, the local order of accuracy contributed by λ1 and λ2 is 3. However, since u_1, which appears in the coefficients c_1 and c_2, is only second-order accurate, the overall local accuracy is reduced to 2. Again, the global order of accuracy remains as 2.

    3.3.2Possibility (ii): Simple Extrapolation

    Another possibility is to assign u_1 = u_0 = 1. This is locally a first-order scheme since we have

    u_1 − U(x_1) = 1 − e^{−Ah} = Ah + O(h²) = O(h).

    We again check for conditions (3.15) and (3.16):

    c_1 = 1 + O(h)   and   c_2 = (λ1 u_0 − u_1)/(λ1 − λ2) = O(h),

    so both conditions are still satisfied, but c_2 tends to zero only at the rate of the first power of h. Consequently, we have

    E_norm(x_n) = O(h),

    and hence this scheme is first-order accurate globally.

    Conclusion: From the above two possibilities of the boundary treatment, we can see that the boundary treatment can be one order lower in the global sense compared to the interior treatment without degrading the order of overall global accuracy. However, if the boundary condition is two or more orders lower in accuracy, it degrades the overall global accuracy. Some related material on the issue of numerical boundary treatment can be found in a paper by Blottner (1982).
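
    The conclusion above can be demonstrated numerically. The sketch below runs the δ-operator (Scheme II) with the two boundary treatments discussed and estimates the observed global order from two step sizes; the Taylor-series start retains second-order global accuracy, while the simple extrapolation u1 = u0 degrades it to first order. The choice A = 1 and the interval 0 ≤ x ≤ 1 are assumptions for illustration.

```python
# Effect of the starting value u1 on the global accuracy of Scheme II (leapfrog)
# for U' = -A*U, U(0) = 1 on 0 <= x <= 1; A = 1 is an illustrative choice.
import numpy as np

A, x_end = 1.0, 1.0

def leapfrog(h, u1):
    n = int(round(x_end / h))
    u_prev, u_curr = 1.0, u1
    for _ in range(n - 1):
        u_prev, u_curr = u_curr, u_prev - 2 * A * h * u_curr   # u_{n+1} = u_{n-1} - 2Ah u_n
    return u_curr

exact = np.exp(-A * x_end)
for label, start in (("Taylor start u1 = 1 - Ah", lambda h: 1 - A * h),
                     ("extrapolation u1 = u0  ", lambda h: 1.0)):
    errs = [abs(leapfrog(h, start(h)) - exact) for h in (0.02, 0.01)]
    print(label, "observed global order ~", round(float(np.log2(errs[0] / errs[1])), 2))
# Roughly 2 for the Taylor start and roughly 1 for the simple extrapolation.
```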

    4STABILITY OF DIFFERENCE SCHEMES

    A perfectly reasonable finite difference scheme approximating the differential equation can result in a numerical solution (or exact discrete solution) that does not converge to the correct analytical solution as h → 0. The reason is that the error can accumulate at a rate faster than tolerable. Thus, E_norm(x_n) can remain O(1) or higher despite h being small, because as the size of h decreases, the total number of grid points, n, needed to reach the same value of x_n increases.

    4.1Illustration of the Instability of a Finite Difference Scheme

    Suppose that in order to compute the solution of U′(x) = f(x, U), we use a 4-point (3-backward, 1-forward) operator to update the value u(x_{n+1}) based on u_n, u_{n−1}, u_{n−2} and f at x_n, with the coefficients chosen to achieve a high (third) order of local accuracy. The operator can be constructed by matching Taylor series expansions at the four points involved.

    This gives a difference equation of the form

    u_{n+1} = a u_n + b u_{n−1} + c u_{n−2} + β h f(x_n, u_n),     (4.4)

    where the coefficients a, b, c, and β follow from the Taylor-series matching, and the resulting scheme is formally third-order accurate.

    The question now is: how good is this scheme? To see this, we use a trial case, say f ≡ 0, and look at the following two cases:

    (A) Assume that u0, u1 and u2 are all known exactly, i.e. u0=u1=u2=0. Thus, the true solution should be un=0. Substituting in Eq. (4.4), we see that this is indeed the case. Hence, mathematically, the scheme is quite reasonable.

    (B) Now, if we introduce a round-off error into the computation to check the sensitivity of u_n with respect to the round-off error, e.g., u0 = 0, u1 = 0, u2 = ε (a small number), then it turns out that the computed u_n, instead of remaining zero, grows in magnitude roughly like ε(−2.59)^n.     (4.5)

    Notice that this error grows extraordinarily fast, and it has nothing to do with the size of h, since h does not appear in Eq. (4.5) if f ≡ 0. In fact, the smaller the size of h, the larger the number of grid points, and consequently the error grows to higher levels. In short, the above scheme would only give unreasonable results.

    The obvious question is: how does this happen? To answer this, let us look at the characteristic equation of Eq. (4.4) with f ≡ 0, obtained by substituting u_n = λ^n into the difference equation.

    The roots of this characteristic equation are λ1 = 1 (the physical root) and two spurious roots λ2 and λ3, the larger of which is approximately −2.59.

    Hence the general solution is

    u_n = α1 λ1^n + α2 λ2^n + α3 λ3^n.

    (i) If u0=0, u1=0, u2=0, then α1 = 0, α2 = 0, α3 = 0. Then un = 0, which is the correct solution for this case.

    (ii) If u0 = 0, u1 = 0, u2 = ε, then α1 ≅ ε, α2 ≅ ε, α3 ≅ ε. Now the solution becomes

    u_n ≅ ε (λ1^n + λ2^n + λ3^n),

    which is dominated by the spurious root of largest magnitude.

    The error E_norm(x_n) grows at the rate of ε(−2.59)^{x_n/h}. If we let h → 0, then the error grows exponentially. Hence, this scheme, while mathematically very reasonable, is computationally very unsuitable because it is very prone to any small disturbance caused by an inexact prescription of the local values of u at any grid point. That means, for example, during the starting procedure required by the scheme, any small errors will eventually propagate and grow with the grid index to an unacceptably high level. This issue is at the heart of the stability analysis of finite difference schemes.
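
    The sketch below reproduces this kind of round-off amplification. The coefficients used are a reconstruction obtained by requiring third-order accuracy of a 4-point (3-backward, 1-forward) formula on a uniform grid; they are not necessarily the exact coefficients used in the text (the spurious root here is about −2.7, close to the −2.59 quoted above).

```python
# Round-off sensitivity of an assumed 4-point, third-order scheme for u' = f.
# With f == 0 and exact data the solution stays zero; a tiny error in u2 is
# amplified geometrically by the spurious root (about -2.7 for this sketch).
f = 0.0
u = [0.0, 0.0, 1e-12]        # u0 = u1 = 0, u2 = a tiny round-off error
for n in range(2, 60):
    u.append((-3 * u[n] + 6 * u[n - 1] - u[n - 2]) / 2.0)   # + 3*h*f, which is zero here
print(u[-1])                 # enormous: the perturbation has grown without bound
```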

    4.2Stability

    In many physical problems, linear as well as nonlinear, we have the case of exponentially decaying solutions. In such situations, it is desirable to have a super-stable numerical method which yields a numerical solution guaranteed to converge to zero, independent of the mesh size h. Consider the following model problem:

    U′(x) = −A U(x),     U(0) = 1.     (4.9)

    We seek a solution of this model problem using a numerical scheme by computing an approximate solution u_n at x_n = nh. We desire the numerical method to be such that

    u_n → 0   as n → ∞,   whenever Re(A) > 0.

    We will look at some of the alternatives to solve the above model problem. Before we do that, we define A-stable methods.

    Definition. A–Stable Method: A numerical method, used to solve Eq. (4.9), is said to be A–stable if its region of absolute stability contains the whole of the right–hand half plane, Re(Ah) > 0, i.e., the whole region in which the exact solution decays with x.

    4.3Forward Difference (Explicit Euler Method)

    In this finite difference method, we use

    U_x|_n ≈ Δu_n/h = (u_{n+1} − u_n)/h.

    Thus the model problem can be written as

    (u_{n+1} − u_n)/h = −A u_n,   i.e.,   u_{n+1} = (1 − Ah) u_n,   so that   u_n = (1 − Ah)^n.

    Figure 2. Stability region of the explicit method for the model problem, Eq. (4.9).

    Thus, this method is good when h → 0, since (1 − Ah)^{x_n/h} → e^{−Ax_n}. From the above, it can be seen that the Euler method is stable when

    |1 − Ah| ≤ 1.     (4.13)

    The reason for the above is that the solution is amplified by the factor (1 − Ah) at each step, and hence if Eq. (4.13) holds, then the errors arising from the grid points other than the one under consideration do not become unbounded as n → ∞. On the other hand, if |1 − Ah| > 1, then the errors grow exponentially. Fig. 2 illustrates the above statements.

    At this point, it is appropriate to address the issue of computing the stability region for a given differential equation. The procedure to compute the stability region is as follows: write down the finite difference equation corresponding to the ordinary differential equation given by Eq. (4.9) and solve the associated characteristic equation P(λ) = 0 to obtain the roots λ. The finite difference scheme is stable if all the roots satisfy the inequality |λ| < 1.
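
    The procedure just described is easy to automate: form the characteristic polynomial P(λ) of the scheme applied to the model problem and inspect the moduli of its roots. The sketch below does this for the explicit Euler and the central (leapfrog) schemes at an illustrative value of Ah.

```python
# Checking stability via the roots of the characteristic polynomial for U' = -A*U.
import numpy as np

def is_stable(poly_coeffs):
    """poly_coeffs: coefficients of P(lambda), highest power first."""
    roots = np.roots(poly_coeffs)
    return bool(np.all(np.abs(roots) <= 1.0 + 1e-12)), roots

Ah = 0.5
# Forward Euler: lambda - (1 - Ah) = 0
print(is_stable([1.0, -(1.0 - Ah)]))
# Central (leapfrog): lambda^2 + 2*Ah*lambda - 1 = 0 -> one root is always outside the unit circle
print(is_stable([1.0, 2.0 * Ah, -1.0]))
```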

    For the differential Equation (4.9), any finite difference scheme must satisfy the following two requirements:

    1) If Re(A) < 0, then the unstable modes, if they contribute significantly to the composite solution, must be accurately represented by the numerical solution, regardless of their intrinsic time scales.

    2) The value of Ah must lie within the region of absolute stability when Re(A) > 0.

    4.4Backward Difference (Implicit Euler Method)

    In this method, we use

    U_x|_{n+1} ≈ ∇u_{n+1}/h = (u_{n+1} − u_n)/h,

    with the right-hand side evaluated at the new level. Now the model problem can be written as

    (u_{n+1} − u_n)/h = −A u_{n+1},   i.e.,   u_{n+1} = u_n/(1 + Ah),   so that   u_n = (1 + Ah)^{−n}.

    Figure 3. Stability region of the implicit method for the model problem, Eq. (4.9).

    Now the stability requirement becomes

    |1/(1 + Ah)| ≤ 1,

    or

    |1 + Ah| ≥ 1.

    Fig. 3 illustrates the above condition.

    It should be noted at this point that this method is actually too stable in the sense that it will yield a decaying (stable) numerical solution even if Re(A) < 0, i.e., even if the exact solution grows exponentially with x. While the present model problem is too trivial to distinguish the computing cost per nodal point between the explicit and the implicit scheme, in general, as will become clear later, the implicit scheme has higher computing cost but does have much broader choice of the mesh size to remain stable.

    4.5Central Difference

    For this scheme, we have the following:

    (u_{n+1} − u_{n−1})/(2h) = −A u_n,   i.e.,   u_{n+1} = u_{n−1} − 2Ah u_n.

    As we have computed earlier, the two roots of the characteristic equation associated with the above finite difference equation, for real values of A, are

    λ1 = −Ah + sqrt(1 + A²h²)   and   λ2 = −Ah − sqrt(1 + A²h²).

    For the above two roots, we have

    |λ1 λ2| = 1   and   |λ2| = Ah + sqrt(1 + A²h²) > 1   for any Ah > 0.     (4.19)

    Hence this finite difference scheme will always be unstable for any h. Note that we have analyzed the above scheme in a previous section, namely Scheme II in Section 3.2, where we concluded that this scheme is satisfactory in terms of accuracy (second-order) and starting procedure. In that example, we had a finite range of x, and hence the above scheme behaved reasonably. If these restrictions are removed, then, as indicated by Eq. (4.19), the numerical solution grows and becomes unbounded.

    4.6Trapezoid Difference

    For this scheme, we have

    (u_{n+1} − u_n)/h = −A (u_{n+1} + u_n)/2,   i.e.,   u_{n+1} = [(1 − Ah/2)/(1 + Ah/2)] u_n.

    The stability requirement for the above scheme is

    |(1 − Ah/2)/(1 + Ah/2)| ≤ 1.

    The above condition will always be met as long as Ah > 0, and hence this scheme is A-stable, as shown in Fig. 4.

    Figure 4. Stability region of the trapezoidal method for the model problem, Eq. (4.9).

    Although the trapezoid scheme is stable, it does not guarantee accurate results. For example, if Ah > 2, the numerical solution profiles exhibit oscillations, and if Ah ≫ 1, then we get un+1 ≅ – un, which leads to an odd–even fluctuation in the solution profile.
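
    The contrast between the three schemes can be seen by applying each amplification factor to the model problem with a deliberately large Ah (the values below are illustrative): the explicit Euler solution blows up, the implicit Euler solution decays monotonically, and the trapezoid solution decays but with the odd-even oscillation noted above.

```python
# Behaviour of the three schemes on U' = -A*U, U(0) = 1, for a large Ah.
A, h, nsteps = 50.0, 0.1, 10          # Ah = 5 (illustrative)
amp = {
    "explicit Euler": 1.0 - A * h,                            # |1 - Ah| = 4 > 1: blows up
    "implicit Euler": 1.0 / (1.0 + A * h),                    # always decays
    "trapezoid":      (1.0 - A * h / 2) / (1.0 + A * h / 2),  # stable but negative: odd-even oscillation
}
for name, g in amp.items():
    u = [1.0]
    for _ in range(nsteps):
        u.append(g * u[-1])
    print(f"{name:15s} u after {nsteps} steps: {u[-1]: .3e}")
```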

    4.7Some Relevant Facts

    1) No explicit scheme, single step or multistep, can be A–stable.

    2) For the general linear implicit multistep method given by

    Σ_{j=0}^{k} α_j u_{n+j} = h Σ_{j=0}^{k} β_j f_{n+j},

    no A-stable method can be of order greater than two.

    3) The second-order A-stable implicit linear multistep method with the smallest error constant is the trapezoidal rule.

    For details on the above facts, the interested reader is referred to Dahlquist (1963).

    4.8The Issue of Accuracy Versus Stability

    Consider the Euler method used to approximate the model equation (4.9), U′ = −AU with U(0) = 1,

    whose exact solution is given by

    U(x) = e^{−Ax}.

    The solution of the finite difference equation associated with the forward Euler method is

    u_n = (1 − Ah)^n.

    Note that

    (1 − Ah)^n ≈ e^{−Ax_n}   only if   Ah ≪ 1.

    Thus, for the forward Euler method, the accuracy requirement also goes hand in hand with the stability requirement given by Eq. (4.13). On the other hand, when u(x) tends to zero, i.e., for x ≫ 1, accuracy is no longer a problem. However, since the forward Euler method still possesses the same stability restriction, it is very inefficient in such a situation. The issue of stability versus accuracy can be illustrated by the following example.

    Example 2 Consider the model equation (4.9) with a large decay rate, say A = 100, so that the forward Euler stability bound |1 − Ah| ≤ 1 requires h < 1/50.

    The solution over the whole domain of x can easily be divided into two regions, as shown in Fig. 5. In region (i) the solution varies rapidly and demands good accuracy from the numerical scheme being used to simulate it; we do not have to worry about stability in this region of the solution. On the other hand, in region (ii) the solution varies by a very insignificant amount; however, we still need to satisfy the same stability criterion, which for this case is h < 1/50 and is thus very stringent and inefficient. This is a typical example of the so-called stiff ordinary differential equation, where the calculation in region (i) is limited by the accuracy requirement whereas that in region (ii) is limited by the stability requirement. In a practical sense, one can view stiff equations as equations for which certain implicit methods can perform (much) better than explicit ones.

    Figure 5. Solution of Example 2.

    4.9Systems of Equations

    Consider a system of ordinary differential equations

    dU/dx = [A] U,

    with N unknowns, i.e.,

    U = (U_1, U_2, ..., U_N)^T,

    where [A] is an N×N coefficient matrix. If λ_i denote the eigenvalues of the matrix [A] and φ_i its eigenvectors, then we have

    [A] φ_i = λ_i φ_i,   i = 1, 2, ..., N.

    Usually, for an N×N matrix [A], there are N linearly independent eigenvectors, and a matrix composed of these N eigenvectors can be written as

    [Φ] = [φ_1  φ_2  ...  φ_N],

    and we have

    [A][Φ] = [Φ][Λ],   i.e.,   [Φ]^{−1}[A][Φ] = [Λ],

    where [Λ] is the diagonal matrix composed of eigenvalues:

    [Λ] = diag(λ_1, λ_2, ..., λ_N).

    Thus we have

    dU/dx = [Φ][Λ][Φ]^{−1} U.

    If we define

    V = [Φ]^{−1} U,

    then we have

    dV/dx = [Λ] V,

    and hence

    dV_i/dx = λ_i V_i,   i = 1, 2, ..., N,

    which is an uncoupled set of first-order ordinary differential equations; thus all the knowledge about single first-order ordinary differential equations can be directly applied to the above general system of ordinary differential equations. The solution to the above is given by

    V_i = c_i e^{λ_i x},

    where the c_i are constants to be determined by the initial condition. Hence, we have

    U = [Φ] V = Σ_{i=1}^{N} c_i φ_i e^{λ_i x}.

    It should be noted that the 1/λ_i indicate the characteristic scales that control the solution profile for the system of equations.

    Example 3 Consider a system of the above form whose two eigenvalues, λ1 and λ2, differ by three orders of magnitude. Here, initially, λ2 is the controlling factor for maintaining accuracy, i.e., we must satisfy the requirement that h < 2/λ2. But then λ1 becomes dominant and, for maintaining accuracy, we must satisfy the condition h < 1/λ1. However, with regard to stability, λ2 is always the controlling factor since it is the largest eigenvalue of the system, and hence we always have h < 1/λ2 (which is quite a stringent condition for the present example).
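
    The following sketch illustrates the stability restriction imposed by the largest eigenvalue of a stiff system, using an already-diagonalized system with illustrative eigenvalues −1 and −1000 (these are not the eigenvalues of Example 3).

```python
# Explicit Euler on a stiff, diagonal linear system dU/dx = [A] U
# with illustrative eigenvalues -1 and -1000.
import numpy as np

lam = np.array([-1.0, -1000.0])
A = np.diag(lam)                       # already in decoupled (diagonal) form
U0 = np.array([1.0, 1.0])

def explicit_euler(h, n):
    U = U0.copy()
    for _ in range(n):
        U = U + h * A @ U
    return U

# Stability is limited by the fastest mode: |1 + lam_max*h| <= 1, i.e. h < 2/1000.
print(explicit_euler(0.0019, 1000))    # bounded
print(explicit_euler(0.0021, 1000))    # slightly too large a step: the fast mode blows up
```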

    Example 4 Local Analysis of a Nonlinear Problem:

    This system is nonlinear, and hence, in order to find out the stability bound of a numerical scheme, we need to first linearize the system locally and then conduct the necessary analysis (see, e.g., Boyce and DiPrima 1986). The equations can be linearized by finding the Jacobian of the system, i.e., the matrix of partial derivatives ∂f_i/∂y_j of the right-hand sides with respect to the unknowns.

    Now if we are interested in the neighborhood of the point (y1, y2, y3) = (1, 3.65 × 10⁻⁵, 0), we evaluate the Jacobian at that point.

    Its eigenvalues are λ1 = 0, λ2 = –0.405, λ3 = –2189.6. So, again, there is a wide disparity among the eigenvalues, causing difficulty in computing the solution numerically.
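
    The excerpt does not reproduce the equations of Example 4; the sketch below assumes the classic Robertson chemical-kinetics system, which yields exactly the eigenvalues quoted above when its Jacobian is evaluated at the given point.

```python
# Local linearization of an assumed Robertson-type kinetics system.
import numpy as np

def jacobian(y1, y2, y3):
    # Assumed right-hand sides (Robertson problem):
    #   dy1/dx = -0.04*y1 + 1e4*y2*y3
    #   dy2/dx =  0.04*y1 - 1e4*y2*y3 - 3e7*y2**2
    #   dy3/dx =  3e7*y2**2
    return np.array([
        [-0.04,  1e4 * y3,             1e4 * y2],
        [ 0.04, -1e4 * y3 - 6e7 * y2, -1e4 * y2],
        [ 0.0,   6e7 * y2,             0.0     ],
    ])

print(np.linalg.eigvals(jacobian(1.0, 3.65e-5, 0.0)))
# approximately [0, -0.405, -2189.6]: eigenvalues spanning several orders of magnitude
```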

    It can be seen from above that stiffness corresponds to the situation in which eigenvalues of the system differ from each other by orders of magnitude, thus leading to drastically different time or length scales in the problem. Examples of such stiff problems are some situations in chemical kinetics where the rate constants of chemical reaction of, say, two different species differ by six or seven orders of magnitude. For such systems, A–stable numerical methods are very desirable.

    5DISCRETIZATION OF PDEs AND ERROR NORMS

    5.1Discretization of PDEs: an Example

    By methodically applying the techniques described above, finite difference schemes can be constructed to approximate partial differential equations. We illustrate this with an example.

    Example 5 Consider the following PDE:

    U_t = U_xx.     (5.1)

    In order to approximate the above PDE with a finite difference equation, we can choose from many different combinations of treatments for the x- and t-derivative terms. In this example, we choose forward differencing (Δ-operator) for the time derivative and central differencing (δ-operator) for the spatial derivative. If we define Δt = k, Δx = h, t_n = nk and x_m = mh, then Eq. (5.1) can be approximated as

    (u_m^{n+1} − u_m^n)/k = (u_{m+1}^n − 2 u_m^n + u_{m−1}^n)/h²,     (5.2)

    which can be written as

    u_m^{n+1} = u_m^n + r (u_{m+1}^n − 2 u_m^n + u_{m−1}^n),     (5.3)

    where r = k/h². Now, we wish to assess the local order of accuracy of Eq. (5.3), for which we resort to a Taylor series expansion about the point (x_m, t_n). The local error can be obtained by representing all the dependent variables in a given finite difference equation in terms of the variable and its derivatives at the reference location (x_m, t_n). Thus, we can write

    U_m^{n+1} = U_m^n + k U_t + (k²/2) U_tt + ...,
    U_{m±1}^n = U_m^n ± h U_x + (h²/2) U_xx ± (h³/6) U_xxx + (h⁴/24) U_xxxx ∓ ...,     (5.4)

    where all derivatives are evaluated at (x_m, t_n). Substituting the above in Eq. (5.3) after replacing u with U, we get

    U_m^{n+1} − U_m^n − r (U_{m+1}^n − 2 U_m^n + U_{m−1}^n) = k (U_t − U_xx) + (k²/2) U_tt − (k h²/12) U_xxxx + ...     (5.5)

    The right-hand side of Eq. (5.5) divided by k is the local truncation error of Eq. (5.3), which is

    (k/2) U_tt − (h²/12) U_xxxx + O(k², h⁴),     (5.6)

    since U_t = U_xx for the exact solution.
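
    A minimal implementation of the scheme (5.3) is sketched below for U_t = U_xx on 0 ≤ x ≤ 1 with homogeneous Dirichlet boundary conditions; the initial profile, the grid, and the choice r = 0.4 (within the well-known stability restriction r ≤ 1/2 for this scheme) are illustrative assumptions.

```python
# Forward-time, central-space (FTCS) march for U_t = U_xx with U = 0 at both ends.
import numpy as np

M = 21                              # number of grid points
h = 1.0 / (M - 1)
r = 0.4                             # r = k/h^2, kept below the stability limit 1/2
k = r * h**2
x = np.linspace(0.0, 1.0, M)
u = np.sin(np.pi * x)               # illustrative initial condition

for _ in range(200):                # march in time with Eq. (5.3)
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

t = 200 * k
exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))    # small discretization error, consistent with (5.6)
```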

    5.2Local Error and its Relationship to Global Error

    From the above it can be seen that the local error is the error introduced by the difference equation used to approximate the differential equation, assuming that all the information at t = nk in Eq. (5.5) is known exactly. Another interpretation of the local error is that it is the error produced when the exact analytical solution of the differential equation at all the nodes is introduced into the difference equation.

    The global error is defined as

    E_m^n = U(x_m, t_n) − u_m^n,

    and the local error is related to the global error as follows: the global error at any point accumulates the local errors introduced at all the preceding steps, as propagated by the scheme. In other words, the local error is the error introduced by the difference equation at the point m without accounting for the errors generated at the points before m, hence the name local error.

    5.3Error Norms

    Error norms are a means to represent the global error. We wish to associate with the global error one single positive number which will be a definite indicator of the magnitude of the global error for a particular finite difference scheme. If we define an error vector as

    E = (e_1, e_2, ..., e_N)^T,

    where

    e_m = U(x_m) − u_m

    at the N grid points, then a norm ||E|| assigns a single non-negative number to this vector. If this number (norm) is small, then the numerical approximation is good; if it is large, then the approximation is bad.

    A norm ||E|| must satisfy the following axioms:

    (i) ||E|| ≥ 0, with ||E|| = 0 if and only if E = 0,
    (ii) ||cE|| = |c| ||E|| for any scalar c,
    (iii) ||E_1 + E_2|| ≤ ||E_1|| + ||E_2||.

    The three most commonly used norms are as follows:

    (1) 1-norm: it is the sum of the absolute values of all the components of the error vector:

    ||E||_1 = Σ_{m=1}^{N} |e_m|.

    (2) 2-norm: it is the square root of the sum of the squares of all the components of the error vector:

    ||E||_2 = [Σ_{m=1}^{N} e_m²]^{1/2}.

    (3) ∞-norm: it is the maximum of the absolute values of the components of the error vector:

    ||E||_∞ = max_m |e_m|.
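
    For a concrete error vector (illustrative numbers), the three norms are computed as follows.

```python
# The three norms of an error vector.
import numpy as np

E = np.array([0.01, -0.03, 0.02, 0.005])
print(np.sum(np.abs(E)))          # 1-norm
print(np.sqrt(np.sum(E**2)))      # 2-norm
print(np.max(np.abs(E)))          # infinity-norm
```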

    6FURTHER READING

    In the following, we cite some general references which offer much more detailed accounts of the various aspects of the finite difference methods. In the area of fundamentals of numerical analysis, books by Conte and de Boor (1980), Dahlquist and Bjorck (1974), Golub and van Loan (1989), Isaacson and Keller (1966), Schwarz (1989), Stoer and Bulirsch (1980), and Young and Gregory (1988) are useful. Numerical methods for ordinary differential equations have been discussed in Boyce and DiPrima (1986), Gear (1971), Hairer and Wanner (1991), Henrici (1962), and Lambert (1973). As to the solution techniques for partial differential equations, books by Ames (1992), Forsythe and Wasow (1960), Godunov and Ryabenkii (1987), Greenspan and Casulli (1988), Lapidus and Pinder (1982), Mitchell and Griffiths (1980), Richtmyer and Morton (1967), and Smith (1985) are good sources of information. Regarding the methods for solving large systems of linear equations, one should consult Birkhoff and Lynch (1984), Briggs (1987), Hackbusch (1985), Hageman and Young (1981), Varga (1962), Wachspress (1966), and Young (1971).

    CHAPTER II

    PARABOLIC EQUATIONS

    1BACKGROUND ON PARTIAL DIFFERENTIAL EQUATIONS

    Any differential equation containing partial derivatives is called a partial differential equation. The order of a partial differential equation is equal (by analogy with the theory of ODEs) to the order of the highest partial derivative occurring in it. The dependent variable in any partial differential equation must be a function of at least two independent variables, and in general may be a function of n (≥ 2) independent variables.

    1.1Three Operators and Classes of Equations

    The following three operators usually serve as the bases for a study of PDEs: the Laplace operator, U_xx + U_yy; the heat (diffusion) operator, U_t − U_xx; and the wave operator, U_tt − U_xx.

    These three operators typify the three general classes of partial differential operators that one encounters. These are, in the same order as the operators listed above:

    (a) elliptic operators which are encountered, for example, in potential flow problems in fluid mechanics.

    (b) parabolic operators arising in heat conduction and other diffusion dominated situations in various fields of the physical sciences.

    (c) hyperbolic operators of which the most popular physical application is in the phenomenon of wave transmission.

    1.2Classification of Equations : Second–Order PDEs

    The general form of a second-order partial differential equation is

    A U_xx + 2B U_xy + C U_yy + D U_x + E U_y + F U + G = 0,

    where the coefficients may depend on x and y. Let d = B² − AC be called the discriminant. The classification of the partial differential equations is based on the value assumed by this quantity. Specifically, we have

    d < 0:  elliptic,     d = 0:  parabolic,     d > 0:  hyperbolic.

    The discriminant involves coefficients of the second–order derivatives only. Thus the lower order terms are simply ignored in classifying the differential equations into the categories above.

    An elliptic equation of the general form shown above may be transformed by a change of variables from (x, y) to (ξ, η) to yield the Laplacian form, whereupon the resulting equation is called the canonical form of the elliptic equation. The canonical form is written as

    U_ξξ + U_ηη = H(ξ, η, U, U_ξ, U_η).

    Conversion of a parabolic equation to the canonical form results in the expression

    U_ξξ = H(ξ, η, U, U_ξ, U_η).

    A hyperbolic equation, when written in the canonical form, reads

    U_ξξ − U_ηη = H(ξ, η, U, U_ξ, U_η),

    or

    U_ξη = H(ξ, η, U, U_ξ, U_η).

    Thus the Laplacian, heat and wave operators, respectively, are the canonical forms for the three types of PDE’s classified earlier.

    If an operator has non–constant coefficients, the classification of the nature of the PDE is only local, i.e., it may change as one moves from one part of the domain to another.

    Example 1 Consider an equation governing steady, inviscid compressible flow over a body, whose coefficients, and hence its type, depend on the local Mach number M.

    The difference in type can be quite crucial, and the physical as well as numerical aspects relating to the system under investigation may markedly change from one region to another. A striking example is flow over an airfoil at high subsonic velocities. The different regions over the airfoil surface will experience different physical phenomena based on the local Mach number. The classification of the differential equations governing flow in these regions is as follows:

    M < 1 (subsonic):  elliptic,     M = 1 (sonic):  parabolic,     M > 1 (supersonic):  hyperbolic.

    The distinct regions are depicted in Fig. 1.

    Figure 1. Change of type of equation for Example 1.

    Example 2 As another example, an equation in which the coefficient of one of the second-derivative terms changes sign across the domain is (i) elliptic in the region where that coefficient is positive, (ii) parabolic along the lines where it vanishes, and (iii) hyperbolic in the region where it is negative.

    2ANALYTICAL BACKGROUND FOR PARABOLIC PDES

    The typical equation used as a model to design solution procedures for parabolic equations in general is the heat equation, which in one-dimensional form is

    U_t = U_xx.

    Various properties of such an equation are illustrated in Fig. 2. We now provide some analytical background for methods of solution of parabolic PDEs.

    2.1Fourier Analysis

    The standard method of separation of variables yields the following form for the solution of the heat equation:

    U(x, t) = Σ_ω c_ω e^{τt} e^{iωx},

    where each term represents a Fourier component of wavenumber ω. Substituting in the heat equation then gives the relationship τ = (iω)² = −ω². Hence, we get

    U(x, t) = Σ_ω c_ω e^{−ω²t} e^{iωx},

    so that the high-frequency (large ω) components decay most rapidly. Therefore, after some time, only low-frequency components are left, resulting in the smearing of sharp profiles as shown in Fig. 2. Based on this property, we know that a solution of the parabolic equation cannot generate sharp profiles in the interior of the domain; it can only smooth out whatever sharp variations are present initially.
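
    The rapid damping of high-frequency components can be quantified directly from the factor e^{−ω²t}; the two wavenumbers below are illustrative.

```python
# Decay of individual Fourier modes of U_t = U_xx: each mode exp(i*omega*x) is
# damped by exp(-omega**2 * t), so high-frequency content disappears first.
import numpy as np

t = 0.1
for omega in (2.0, 20.0):
    print(f"omega = {omega:5.1f}  amplitude factor exp(-omega^2 t) = {np.exp(-omega**2 * t):.3e}")
# omega = 2 -> ~6.7e-01 ; omega = 20 -> ~4e-18 : sharp features are smeared out rapidly.
```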
