Communication Systems Principles Using MATLAB

About this ebook

Discover the basic telecommunications systems principles in an accessible learn-by-doing format

Communication Systems Principles Using MATLAB covers a variety of systems principles in telecommunications in an accessible format without the need to master a large body of theory. The text puts the focus on topics such as radio and wireless modulation, reception and transmission, wired networks and fiber optic communications. The book also explores packet networks and TCP/IP as well as digital source and channel coding, and the fundamentals of data encryption.

Since MATLAB® is widely used by telecommunications engineers, it was chosen as the vehicle to demonstrate many of the basic ideas, with code examples presented in every chapter. The text addresses digital communications with coverage of packet-switched networks. Many fundamental concepts such as routing via shortest-path are introduced with simple and concrete examples. The treatment of advanced telecommunications topics extends to OFDM for wireless modulation, and public-key exchange algorithms for data encryption. Throughout the book, the author puts the emphasis on understanding rather than memorization. The text also:

  • Includes many useful take-home skills that can be honed while studying each aspect of telecommunications
  • Offers a coding and experimentation approach with many real-world examples provided
  • Gives information on the underlying theory in order to better understand conceptual developments
  • Suggests a valuable learn-by-doing approach to the topic

Written for students of telecommunications engineering, Communication Systems Principles Using MATLAB® is the hands-on resource for mastering the basic concepts of telecommunications in a learn-by-doing format.

Language: English
Publisher: Wiley
Release date: Jul 31, 2018
ISBN: 9781119470755

    Book preview

    Communication Systems Principles Using MATLAB - John W. Leis

    Preface

    History has probably never witnessed such a dramatic rise in technical sophistication, coupled with blanket penetration into everyday life, as has occurred in recent times with telecommunications. The combination of electronic systems, together with readily available programmable devices, provides endless possibilities for interconnecting what were previously separated and isolated means of communicating, both across the street and across the globe.

    How, then, is the college‐ or university‐level student to come to grips with all this sophistication in just a few semesters of study? Human learning has not changed substantially, but the means to acquire knowledge and shape understanding certainly have, chief among them the ability to experiment, craft, code, and create systems of our own making. This book recognizes that a valuable approach is that of learn‐by‐doing: experimenting, making mistakes, and altering our mental models as a result. Whilst there are many excellent reference texts on the subject available, they can be opaque and impenetrable to the newcomer.

    This book is not designed to simply offer a recipe for each current and emerging technology. Rather, the underpinning theories and ideas are explained in order to motivate the "why does it work in this way?" questions rather than "how does technology X work?".

    With these observations as a background, this book was designed to cover many fundamental topics in telecommunications but without the need to master a large body of theory whose relevance may not immediately be apparent. It is suitable for several one‐semester courses focusing on one or more topics in radio and wireless modulation, reception and transmission, wired networks, and fiber‐optic communications. This is then extended to packet networks and TCP/IP and then to digital source and channel coding and the basics of data encryption. The emphasis is on understanding, rather than regurgitating facts. Digital communications is addressed with the coverage of packet‐switched networks, with many fundamental concepts such as routing via shortest path introduced with simple, concrete, and intuitive examples. The treatment of advanced telecommunication topics extends to OFDM for wireless modulation and public‐key exchange algorithms for data encryption.

    The reader is urged to try the examples as they are given. MATLAB® was chosen as the vehicle for demonstrating many of the basic ideas, with code examples in every chapter as an integral part of the text, rather than an afterthought. Since MATLAB® is widely used by telecommunication engineers, many useful take‐home skills may be developed in parallel with the study of each aspect of telecommunications.

    In addition to the coding and experimentation approach, many real‐world examples are given where appropriate. Underpinning theory is given where necessary, and a Useful Preliminaries section at the start of each chapter serves to remind students of useful background theory, which may be required in order to understand the theoretical and conceptual developments presented within the chapter.

    Although an enormous effort, writing this book over several years and developing the "learn‐by‐doing" concept has been an ongoing source of satisfaction, in a field that presents so many challenges in formulating lucid explanations. I hope that you will find it equally stimulating to your own endeavors and that it helps you to understand the power and potential of modern communication systems.

    I will consider that my aims have been achieved if reading and studying the book is not a chore to you, but, rather, a source of motivation and inspiration to learn more.

    John W. Leis

    Acknowledgments

    As with any work of this magnitude, a great many people helped contribute, directly and indirectly, along the journey. Some may not even have realized it.

    I wish to thank Professor Derek Wilson for his unwavering enthusiasm for any project I put forward. Many discussions formed the core approaches used in this book, which were a little nebulous to start with, but became the self‐directed learn‐by‐doing style, which I hope will aid many others.

    I thank Professor Athanassios (Thanos) Skodras for his kind and positive comments at critical points in the manuscript preparation, and my friend and colleague from afar, Professor Tadeusz Wysocki, for his interest in all things telecommunications and his support over the years. His unassuming style belies his knowledge and achievements.

    To my earlier mentors, including Bruce Varnes, whose technical competency was, and remains, a source of inspiration.

    To my students in signal processing, communications, and control courses, who have often provided the critical insight as to why a thing ought to be explained in a particular way: you have helped far more than you imagined. Your names are too numerous to mention, but your critical insight and questions helped sharpen my focus and writing.

    I am grateful to Brett Kurzman, Editor, Professional Learning at Wiley, who helped to bring the manuscript to the light of day ahead of time; he had a genuine interest in the project and never failed to give assistance when needed.

    Finally, to those who indirectly shaped the work you see here, my parents Harold and Rosalie, nothing was ever too much for them to sacrifice, and they instilled the desire to learn and achieve as much as your talents will permit. It does not matter which side of the street you come from, it's what you do and how you treat others that matter. Dedication and hard work overcome any obstacles, real or imagined.

    John W. Leis

    December 2017

    Introduction

    Telecommunications encompasses a vast range of subdisciplines, and any treatment must strike a balance between the breadth of treatment and depth in specific areas. This book aims to give an overview with sufficient technical detail to enable coverage from the physical layer (how the electrical or wireless or optical signal is encoded) through to the representation of real‐world information (images, sounds) and then to the movement of that data from one point to another and finally how to encode information and ensure its secure transmission.

    Apart from the first chapter, most chapters may be studied as stand‐alone entities or chosen for specific courses. Each chapter includes a Useful Preliminaries section at the start, which reviews some important concepts that may have been studied previously, and places them in the context of telecommunications.

    Chapter 1, Signals and Systems, introduces and reviews some basic ideas about signals that convey information. The emphasis is on operations that can be performed on signals, which are important to create telecommunication subsystems such as modulators. The idea of block diagrams, and synthesizing complex systems from simpler functional blocks, is also introduced.

    Chapter 2, Wired, Wireless, and Optical Systems, covers the means of physical transmission of telecommunication signals – either through wired systems such as copper or coaxial cable, wireless or radio systems, or fiber optics. Each is treated separately, with common threads such as signal attenuation covered for all. The emphasis is on understanding the ideas behind each method and their shortcomings in terms of cost, complexity, interference, transmission, and throughput. The section on radio transmission covers transmission and reception, antennas, and related issues such as propagation and diffraction. Visualizing the propagation of a radio signal is shown through MATLAB® code, which students can try for themselves.

    Chapter 3, Modulation and Demodulation, explains how a signal is encoded or modulated. It starts from very basic signal types such as Amplitude Modulation (AM) and proceeds to develop the theory for other types of modulation, toward newer techniques such as Orthogonal Frequency Division Multiplexing (OFDM), and the concept of spread spectrum. Digital line codes are also covered. Synchronization is also introduced in this chapter, including the phase‐locked loop and the Costas loop. The notion of IQ modulation and demodulation is explained, as it underpins so much digital modulation theory. MATLAB® examples are employed throughout, including the use of the Fourier transform in OFDM to cater for advanced‐level students.

    Chapter 4, Internet Protocols and Packet Delivery Algorithms, builds upon the assumption that the physical signal is sent and received, but that a useful system needs higher‐level functionality, which is provided by packet‐switched networks. Some of the important principles of the Internet are covered, including packet routing, TCP/IP, congestion control, error checking, and routing of packets from source to destination. Algorithms for packet routing and shortest‐path determination are explained, with MATLAB® examples using object‐oriented principles employed to elucidate the concepts.

    Chapter 5, Quantization and Coding, moves to more advanced treatment of signal representation. The idea of quantization (both scalar and vector) is explained, as well as the theory of entropy and data encoding. Lossless codes are explained using object‐oriented structures in MATLAB® to illustrate the design of Huffman coding trees. Algorithms for digital encoding that are explained include the Discrete Cosine Transform (DCT) for image encoding and the Linear Predictive Coding (LPC) approach to speech encoding.

    Chapter 6, Data Transmission and Integrity, extends the previous chapter to address the important topic of data integrity, encryption, and security. Classic algorithms for error checking such as the checksum and cyclic redundancy check (CRC) for error detection are introduced, as well as the Hamming code for error correction. For data security, the key‐distribution and public‐key approaches are explained with numerical examples. The mathematics behind encryption is explained, and its computational limitations are investigated using code examples. Once again, reference is made to MATLAB® examples where appropriate.

    It is one thing to read about a topic, but quite another to really understand it. For this reason, end‐of‐chapter problems for each chapter serve to reinforce the concepts covered. They variously require explanations demonstrating understanding, algebraic derivations, or code writing. A solutions manual is available to instructors, which includes fully worked solutions together with MATLAB® code solutions where requested. Additionally, both lecture presentations and MATLAB® code from the text are available to instructors.

    To gain maximum benefit from this book, it is recommended that the examples using MATLAB® be studied as they are presented. MATLAB® is a registered trademark of The MathWorks, Inc.

    For MATLAB® product information, please contact:

    The MathWorks, Inc.

    3 Apple Hill Drive

    Natick, MA, 01760‐2098 USA

    Tel: 508‐647‐7000

    Fax: 508‐647‐7101

    E-mail: info@mathworks.com

    Web: mathworks.com

    How to buy: www.mathworks.com/store

    Although several additional toolboxes are available for separate purchase, only the core MATLAB® product is required for the examples in this book. All code examples in the book were developed and tested using MATLAB® version R2017a/R2017b.

    About the Companion Website

    This book is accompanied by a companion website:


    www.wiley.com/go/Leis/communications‐principles‐using‐matlab

    The BCS instructor website contains:

      • Teaching slides for instructors
      • Solutions for the problems given in the chapters
      • MATLAB code from the text

    1

    Signals and Systems

    1.1 Chapter Objectives

    On completion of this chapter, the reader should:

    Be able to apply mathematical principles to waveforms.

    Be conversant with some important terms and definitions used in telecommunications, such as root‐mean‐square for voltage measurements and decibels for power.

    Understand the relationship between the time‐ and frequency‐domain descriptions of a signal and have a basic understanding of the operation of frequency‐selective filters.

    Be able to name several common building blocks for creating more complex systems.

    Understand the reasons why impedances need to be matched, to maximize power transfer.

    Understand the significance of noise in telecommunication system design and be able to calculate the effect of noise on a system.

    1.2 Introduction

    A signal is essentially just a time‐varying quantity. It is often an electrical voltage, but it could be some other quantity, which can be changed or modulated easily, such as radio‐frequency power or optical (light) power. It is used to carry information from one end of a communications channel (the sender or transmitter) to the receiving end. Various operations can be performed on a signal, and in designing a telecommunications transmitter or receiver, many basic operations are employed in order to achieve the desired, more complex operation. For example, modulating a voice signal so that it may be transmitted through free space or encoding data bits on a wire all entail some sort of processing of the signal.

    A voltage that changes in some known fashion over time is termed a waveform, and that waveform carries information as a function of time. In the following sections, several operations on waveforms are introduced.

    1.3 Signals and Phase Shift

    In many communication systems, it is necessary to delay a signal by a certain amount. If this delay is considered relative to the period of the signal, it is a constant proportion of the total cycle time of the signal. In that case, it is convenient to write the delay not as a time, but as a phase angle relative to 360° (or 2π radians). As with delay, it is useful to be able to advance a signal, so that it occurs earlier with respect to a reference waveform. This may run a little counter to intuition, since after all, it is not possible to know the value of a signal at some point in the future. However, considering that a repetitive signal goes on forever (or at least, for as long as we wish to observe it), an advance of, say, one‐quarter of a cycle or 90° is equivalent to a delay of three‐quarters of a cycle or 270°.

    To see the effect of phase advance and phase delay, consider Figure 1.1, which shows these operations on both sine and cosine signals. The left panels show a sine wave, a delayed signal (moved later in time), and an advanced signal (moved earlier). The corresponding equations are

    x(t) = A sin ωt            (original)
    x(t) = A sin(ωt − 90°)     (delayed)
    x(t) = A sin(ωt + 90°)     (advanced)

    Starting with a cosine signal, Figure 1.1 shows on the right the original, delayed (or retarded), and advanced signals, respectively, with equations

    x(t) = A cos ωt            (original)
    x(t) = A cos(ωt − 90°)     (delayed)
    x(t) = A cos(ωt + 90°)     (advanced)


    Figure 1.1 Sine and cosine, phase advance, and phase retard. Each plot shows amplitude versus time t.
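    These phase relationships can be visualized directly in MATLAB®. The following sketch plots a sine wave alongside its 90°‐delayed and 90°‐advanced versions, in the style of Figure 1.1; the frequency and amplitude values are assumptions chosen purely for illustration.

```matlab
% Plot a sine wave with 90-degree delayed and advanced versions
% (frequency and amplitude chosen for illustration only).
f = 1;                      % frequency in Hz (assumed)
A = 1;                      % amplitude (assumed)
t = linspace(0, 2/f, 1000); % two full cycles
w = 2*pi*f;                 % radian frequency

x  = A*sin(w*t);            % reference sine
xd = A*sin(w*t - pi/2);     % delayed by 90 degrees (occurs later)
xa = A*sin(w*t + pi/2);     % advanced by 90 degrees (occurs earlier)

plot(t, x, t, xd, '--', t, xa, ':');
legend('A sin \omega t', 'delayed 90\circ', 'advanced 90\circ');
xlabel('Time (s)'); ylabel('Amplitude');
```

    Note that the advanced sine wave coincides with a cosine of the same frequency, since sin(ωt + 90°) = cos ωt.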

    1.4 System Building Blocks

    Telecommunication systems can be understood and analyzed in terms of some basic building blocks. More complicated systems may be built up from simpler blocks. Each of the simpler blocks performs a specific function. This section looks initially at some simple system blocks and then at some more complex arrangements.

    1.4.1 Basic Building Blocks

    There are many types of blocks that can be specified according to need, but some common ones to start with are shown in Figure 1.2. The generic input/output block shows an input x(t) and an output y(t), with the input signal waveform being altered in some way on passing through. The alteration of the signal may be simple, such as multiplying the waveform by a constant k to give y(t) = kx(t). Alternatively, the operation may be more complex, such as introducing a phase delay. The signal source is used to show the source of a waveform – in this case, a sinusoidal wave of a certain frequency ω. The addition (or subtraction) block acts on two input signals x1(t) and x2(t) to produce a single output signal, so that y(t) = x1(t) + x2(t) for each time instant t. Similarly, a multiplier block produces at its output the product y(t) = x1(t)x2(t).


    Figure 1.2 Basic building blocks: generic input/output, signal source, adder, and multiplier.
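    As a minimal sketch of these blocks in code, each can be modeled as a MATLAB® anonymous function. The gain value and source frequency below are illustrative assumptions, not definitions from the text.

```matlab
% Model each basic block as an anonymous function (illustrative values).
t = linspace(0, 1, 1000);            % time vector
w = 2*pi*5;                          % source frequency, assumed 5 Hz

source = @(t) sin(w*t);              % signal source block
gain   = @(x) 2*x;                   % generic input/output block: y = kx, k = 2
adder  = @(x1, x2) x1 + x2;          % addition block: y(t) = x1(t) + x2(t)
mult   = @(x1, x2) x1 .* x2;         % multiplier block: y(t) = x1(t)x2(t)

x1 = source(t);                      % generate the source waveform
y  = adder(gain(x1), mult(x1, x1));  % combine blocks into a small system
```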

    These basic blocks are used to encapsulate common functions and may be combined to build up more complicated systems. Figure 1.3 shows two system blocks in cascade. Suppose each block is a simple multiplier – that is, the output is simply the input multiplied by a gain factor. Let the gain of the h1 block be k1 and that of the h2 block be k2. Then, the overall gain from input x1(t) to output x3(t) would be just k1k2.

    To see how it might be possible to build up a more complicated system from the basic blocks, consider the system shown on the right in Figure 1.3. In this case, the boxes are simply gain multipliers such that y1(t) = k1x1(t) and y2(t) = k2x2(t), and so the overall output is y(t) = k1x1(t) + k2x2(t).


    Figure 1.3 Cascading blocks in series (left) and adding them in parallel (right).
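    The series and parallel results can be checked numerically. A brief sketch follows, with assumed gains k1 and k2 and assumed input waveforms:

```matlab
% Verify the cascade (series) gain result, and form a parallel
% combination feeding an adder (gains and inputs are assumed values).
k1 = 3; k2 = 5;                      % assumed block gains
t  = linspace(0, 1, 100);
x1 = sin(2*pi*t);
x2 = cos(2*pi*t);

y_series   = k2*(k1*x1);             % cascade: overall gain is k1*k2 = 15
y_parallel = k1*x1 + k2*x2;          % parallel branches summed by the adder
```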

    1.4.2 Phase Shifting Blocks

    In Section 1.3, the concept of phase shift of a waveform was discussed. It is possible to develop circuits or design algorithms to alter the phase of a waveform, and it is very useful in telecommunication systems to be able to do this. Consequently, the use of a phase‐shifting block is very convenient. Most commonly, a phase shift of ±90° is required. Of course, π/2 radians in the phase angle is equivalent to 90°. As illustrated in the block diagrams of Figure 1.4, we use +90° to mean a phase advance of 90° and, similarly, −90° to mean a phase delay of 90°.


    Figure 1.4 Phase shifting blocks. Note the input and output equations.

    1.4.3 Linear and Nonlinear Blocks

    Let us examine more closely what happens when a signal is passed through a system. Suppose for the moment that it is just a simple DC voltage. Figure 1.5 shows a transfer characteristic, which maps the input voltage to a corresponding output voltage. Two input values separated by δx, with corresponding outputs separated by δy, allow determination of the change in output as a function of the change in input. This ratio δy/δx is referred to as the gain of the system.


    Figure 1.5 The process of mapping an input (horizontal axis) to an output (vertical), when the block has a linear characteristic. The constant or DC offset may be zero, or nonzero as illustrated.

    Suppose such a linear transfer characteristic with zero offset (that is, it passes through the origin) is subjected to a sinusoidal input. The output y(t) is a linear function of the input x(t), whose constant of proportionality we denote as k. Then,

    (1.1)  y(t) = kx(t)

    With input x(t) = A sin ωt, the output will be

    (1.2)  y(t) = kA sin ωt

    Thus the change in output is simply in proportion to the input, as expected.
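    The gain calculation may be checked numerically. The sketch below uses an assumed linear characteristic with gain k and offset c; the two input values are also arbitrary choices.

```matlab
% Gain as the ratio of output change to input change for a linear
% characteristic y = k*x + c (k and c are assumed values).
k = 2; c = 1;
transfer = @(x) k*x + c;

x1 = -0.5; x2 = 1.0;                 % two input values
dy = transfer(x2) - transfer(x1);    % output separation
dx = x2 - x1;                        % input separation
gain = dy/dx;                        % recovers k = 2; the offset c cancels
```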

    This linear case is somewhat idealistic. Usually, toward the maximum and minimum range of voltages which an electronic system can handle, a characteristic that is not purely linear is found. Typically, the output has a limiting or saturation characteristic – as the input increases, the output does not increase directly in proportion at higher amplitudes. This simple type of nonlinear behavior is illustrated in Figure 1.6. In this case, the relationship between the input and output is not a simple constant of proportionality – though note that if the input is kept within a defined range, the characteristic may well be approximately linear.


    Figure 1.6 Example of mapping an input (horizontal axis) to an output (vertical), when the block has a nonlinear characteristic. Other types of nonlinearity are possible, of course.

    To fix ideas more concretely, suppose the characteristic may be represented by a quadratic form, with both a linear constant multiplier k and a small amount of signal introduced that is proportional to the square of the input, via a constant α. If the input is again a sinusoidal function x(t) = A sin ωt, the output may then be written as

    (1.3)  y(t) = kA sin ωt + αA² sin² ωt

    This is straightforward, but what does the squared sinusoidal term sin² ωt represent? Using the trigonometric identities

    (1.4)  cos(θ + φ) = cos θ cos φ − sin θ sin φ

    (1.5)  cos(θ − φ) = cos θ cos φ + sin θ sin φ

    we have, by subtracting the first from the second and then putting θ = φ = ωt,

    (1.6)  sin² ωt = (1 − cos 2ωt)/2

    After application of this relation, and simplification, the output may be written as

    (1.7)  y(t) = kA sin ωt + (αA²/2)(1 − cos 2ωt)

    This can be broken down into a constant or DC term, a term at the input frequency, and a term at twice the input frequency:

    (1.8)  y(t) = αA²/2 + kA sin ωt − (αA²/2) cos 2ωt

    This is an important conclusion: the introduction of nonlinearity to a system may affect the frequency components present at the output. A linear system always has frequency components at the output of the exact same frequency as the input. A nonlinear system, as we have demonstrated, may produce harmonically related components at other frequencies.
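    This conclusion is easily verified in MATLAB®. The sketch below passes a sinusoid through a quadratic characteristic and inspects its spectrum; the sample rate, frequency, and coefficients k and α are assumed values for illustration.

```matlab
% A quadratic nonlinearity generates a DC term and a second harmonic.
fs = 1000; f = 50;                   % sample rate and input frequency (Hz)
t  = (0:fs-1)/fs;                    % one second of samples
A  = 1; k = 1; alpha = 0.2;          % assumed amplitude and coefficients

x = A*sin(2*pi*f*t);                 % sinusoidal input
y = k*x + alpha*x.^2;                % nonlinear transfer characteristic

Y = abs(fft(y))/length(y);           % magnitude spectrum
% Y(1) is the DC term alpha*A^2/2, with spectral peaks at f and 2*f,
% matching the three terms of Equation (1.8).
```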

    1.4.4 Filtering Blocks

    A more complicated building block is the frequency‐selective filter, almost always just called a filter. Typically, a number of filters are used in a telecommunication system for various purposes. The common types are shown in Figure 1.7. The sine waves (with and without cross‐outs) shown in the middle of each box are used to denote the operation of the filter in terms of frequency selectivity. For example, the lowpass filter shows two sine waves, with the lower one in the vertical stack indicating the lower frequency. The higher frequency is crossed out, thus leaving only lower frequency components. Representative input and output waveforms are shown for each filter type. Consider, for example, the bandpass filter. Lower frequencies are attenuated (reduced in amplitude) when going from input to output. Intermediate frequencies are passed through with the same amplitude, while high frequencies are attenuated. Thus, the term bandpass filter is used. Filters defining highpass and bandstop operation may be designated in a similar fashion, and their operation is also indicated in the figure.


    Figure 1.7 Some important filter blocks and indicative time responses. The waveforms and crossed‐out waveforms in the boxes, arranged high to low in order, represent high to low frequencies. Input/output waveform pairs represent low, medium, and high frequencies, and the amplitude of each waveform at the output is shown accordingly.

    When it comes to more precisely defining the operation of a filter, one or more cutoff frequencies have to be specified. For a lowpass filter, it is not sufficient to say merely that lower frequencies are passed through unaltered. It is necessary to specify a boundary or cutoff frequency fc. Input waveforms whose frequency is below fc are passed through, but (in the ideal case) frequencies above fc are removed completely. In mathematical terms, the lower frequencies are passed through with a gain of one, whereas higher frequencies are multiplied by a gain of zero.

    The operation of common filters may be depicted in the frequency domain as shown in the diagrams of Figure 1.8. First, consider the lowpass filter. This type of filter would, ideally, pass all frequencies from zero (DC) up to a specified cutoff frequency. Ideally, the gain in the passband would be unity, and the gain in the stopband would be zero. In reality, several types of imperfections mean that this situation is not always realized. The order of the filter determines how rapidly the response changes from one gain level to another. The order of a filter determines the number of components required for electronic filters or the number of computations required for a digitally processed filter.

    A low‐order filter, as shown on the left, has a slower transition than a high‐order filter (right). In any given design, a tradeoff must be made between a lower‐cost, low‐order filter (giving less rapid passband‐to‐stopband transitions) and a more expensive high‐order filter.

    Lowpass filters are often used to remove noise components from a signal. Of course, if the noise exists across a large frequency band, a filter can only remove or attenuate those components in its stopband. If the frequency range of the signal of interest also contains noise, then a simple filter cannot differentiate the desired signal from the undesired one.

    In a similar fashion, a highpass filter may be depicted as also shown in Figure 1.8. As we would expect, this type of filter passes frequencies that are higher than some desired cutoff. A hybrid characteristic leads to a bandpass filter or bandstop filter. These types of filters are used in telecommunication systems for special purposes. For example, the bandstop filter may be used to remove interference at a particular frequency, and a bandpass filter may be used to pass only a particular defined range of frequencies (a channel or set of channels, for example).


    Figure 1.8 Primary filter types: lowpass, highpass, bandpass, and bandstop, with a low‐order filter shown on the left and higher‐order on the right. Ideally, the passband has a fixed and finite signal gain, whereas the stopband has zero gain.
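    A simple lowpass filter can be demonstrated using only core MATLAB®. The sketch below uses a moving‐average filter (with an assumed length of 20 samples), which heavily attenuates a high‐frequency component while passing a low one; the tone frequencies are also illustrative assumptions.

```matlab
% Moving-average lowpass filtering: the low-frequency tone passes,
% while the high-frequency component is heavily attenuated.
fs = 1000;                                 % sample rate (Hz)
t  = (0:999)/fs;
x  = sin(2*pi*5*t) + 0.5*sin(2*pi*200*t);  % 5 Hz signal plus 200 Hz interference

L = 20;                                    % filter length (assumed)
b = ones(1, L)/L;                          % moving-average coefficients
y = filter(b, 1, x);                       % output retains mainly the 5 Hz tone
```

    A longer moving average (higher order) gives a lower cutoff and a sharper transition, at the cost of more computation, mirroring the tradeoff described above.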

    1.5 Integration and Differentiation of a Waveform

    This section details two signal operations that are related to fundamental mathematical operations. First, there is integration, which in terms of signals means the cumulative or sum total of a waveform over time. The opposite operation, differentiation, essentially means the rate of change of the voltage waveform over time. These are really just the two fundamental operations of calculus, and they are the inverse of each other, as will be explained. This intuition is useful in understanding the signal processing for communication systems presented in later chapters. The functions are presented in terms of time t, as this is the most useful formulation when dealing with time‐varying signals.

    Figure 1.9 shows the calculation of the area (integral) and slope (derivative) for two adjacent points. At a specific time t, the function value is f(t), and at a small time increment δt later, the function value is f(t + δt). The area (or actually, a small increment of area) may be approximated by the area of the rectangle of width δt and height f(t). This small increment of area is

    (1.9)  δA ≈ f(t) δt

    It could be argued that this approximation would be more accurate if the area of the small triangle as indicated were taken into account. This additional area would be ½ δt δf, where δf = f(t + δt) − f(t), and it diminishes rapidly as the time increment becomes smaller (δt → 0). This is because it is not one small quantity δt, but the product of two small quantities δt and δf.

    Similarly, the slope at point t is df(t)/dt. This is the instantaneous slope or derivative, which of course varies with t, since f(t) varies. This slope may be approximated using the triangle indicated, whose height changes from f(t) to f(t + δt) over a width δt. So the slope is

    (1.10)  df(t)/dt ≈ (f(t + δt) − f(t))/δt


    Figure 1.9 Calculating the area over a small time increment using a rectangle and the slope of the curve using a triangle.
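    The two approximations may be checked numerically. A sketch follows, using f(t) = t² as an assumed test function and an assumed evaluation point.

```matlab
% Small-increment approximations to area and slope for f(t) = t^2.
f  = @(t) t.^2;                      % assumed test function
t0 = 1;                              % evaluation point (assumed)
dt = 1e-4;                           % small time increment

dA    = f(t0)*dt;                    % area increment, as in Equation (1.9)
slope = (f(t0 + dt) - f(t0))/dt;     % slope estimate, as in Equation (1.10)
% slope is close to the exact derivative 2*t0 = 2
```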

    The calculation of the derivative or slope of a tangent to a curve is a point‐by‐point operation, since the slope will change with t and hence with the value f(t) (the exception being a constant rate of change of value over time, which has a constant slope). The integral or area, though, depends on the range of values over which we calculate the area. Since the integral is a continuous function, it extends from the left from as far back as we wish to the right as far as we decide. Figure 1.10 shows a function and its integral from the origin to some point a (note that we have started this curve at t = 0, but that does not have to be the case). In the lower‐left panel, we extend the area to some point b. This is essentially the same concept, except that the area below the horizontal line is in fact negative. While the concept of negative area might not be found in reality, it is a useful mathematical device. In this case, the negative area simply subtracts from the positive area to form the net area. Finally, the lower‐right panel illustrates the fact that the area from a to b is simply the area to b, less the area to a. Mathematically, this is written as

    (1.11) $\int_a^b f(t)\,dt = A(b) - A(a)$

    where $A(\cdot)$ represents the cumulative area to that point. This is called the definite integral – an integration or area calculation with definite or known start and end boundaries.1
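The definite integral can be checked numerically by accumulating thin rectangles between the two boundaries. A quick sketch (in Python rather than the book's MATLAB, though it carries over directly; the choice of $f(t)=\cos t$, whose exact area is $\sin b - \sin a$, is an assumed example):

```python
import math

# Riemann-sum estimate of the definite integral of cos(t) from a to b,
# compared against the exact "area to b less area to a": sin(b) - sin(a).
a, b, N = 0.5, 2.0, 10000
delta = (b - a) / N
area = sum(math.cos(a + k * delta) * delta for k in range(N))
exact = math.sin(b) - math.sin(a)
print(area, exact)  # the two agree to several decimal places
```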


    Figure 1.10 A function $f(t)$, calculating its cumulative area to $a$ and to $b$, and the area between $a$ and $b$. Note the negative portions of the area below the line.

    The area may be approximated by creating successive small strips of width $\delta$ as before, and joining enough of them together to make the desired area. This is illustrated in Figure 1.11, for just a few subdivisions. Using the idea of $A(t)$ as the cumulative area function under the curve $f(t)$, consider the area under the curve from $t$ to $t+\delta$, where $\delta$ is some small step of time. The change in area over that increment is

    (1.12) $\delta A = A(t+\delta) - A(t)$

    Also, the change in area is approximated by the rectangle of height $f(t)$ and width $\delta$, so

    (1.13) $\delta A \approx f(t)\,\delta$

    Equating these two expressions for the change of area $\delta A$,

    (1.14) $A(t+\delta) - A(t) \approx f(t)\,\delta$

    (1.15) $f(t) \approx \dfrac{A(t+\delta) - A(t)}{\delta}$

    This is the same form of equation we had earlier in (1.10) for the definition of slope. Now it shows that the slope of the function $A(t)$ – which happens to be the integral, or area, under $f(t)$ – is actually equal to $f(t)$ itself. That is, the derivative of the integral equals the original function. This is our first important conclusion.
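This conclusion can be verified numerically: build the cumulative‐area function strip by strip, then measure its slope. A sketch (in Python; the example function $f(t)=\sin t$, the range, and the step size are assumed choices, and a centered difference is used for the slope to avoid a trivially circular check):

```python
import math

# Accumulate the area A(t) under f(t) = sin(t) in strips of width delta,
# then take a centered finite-difference slope of A near t = 2.0.
# The slope of the area function comes back to f itself: sin(2.0).
delta = 1e-3
ts = [k * delta for k in range(3001)]            # t from 0 to 3
A = [0.0]
for t in ts[:-1]:
    A.append(A[-1] + math.sin(t) * delta)        # add one strip of area

k = 2000                                         # index where t = 2.0
slope = (A[k + 1] - A[k - 1]) / (2 * delta)      # slope of A at t = 2.0
print(slope, math.sin(2.0))  # nearly equal
```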


    Figure 1.11 Calculating area using a succession of small strips of width $\delta$.

    Next, consider how to calculate the cumulative area by subdividing a curve into successive small strips. However, instead of the plain function $f(t)$, suppose we plot its derivative $g(t) = f'(t)$ instead. This is illustrated in Figure 1.12, for just a few strips of area from $t_0$ at the start to an ending value $t_N$.


    Figure 1.12 The area under a curve $g(t)$, but the curve happens to be the derivative of $f(t)$: $g(t) = f'(t)$.

    The cumulative area (call it $G$) under this curve – which we defined to be the derivative of $f(t)$ – is the summation of all the individual rectangles, which is

    (1.16) $G \approx g(t_0)\,\delta + g(t_1)\,\delta + \cdots + g(t_{N-1})\,\delta$

    (1.17) $G \approx f'(t_0)\,\delta + f'(t_1)\,\delta + \cdots + f'(t_{N-1})\,\delta$

    Now we can use the same concept of slope as developed before, where we had the approximation to the derivative

    (1.18) $f'(t_k) \approx \dfrac{f(t_k+\delta)-f(t_k)}{\delta}$

    Substituting this for all the derivative terms, we have

    (1.19) $G \approx \dfrac{f(t_0+\delta)-f(t_0)}{\delta}\,\delta + \dfrac{f(t_1+\delta)-f(t_1)}{\delta}\,\delta + \cdots + \dfrac{f(t_{N-1}+\delta)-f(t_{N-1})}{\delta}\,\delta$

    Canceling the $\delta$ terms, and using the fact that each $t_k+\delta$ is actually the next point (for example, $t_0+\delta = t_1$, $t_1+\delta = t_2$), we can simplify things to

    (1.20) $G \approx \left[f(t_1)-f(t_0)\right] + \left[f(t_2)-f(t_1)\right] + \cdots + \left[f(t_N)-f(t_{N-1})\right]$

    Looking carefully, we can see terms that will cancel, such as $f(t_1)$ in the first square brackets, minus the same term in the second square brackets. All these will cancel, except for the very first $-f(t_0)$ and the very last $f(t_N)$, to leave us with

    (1.21) $G \approx f(t_N) - f(t_0)$

    So this time, we have found that the area under some curve $g(t)$ (which happens to be the derivative or slope of $f(t)$) is actually given by the original $f(t)$. That is, the area under the slope curve equals the original function evaluated at the end (right‐hand side), less its starting value $f(t_0)$. The subtraction of the starting value seems reasonable, since it parallels the cumulative area to $b$ less the cumulative area to $a$, as we had previously in (1.11). Thus, our second important result is that the integral of a derivative equals the original function.
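The telescoping argument is easy to verify numerically. A sketch (in Python, easily rewritten in the book's MATLAB; the example $f(t)=t^2$, so $f'(t)=2t$, and the endpoints are assumed choices):

```python
# Sum the strips f'(t_k) * delta of the derivative of f(t) = t^2
# between t0 and tN.  The sum telescopes to roughly f(tN) - f(t0).
fprime = lambda t: 2.0 * t
t0, tN, N = 1.0, 3.0, 1000
delta = (tN - t0) / N
G = sum(fprime(t0 + k * delta) * delta for k in range(N))
endpoints = tN**2 - t0**2                      # f(tN) - f(t0) = 8
print(G, endpoints)  # close agreement
```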

    We can see the relationship between differentiation and integration at a glance in the following figures. Figure 1.13 shows taking a function (top) and integrating it (lower); if we then take this integrated (area) function as shown in Figure 1.14 (top) and then take the derivative of that (Figure 1.14, lower), we end up with the original function that we started with. And the process is invertible: take the derivative of the top function in Figure 1.14 to obtain the lower plot of Figure 1.14; transferring this to the top of Figure 1.13 and then integrating it, we again end up where we started: the original function. So it is reasonable to say that integration and differentiation are the inverse operations of each other. We just have to be careful with the integration, since it is a cumulative area, and that may or may not have started from zero at the leftmost starting point.
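This round trip can be demonstrated in a few lines: take finite differences of some sampled function, cumulatively sum the result, and the original samples reappear, offset only by the starting value. A sketch (in Python; the sample function $e^{-t}$ and the step size are assumed for illustration):

```python
import math

# Differentiate-then-integrate round trip on samples of f(t) = exp(-t).
delta = 0.01
f = [math.exp(-k * delta) for k in range(201)]             # samples
slopes = [(f[k + 1] - f[k]) / delta for k in range(200)]   # derivative
recovered = [f[0]]                                         # start value
for s in slopes:
    recovered.append(recovered[-1] + s * delta)            # integrate back
print(max(abs(a - b) for a, b in zip(f, recovered)))  # essentially zero
```

Note that the recovery needs the starting value f[0] as the integration constant: the cumulative sum alone only fixes the shape of the curve, not where it starts.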
