COMMUNICATION SYSTEMS
Ebook, 594 pages (6 hours)


About this ebook

The purpose of this book is to introduce the student to communication systems and the broad principles of modern communication theory at an early stage in the undergraduate curriculum. It begins with the study of specific communication systems and gradually develops the underlying role of the signal-to-noise ratio and the bandwidth in limiting the rate of information transmission. The student is also introduced to the concept of the power density spectrum of non-random signals. This concept is then extended to random signals without any formal development. Features:
• Throughout the book, the emphasis is on a physical appreciation of the concepts rather than mathematical manipulation.
• Wherever possible, the concepts and results are interpreted intuitively.
• The basic concepts of information theory are not introduced as axioms but are developed heuristically.
Contents
1. Signal Analysis
2. Transmission of Signals and Power Density Spectra
3. Communication Systems: Amplitude Modulation
4. Communication Systems: Angle Modulation
5. Communication Systems: Pulse Modulation
6. Noise
7. Performance of Communication Systems
8. Introduction to Information Transmission
9. Elements of Digital Communication
10. Bibliography
Language: English
Publisher: BSP BOOKS
Release date: Mar 24, 2020
ISBN: 9789386819123


    Book preview

    COMMUNICATION SYSTEMS - B.P. Lathi


    Chapter 1: Signal Analysis

    There are numerous ways of communicating. Two people may communicate with each other through speech, gestures, or graphical symbols. In the past, communication over a long distance was accomplished by such means as drumbeats, smoke signals, carrier pigeons, and light beams. More recently, these modes of long-distance communication have been virtually superseded by communication by electrical signals. This is because electrical signals can be transmitted over a much longer distance (theoretically, any distance in the universe) and with a very high speed (about 3 x 10⁸ meters per second). In this book, we are concerned strictly with the latter mode, that is, communication by electrical signals.

    The engineer is chiefly concerned with efficient communication. This involves the problem of transmitting messages as fast as possible with the least error. We shall treat these aspects quantitatively throughout this book. It is, however, illuminating to discuss qualitatively the factors that limit the rate of communication. For convenience, we shall consider the transmission of symbols (such as alphanumeric symbols of the English language) by certain electrical waveforms. In the process of transmission, these waveforms are contaminated by omnipresent noise signals which are generated by numerous natural and man-made events. Man-made sources such as faulty contact switches, the turning on and off of electrical equipment, ignition radiation, and fluorescent lighting continuously radiate random noise signals. Natural phenomena such as lightning, electrical storms, the sun's radiation, and intergalactic radiation are also sources of noise signals. Fluctuation noise such as thermal noise in resistors and shot noise in active devices is another important source of noise in all electrical systems. When the message-bearing signals are transmitted over a channel, they are corrupted with random noise signals and may consequently become unidentifiable at the receiver. To avoid this difficulty, it is necessary to increase the power of the message-bearing waveforms. A certain ratio of signal power to noise power must be maintained. This ratio, S/N, is an important parameter in evaluating the performance of a system.

    We shall now consider increasing the speed of transmission by compressing the waveforms in time scale so that we can transmit more messages during a given period. When the signals are compressed, their variations are rapid, that is, they wiggle faster. This naturally increases their frequencies. Hence compressing a signal gives rise to the problem of transmitting signals of higher frequencies, which in turn necessitates an increase in the bandwidth of the channel over which the messages are transmitted. Thus the rate of communication can be increased by increasing the channel bandwidth. In general, therefore, for faster and more accurate communication, it is desirable to increase S/N, the signal-to-noise power ratio, and the channel bandwidth.

    These conclusions are arrived at by qualitative reasoning and are hardly surprising. What is surprising, however, is that the bandwidth and the signal-to-noise ratio can be exchanged. We shall show later that to maintain a given rate of communication with a given accuracy, we can exchange the S/N ratio for the bandwidth, and vice versa. One may reduce the bandwidth if he is willing to increase the S/N ratio. On the other hand, a small S/N ratio may be adequate if the bandwidth of the channel is increased correspondingly. This is expressed by the Shannon-Hartley law,

        C = B log2(1 + S/N)

    where C is the channel capacity or the rate of message transmission (to be discussed later), and B is the bandwidth of the channel (in Hz). For a given C, we may increase B and reduce S/N, and vice versa.
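    As a quick numerical illustration of this exchange (a sketch added here, not part of the original text), the Python fragment below evaluates the Shannon-Hartley formula for two hypothetical channels: a narrow-band channel with a large S/N and a wide-band channel with a small S/N, which support roughly the same rate.

```python
import math

def capacity(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# Hypothetical channels chosen only to illustrate the bandwidth/SNR exchange.
print(capacity(3_000, 1000))   # narrow band, high S/N -> about 29,900 bits/s
print(capacity(10_000, 7.9))   # wide band, low S/N    -> about 31,500 bits/s
```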

    In order to study communication systems we must be familiar with various ways of representing signals. We shall devote this chapter to signal analysis.

    1.1 ANALOGY BETWEEN VECTORS AND SIGNALS

    A problem is better understood or better remembered if it can be associated with some familiar phenomenon. Therefore we always search for analogies when studying a new problem. In the study of abstract problems, similarities are very helpful, particularly if the problem can be shown to be analogous to some concrete phenomenon. It is then easy to gain some insight into the new problem from the knowledge of the corresponding phenomenon. Fortunately, there is a perfect analogy between vectors and signals which leads to a better understanding of signal analysis. We shall now briefly review the properties of vectors.

    Vectors

    A vector is specified by magnitude and direction. We shall denote all vectors by boldface type and their magnitudes by lightface type; for example, A is a certain vector with magnitude A. Consider two vectors V1 and V2 as shown in Fig. 1.1. Let the component of V1 along V2 be given by C12V2. How do we interpret physically the component of one vector along the other vector? Geometrically, the component of a vector V1 along the vector V2 is obtained by drawing a perpendicular from the end of V1 on the vector V2, as shown in Fig. 1.1. The vector V1 can now be expressed in terms of vector V2 as

        V1 = C12V2 + Ve

    Figure 1.1

    However, this is not the only way of expressing vector V1 in terms of vector V2. Figure 1.2 illustrates two of the infinitely many alternative possibilities. Thus, in Fig. 1.2a,

        V1 = C1V2 + Ve1

    and in Fig. 1.2b,

        V1 = C2V2 + Ve2

    In each representation, V1 is represented in terms of V2 plus another vector, which will be called the error vector. If we are asked to approximate the vector V1 by a vector in the direction of V2, then Ve represents the error in this approximation. For example, in Fig. 1.1, if we approximate V1 by C12V2, then the error in the approximation is Ve. If V1 is approximated by C1V2 as in Fig. 1.2a, then the error is given by Ve1, and so on. What is so unique about the representation in Fig. 1.1? It is immediately evident from the geometry of these figures that the error vector is smallest in Fig. 1.1. We can now formulate a quantitative definition of a component of a vector along another vector. The component of a vector V1 along the vector V2 is given by C12V2, where C12 is chosen such that the error vector is minimum.

    Figure 1.2

    Let us now interpret physically the component of one vector along another. It is clear that the larger the component of a vector along the other vector, the more closely do the two vectors resemble each other in their directions, and the smaller is the error vector. If the component of a vector V1 along V2 is C12V2, then the magnitude of C12 is an indication of the similarity of the two vectors. If C12 is zero, then the vector has no component along the other vector, and hence the two vectors are mutually perpendicular. Such vectors are known as orthogonal vectors. Orthogonal vectors are thus independent vectors. If the vectors are orthogonal, then the parameter C12 is zero.

    For convenience, we define the dot product of two vectors A and B as

        A • B = AB cos θ

    where θ is the angle between vectors A and B. It follows from the definition that the component of V1 along V2 (that is, V1 cos θ) is given by (V1 • V2)/V2, and that two vectors are orthogonal if their dot product is zero. According to this notation,

        C12 = (V1 • V2)/(V2 • V2) = (V1 • V2)/V2²
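    The geometric discussion above translates directly into a short computation. The sketch below (an illustration added here, with arbitrarily chosen vectors) uses NumPy to evaluate C12 = (V1 • V2)/(V2 • V2) and the corresponding error vector.

```python
import numpy as np

V1 = np.array([3.0, 4.0])
V2 = np.array([1.0, 0.0])

# Component of V1 along V2: C12 = (V1 . V2) / (V2 . V2)
C12 = np.dot(V1, V2) / np.dot(V2, V2)
Ve = V1 - C12 * V2        # error vector of the approximation V1 ~ C12*V2

print(C12)                # 3.0
print(np.dot(Ve, V2))     # 0.0: the minimum-error residual is orthogonal to V2
```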

    Signals

    The concept of vector comparison and orthogonality can be extended to signals.* Let us consider two signals, f1(t) and f2(t). Suppose we want to approximate f1(t) in terms of f2(t) over a certain interval (t1 < t < t2) as follows:

        f1(t) ≈ C12 f2(t)      (t1 < t < t2)

    How shall we choose C12 in order to achieve the best approximation? Obviously, we must find C12 such that the error between the actual function and the approximating function is minimum over the interval (t1 < t < t2). Let us define an error function fe(t) as

        fe(t) = f1(t) − C12 f2(t)

    One possible criterion for minimizing the error fe(t) over the interval t1 to t2 is to minimize the average value of fe(t) over this interval; that is, to minimize

        [1/(t2 − t1)] ∫ fe(t) dt      (integral taken from t1 to t2)

    However, this criterion is inadequate because there can be large positive and negative errors present that may cancel one another in the process of averaging and give the false indication that the error is zero. For example, if we approximate the function sin t with a null function f(t) = 0 over the interval (0, 2π), the average error will be zero, indicating wrongly that sin t can be approximated by zero over the interval (0, 2π) without any error. This situation can be corrected if we choose to minimize the average (or the mean) of the square of the error instead of the error itself. Let us designate the average of fe²(t) by ε:

        ε = [1/(t2 − t1)] ∫ fe²(t) dt = [1/(t2 − t1)] ∫ [f1(t) − C12 f2(t)]² dt

    * We shall often use the terms signals and functions interchangeably. A signal is a function of time. However, there is one difference between signals and functions. A function f(t) can be a multivalued function of the variable t, but a physical signal is always a single-valued function of t. Hence, whenever we use the term function, it will be understood to be a single-valued function of the independent variable.

    To find the value of C12 which will minimize ε, we must have

        dε/dC12 = 0

    Changing the order of integration and differentiation, we get

        ∫ d/dC12 [f1²(t)] dt − 2 ∫ d/dC12 [C12 f1(t) f2(t)] dt + ∫ d/dC12 [C12² f2²(t)] dt = 0      (1.9)

    The first integral is obviously zero, and hence Eq. 1.9 yields

        C12 = [∫ f1(t) f2(t) dt] / [∫ f2²(t) dt]      (1.10)

    where both integrals are taken over the interval (t1, t2).

    Observe the similarity between Eqs. 1.10 and 1.2, which expresses C12 for vectors.

    By analogy with vectors, we say that f1(t) has a component of waveform f2(t), and this component has a magnitude C12. If C12 vanishes, then the signal f1(t) contains no component of signal f2(t), and we say that the two functions are orthogonal over the interval (t1, t2). It therefore follows that the two functions f1(t) and f2(t) are orthogonal over an interval (t1, t2) if

        ∫ f1(t) f2(t) dt = 0      (integral taken from t1 to t2)      (1.11)

    Observe the similarity between Eq. 1.11 derived for orthogonal functions and Eq. 1.3 derived for orthogonal vectors.

    We can easily show that the functions sin nω0t and sin mω0t are orthogonal over any interval (t0, t0 + 2π/ω0) for integral values of n and m (n ≠ m). Consider the integral I:

        I = ∫ sin nω0t sin mω0t dt = (1/2) ∫ [cos (n − m)ω0t − cos (n + m)ω0t] dt

    with the integrals taken from t0 to t0 + 2π/ω0.

    Since n and m are integers, (n − m) and (n + m) are also nonzero integers, and the integral of a cosine of a nonzero integral multiple of ω0t over a whole period is zero. Hence the integral I is zero, and the two functions are orthogonal. Similarly, it can be shown that sin nω0t and cos mω0t are orthogonal functions, and that cos nω0t and cos mω0t are also mutually orthogonal.
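    This orthogonality is easy to check numerically. The fragment below (an illustrative check with arbitrarily chosen n, m, and ω0) estimates the integral of sin nω0t · sin mω0t over one period and confirms that it is essentially zero for n ≠ m.

```python
import numpy as np

w0 = 2 * np.pi                          # arbitrary fundamental frequency (period T = 1 s)
T = 2 * np.pi / w0
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

def inner(n, m):
    """Rectangle-rule estimate of the integral of sin(n*w0*t)*sin(m*w0*t) over one period."""
    return np.sum(np.sin(n * w0 * t) * np.sin(m * w0 * t)) * dt

print(inner(2, 3))                      # ~0: orthogonal
print(inner(2, 2))                      # ~0.5 = T/2: a function is not orthogonal to itself
```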

    Example 1.1

    A rectangular function f(t) is defined by (Fig. 1.3):

        f(t) =  1      (0 < t < π)
        f(t) = −1      (π < t < 2π)

    Approximate this function by the waveform sin t over the interval (0, 2π) such that the mean square error is minimum.

    Solution. The function f(t) will be approximated over the interval (0, 2π) as

        f(t) ≈ C12 sin t

    We shall find the optimum value of C12 which will minimize the mean square error in this approximation. According to Eq. 1.10, to minimize the mean square error,

        C12 = [∫ f(t) sin t dt] / [∫ sin² t dt] = 4/π

    with the integrals taken from 0 to 2π. Thus

        f(t) ≈ (4/π) sin t

    represents the best approximation of f(t) by a function sin t which will minimize the mean square error.

    Figure 1.3

    By analogy with vectors, we may say that the rectangular function f(t) shown in Fig. 1.3 has a component of the function sin t, and the magnitude of this component is 4/π.
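    The value 4/π ≈ 1.273 is easy to confirm numerically. The sketch below (added for illustration, not part of the book) samples the rectangular function of Example 1.1 and evaluates Eq. 1.10 as a ratio of averages, since the common interval length cancels out.

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)
f = np.where(t < np.pi, 1.0, -1.0)     # rectangular function of Example 1.1
g = np.sin(t)

# Eq. 1.10: C12 = integral(f*g) / integral(g^2); the interval length cancels in the ratio.
C12 = np.mean(f * g) / np.mean(g * g)
print(C12, 4 / np.pi)                  # both approximately 1.2732
```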

    What is the significance of orthogonality of two functions? In the case of vectors, orthogonality implies that one vector has no component along the other. Similarly, a function does not contain any component of the form of the function which is orthogonal to it. If we try to approximate a function by its orthogonal function, the error will be larger than the original function itself, and it is better to approximate a function with a null function f(t) = 0 rather than with a function orthogonal to it. Hence the optimum value of C12 = 0 in such a case.

    Graphical Evaluation of a Component of One Function in the Other

    It is possible to evaluate the component of one function in the other by graphical means, using Eq. 1.10. Suppose two functions f1(t) and f2(t) are known graphically, and it is desired to evaluate the component of waveform f2(t) contained in signal f1(t) over a period (0, T). We know that this component is given by C12 f2(t); that is, f1(t) contains the component of function f2(t) of magnitude C12, given by

        C12 = [∫ f1(t) f2(t) dt] / [∫ f2²(t) dt]      (integrals taken from 0 to T)

    The integral in the numerator of this equation can be found by multiplying the two functions and evaluating the area under the product curve, as shown in Fig. 1.4. The denominator integral can be evaluated by finding the area under the function [f2(t)]² in a similar way.

    Figure 1.4 Graphical evaluation of the component of waveform f2(t) in a signal f1(t).

    It is evident that if f1(t) varies much more slowly than f2(t), the area under the product curve f1(t)f2(t) will be very small, since the positive and negative areas will be approximately equal and will tend to cancel each other, as shown in Fig. 1.4a. Hence f1(t) contains a small component of f2(t). If, however, f1(t) varies at about the same rate as f2(t), then the area under the product curve f1(t)f2(t) will be much larger, as shown in Fig. 1.4b, and hence f1(t) will contain a large component of function f2(t). This result is also intuitively obvious, since if two functions vary at about the same rate, there must be a great deal of similarity between the two functions, and hence f1(t) will contain a large component of the function f2(t).

    Orthogonal Vector Space

    The analogy between vectors and signals may be extended further. Let us now consider a three-dimensional vector space described by rectangular coordinates, as shown in Fig. 1.5. We shall designate a vector of unit length along the x axis by ax. Similarly, unit vectors along the y and z axes will be designated by ay and az, respectively. Since the magnitudes of the vectors ax, ay, and az are unity, it follows that for any general vector A:

    Figure 1.5

    The component of A along the x axis = A • ax

    The component of A along the y axis = A • ay

    The component of A along the z axis = A • az

    A vector A drawn from the origin to a general point (x0, y0, z0) in space has components x0, y0, and z0 along the x, y, and z axes, respectively. We can express this vector A in terms of its components along the three mutually perpendicular axes:

        A = x0 ax + y0 ay + z0 az

    Any vector in this space can be expressed in terms of the three vectors ax, ay, and az.

    Since the three vectors ax, ay, and az are mutually perpendicular, it follows that

        ax • ay = ay • az = az • ax = 0
        ax • ax = ay • ay = az • az = 1      (1.12)

    The properties of the three vectors, as expressed by Eq. 1.12, can be succinctly expressed by

        am • an = 1      (m = n)
        am • an = 0      (m ≠ n)      (1.13)

    where m and n can assume any of the values x, y, and z.

    Now we make an important observation. If the coordinate system has only two axes, x and y, then the system is inadequate to express a general vector A in terms of the components along these axes. This system can express only two components of vector A. Therefore, to express any general vector A in terms of its coordinate components, the system of coordinates must be complete. In this case there must be three coordinate axes.

    A single straight line represents a one-dimensional space; a single plane represents a two-dimensional space; and our universe, in general, has three dimensions of space. We may extend the concepts developed here to a general n-dimensional space. Such a physical space, of course, does not exist in nature. Nevertheless, there are many analogous problems that may be viewed as n-dimensional problems. For example, a linear equation in n independent variables may be viewed as a vector expressed in terms of its components along n mutually perpendicular coordinates.

    If unit vectors along these n mutually perpendicular coordinates are designated as x1, x2, . . . , xn, and a general vector A in this n-dimensional space has components C1, C2, . . . , Cn, respectively, along these n coordinates, then

        A = C1x1 + C2x2 + · · · + Cnxn      (1.14)

    All the vectors x1, x2, . . . , xn are mutually orthogonal, and the set must be complete in order for any general vector A to be represented by Eq. 1.14. The condition of orthogonality implies that the dot product of any two vectors xm and xn (m ≠ n) must be zero, and the dot product of any vector with itself must be unity. This is the direct extension of Eq. 1.13 and can be expressed as

        xm • xn = 1      (m = n)
        xm • xn = 0      (m ≠ n)      (1.15)

    The constants C1, C2, C3, . . . , Cn in Eq. 1.14 represent the magnitudes of the components of A along the vectors x1, x2, x3, . . . , xn, respectively. It follows that

        Cr = A • xr

    This result can also be obtained by taking the dot product of both sides of Eq. 1.14 with the vector xr. We have

        A • xr = C1x1 • xr + C2x2 • xr + · · · + Cnxn • xr      (1.17)

    From Eq. 1.15 it follows that all the terms of the form Cj xj • xr (j ≠ r) on the right-hand side of Eq. 1.17 are zero. Therefore

        A • xr = Cr xr • xr = Cr      (1.18)

    We call the set of vectors (x1, x2, . . . , xn) an orthogonal vector space. In general, the product xm • xm can be some constant km instead of unity. When km is unity, the set is called a normalized orthogonal set, or an orthonormal vector space. Therefore, in general, for an orthogonal vector space {xr} (r = 1, 2, . . . , n) we have

        xm • xn = 0      (m ≠ n)
        xm • xn = km     (m = n)

    For an orthogonal vector space, Eq. 1.18 is modified to

        Cr = (A • xr)/(xr • xr) = (1/kr)(A • xr)

    We shall now summarize the results of our discussion. For an orthogonal vector space {xr} (r = 1, 2, . . . , n),

        xm • xn = 0      (m ≠ n)
        xm • xn = km     (m = n)      (1.20)

    If this vector space is complete, then any vector F can be expressed as

        F = C1x1 + C2x2 + · · · + Crxr + · · · + Cnxn      (1.21)

    where

        Cr = (1/kr)(F • xr)      (1.22)
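    A small numerical sketch (added here, with an arbitrarily chosen orthogonal but non-normalized basis) shows Eqs. 1.20 to 1.22 at work: the coefficients Cr = (F • xr)/kr reassemble the original vector exactly.

```python
import numpy as np

# An orthogonal (but not orthonormal) basis of three-dimensional space, chosen for illustration.
x = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, -1.0, 0.0]),
     np.array([0.0, 0.0, 2.0])]

F = np.array([3.0, -1.0, 4.0])

# Cr = (F . xr) / (xr . xr); dividing by kr = |xr|^2 is needed because the set is not normalized.
C = [np.dot(F, xr) / np.dot(xr, xr) for xr in x]

reconstructed = sum(Cr * xr for Cr, xr in zip(C, x))
print(C)               # coefficients: 1.0, 2.0, 2.0
print(reconstructed)   # [ 3. -1.  4.]
```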

    Orthogonal Signal Space

    We shall now apply certain concepts of vector space to gain some intuition about signal analysis. We have seen that any vector can be expressed as a sum of its components along n mutually orthogonal vectors, provided these vectors form a complete coordinate system. We therefore suspect that it may be possible to express any function f(t) as a sum of its components along a set of mutually orthogonal functions if these functions form a complete set. We shall now show that this is indeed the case.

    Approximation of a Function by a Set of Mutually Orthogonal Functions

    Let us consider a set of n functions g1(t), g2(t), . . . , gn(t) which are orthogonal to one another over an interval t1 to t2; that is,

        ∫ gj(t) gk(t) dt = 0      (j ≠ k)
        ∫ gj²(t) dt = Kj          (1.23)

    with all integrals taken from t1 to t2.

    Let an arbitrary function f(t) be approximated over an interval (t1, t2) by a linear combination of these n mutually orthogonal functions:

        f(t) ≈ C1g1(t) + C2g2(t) + · · · + Cngn(t)      (t1 < t < t2)

    For the best approximation, we must find the proper values of the constants C1, C2, . . . , Cn such that ε, the mean square of the error function

        fe(t) = f(t) − [C1g1(t) + C2g2(t) + · · · + Cngn(t)]

    is minimized.

    By definition,

        ε = [1/(t2 − t1)] ∫ fe²(t) dt      (1.24)

    It is evident from Eq. 1.24 that ε is a function of C1, C2, . . . , Cn, and to minimize ε we must have

        ∂ε/∂C1 = ∂ε/∂C2 = · · · = ∂ε/∂Cn = 0

    Let us consider the equation

        ∂ε/∂Ci = ∂/∂Ci { [1/(t2 − t1)] ∫ fe²(t) dt } = 0      (1.25)

    Since (t2 − t1) is constant, Eq. 1.25 may be expressed as

        ∂/∂Ci ∫ [f(t) − C1g1(t) − C2g2(t) − · · · − Cngn(t)]² dt = 0      (1.26)

    When we expand the integrand, we note that all the terms arising from the cross products of the orthogonal functions are zero by virtue of orthogonality; that is, all the terms of the form ∫ gj(t)gk(t) dt (j ≠ k) are zero, as expressed in Eq. 1.23. Similarly, the derivative with respect to Ci of all the terms that do not contain Ci is zero.

    This leaves only two nonzero terms in Eq. 1.26, as follows:

        ∂/∂Ci ∫ [−2Ci f(t)gi(t) + Ci² gi²(t)] dt = 0      (1.27)

    Changing the order of differentiation and integration in Eq. 1.27, we get

        −2 ∫ f(t)gi(t) dt + 2Ci ∫ gi²(t) dt = 0

    and hence

        Ci = [∫ f(t)gi(t) dt] / [∫ gi²(t) dt]      (1.28a)

           = (1/Ki) ∫ f(t)gi(t) dt      (1.28b)

    with all integrals taken over the interval (t1, t2).

    We may summarize this result as follows. Given a set of n functions mutually orthogonal over the interval (t1, t2), it is possible to approximate an arbitrary function f(t) over this interval by a linear combination of these n functions.

    For the best approximation, that is, the one that will minimize the mean square error over the interval, we must choose the coefficients C1, C2, . . . , Cn as given by Eq. 1.28.
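    The recipe of Eq. 1.28 applies to any sampled signal and any finite set of mutually orthogonal functions. The helper below is a generic sketch (the function names and the choice of basis are illustrative, not the book's); it evaluates Eq. 1.28 numerically and returns both the coefficients and the resulting approximation.

```python
import numpy as np

def orthogonal_fit(f, basis, t):
    """Cr = integral(f*gr) / integral(gr^2) for each gr in basis (Eq. 1.28), as ratios of averages."""
    coeffs = [np.mean(f * g) / np.mean(g * g) for g in basis]
    approx = sum(c * g for c, g in zip(coeffs, basis))
    return coeffs, approx

# Illustration: the first three odd harmonics as the orthogonal set over (0, 2*pi).
t = np.linspace(0.0, 2 * np.pi, 100_000, endpoint=False)
f = np.where(t < np.pi, 1.0, -1.0)
basis = [np.sin(r * t) for r in (1, 3, 5)]

coeffs, approx = orthogonal_fit(f, basis, t)
print(np.round(coeffs, 4))     # approximately [4/pi, 4/(3*pi), 4/(5*pi)]
```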

    Evaluation of Mean Square Error

    Let us now find the value of ε when the optimum values of the coefficients C1, C2, . . . , Cn are chosen according to Eq. 1.28. By definition,

        ε = [1/(t2 − t1)] ∫ [f(t) − (C1g1(t) + · · · + Cngn(t))]² dt

    Expanding the integrand and using the orthogonality relations of Eq. 1.23, we get

        ε = [1/(t2 − t1)] [ ∫ f²(t) dt − 2 Σ Cr ∫ f(t)gr(t) dt + Σ Cr²Kr ]      (1.30)

    where the sums run over r = 1, 2, . . . , n and the integrals are taken from t1 to t2.

    But from Eqs. 1.28a and 1.28b it follows that

        ∫ f(t)gr(t) dt = Cr Kr      (1.31)

    Substituting Eq. 1.31 in Eq. 1.30, we get

        ε = [1/(t2 − t1)] [ ∫ f²(t) dt − Σ Cr²Kr ]      (1.33)

    One can therefore evaluate the mean-square error by using Eq. 1.33.

    Representation of a Function by a Closed or a Complete Set of Mutually Orthogonal Functions

    It is evident from Eq. 1.33 that if we increase n, that is, if we approximate f(t) by a larger number of orthogonal functions, the error will become smaller. But by its very definition, ε is a nonnegative quantity; hence, in the limit as the number of terms is made infinite, the sum Σ Cr²Kr converges to the integral

        ∫ f²(t) dt      (taken from t1 to t2)

    and then ε vanishes. Thus, in the limit,

        ∫ f²(t) dt = C1²K1 + C2²K2 + · · · = Σ Cr²Kr

    Under these conditions f(t) is represented by the infinite series

        f(t) = C1g1(t) + C2g2(t) + · · · + Crgr(t) + · · ·

    The infinite series on the right-hand side of Eq. 1.34 thus converges to f(t) such that the mean square of the error is zero. The series is said to converge in the mean. Note that the representation of f(t) is now exact.

    A set of functions g1(t), g2(t), . . . , gr(t), . . . mutually orthogonal over the interval (t1, t2) is said to be a complete or a closed set if there exists no function x(t) for which it is true that

        ∫ x(t) gr(t) dt = 0      for every r      (integral taken from t1 to t2)

    If a function x(t) could be found such that the above integral is zero for every r, then obviously x(t) is orthogonal to each member of the set {gr(t)}, and evidently the set cannot be complete unless x(t) is itself made a member of the set.

    Let us now summarize the results of this discussion. For a set {gr(t)} (r = 1, 2, . . .) mutually orthogonal over the interval (t1, t2),

        ∫ gr(t) gs(t) dt = 0      (r ≠ s)
        ∫ gr²(t) dt = Kr          (1.35)

    If this function set is complete, then any function f(t) can be expressed as

        f(t) = C1g1(t) + C2g2(t) + · · · + Crgr(t) + · · ·      (1.36)

    where

        Cr = (1/Kr) ∫ f(t) gr(t) dt      (1.37)

    with the integrals taken from t1 to t2.

    Comparison of Eqs. 1.35 to 1.37 with Eqs. 1.20 to 1.22 brings out forcefully the analogy between vectors and signals. Any vector can be expressed as a sum of its components along n mutually orthogonal vectors, provided these vectors form a complete set. Similarly, any function f(t) can be expressed as a sum of its components along mutually orthogonal functions, provided these functions form a closed or a complete set.

    In the comparison of vectors and signals, the dot product of two vectors is analogous to the integral of the product of two signals; that is,

        A • B  is analogous to  ∫ f1(t) f2(t) dt

    It follows that the square of the length A of a vector A is analogous to the integral of the square of a function; that is,

        A • A = A²  is analogous to  ∫ f²(t) dt

    If a vector is expressed in terms of its mutually orthogonal components, the square of its length is given by the sum of the squares of the lengths of the component vectors. An analogous result holds true for signals. This is precisely what is expressed by Eq. 1.34 (Parseval's theorem). Since the component functions are not orthonormal, the right-hand side is Σ Cr²Kr instead of Σ Cr². For an orthonormal set, Kr = 1. Equation 1.34 is thus analogous to the case where a vector is expressed in terms of its components along mutually orthogonal vectors whose length squares are K1, K2, . . . , Kr, . . . , etc.

    Equation 1.36 shows that f(t) contains a component of the signal gr(t), and this component has a magnitude Cr. Representation of f(t) by an infinite set of mutually orthogonal functions is called the generalized Fourier series representation of f(t).

    Example 1.2

    As an example we shall again consider the rectangular function of Example 1.1, shown in Fig. 1.3. This function was approximated by a single function sin t. We shall now see how the approximation improves when a larger number of mutually orthogonal functions is used. It was shown previously that the functions sin nω0t and sin mω0t are mutually orthogonal over the interval (t0, t0 + 2π/ω0) for all integral values of n and m. Hence it follows that the set of functions sin t, sin 2t, sin 3t, etc., is mutually orthogonal over the interval (0, 2π). The rectangular function in Fig. 1.3 will now be approximated by a finite series of sinusoidal functions:

        f(t) ≈ C1 sin t + C2 sin 2t + · · · + Cn sin nt

    The constants Cr can be evaluated by using Eq. 1.28:

        Cr = (1/π) ∫ f(t) sin rt dt = 4/(πr)      (r odd)
                                    = 0           (r even)

    with the integral taken from 0 to 2π.

    Thus f(t) is approximated by

        f(t) ≈ (4/π)[sin t + (1/3) sin 3t + (1/5) sin 5t + · · ·]      (1.38)

    Figure 1.6 shows the actual function and the approximated function when the function is approximated with one, two, three, and four terms, respectively, in Eq. 1.38. For the given number of terms of the form sin rt, these are the optimum approximations which minimize the mean-square error. As we increase the number of terms, the approximation improves and the mean-square error diminishes. For infinite terms the mean-square error is zero.*

    Let us evaluate the error ε in these approximations. From Eq. 1.33,

        ε = (1/2π) [ ∫ f²(t) dt − π(C1² + C3² + C5² + · · ·) ]

    since here Kr = ∫ sin² rt dt = π (all integrals taken from 0 to 2π),

    * The Fourier series does not converge uniformly at the points of discontinuity; hence, even though the number of terms is increased, the approximating function shows ripples near the points of discontinuity. This is known as the Gibbs phenomenon.

    Figure 1.6 Approximation of a rectangular function by orthogonal functions.

    and

        ∫ f²(t) dt = 2π      (integral taken from 0 to 2π)

    Therefore, for the one-term approximation,

        ε = 1 − 8/π² ≈ 0.19

    For the two-term approximation,

        ε = 1 − (8/π²)(1 + 1/9) ≈ 0.099

    For the three-term approximation,

        ε = 1 − (8/π²)(1 + 1/9 + 1/25) ≈ 0.067

    and so on.

    It can be easily seen that in this case the mean-square error diminishes rapidly as the number of terms is increased.
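    A short computation (added for illustration; it mirrors Eq. 1.33 rather than reproducing the book's figure) confirms how quickly ε falls as odd harmonics are added.

```python
import numpy as np

# Mean-square error of the n-term odd-harmonic approximation (Eq. 1.33):
# eps = 1 - (8/pi^2) * sum of 1/r^2 over the first n odd r,
# since Cr = 4/(pi*r), Kr = pi, and the mean of f^2(t) over (0, 2*pi) is 1.
for n in range(1, 5):
    odd_r = range(1, 2 * n, 2)
    eps = 1 - (8 / np.pi**2) * sum(1 / r**2 for r in odd_r)
    print(n, round(eps, 4))    # 0.1894, 0.0994, 0.0669, 0.0504
```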

    Orthogonality in Complex Functions

    In the previous discussion we considered only real functions of a real variable. If f1(t) and f2(t) are complex functions of the real variable t, then it can be shown that f1(t) can be approximated by C12 f2(t) over an interval (t1, t2):

        f1(t) ≈ C12 f2(t)

    The optimum value of C12 that minimizes the magnitude of the mean-square error is given by†

        C12 = [∫ f1(t) f2*(t) dt] / [∫ f2(t) f2*(t) dt]      (1.39)

    with the integrals taken from t1 to t2,

    where f2*(t) is the complex conjugate of f2(t).

    It is evident from Eq. 1.39 that two complex functions f1(t) and f2(t) are orthogonal over the interval (t1, t2) if

        ∫ f1(t) f2*(t) dt = 0      (integral taken from t1 to t2)

    For a set of complex functions {gr(t)} (r = 1, 2, . . .) mutually orthogonal over the interval (t1, t2):

        ∫ gr(t) gs*(t) dt = 0      (r ≠ s)
        ∫ gr(t) gr*(t) dt = Kr

    If this set of functions is complete, then any function f(t) can be expressed as

        f(t) = C1g1(t) + C2g2(t) + · · · + Crgr(t) + · · ·

    † See, for instance, S. Mason and H. Zimmerman, Electronic Circuits, Signals and Systems, pp. 199-200, John Wiley and Sons, New York, 1960.

    where

        Cr = (1/Kr) ∫ f(t) gr*(t) dt      (integral taken from t1 to t2)

    If the set of functions is real, then gr*(t) = gr(t) and all results for complex functions reduce to those obtained for real functions in Eqs. 1.35 to 1.37.
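    For complex signals the conjugate matters. The sketch below (with arbitrarily chosen example signals) evaluates Eq. 1.39 numerically for two complex exponentials over one common period and shows that they are orthogonal.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
g1 = np.exp(1j * 2 * np.pi * t)     # exp(j*2*pi*t)
g2 = np.exp(1j * 4 * np.pi * t)     # exp(j*4*pi*t)

# Eq. 1.39: C12 = integral(g1 * conj(g2)) / integral(g2 * conj(g2)),
# evaluated here as a ratio of averages over the common interval.
C12 = np.mean(g1 * np.conj(g2)) / np.mean(g2 * np.conj(g2))
print(abs(C12))                     # essentially zero: the two exponentials are orthogonal
```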

    1.2 SOME EXAMPLES OF ORTHOGONAL FUNCTIONS

    Representation of a function over a certain interval by a linear combination of mutually orthogonal functions is called a Fourier series representation of the function. There exist, however, a large number of sets of orthogonal functions, and hence a given function may be expressed in terms of different sets of orthogonal functions. In vector space this is analogous to the representation of a given vector in different coordinate systems. Each set of orthogonal functions corresponds to a coordinate system. Some examples of sets of orthogonal functions are trigonometric functions, exponential functions, Legendre polynomials, and Jacobi polynomials. Bessel functions also form a special kind of orthogonal functions.†

    Legendre Fourier Series

    A set of Legendre polynomials Pn(t) (n = 0, 1, 2, . . .) forms a complete set of mutually orthogonal functions over the interval (−1 < t < 1). These polynomials can be defined by Rodrigues' formula:

        Pn(t) = [1/(2ⁿ n!)] dⁿ/dtⁿ (t² − 1)ⁿ

    It follows from this equation that

        P0(t) = 1
        P1(t) = t
        P2(t) = (1/2)(3t² − 1)
        P3(t) = (1/2)(5t³ − 3t)

    and so on.

    † Bessel functions are orthogonal with respect to a weighting function. See, for instance, W. Kaplan, Advanced Calculus, Addison-Wesley, Reading, Mass., 1953.

    We may verify the orthogonality of these polynomials by showing that

        ∫ Pm(t) Pn(t) dt = 0              (m ≠ n)
                         = 2/(2m + 1)     (m = n)

    with the integral taken from −1 to 1.

    We can express a function f(t) in terms of Legendre polynomials over the interval (−1 < t < 1) as

        f(t) = C0P0(t) + C1P1(t) + · · · + CrPr(t) + · · ·

    where, by Eq. 1.37 with Kr = 2/(2r + 1),

        Cr = [(2r + 1)/2] ∫ f(t) Pr(t) dt      (integral taken from −1 to 1)
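    As a closing sketch (using NumPy's Legendre utilities; the test function is arbitrary), the coefficients Cr = [(2r + 1)/2] ∫ f(t)Pr(t) dt can be computed numerically and used to rebuild an approximation to f(t) over (−1, 1).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

t = np.linspace(-1.0, 1.0, 100_000)
dt = t[1] - t[0]
f = np.abs(t)                          # an arbitrary test function on (-1, 1)

def legendre_coeff(r):
    """Cr = (2r + 1)/2 * integral from -1 to 1 of f(t)*Pr(t) dt (rectangle rule)."""
    Pr = Legendre.basis(r)(t)          # samples of the r-th Legendre polynomial
    return (2 * r + 1) / 2 * np.sum(f * Pr) * dt

coeffs = [legendre_coeff(r) for r in range(6)]
approx = sum(c * Legendre.basis(r)(t) for r, c in enumerate(coeffs))
print(np.round(coeffs, 4))             # odd-order coefficients vanish for this even f(t)
print(np.max(np.abs(f - approx)))      # residual error of the six-term approximation
```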
