About this ebook

FROM THE PREFACE: Many new useful ideas are presented in this handbook, including new finite impulse response (FIR) filter design techniques, half-band and multiplierless FIR filters, interpolated FIR (IFIR) structures, and error spectrum shaping.

Language: English
Release date: Oct 22, 2013
ISBN: 9780080507804


    Handbook of Digital Signal Processing

    Engineering Applications

    Douglas F. Elliott

    Rockwell International Corporation, Anaheim, California

    Table of Contents

    Cover image

    Title page

    Copyright

    Preface

    Acronyms and Abbreviations

    Notation

    Chapter 1: Transforms and Transform Properties

    Publisher Summary

    I INTRODUCTION

    II REVIEW OF FOURIER SERIES

    III DISCRETE-TIME FOURIER TRANSFORM

    IV z-TRANSFORM

    V LAPLACE TRANSFORM

    VI TABLE OF z-TRANSFORMS AND LAPLACE TRANSFORMS

    VII DISCRETE FOURIER TRANSFORM

    VIII DISCRETE-TIME RANDOM SEQUENCES

    IX CORRELATION AND COVARIANCE SEQUENCES

    X POWER SPECTRAL DENSITY

    XI SUMMARY

    Chapter 2: Design and Implementation of Digital FIR Filters

    Publisher Summary

    I INTRODUCTION

    II FIR DIGITAL FILTER PRELIMINARIES

    III FIR FILTER DESIGN BASED ON WINDOWING

    IV EQUIRIPPLE APPROXIMATIONS FOR FIR FILTERS

    V MAXIMALLY FLAT APPROXIMATIONS FOR FIR FILTERS

    VI LINEAR PROGRAMMING APPROACH FOR FIR FILTER DESIGNS

    VII FREQUENCY TRANSFORMATIONS IN FIR FILTERS

    VIII TWO-DIMENSIONAL LINEAR-PHASE FIR FILTER DESIGN AND IMPLEMENTATION

    IX RECENT TECHNIQUES FOR EFFICIENT FIR FILTER DESIGN

    X OTHER USEFUL TYPES OF FIR FILTERS

    XI SUMMARY

    APPENDIX A DESIGN CHARTS FOR DIGITAL FIR DIFFERENTIATORS AND HILBERT TRANSFORMERS

    APPENDIX B PROGRAM LISTINGS FOR LINEAR-PHASE FIR FILTER DESIGN

    Chapter 3: Multirate FIR Filters for Interpolating and Desampling

    Publisher Summary

    I INTRODUCTION

    II CHARACTERISTICS OF BANDWIDTH-REDUCING FIR FILTERS

    III DATA RATE REDUCTION (DESAMPLING) BY 1/M FILTERS

    IV HETERODYNE PROCESSING

    V INTERPOLATING FILTERS

    VI ARCHITECTURAL MODELS FOR FIR FILTERS

    VII SUMMARY

    APPENDIX WINDOWS AS NARROWBAND FILTERS

    D Closing Comments

    Chapter 4: IIR Digital Filters

    Publisher Summary

    I INTRODUCTION

    II PRELIMINARIES

    III STABILITY

    IV DIGITAL FILTER REALIZATIONS

    V FREQUENCY DOMAIN DESIGN

    VI ANALOG FILTER DESIGN AND FILTER TYPES†

    VII FREQUENCY TRANSFORMATIONS

    VIII DIGITAL FILTER DESIGN BASED ON ANALOG TRANSFER FUNCTIONS

    IX SPECTRAL TRANSFORMATIONS†

    X DIGITAL FILTERS BASED ON CONTINUOUS-TIME LADDER FILTERS

    XI SUMMARY

    APPENDIX IIR DIGITAL FILTER CAD PROGRAMS

    Chapter 5: Low-Noise and Low-Sensitivity Digital Filters

    Publisher Summary

    I INTRODUCTION

    II BINARY NUMBERS—REPRESENTATION AND QUANTIZATION

    III GENERATION AND PROPAGATION OF ROUNDOFF NOISE IN DIGITAL FILTERS

    IV DYNAMIC RANGE CONSTRAINTS AND SCALING

    V SIGNAL-TO-ROUNDOFF NOISE RATIO IN SIMPLE IIR FILTER STRUCTURES

    VI LOW-NOISE IIR FILTER SECTIONS BASED ON ERROR-SPECTRUM SHAPING

    VII SIGNAL-TO-NOISE RATIO IN GENERAL DIGITAL FILTER STRUCTURES

    VIII LOW-NOISE CASCADE-FORM DIGITAL FILTER IMPLEMENTATION

    IX NOISE REDUCTION IN THE CASCADE FORM BY ESS

    X LOW-NOISE DESIGNS VIA STATE-SPACE OPTIMIZATION

    XI PARAMETER QUANTIZATION AND LOW-SENSITIVITY DIGITAL FILTERS

    XII LOW-SENSITIVITY SECOND-ORDER SECTIONS

    XIII WAVE DIGITAL FILTERS

    XIV THE LOSSLESS BOUNDED REAL APPROACH FOR THE DESIGN OF LOW-SENSITIVITY FILTER STRUCTURES

    XV STRUCTURAL LOSSLESSNESS AND PASSIVITY

    XVI LOW-SENSITIVITY ALL-PASS-BASED DIGITAL FILTER STRUCTURES

    XVII DIGITAL ALL-PASS FUNCTIONS

    XVIII ORTHOGONAL DIGITAL FILTERS

    XIX QUANTIZATION EFFECTS IN FIR DIGITAL FILTERS

    XX LOW-SENSITIVITY FIR FILTERS BASED ON STRUCTURAL PASSIVITY

    XXI LIMIT CYCLES IN IIR DIGITAL FILTERS

    Chapter 6: Fast Discrete Transforms

    Publisher Summary

    I INTRODUCTION

    II UNITARY DISCRETE TRANSFORMS

    III THE OPTIMUM KARHUNEN–LOÈVE TRANSFORM

    IV SINUSOIDAL DISCRETE TRANSFORMS

    V NONSINUSOIDAL DISCRETE TRANSFORMS

    VI PERFORMANCE CRITERIA

    VII COMPUTATIONAL COMPLEXITY AND SUMMARY

    APPENDIX A FAST IMPLEMENTATION OF DCT VIA FFT

    APPENDIX B DCT CALCULATION USING AN FFT

    APPENDIX C WALSH–HADAMARD COMPUTER PROGRAM

    Chapter 7: Fast Fourier Transforms

    Publisher Summary

    I INTRODUCTION

    II DFTS AND DFT REPRESENTATIONS

    III FFTS DERIVED FROM THE MIR

    IV RADIX-2 FFTs

    V RADIX-3 AND RADIX-6 FFTs

    VI RADIX-4 FFTs

    VII SMALL-N DFTs

    VIII FFTs DERIVED FROM THE RURITANIAN CORRESPONDENCE (RC)

    IX FFTs DERIVED FROM THE CHINESE REMAINDER THEOREM

    X GOOD’S FFT

    XI KRONECKER PRODUCT REPRESENTATION OF GOOD’S FFT

    XII POLYNOMIAL TRANSFORMS

    XIII COMPARISON OF ALGORITHMS

    XIV FFT WORD LENGTHS

    XV SUMMARY

    APPENDIX A Small-N DFT Algorithms

    APPENDIX B FFT COMPUTER PROGRAMS

    APPENDIX C RADIX-2 FFT PROGRAM

    APPENDIX D PRIME FACTOR ALGORITHM (PFA) PROGRAM LISTINGS FOR PRIME FACTOR TRANSFORM

    APPENDIX E HIGHLY EFFICIENT PFA ASSEMBLY LANGUAGE COMPUTER PROGRAM

    Chapter 8: Time Domain Signal Processing with the DFT

    Publisher Summary

    I INTRODUCTION

    II THE DFT AS A BANK OF NARROWBAND FILTERS

    III FAST CONVOLUTION AND CORRELATION

    IV THE DFT AS AN INTERPOLATOR AND SIGNAL GENERATOR

    V SUMMARY

    Chapter 9: Spectral Analysis

    Publisher Summary

    I INTRODUCTION

    II RATIONAL SPECTRAL MODELS

    III RATIONAL MODELING: EXACT AUTOCORRELATION KNOWLEDGE

    IV OVERDETERMINED EQUATION MODELING APPROACH

    V DETECTION OF MULTIPLE SINUSOIDS IN WHITE NOISE

    VI MA MODELING: TIME SERIES OBSERVATIONS

    VII AR MODELING TIME SERIES OBSERVATIONS

    VIII ARMA MODELING: TIME SERIES OBSERVATIONS

    IX ARMA MODELING: A SINGULAR VALUE DECOMPOSITION APPROACH

    X NUMERICAL EXAMPLES

    XI CONCLUSIONS

    Chapter 10: Deconvolution

    Publisher Summary

    I INTRODUCTION

    II DECONVOLUTION AND LTI SYSTEMS WITH NO MEASUREMENT NOISE

    III DECONVOLUTION AND THE IDENTIFICATION OF DTLTI SYSTEMS WITH MEASUREMENT NOISE

    IV FAST ALGORITHMS FOR DECONVOLUTION PROBLEMS

    V SOME PRACTICAL APPLICATIONS OF DECONVOLUTION

    VI SUMMARY

    APPENDIX A REFERENCES FOR OBTAINING COMPUTATIONAL ALGORITHMS

    APPENDIX B IMPLEMENTING THE LEVINSON OR TOEPLITZ RECURSION

    APPENDIX C IMPLEMENTING THE LATTICE FORM OF THE LEVINSON RECURSION

    Chapter 11: Time Delay Estimation

    Publisher Summary

    I INTRODUCTION

    II TIME DELAY ESTIMATION FOR ACTIVE SENSORS

    C A Time Delay Estimation Algorithm for Active Sensors

    III TIME DELAY ESTIMATION FOR PASSIVE SENSORS

    IV CROSS-CORRELATION AND ITS RELATIONSHIP TO THE TIME DELAY ESTIMATION PROBLEM

    V THE IMPLEMENTATION OF SOME TIME DELAY ESTIMATION ALGORITHMS USING THE FAST FOURIER TRANSFORM (FFT)

    VI ALGORITHM PERFORMANCE

    VII SUMMARY

    Chapter 12: Adaptive Filtering

    Publisher Summary

    I INTRODUCTION

    II SOME MATRIX OPERATIONS

    III A CLASS OF OPTIMAL FILTERS

    IV LEAST-MEAN-SQUARES (LMS) ALGORITHM

    V LMS LATTICE ALGORITHMS

    Chapter 13: Recursive Estimation

    Publisher Summary

    I INTRODUCTION

    II LEAST SQUARES ESTIMATION

    III LINEAR MINIMUM MEAN SQUARE ESTIMATION

    IV DISCRETE KALMAN FILTERING EXAMPLES

    V EXTENSIONS

    VI SOME COMPUTATIONAL CONSIDERATIONS

    VII SUMMARY

    Chapter 14: Mechanization of Digital Signal Processors

    Publisher Summary

    I INTRODUCTION

    II DIGITAL MACHINE FUNDAMENTALS

    III THE ESSENCE OF DIGITAL SIGNAL PROCESSING

    IV NUMBER REPRESENTATIONS

    V HARDWARE COMPONENTS

    VI MICROPROGRAMMING

    VII KEEPING THINGS IN PERSPECTIVE

    VIII DISTRIBUTED ARITHMETIC

    IX SUMMARY

    Window Generation Computer Program

    Index

    Copyright

    Copyright © 1987 by Academic Press, Inc.

    All Rights Reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.

    ACADEMIC PRESS, INC.

    1250 Sixth Avenue, San Diego, California 92101

    United Kingdom Edition published by

    ACADEMIC PRESS INC. (LONDON) LTD.

    24-28 Oval Road, London NW1 7DX

    Library of Congress Cataloging in Publication Data

    Handbook of digital signal processing.

    Includes index.

    1. Signal processing–Digital techniques–Handbooks, manuals, etc. I. Elliott, Douglas F.

    TK5102.5.H32 1986 621.38′043 86-26490

    ISBN 0-12-237075-9 (alk. paper)

    Printed in the United States of America

    87 88 89 90 9 8 7 6 5 4 3 2 1

    Preface

    When Academic Press approached me with the proposal that I serve as editor of a handbook for digital signal processing, I was aware of the need for such a book in my work in the aerospace industry. Specifically, I wanted basic digital signal processing principles and approaches described in a book that a person with a standard engineering background could understand. Also, I wanted the book to cover the more advanced approaches, to outline the advantages and disadvantages of each approach, and to list references in which I could find detailed derivations and descriptions of the approaches that might be most applicable to given implementation problems.

    The various authors in this volume have done an outstanding job of accomplishing these goals. Coverage of the fundamentals alone makes the book self-sufficient, yet many advanced techniques are described in readable, descriptive prose without formal proofs. Detailing fundamental approaches and describing other available techniques provide an easily understandable book containing information on a wide range of approaches. For example, the chapter on adaptive filters derives basic adaptive filter structures and provides the reader with a background to see the forest of adaptive filtering. The chapter then describes various alternatives, including adaptive lattice structures that might be applicable to particular engineering problems. This description is provided without the detailed derivations that get one lost in the trees.

    Many new useful ideas are presented in this handbook, including new finite impulse response (FIR) filter design techniques, half-band and multiplierless FIR filters, interpolated FIR (IFIR) structures, and error spectrum shaping. The advanced digital filter design techniques provide for low-noise, low-sensitivity, state-space, and limit-cycle free filters. Filters for decimation and interpolation are described from an intuitive and easily understandable viewpoint. New fast Fourier transform (FFT) ideas include in-place and in-order mixed-radix FFTs, FFTs computed in nonorthogonal coordinates, and prime factor and Winograd Fourier transform algorithms. Transmultiplexing discussions carefully describe how to control crosstalk, how to satisfy dynamic range requirements, and how to avoid aliasing when resampling. Using an overdetermined set of Yule–Walker equations is a key concept described for reducing data-induced hypersensitivities of parameters in model-based spectral estimation. Tools are provided for understanding the basic theory, physics, and computational algorithms associated with deconvolution and time delay estimation. Recursive least squares adaptive filter algorithms for both lattice and transversal structures are compared to other approaches, and their advantage in terms of rapid convergence at the expense of a modest computational increase is discussed. Extensions of Kalman filtering include square-root filtering. The simplicity and regularity of distributed arithmetic are lucidly described and are shown to be attractive for VLSI implementation.

    There is some overlap in the material covered in various chapters, but readers will find the overlap helpful. For example, in Chapter 2 there is an excellent derivation of FIR digital filters that provides the necessary mathematical framework, and in the first part of Chapter 3 there is an intuitive explanation of how various FIR filter parameters, such as impulse response length, affect the filter performance. Similarly, in Chapter 9 the Yule–Walker equations are discussed in the context of spectral analysis, whereas in Chapter 10 these equations appear from a different viewpoint in the context of deconvolution.

    Many applications in digital signal processing involve the use of computer programs. After many discussions the chapter authors decided to include useful programs and to give references to publications in which related program listings can be found. For example, Chapter 7 points out that a large percentage of FFT applications are probably best accomplished with a radix-2 FFT, and such an FFT is found in Appendix 7-C. However, Appendixes 7-D and 7-E present prime factor algorithms designed for IBM ATs and XTs. The listing in Appendix 7-E is a highly efficient 1008-point assembly language program. Other sources for FFTs are also listed in Appendix 7-B.

    The encouragement of Academic Press was crucial to the development of this book, and I would like to thank the editors for their support and advice. I would also like to express my appreciation to Stanley A. White for his behind-the-scenes contribution as an advisor, and to thank all of the chapter authors for their diligent efforts in developing the book. Finally, I would like to thank my wife, Carol, for her patience regarding time I spent compiling, editing, and writing several chapters for the book.

    Acronyms and Abbreviations

    lsb Least significant bit

    msb Most significant bit

    ADC Analog-to-digital converter

    AGC Automatic gain control

    ALE Adaptive line enhancer

    AR Autoregressive

    ARMA Autoregressive moving average

    BP Bandpass

    BPF Bandpass filter

    BR Bounded real

    BRO Bit-reversed order

    CAD Computer-aided design

    CCW Counterclockwise

    CG Coherent gain

    CMOS Complementary metal-oxide semiconductor

    CMT C-matrix transform

    CRT Chinese remainder theorem

    CSD Canonic sign digit

    DA Distributed arithmetic

    DAC Digital-to-analog converter

    DCT Discrete cosine transform

    DFT Discrete Fourier transform

    DF2 Direct-form 2

    DIF Decimation-in-frequency

    DIT Decimation-in-time

    DPCM Differential pulse code modulation

    DRO Digit-reversed order

    DSP Digital signal processing

    DST Discrete sine transform

    DTFT Discrete-time Fourier transform

    DTLTI Discrete-time linear time-invariant

    DTRS Discrete-time random sequence

    DWT Discrete Walsh transform

    EFB Error feedback

    ENBW Equivalent noise bandwidth

    EPE Energy packing efficiency

    ESS Error-spectrum shaping

    FDM Frequency-division (domain) multiplexing

    FDST Fast discrete sine transform

    FFT Fast Fourier transform

    FIR Finite impulse response

    GT General orthogonal transform

    HHT Hadamard–Haar transform

    HPF Highpass filter

    HT Haar transform

    IDFT Inverse discrete Fourier transform

    IDTFT Inverse discrete-time Fourier transform

    IFFT Inverse fast Fourier transform

    IFIR Interpolated finite impulse response

    IIR Infinite-duration impulse response

    IQ In-phase and quadrature

    IT Inverse transform; identity transform

    KLT Karhunen-Loève transform

    KT Kumaresan-Tufts

    LBR Lossless bounded real

    LC Inductance–capacitance

    LDI Lossless discrete integrator

    LHP Left half-plane

    LMS Least-mean-square

    LP Lowpass

    LPC Linear predictive coding

    LPF Lowpass filter

    LS Least squares

    LSA Least squares analysis

    LSI Large-scale integration

    LTI Linear time-invariant

    MA Moving average

    MAC Multiplier-accumulator

    MFIR Multiplicative finite impulse response

    MIR Mixed-radix integer representation

    MLMS Modified least-mean-square

    MMS Minimum mean-square

    MP McClellan–Parks

    MSE Mean-squared error

    MSP Most significant product

    NO Natural order

    NTSC National Television Systems Committee

    NTT Number-theoretic transform

    PFA Prime factor algorithm

    PROM Programmable read-only memory

    PSD Power spectral density

    PSR Parallel-to-serial register

    QMF Quadrature mirror filter

    RAM Random-access memory

    RC Ruritanian correspondence

    RCFA Recursive cyclotomic factorization algorithm

    RHT Rationalized Haar transform

    RLS Recursive least squares

    ROM Read-only memory

    RRS Recursive running sum

    RT Rapid transform

    SD Sign digit

    SOSLSI Silicon-on-sapphire large-scale integration

    SER Sequential regression

    SFG Signal-flow graph

    SNR Signal-to-noise ratio

    SPR Serial-to-parallel register

    SR Shift register

    SRFFT Split-radix fast Fourier transform

    SSBFDM Single-sideband frequency-division multiplexing

    ST Slant transform

    SVD Singular value decomposition

    TDM Time division (domain) multiplexed

    VLSI Very large-scale integration

    WDF Wave digital filter

    WFTA Winograd Fourier transform algorithm

    WHT Walsh-Hadamard transform

    WSS Wide-sense stationary

    Notation

    Chapter 1

    Transforms and Transform Properties

    DOUGLAS F. ELLIOTT,     Rockwell International Corporation, Anaheim, California 92803

    Publisher Summary

    This chapter provides an overview of transforms and transform properties. It reviews the nature of sampled data and develops transforms and transform properties for the analysis of data sequences. The integrals defining the Fourier series coefficients correspond to the inverse discrete-time Fourier transform (IDTFT), and both one- and two-dimensional series are considered. The aliasing phenomenon leads to a periodic spectrum for data sequences, so the spectrum has a Fourier representation in terms of the data. This representation can be found from the data by using the discrete-time Fourier transform (DTFT). The DTFT is generalized to the z-transform, which is a powerful tool for data sequence analysis. The chapter discusses the reason for the periodicity of the discrete-time spectrum and presents DTFT properties. It also reviews the discrete Fourier transform (DFT) and highlights the Laplace transform. The chapter reviews discrete-time random sequences and discusses correlation and covariance sequences and their power spectral densities.

    I INTRODUCTION

    Transforms and transform properties occupy an important compartment of an engineer’s tool kit for solving new problems and gaining insight into old ones. By resolving a time-varying waveform into sinusoidal components, engineers transform a problem from that of studying time domain phenomena to that of evaluating frequency domain properties. These properties often lead to simple explanations of otherwise complicated occurrences.

    Continuous waveforms are not alone in being amenable to analysis by transforms and transform properties. Data sequences that result from sampling waveforms likewise may be studied in terms of their frequency content. Sampling, however, introduces a new problem: analog waveforms that look nothing alike before sampling can yield exactly the same sampled data; one sampled waveform aliases as the other.

    This chapter briefly reviews the nature of sampled data and develops transforms and transform properties for the analysis of data sequences. We start by reviewing Fourier series that represent periodic waveforms. We note that the aliasing phenomenon leads to a periodic spectrum for data sequences so that the spectrum has a Fourier representation in terms of the data. We can find this representation from the data by using the discrete-time Fourier transform (DTFT).

    The DTFT is generalized to the z-transform, which is a powerful tool for data sequence analysis. We also review the discrete Fourier transform (DFT) and recall the Laplace transform. We review discrete-time random sequences before discussing correlation and covariance sequences and their power spectral densities. Tables of properties are presented for each transform.

    II REVIEW OF FOURIER SERIES

    Fourier series have been a fundamental engineering tool since J. Fourier announced in 1807 that an arbitrary periodic function could be represented as the summation of scaled cosine and sine waveforms. We shall use Fourier series as a basis for developing the DTFT in the next section. We show that the integrals defining the series coefficients correspond to the inverse discrete-time Fourier transform (IDTFT).

    This section simply recalls for the reader’s convenience the definition of Fourier series. We consider one- and two-dimensional series.

    A One-Dimensional Fourier Series

    Let X(α) have period P and be the function to be represented by a one-dimensional (1-D) series. Let X(α) be such that

    (1.1)    $\int_{-P/2}^{P/2} |X(\alpha)| \, d\alpha < \infty$

    Then X(α) has the 1-D Fourier series representation

    (1.2)    $X(\alpha) = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j2\pi\alpha n/P}$

    At a point of discontinuity the series converges to $\frac{1}{2}[X(\alpha^-) + X(\alpha^+)]$, where $X(\alpha^-)$ and $X(\alpha^+)$ are the function’s values at the left and right sides of the discontinuity, respectively. The x(n), n = 0, ± 1, ± 2, …, are Fourier series coefficients given by

    (1.3)    $x(n) = \frac{1}{P} \int_{-P/2}^{P/2} X(\alpha) \, e^{j2\pi\alpha n/P} \, d\alpha$

    We can easily derive Eq. (1.3) from Eq. (1.2) by using the orthogonality property for exponential functions:

    (1.4)    $\frac{1}{P} \int_{-P/2}^{P/2} e^{j2\pi\alpha (k-n)/P} \, d\alpha = \delta(k-n)$

    where

    (1.5)    $\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & n \neq 0 \end{cases}$

    is the Kronecker delta function. Multiplying both sides of Eq. (1.2) by exp(j2παk/P), integrating from −P/2 to P/2, and using Eq. (1.4) yield Eq. (1.3).

    For most engineering applications the function X(α) is bounded and continuous, except, possibly, at a finite number of points. In this case the Fourier series holds for very general integrability conditions. The orthogonality condition, Eq. (1.4), makes the Fourier series useful by allowing a function to be converted from one domain (frequency, etc.) to another (time, etc.). Other transforms (Walsh, etc.; see Chapter 6) also have orthogonality conditions and may be considered for the analysis of periodic functions.
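The orthogonality property of Eq. (1.4) is easy to verify numerically. The sketch below (plain NumPy; the period P and the indices are arbitrary values chosen for illustration) approximates the normalized integral with a midpoint rule:

```python
import numpy as np

# Numerical check of the orthogonality property for exponential functions:
# (1/P) * integral_{-P/2}^{P/2} exp(j*2*pi*alpha*(k - n)/P) d(alpha) = delta(k - n).
P = 2.0                                     # arbitrary period for the demonstration
N = 4096                                    # midpoint-rule sample count
alpha = -P / 2 + (np.arange(N) + 0.5) * P / N  # midpoints of N subintervals of [-P/2, P/2]

def normalized_inner_product(n, k):
    """Midpoint-rule approximation of (1/P) * integral of exp(j*2*pi*alpha*(k-n)/P)."""
    integrand = np.exp(1j * 2 * np.pi * alpha * (k - n) / P)
    return integrand.mean()                 # mean = (1/P) * sum * (P/N)

same = normalized_inner_product(3, 3)       # k == n: should equal 1
different = normalized_inner_product(3, 5)  # k != n: should equal 0
```

For integer harmonic differences the midpoint sums are sums over roots of unity, so the results are exact up to roundoff.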

    Figure 1.1(a) shows one period of a square wave of period P. Figure 1.1(b)–(e) shows Fourier series representations using 1, 2, 3, or 10 terms of the series. The reader may verify that the N-term approximation, XN(α), to the square wave reduces to

    (1.6)    $X_N(\alpha) = \frac{4}{\pi} \sum_{\substack{n=1 \\ n\ \mathrm{odd}}}^{2N-1} \frac{(-1)^{(n-1)/2}}{n} \cos\frac{2\pi n\alpha}{P}$

    Fig. 1.1 A periodic waveform and its Fourier series representation. (a) One period of the waveform; (b) one-term approximation; (c) two-term approximation; (d) three-term approximation; (e) ten-term approximation.

    If we let x(n) = 2an/π, we note that a0 = 0, an = (−1)^(n−1)/2/n when the index n is an odd integer, and an = 0 when n is even. The series coefficients an are plotted versus both n and n/P in Fig. 1.2.

    Fig. 1.2 Scaled Fourier series coefficients for the waveform in Fig. 1.1.

    Figure 1.1(e) illustrates an advantage and a disadvantage of the Fourier series representation of the square wave. An advantage is that only 10 terms of the series give a fairly accurate approximation to the waveform. A disadvantage is the overshoot, or Gibbs phenomenon, at the points of discontinuity of the waveform. Further discussion of this phenomenon and Fourier series in general is in [1].

    We have illustrated the representation of a periodic continuous function X(α) by a sequence of coefficients x(n). Given the sequence x(n), we can find the function X(α), and, indeed, the procedure of taking a data sequence and finding the corresponding X(α) is that of the DTFT, discussed in Section III.

    B Two-Dimensional Fourier Series

    Let X(α, β) be an image with period P1 along the α axis and period P2 along the β axis (see Fig. 1.3). Note that the periodic image is generated by simply repeating a single image in both the horizontal and vertical directions. Let

    Fig. 1.3 Two-dimensional function with periods P1 along the horizontal axis and P2 along the vertical axis.

    (1.7)    $\int_{-P_1/2}^{P_1/2} \int_{-P_2/2}^{P_2/2} |X(\alpha, \beta)| \, d\beta \, d\alpha < \infty$

    Then X(α, β) has the 2-D Fourier series representation

    (1.8)    $X(\alpha, \beta) = \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x(m, n) \, e^{-j2\pi\alpha m/P_1} e^{-j2\pi\beta n/P_2}$

    Paralleling the derivation of the Fourier coefficients for the 1-D series, we obtain the Fourier coefficients for the 2-D series:

    (1.9)    $x(m, n) = \frac{1}{P_1 P_2} \int_{-P_1/2}^{P_1/2} \int_{-P_2/2}^{P_2/2} X(\alpha, \beta) \, e^{j2\pi\alpha m/P_1} e^{j2\pi\beta n/P_2} \, d\beta \, d\alpha$

    The coefficient x(m, n) scales the product of complex sinusoids exp(−j2παm/P1)exp(−j2πβn/P2) that have m cycles per P1 units in the horizontal direction and n cycles per P2 units in the vertical direction. Remarks concerning integrability conditions, advantages, and disadvantages for the 1-D series apply equally to the 2-D series.
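As a quick numerical illustration of Eq. (1.9), the sketch below evaluates the coefficient integral by a midpoint rule for an illustrative separable image (the periods and the test image are arbitrary choices, not from the text):

```python
import numpy as np

# Evaluate the 2-D Fourier coefficient integral, Eq. (1.9), for the illustrative
# periodic image X(alpha, beta) = cos(2*pi*alpha/P1) * cos(2*pi*beta/P2),
# whose only nonzero coefficients are x(+/-1, +/-1) = 1/4.
P1, P2 = 2.0, 3.0
N = 128
a = -P1 / 2 + (np.arange(N) + 0.5) * P1 / N    # midpoint grid along alpha
b = -P2 / 2 + (np.arange(N) + 0.5) * P2 / N    # midpoint grid along beta
A, B = np.meshgrid(a, b, indexing="ij")
X = np.cos(2 * np.pi * A / P1) * np.cos(2 * np.pi * B / P2)

def coefficient(m, n):
    """Midpoint-rule approximation of Eq. (1.9); mean() supplies the 1/(P1*P2) factor."""
    integrand = X * np.exp(1j * 2 * np.pi * A * m / P1) * np.exp(1j * 2 * np.pi * B * n / P2)
    return integrand.mean()

x_11 = coefficient(1, 1)     # expected 1/4
x_21 = coefficient(2, 1)     # expected 0
```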

    The reader will doubtless see a pattern emerging from the 1-D and 2-D series development. This pattern leads to series representations for N-D functions, N = 3, 4, …. We will not present these representations but will exploit a similar pattern in a later section to develop N-D discrete Fourier transforms.

    III DISCRETE-TIME FOURIER TRANSFORM

    The periodic waveforms discussed in the previous section have Fourier series representations determined, in general, by an infinite number of coefficients. Given the waveform, we can determine the sequence of coefficients. Conversely, given a sequence, we can find the continuous waveform. It is this latter procedure that yields the DTFT.

    The DTFT provides a frequency domain representation of a data sequence that might result, for example, from sampling an analog waveform every T seconds (s). The distinct difference between the frequency spectrum of the analog signal and the discrete-time sequence derived from it is that the sampling process causes the analog spectrum to repeat periodically at intervals of fs, where fs = 1/T is the sampling frequency. This section reviews the reason for the periodicity of the discrete-time spectrum, derives the DTFT and IDTFT, and presents a table of DTFT properties.

    A Reason for Periodicity in Discrete-Time Spectra

    Fig. 1.4 Cosine waveforms yielding the same data at sampling instants.

    Figure 1.4 shows cosine waveforms of frequencies 1 Hz and 1 + fs Hz sampled every T s; the sampled data from one is exactly the same as the sampled data from the other, and we say that sampled data from one aliases as sampled data from the other. It is easy to verify that cosines of frequencies 1 + kfs, fs = 1/T, k = ±1, ±2, …, go through the same points of intersection. Although Fig. 1.4 depicts cosine waveforms, aliasing will occur for any sinusoid.

    We have shown that sampled sinusoids of frequency 1 Hz are indistinguishable from those of 1 + kfs Hz, where k is any integer. Likewise, sampled sinusoids of frequencies f and f + kfs are indistinguishable:

    $\cos[2\pi(f + kf_s)nT + \phi] = \cos(2\pi f nT + 2\pi kn + \phi) = \cos(2\pi f nT + \phi)$

    where ϕ is an arbitrary phase angle. Consequently, a spectrum analyzer would get the same value at f as at f + kfs. We conclude that if by some means we determine the frequency spectrum of a discrete-time data sequence, the aliasing feature causes the spectrum to repeat at intervals of fs, as shown in Fig. 1.5. In general, the frequency spectrum X(f) is complex, so only the magnitude is plotted in the figure. The nonsymmetry of the spectrum about 0 Hz is due to a complex-valued data sequence that might result, for example, from frequency shifting (i.e., complex demodulation), which is described later.
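The indistinguishability is easy to check numerically. The sketch below samples two sinusoids whose frequencies differ by a multiple of fs (the sampling rate, phase, and alias index are arbitrary values for illustration):

```python
import numpy as np

# Two sinusoids whose frequencies differ by k*fs yield identical samples at t = n*T.
fs = 8.0                        # sampling frequency (Hz), illustrative
T = 1.0 / fs
n = np.arange(32)
t = n * T                       # sampling instants
f, k, phi = 1.0, 2.0, 0.7       # base frequency, alias index, arbitrary phase

x_low = np.cos(2 * np.pi * f * t + phi)
x_high = np.cos(2 * np.pi * (f + k * fs) * t + phi)  # frequency f + k*fs

max_difference = np.max(np.abs(x_low - x_high))      # ~ 0: the sequences alias
```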

    Fig. 1.5 Magnitude spectrum for a complex data sequence.

    B Fourier Series Representation of Periodic Spectra

    We have found that the spectrum of a data sequence is periodic. If the data results from sampling a continuous-time signal every T s, then the period of the spectrum is fs = 1/T Hz. Since periodic functions can be represented by Fourier series under relatively mild conditions, we can use Eq. (1.2) to represent the spectrum by the series

    (1.10)    $X_1(f) = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j2\pi f nT}$

    where the series coefficient x(n) is given by

    (1.11)    $x(n) = \frac{1}{f_s} \int_{-f_s/2}^{f_s/2} X_1(f) \, e^{j2\pi f nT} \, df$

    The series coefficient x(n) is the data sequence giving rise to the spectrum. We use x(n) for samples of the continuous-time function x(t) sampled at t = nT and for data sequences in general. We know that x(n) has a periodic spectrum. Substituting f + kfs, where k is an integer, for f in Eq. (1.10) shows that X1(f) is the same for f as for f + kfs. Thus, X1(f) has period fs Hz, as required.

    Note that in the Fourier series development we assumed a periodic function was given, and we found the sequence of coefficients for the Fourier series representation, using Eq. (1.11). If we are given a sequence of coefficients instead of the spectrum, we can use the coefficients to find the spectrum by using Eq. (1.10). When dealing with sequences, we are more likely to be given data that corresponds to the coefficients. If the data is the sequence x(n), we find its spectrum using Eq. (1.10). We recover the data sequence from its spectrum by using Eq. (1.11). In any case Eqs. (1.2) and (1.3) or Eqs. (1.10) and (1.11) are a transform pair.
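The pair of Eqs. (1.10) and (1.11) can be exercised numerically: compute the spectrum of a short sequence with Eq. (1.10) on a frequency grid, then recover one sample by integrating over one period as in Eq. (1.11). The sequence, sampling interval, and grid size below are arbitrary illustrative choices.

```python
import numpy as np

# Forward: Eq. (1.10), X1(f) = sum_n x(n) exp(-j*2*pi*f*n*T), on a frequency grid.
# Inverse: Eq. (1.11), x(n) = (1/fs) * integral over one period of X1(f)*exp(+j*2*pi*f*n*T) df.
T = 0.5                                    # illustrative sampling interval (s)
fs = 1.0 / T
x = np.array([1.0, -2.0, 3.0, 0.5])        # illustrative data sequence x(0..3)
n = np.arange(len(x))

M = 64                                     # midpoint frequency grid across one period
f = -fs / 2 + (np.arange(M) + 0.5) * fs / M

# Eq. (1.10): spectrum at each grid frequency
X1 = np.exp(-1j * 2 * np.pi * np.outer(f, n) * T) @ x

# Eq. (1.11): recover x(2); (1/fs) * sum * (fs/M) collapses to a mean
x2 = np.mean(X1 * np.exp(1j * 2 * np.pi * f * 2 * T))
```

With a uniform grid the discretized inverse recovers the sample exactly (up to roundoff) for sequences shorter than the grid.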

    Another transform pair is the continuous-time Fourier transform and its inverse defined, respectively, by

    (1.12)    $X(f) = \int_{-\infty}^{\infty} x(t) \, e^{-j2\pi f t} \, dt$

    (1.13)    $x(t) = \int_{-\infty}^{\infty} X(f) \, e^{j2\pi f t} \, df$

    We can gain additional intuition for Eq. (1.10) by noting that it is the Fourier transform of

    (1.14)    $x_s(t) = \sum_{n=-\infty}^{\infty} x(t) \, \delta(t - nT)$

    where for any continuous function y(t),

    (1.15)    $\int_{-\infty}^{\infty} y(t) \, \delta(t - nT) \, dt = y(nT)$

    The function δ(t − nT) is a Dirac delta function that acts as a sampling function in the sense that it derives y(nT) from y(t) through Eq. (1.15). If we let Eq. (1.14) be the integrand of Eq. (1.12), then Eq. (1.15) yields the term $x(nT)\, e^{-j2\pi f nT}$, which is a term in Eq. (1.10). Thus, Eq. (1.10) is the Fourier transform of Eq. (1.14). Whereas Eq. (1.12) yields the same answer as Eq. (1.10) if x(t) is sampled with delta functions, Eqs. (1.11) and (1.13) do not correspond directly because Eq. (1.11) applies to a sequence and Eq. (1.13) applies to a continuous-time function. Since the spectrum given by Eq. (1.10) is periodic, only one period is required to obtain the sample x(n), as Eq. (1.11) shows. This is in contrast to Eq. (1.13), where the entire spectrum is used to obtain x(t).

    C One-Dimensional DTFT and IDTFT

    We will now simplify the notation by using a normalized sampling interval of T = 1 s and radian frequency ω = 2πf. Let X1(f) = X(ejωT). Then rewriting Eqs. (1.10) and (1.11) for T = 1 s gives

    (1.16)    $X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x(n) \, e^{-j\omega n}$

    (1.17)    $x(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} X(e^{j\omega}) \, e^{j\omega n} \, d\omega$

    Equations (1.16) and (1.17) are defined as the 1-D DTFT and 1-D IDTFT, respectively. The DTFT yields a periodic spectrum X(ejω) for a given data sequence x(n). The IDTFT recovers the data sequence from the spectrum. We will also use the notation

    (1.18)    $X(e^{j\omega}) = \mathrm{DTFT}[x(n)]$

    (1.19)    $x(n) = \mathrm{DTFT}^{-1}[X(e^{j\omega})]$

    for Eqs. (1.16) and (1.17), respectively. Let Ω be the analog radian frequency. Then conversion from the radian frequency ω normalized for a sampling interval of 1 s to analog radian frequency Ω for an arbitrary sampling interval T requires only the substitution ω = ΩT. Figure 1.6 indicates corresponding points on the frequency axes for the variables f, ω = 2πf, Ω = 2πF, and F, where F is the analog frequency in hertz.

    Fig. 1.6 Corresponding points on frequency axes for normalized variables f and ω and for analog variables Ω and F.

    D DTFT Properties

    Table I summarizes properties of the 1-D DTFT. A property is described by a transform pair consisting of a data sequence representation and a transform sequence representation. For example, x(n) and X(ejω) constitute a transform pair. We will illustrate derivation of the pairs with several examples. For further details see [2, 3].

    TABLE I

    Summary of Discrete-Time Fourier Transform Properties

    1 Frequency Shifting

    Let the sequence x(n) have the DTFT X(e^{jω}). Then the frequency-shifted sequence is e^{jω₀n}x(n), and its DTFT is

    (1.20)  DTFT[e^{jω₀n} x(n)] = X(e^{j(ω−ω₀)})

    The transform of e^{jω₀n}x(n) is right-shifted by ω₀ rad s−1, and the DTFT of e^{−jω₀n}x(n) is left-shifted in frequency, so that e^{±jω₀n}x(n) and X(e^{j(ω∓ω₀)}) constitute a transform pair.
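    The frequency-shifting pair can be checked numerically. In this Python sketch (the sequence, ω₀, and the test frequency are arbitrary choices), the DTFT of e^{jω₀n}x(n) evaluated at ω matches X evaluated at ω − ω₀:

```python
import cmath

def dtft(x, w):
    # X(e^{jw}) for a finite sequence x(0..N-1)
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

x = [1.0, -2.0, 0.5, 3.0]
w0 = 0.7                      # frequency shift in radians
shifted = [cmath.exp(1j * w0 * n) * xn for n, xn in enumerate(x)]

w = 1.3                       # test frequency
lhs = dtft(shifted, w)        # DTFT of e^{j w0 n} x(n) at w
rhs = dtft(x, w - w0)         # X(e^{j(w - w0)})
```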

    2 Data Sequence Convolution

    Convolution of the sequence x(n) with y(n) is represented by x(n) * y(n) and is defined by

    (1.21)  x(n) * y(n) = Σ_{m=−∞}^{∞} x(m) y(n − m)

    The transform of Eq. (1.21) is

    (1.22)  DTFT[x(n) * y(n)] = Σ_{n=−∞}^{∞} [Σ_{m=−∞}^{∞} x(m) y(n − m)] e^{−jωn}

    Interchanging summations on the right of Eq. (1.22) and letting i = n − m yield

    (1.23)  DTFT[x(n) * y(n)] = Σ_{m=−∞}^{∞} x(m) e^{−jωm} Σ_{i=−∞}^{∞} y(i) e^{−jωi} = X(e^{jω}) Y(e^{jω})

    as stated in Table I.
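    The convolution property can also be verified numerically. In this Python sketch (the two sequences and the test frequency are arbitrary choices), the DTFT of x(n) * y(n) equals the product of the individual DTFTs:

```python
import cmath

def dtft(x, w):
    # X(e^{jw}) for a finite sequence x(0..N-1)
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

def convolve(x, y):
    # w(n) = sum over m of x(m) y(n - m) for finite sequences starting at n = 0
    out = [0.0] * (len(x) + len(y) - 1)
    for m, xm in enumerate(x):
        for k, yk in enumerate(y):
            out[m + k] += xm * yk
    return out

x = [1.0, 2.0, 3.0]
y = [0.5, -1.0, 4.0, 2.0]
w = 0.9
lhs = dtft(convolve(x, y), w)
rhs = dtft(x, w) * dtft(y, w)
```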

    3 Frequency Domain Convolution

    Frequency domain convolution is defined by

    (1.24)  X(e^{jω}) ⊛ Y(e^{jω}) = (1/2π) ∫_{−π}^{π} X(e^{jθ}) Y(e^{j(ω−θ)}) dθ

    Using the IDTFT definition, Eq. (1.17), interchanging integrations, and making a change of variables yield

    (1.25)  DTFT[x(n) y(n)] = X(e^{jω}) ⊛ Y(e^{jω})

    as stated in Table I.

    4 Symmetry Properties

    Several properties in Table I deal with conjugate symmetric sequences satisfying x(n) = x*(−n) and conjugate antisymmetric sequences satisfying x(n) = −x*(−n). If a sequence is real, conjugate symmetry corresponds to even symmetry and conjugate antisymmetry to odd symmetry.

    5 Sampling Frequency Change

    As an example of the utility of transform properties, consider the sampling frequency change properties (the two entries before Parseval’s theorem at the end of Table I). Let the periodic repetitions of the spectrum of a sequence x1(n) be widely spaced so that the signal bandwidth (BW) satisfies BW ≤ fs/M. Then the sequence may be desampled by M : 1; that is, only 1 of every M samples is retained [see Fig. 1.7(a), (b)]. This reduces the spectral amplitude by 1/M and causes the spectrum to repeat at the new sampling frequency fs/M [Fig. 1.7(c); the curve for X3(e^{j2πf}) applies to X2(e^{j2πf}) after the frequency axis is rescaled for M = 3]. Desampling is used, for example, to analyze a signal more efficiently with a DFT. Before going to the DFT, the signal is desampled as much as possible without introducing aliasing, and, as a consequence of the desampling, the DFT can be run at a lower rate.

    Fig. 1.7 For fs ≥ M · BW, desampling by M : 1 reduces the computation rate, while upsampling by 1 : M interpolates the signal. (a) Block diagram showing desampling, upsampling, and a filter to remove replicas. (b) Spectral magnitude of x1(n). (c) Spectral magnitude of x3(n) for M = 3.

    A signal can be interpolated by a 1 : M upsampling that adds M − 1 zeros after every sample (padding with zeros by 1 : M). Although the upsampling increases the sampling frequency, it does not affect the spectrum, which still repeats at fs/M [Fig. 1.7(c)]. When we remove the spectral replicas at integer multiples of fs/M by filtering, the zero values introduced by padding disappear and we obtain the original sequence x1(n). If we start with the signal x2(n) and wish to interpolate to find intermediate sample values, we simply pad with zeros by 1 : M and use a lowpass filter with a zero-frequency gain of M to get a sequence x1(n) such that every Mth value matches x2(n). Another interesting application of upsampling is to effect a sampling frequency change (see Chapter 3).
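    The effect of 1 : M upsampling on the spectrum follows directly from the DTFT sum: padding with zeros gives X3(e^{jω}) = X2(e^{jMω}), so the spectrum is compressed by M and repeats with period 2π/M. A Python sketch (the sequence and the test frequencies are arbitrary choices):

```python
import cmath
import math

def dtft(x, w):
    # X(e^{jw}) for a finite sequence x(0..N-1)
    return sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

M = 3
x2 = [1.0, 2.0, -1.0, 0.5]

# 1:M upsampling -- insert M - 1 zeros after every sample
x3 = []
for sample in x2:
    x3.append(sample)
    x3.extend([0.0] * (M - 1))

w = 0.4
lhs = dtft(x3, w)                           # spectrum of the upsampled sequence
rhs = dtft(x2, M * w)                       # original spectrum evaluated at M*w
replica = dtft(x3, w + 2.0 * math.pi / M)   # one spectral replica, same value
```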

    E Two-Dimensional DTFT

    Let an image x(r, s) be sampled at intervals of T1 and T2 along the r and s axes, respectively, yielding the 2-D sequence x(m, n). The spectrum will be 2-D with periods 1/T1 and 1/T2 along the f1 and f2 axes, respectively, for the same reason that a 1-D spectrum is periodic. Since the 2-D spectrum is periodic, we can represent it by a 2-D Fourier series. Paralleling the steps for the 1-D DTFT and IDTFT leads to

    (1.26)  X(e^{jω1}, e^{jω2}) = Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} x(m, n) e^{−j(ω1·m + ω2·n)}

    (1.27)  x(m, n) = (1/4π²) ∫_{−π}^{π} ∫_{−π}^{π} X(e^{jω1}, e^{jω2}) e^{j(ω1·m + ω2·n)} dω1 dω2

    We define Eqs. (1.26) and (1.27) as the 2-D DTFT and 2-D IDTFT, respectively. Extension of Table I to the 2-D case using Eqs. (1.26) and (1.27) is straightforward.

    IV z-TRANSFORM

    The z-transform generalizes the DTFT and gives additional information on system stability. This section discusses the z-transform, the inverse z-transform, and a table of properties.

    A One-Dimensional z-Transform

    Equation (1.16) defines the 1-D DTFT:

    (1.16)  X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

    We can generalize this equation by replacing e^{−jωn} by e^{−σn}e^{−jωn}, letting z = e^{σ+jω}, and defining the resulting summation as the two-sided, 1-D z-transform of x(n) or, simply, the z-transform of x(n), denoted by

    (1.28)  X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}

    For σ = 0, z = e^{jω}, and Eq. (1.28) is the same as Eq. (1.16). In this case |z| = |e^{jω}| = |cos ω + j sin ω| = 1, which defines the unit circle (a circle with unity radius centered at the origin). Evaluating the z-transform on the unit circle in the z-plane corresponds to the DTFT.

    B Region of Convergence

    The infinite series in Eq. (1.28) is meaningful only if it converges. One test of convergence is the ratio test: a series converges if the magnitude of the ratio of term n + 1 to term n (term −n − 1 to term −n on the negative axis) is less than 1 as n → ∞. For n > 0 we require that

    (1.29)  lim_{n→∞} |x(n + 1) z^{−(n+1)}| / |x(n) z^{−n}| = lim_{n→∞} |x(n + 1)/x(n)| · |z|^{−1} < 1,  i.e., |z| > R1

    whereas for n < 0 we require that

    (1.30)  lim_{n→∞} |x(−n − 1) z^{n+1}| / |x(−n) z^{n}| = lim_{n→∞} |x(−n − 1)/x(−n)| · |z| < 1,  i.e., |z| < R2

    The region where Eqs. (1.29) and (1.30) are satisfied is called the region of convergence; R1 and R2 are called the radii of convergence. As an example, let

    (1.31)  x(n) = { a^n,  n ≥ 0;  −b^n,  n < 0 }

    Applying the geometric series summation formula

    (1.32)  Σ_{n=0}^{∞} x^n = 1/(1 − x),  |x| < 1

    to the z-transform of Eq. (1.31) gives

    (1.33)  X(z) = z/(z − a) + z/(z − b) = z[2z − (a + b)] / [(z − a)(z − b)]

    where

    (1.34)  a < |z| < b

    From Eq. (1.34) we conclude that the region of convergence for Eq. (1.33) is the annulus defined by |z| > a and |z| < b, as shown in Fig. 1.8. As is evident from Eq. (1.33), the function X(z) diverges at z = a and z = b. Such points are called poles of the function. Similarly, X(z) = 0 at z = (a + b)/2 and z = 0. Such points are called zeros of the function. If b < a, there is no region of convergence for (1.33) because the z-transform diverges everywhere.

    Fig. 1.8 Region of convergence for x(n) = (a^n, n ≥ 0; −b^n, n < 0).
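    The annular region of convergence can be checked numerically. Summing the two geometric series for x(n) = a^n (n ≥ 0) and x(n) = −b^n (n < 0) gives the closed form X(z) = z/(z − a) + z/(z − b) for a < |z| < b. In this Python sketch (a, b, the evaluation point z, and the truncation length are arbitrary choices), a truncated two-sided sum agrees with that closed form at a point inside the annulus:

```python
a, b = 0.5, 2.0
z = 1.0 + 0.5j            # |z| is about 1.118, inside the annulus a < |z| < b

N = 200                   # truncation length for the two-sided sum
causal = sum((a / z) ** n for n in range(N))                      # a^n z^{-n}, n >= 0
anticausal = -sum((b ** n) * z ** (-n) for n in range(-N, 0))     # -(b^n z^{-n}), n < 0

partial = causal + anticausal
closed = z / (z - a) + z / (z - b)
```

    Outside the annulus one of the two series diverges, so the partial sums grow without bound instead of settling toward the closed form.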

    C One-Sided z-Transform

    Many sequences considered in this book are zero for n < 0. Such a sequence is right-sided, and the 1-D z-transform, given by Eq. (1.28), becomes

    (1.35)  X(z) = Σ_{n=0}^{∞} x(n) z^{−n}

    Similarly, if x(n) = 0 for n > 0, the sequence is left-sided and its 1-D z-transform is

    (1.36)  X(z) = Σ_{n=−∞}^{0} x(n) z^{−n}

    Equations (1.35) and (1.36) define one-sided z-transforms. Section VI gives the z-transform of a number of right-sided sequences along with the corresponding continuous-time function x(t) from which the sampled version, the sequence x(n), is derived and the Laplace transform of x(t).

    D Inverse One-Dimensional z-transform

    Given the function X(z), we derive the data sequence x(n) by taking the inverse z-transform of X(z). Equation (1.28) defines the z-transform:

    (1.28)  X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}

    Multiplying both sides of Eq. (1.28) by z^{m−1}/2πj and integrating over a counterclockwise (CCW) contour C, which is in the region of convergence of X(z) and encircles the origin, yields

    (1.37)  (1/2πj) ∮_C X(z) z^{m−1} dz = Σ_{n=−∞}^{∞} x(n) [ (1/2πj) ∮_C z^{m−n−1} dz ]

    where we have substituted the right side of Eq. (1.28) for X(z) and have then interchanged summation and integration. We evaluate the integral in the brackets by the Cauchy integral theorem:

    (1.38)  (1/2πj) ∮_C z^{m−n−1} dz = δ(m − n)

    where

    (1.39)  δ(m − n) = { 1,  m = n;  0,  m ≠ n }

    Since only the n = m term in the summation survives, Eq. (1.37) reduces to the inverse z-transform:

    (1.40)  x(n) = (1/2πj) ∮_C X(z) z^{n−1} dz

    When C is the unit circle z = ejω, the preceding integral reduces to

    (1.17)  x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

    which is the IDTFT defined previously.

    We can evaluate the integral on the right of Eq. (1.40) by Cauchy’s residue theorem:

    (1.41)  x(n) = Σ [residues of X(z) z^{n−1} at the poles inside C]

    However, in practice it is often easier to use either long division or a partial-fraction expansion rather than Eq. (1.40) or Eq. (1.41). As an example of long division, consider the right-sided z-transform X(z) = 1/(1 − az^{−1}). Dividing numerator by denominator yields

    (1.42)  1/(1 − az^{−1}) = 1 + az^{−1} + a²z^{−2} + a³z^{−3} + ···

    Comparing coefficients of z^{−n} gives x(n) = a^n.
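    Long division of polynomials in z^{−1} is easy to mechanize. This Python sketch (the helper long_division and the value of a are illustrative, not from the text) divides 1 by 1 − az^{−1} and recovers the coefficients a^n:

```python
def long_division(num, den, nterms):
    # Power-series coefficients (in ascending powers of z^-1) of num/den,
    # where num and den are coefficient lists in ascending powers of z^-1.
    quotient = []
    rem = num[:] + [0.0] * nterms
    for n in range(nterms):
        c = rem[n] / den[0]
        quotient.append(c)
        for k, dk in enumerate(den):
            if n + k < len(rem):
                rem[n + k] -= c * dk
    return quotient

a = 0.8
coeffs = long_division([1.0], [1.0, -a], 6)   # 1 / (1 - a z^-1)
expected = [a ** n for n in range(6)]
```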

    As an example of using a partial-fraction expansion, consider the right-sided transform

    (1.43)  X(z) = 1/[(1 − az^{−1})(1 − bz^{−1})] = [1/(a − b)] [ a/(1 − az^{−1}) − b/(1 − bz^{−1}) ]

    We may evaluate each term in the summation on the right of Eq. (1.43) by using Eq. (1.42) to get x(n) = (a^{n+1} − b^{n+1})/(a − b), or we can use z-transform pairs. Some z-transform pairs are stated for right- and left-sided sequences in Table II. In the table ω and a are real numbers; right- and left-sided sequences converge for |z| > p and |z| < q, respectively, where p is a right-sided sequence pole and q is a left-sided sequence pole. More extensive right-sided z-transforms are in Section VI.
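    The result x(n) = (a^{n+1} − b^{n+1})/(a − b) can be cross-checked another way: since the transform factors as the product of 1/(1 − az^{−1}) and 1/(1 − bz^{−1}), its inverse is also the convolution of the right-sided sequences a^n and b^n. A Python sketch with arbitrary a and b:

```python
a, b = 0.6, 0.3

def closed_form(n):
    # x(n) = (a^{n+1} - b^{n+1}) / (a - b)
    return (a ** (n + 1) - b ** (n + 1)) / (a - b)

def conv_form(n):
    # convolution of the right-sided sequences a^n and b^n
    return sum((a ** k) * (b ** (n - k)) for k in range(n + 1))

diffs = [abs(closed_form(n) - conv_form(n)) for n in range(10)]
```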

    TABLE II

    z-Transform Pairs

    E z-Transform Properties

    Table III summarizes a number of 1-D z-transform properties, most of which apply to either one- or two-sided data sequences. When the property applies only to a one-sided sequence, this is stated. For example, the initial value theorem in the table applies to right-sided sequences. Derivation of the properties is treated in [3–13, 19]. Most of the properties are a straightforward application of the z-transform definition, as the following examples illustrate.

    TABLE III

    Summary of z-Transform Properties

    aC is in the region of convergence of X(v) and Y(z/v).

    bR is the radius of convergence of X(z).

    cThe poles of X(z) must lie within the unit circle except for possibly a first-order pole at z = 1.

    dThe poles of X(z) and Y(z) must lie within the unit circle.

    1 Data Sequence Horizontal Axis Sign Change

    Let x(n) be replaced by x(− n)—for example, by a time reversal in taking data. The z-transform of the sequence x(− n) is, by definition,

    (1.44)  Σ_{n=−∞}^{∞} x(−n) z^{−n} = Σ_{m=−∞}^{∞} x(m) (1/z)^{−m} = X(1/z)

    The data sequence horizontal axis sign change yields x(− n), which has the z-transform X(1/z), whereas x(n) has the z-transform X(z).

    2 Convolution of Data Sequences

    This property states that the z-transform of the convolution of data sequences yields the product of the z-transforms of each sequence. Let x(n) and y(n) be the sequences convolved, and let w(n) be the resulting sequence:

    (1.45)  w(n) = x(n) * y(n) = Σ_{m=−∞}^{∞} x(m) y(n − m)

    By definition the z-transform of w(n) is

    (1.46)  W(z) = Σ_{n=−∞}^{∞} [Σ_{m=−∞}^{∞} x(m) y(n − m)] z^{−n}

    Interchanging the summations and letting k = n − m yield

    (1.47)  W(z) = Σ_{m=−∞}^{∞} x(m) z^{−m} Σ_{k=−∞}^{∞} y(k) z^{−k} = X(z) Y(z)

    so the z-transform of x(n) * y(n) is X(z)Y(z).

    3 Periodic Convolution of Transforms

    This property states that the z-transform of the sequence formed from the term-by-term product of two data sequences is given by a contour integral. If the region of convergence of the z-transform of each sequence includes the unit circle in the z-plane, then the contour integral is a periodic convolution. Let w(n) = x(n)y(n). Then the z-transform of w(n) is

    (1.48)  W(z) = Σ_{n=−∞}^{∞} x(n) y(n) z^{−n} = Σ_{n=−∞}^{∞} x(n) [ (1/2πj) ∮_{C1} Y(v) v^{n−1} dv ] z^{−n}

    where Eq. (1.40) was used to express y(n) and C1 is a CCW contour around the origin in the region of convergence of Y(v). Interchanging the integration and summation in Eq. (1.48) yields

    (1.49)  W(z) = (1/2πj) ∮_{C1} Y(v) X(z/v) v^{−1} dv

    where now C1 must lie in the region of convergence of X(z/v) as well as that of Y(v). Interchanging the roles of x(n) and y(n) yields another form of the integral:

    (1.50)  W(z) = (1/2πj) ∮_C X(v) Y(z/v) v^{−1} dv

    where C is a CCW contour that encircles the origin and lies in the regions of convergence of X(v) and Y(z/v). Combining Eqs. (1.48) and (1.50) gives the result that the z-transform of the sequence x(n)y(n) is a contour integration. Let C include the circles with radii ρ and r/ρ. Let z = re^{jϕ} and v = ρe^{jω}. Then Eq. (1.50) gives

    (1.51a)  W(re^{jϕ}) = (1/2π) ∫_{−π}^{π} X(ρe^{jω}) Y((r/ρ) e^{j(ϕ−ω)}) dω

    which is called a periodic convolution because W(rejϕ) has period 2π. When the circles of convergence include r = ρ = 1, we interchange the roles of ϕ and ω and denote the periodic convolution by

    (1.51b)  W(e^{jω}) = (1/2π) ∫_{−π}^{π} X(e^{jϕ}) Y(e^{j(ω−ϕ)}) dϕ = X(e^{jω}) ⊛ Y(e^{jω})

    which is the same as frequency domain convolution for the DTFT [see Eq.(1.24)].

    F Two-Dimensional z-Transform

    Just as we generalized the 1-D DTFT to obtain the 1-D z-transform, we shall generalize the 2-D DTFT to obtain the 2-D z-transform. Let x(m, n) be a 2-D sequence representing, for example, a sampled image. We obtain the 2-D z-transform of x(m, n), L2-D[x(m, n)], by generalizing Eq. (1.26), letting z_i = e^{σ_i + jω_i}, i = 1, 2, which gives

    (1.52)  X(z1, z2) = Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} x(m, n) z1^{−m} z2^{−n}

    The region of convergence of X(z1, z2) is that region in z1, z2 space for which Eq. (1.52) is absolutely summable:

    (1.53)  Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} |x(m, n) z1^{−m} z2^{−n}| < ∞

    Likewise, the 2-D inverse z-transform results from generalizing Eq. (1.27):

    (1.54)  x(m, n) = (1/2πj)² ∮_{C1} ∮_{C2} X(z1, z2) z1^{m−1} z2^{n−1} dz1 dz2

    where Ci is a closed contour encircling the origin of the zi-plane, i = 1, 2. The contours Ci are generally difficult to specify unless the 2-D z-transform is separable: X(z1, z2) = X1(z1)X2(z2), which is true if and only if the data sequence is separable.

    The table of 1-D z-transform properties extends in a straightforward manner to 2-D properties. For example, the z-transform of a 2-D convolution gives the product of transforms:

    (1.55)  L2-D[x(m, n) * y(m, n)] = X(z1, z2) Y(z1, z2)

    The 2-D z-transform is important in the development of 2-D digital filters used in image processing. Thus, stability of the 2-D filter is an important consideration, and the stability assessment requires us to determine the location of the zeros of the denominator polynomial of the filter transfer function. The filter is stable if the denominator polynomial is never zero for any values of z1 and z2 such that |z1| > 1 and |z2| > 1.

    V LAPLACE TRANSFORM

    Whereas the z-transform is the primary tool for analysis of discrete-time systems, the Laplace transform is often the primary tool for analysis of continuous-time systems. The operational methods underlying Laplace transform analysis were originally developed by Oliver Heaviside to solve ordinary differential equations by algebraic means without finding a general solution and evaluating arbitrary constants.

    Laplace transforms have several applications in this book. We use them principally to describe an analog filter transfer function. We can convert this transfer function to a digital filter by the techniques described in Chapter 4.

    A Definition of the One-Sided Laplace Transform

    Let x(t) be a function such that

    (1.56)  ∫_{0}^{∞} |x(t)| e^{−σt} dt < ∞

    for some finite, real-valued constant σ. Then the one-sided Laplace transform of x(t), L[x(t)], is defined as X(s) and given by

    (1.57)  X(s) = L[x(t)] = ∫_{0}^{∞} x(t) e^{−st} dt

    To ensure essential uniqueness of the function x(t) when we are given X(s), we require that

    (1.58)  x(t) = 0,  t < 0

    Then the inverse Laplace transform of X(s) is

    (1.59)  x(t) = (1/2πj) ∫_{σ1−j∞}^{σ1+j∞} X(s) e^{st} ds

    where σ1 > σ and the latter is the σ in Eq. (1.56).

    B Laplace Transform Properties

    Table IV summarizes some Laplace transform properties. In the table all functions are zero for negative time; that is, if t < 0, then x(t) = x1(t) = x2(t) = 0. Other notational definitions include

    TABLE IV

    Summary of Laplace Transform Propertiesc

    aRe[w] = c lies to the right of the poles of X1(w) and to the left of the poles of X2(s − w).

    bThe poles of sX(s) must be in the left half of the s-plane.

    cAdapted from [14].

    (1.60)

    (1.61)

    We illustrate derivation of the pairs with several examples.

    1 Laplace Transform of e^{−at}

    For Re[s] > −a the transform of e^{−at} is

    (1.62)  L[e^{−at}] = ∫_{0}^{∞} e^{−at} e^{−st} dt = 1/(s + a)
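    The pair L[e^{−at}] = 1/(s + a) can be verified by direct numerical integration of the defining integral. A Python sketch (the truncation point and step count are arbitrary choices):

```python
import math

def laplace(x, s, t_max=40.0, num_steps=100000):
    # Approximate X(s) = integral from 0 to infinity of x(t) e^{-st} dt
    # by truncating at t_max and applying the midpoint rule.
    dt = t_max / num_steps
    total = 0.0
    for k in range(num_steps):
        t = (k + 0.5) * dt
        total += x(t) * math.exp(-s * t) * dt
    return total

a = 2.0
s = 1.0                              # Re[s] > -a, so the integral converges
approx = laplace(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)
```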

    2 Laplace Transform of ∫_{0}^{t} x(τ) dτ

    The Laplace transform of ∫_{0}^{t} x(τ) dτ is

    (1.63)  L[∫_{0}^{t} x(τ) dτ] = [ −(e^{−st}/s) ∫_{0}^{t} x(τ) dτ ]_{0}^{∞} + (1/s) ∫_{0}^{∞} x(t) e^{−st} dt = X(s)/s

    where we used ∫ u dv = uv − ∫ v du, with u = ∫_{0}^{t} x(τ) dτ and dv = e^{−st} dt, and where the expression in brackets evaluated at zero and infinity equals zero [14].

    3 Laplace Transform of dx(t)/dt

    Let lim_{t→∞} e^{−st} x(t) = 0. Then

    (1.64)  L[dx(t)/dt] = [e^{−st} x(t)]_{0}^{∞} + s ∫_{0}^{∞} x(t) e^{−st} dt = sX(s) − x(0)

    where again we integrated by parts, using u = e^{−st} and dv = dx(t).

    VI TABLE OF z-TRANSFORMS AND LAPLACE TRANSFORMS

    Table V states the Laplace transform for each listed function x(t), as well as the z-transform of the right-sided sequence x(n), where x(n) = x(t) evaluated at t = nT and T is the sampling interval. Note that the z-transforms have not been stated for a normalized sampling frequency of 1 Hz; rather, the sampling interval T appears explicitly in the transforms.

    TABLE V

    Table of Laplace and z-Transformsa

    aFrom [8].

    VII DISCRETE FOURIER TRANSFORM

    The DTFT discussed in Section III yields a periodic, continuous spectrum for a nonperiodic data sequence of infinite length. The DFT of this section also yields a periodic spectrum characteristic of sampled data. In contrast to the DTFT, the DFT has a line spectrum that represents a sequence of period N. The term discrete Fourier transform is somewhat of a misnomer since the DFT provides a Fourier series representation for a finite sequence, whereas the DTFT yields a true Fourier transform of an infinite sequence incorporating Dirac delta functions [see Eq. (1.14)].

    A Series Representation of an N-Point Sequence

    Let an N-point sequence x(n) be given for n = 0, 1, 2,…, N − 1. Then we form the periodic sequence xp(n) from x(n) by simply repeating x(n) with period N:

    (1.65)  xp(n) = x(i),  −∞ < n < ∞,  0 ≤ i ≤ N − 1

    where, for some integer m, n = i + Nm. Thus, i is the remainder of n/N, or, stated another way, i is congruent to n (modulo N). These equivalent statements are written as

    (1.66)  i = n − Nm  or  i ≡ n (modulo N)

    As discussed in Section II, periodic functions can be represented by a Fourier series. There is a periodic function xp(t) that yields the sequence x(n) when sampled at t = nT, n = 0, 1, 2,…, N – 1, where P = NT is the period of the function. The Fourier coefficients Xp(k) for the series represent a line spectrum where the lines are at intervals of 1/P = fs/N as illustrated in Fig. 1.2. Thus, the lines in the spectrum are at the frequencies

    (1.67)  f_k = k f_s/N = k/(NT),  k = 0, 1, 2,…, N − 1

    where just N values are required for k because Xp(k) has the period N.
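    The periodic extension and the resulting line spectrum can be illustrated in Python (the four-point sequence is an arbitrary choice). The spectral lines Xp(k), computed directly from the Fourier-series sum, repeat with period N, just as stated above:

```python
import cmath
import math

x = [1.0, 2.0, 0.0, -1.0]
N = len(x)

def xp(n):
    # periodic extension: x_p(n) = x(i), where i is congruent to n (modulo N)
    return x[n % N]

def line(k):
    # X_p(k) = sum over n = 0..N-1 of x(n) e^{-j 2 pi k n / N}
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

# the line spectrum is itself periodic with period N
diffs = [abs(line(k + N) - line(k)) for k in range(N)]
```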

    B Inverse Discrete Fourier Transform

    We just showed that the spectrum for xp(n) is a line spectrum with period N. In Section III.B we showed that a sequence, xp(n) in this case, defines the Fourier series for the periodic spectrum. Since the spectrum is now a line spectrum, we find xp(n) from a summation over a period rather than an integration over a period as in Eq. (1.11), which is repeated here for convenience:

    (1.68)  x(n) = T ∫_{0}^{f_s} X(e^{j2πFT}) e^{j2πFnT} dF

    where the limits have been shifted from – fs/2 and fs/2 to 0 and fs. This shift has no effect on the value of the
