Pseudo Random Signal Processing: Theory and Application
Ebook · 735 pages · 7 hours

About this ebook

In recent years, pseudo random signal processing has proven to be a critical enabler of modern communication, information, security and measurement systems. The signal’s pseudo random, noise-like properties make it vitally important as a tool for protecting against interference, alleviating multipath propagation, and enabling bandwidth to be shared with other users.

Taking a practical approach to the topic, this text provides a comprehensive and systematic guide to understanding and using pseudo random signals. Covering theoretical principles, design methodologies and applications, Pseudo Random Signal Processing: Theory and Application:

  • sets out the mathematical foundations needed to implement powerful pseudo random signal processing techniques;
  • presents information about binary and nonbinary pseudo random sequence generation and design objectives;
  • examines the creation of system architectures, including those with microprocessors, digital signal processors, memory circuits and software suites;
  • gives a detailed discussion of sophisticated applications such as spread spectrum communications, ranging and satellite navigation systems, scrambling, system verification, and sensor and optical fibre systems.

Pseudo Random Signal Processing: Theory and Application is an essential introduction to the subject for practising Electronics Engineers and researchers in the fields of mobile communications, satellite navigation, signal analysis, circuit testing, cryptology, watermarking, and measurement. It is also a useful reference for graduate students taking courses in Electronics, Communications and Computer Engineering.

Language: English
Publisher: Wiley
Release date: Jul 17, 2013
ISBN: 9781118691212

    Book preview

    Pseudo Random Signal Processing - Hans-Jürgen Zepernick

    1

    Introduction

    1.1 PROLOGUE

    The performance of modern communication and information systems is influenced by the potential of the available integrated circuits and by the efficiency of the algorithms chosen for the actual signal processing. The tremendous advances in integrated circuit technology facilitate the implementation of increasingly sophisticated signal processing algorithms. This has resulted in strong interactions between theoretical concepts and technological developments, which in turn have produced a wide range of practical applications.

    This general trend applies in particular to the fields of pseudo random signals and sequences, which constitute an important element of efficient signal processing in almost every modern communication and information system. These types of signals and sequences are strictly deterministic in nature but exhibit characteristics similar to those of random signals. The strong mathematical structure associated with pseudo random signals not only provides a solid foundation for systematic signal set design but also guides the development of extremely powerful signal processing techniques. The distinct benefits of pseudo random signal processing compared to standard processing techniques include very robust immunity to hostile jamming and superior performance against several forms of unintentional interference. These advantages were first exploited in military applications, where a secure, reliable, and robust communication link is of major concern. Later, with the advent of multi-user communications and rapid advances in technology, more intricate pseudo random signal processing techniques were introduced into the civilian and commercial fields. Since then, implementation costs for these techniques have fallen considerably, allowing the many attractive features of pseudo random signal processing to be used extensively in practice.

    The wide range of applications includes areas such as signal analysis, correlation analysis, bit error measurements, scramblers, positioning, ranging and navigation systems, spread spectrum techniques, code-division multiple-access systems, and cryptology. This indicates the importance of pseudo random signal processing for communication, information, and computer technologies. From a conceptual point of view, we think it is meaningful and justified to introduce the generic term pseudo random signal processing.

    1.2 ELEMENTS OF PSEUDO RANDOM SIGNAL PROCESSING

    The field of pseudo random signal processing centers around the terms signal and structure. Essentially, it is concerned with the analysis and synthesis of random-like signals as well as their generation, realization, and processing. The related pseudo random signal processing algorithms are often associated with a particular application.

    A signal in the technological and scientific sense is usually considered as a carrier of information. Signals are often associated with technical, physical, or biological processes and as such may be described by mathematical functions that evolve over time. The fundamental properties of a signal can be relatively complex and significantly influence the mathematical framework to be used for their description. In fact, it is often found that the most important characteristic of a signal serves as an attribute when referring to a particular signal type. Some examples may illustrate the variety of signal terminologies used to indicate signal properties:

    Shape of mathematical function. Sinusoidal signal, pulse signal, rectangular signal, triangular signal.

    Purpose of use. Test signal, measurement signal, control signal, training signal, timing signal, synchronization signal, clock signal.

    Particulars in relation to transmission. Message signal, noise signal, interfering signal, carrier signal, modulation signal, transmit signal, receive signal.

    Time and amplitude characteristics. Periodic signal, aperiodic signal, continuous signal, discrete signal, digital signal, binary signal, nonbinary signal.

    Predictability and structure. Deterministic signal, random signal, cyclostationary signal, pseudo random signal.

    Origin and application field. Data signal, speech signal, audio signal, video signal, satellite signal, radio signal, radar signal.

    As with the numerous signal terminologies, the term structure is used in a wide variety of ways. Some of the more frequently applied notions include word combinations such as system structure, circuit structure, network structure, code structure, algebraic structure, fine structure, and coarse structure. In technical, physical, and biological processes, the term signal structure typically corresponds to the level of uncertainty with respect to the signal amplitude over time. The uncertainty can be quantified by characterizing the level of similarity between signals and their shifted versions using correlation functions. In particular, the autocorrelation measures the dependence of a signal on itself and the crosscorrelation measures the dependence of one signal on another. In the latter case of dealing with more than one signal, the considered sets of signals are frequently referred to as codes.
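    These correlation measures are easy to compute for short discrete sequences. As a minimal illustrative sketch (the function name and the example sequence are hypothetical choices, not taken from the text), the periodic correlation of bipolar (±1) sequences can be written as:

```python
def periodic_correlation(x, y):
    """Periodic crosscorrelation of two equal-length sequences: entry tau
    holds sum_i x[i] * y[(i + tau) % N]; with y = x this is the periodic
    autocorrelation."""
    n = len(x)
    return [sum(x[i] * y[(i + tau) % n] for i in range(n))
            for tau in range(n)]

# A length-7 m-sequence in bipolar (+1/-1) form: its periodic
# autocorrelation is impulse-like, 7 at zero shift and -1 elsewhere.
m_seq = [-1, 1, 1, -1, 1, -1, -1]
print(periodic_correlation(m_seq, m_seq))  # -> [7, -1, -1, -1, -1, -1, -1]
```

    The impulse-like autocorrelation of this sequence illustrates the low similarity between a pseudo random sequence and its own shifted versions.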

    Figure 1.1 shows a classification of structured signals with reference to the example of communication systems. In the case when the amplitudes of a signal are given over discrete time, the signal is referred to as a sequence. Accordingly, pseudo random signals have their origins in correlation codes and in signals with special correlation functions. Due to the close relationship between these two roots of pseudo random signals, the terms pseudo random code and pseudo random signal are both used synonymously in the literature and without differentiating whether a set of sequences or an individual signal is considered. Also, it should be mentioned that pseudo random signals are called pseudo noise signals in some applications.

    Figure 1.1 Classification of structured signals.

    The design and generation of pseudo random signals are closely related to the particular requirements given by the specifications of the considered application. Characteristics such as those derived from the different autocorrelation and crosscorrelation functions are commonly used to pose the design objectives for a given application. Some applications may require a pseudo random sequence with special autocorrelation properties. A different application may need pseudo random codes with certain crosscorrelation characteristics while another application may rely on some specific structural properties. Table 1.1 gives an overview of the correlation requirements of some prominent applications together with examples of the related practical systems. This overview is not intended to be exhaustive but rather to illustrate the different requirements of contemporary applications of pseudo random signals. The same applies to the selected practical system examples, many of which will be described in more detail in succeeding chapters. The first group of applications requires good autocorrelation properties, which translates to an impulse-like function with the corresponding frequency spectrum being uniformly distributed over a wide bandwidth. These properties facilitate tasks such as precise range measurements, recovery of synchronization, and spectral shaping, but may also serve for hiding a signal below the noise level to protect it against interference. The next group of applications asks for good crosscorrelation characteristics, implying that sets of pseudo random sequences are used. As a number of different pseudo random signals usually have to be distinguished in these types of systems, similarities among the sequences of a given set must be kept to a minimum. Therefore, low crosscorrelations between pairs of sequences are considered favorable in the related application areas. 
In multi-user communication scenarios, this property allows different users simultaneous access to a common transmission medium. As each user has been assigned its own pseudo random signal, identification of the communicating partners is easily obtained. Finally, a third group of applications may draw upon a wide range of other specific structural signal properties. Here, the linear complexity serves as an example of such a specific structural requirement. In brief, the linear complexity specifies the length of the shortest shift register that is capable of generating a given pseudo random sequence. This measure enables quantification of the effort needed to recover the entire structure contained in a particular pseudo random sequence given that only a snapshot of the sequence was available. As such, large linear complexities are preferred in applications associated with cryptology to ensure that secret information cannot be recovered by an eavesdropper. Table 1.1 also indicates that some applications and the related practical systems may rely on both favorable autocorrelation and crosscorrelation properties or may even have additional specific requirements. In these cases, a trade-off between the sometimes conflicting objectives needs to be considered in the pseudo random signal design.
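    The linear complexity described above can be determined with the Berlekamp-Massey algorithm. The following is a minimal sketch over GF(2) (the function name is a hypothetical choice, and this is one of several possible formulations of the algorithm):

```python
def linear_complexity(bits):
    """Linear complexity of a binary sequence, via Berlekamp-Massey over
    GF(2): returns the length of the shortest LFSR generating `bits`."""
    n = len(bits)
    c = [0] * n  # current LFSR connection polynomial
    b = [0] * n  # connection polynomial at the last length change
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the LFSR prediction and the actual bit.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:  # prediction failed: update the LFSR
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# The length-7 m-sequence 1001011 is produced by a 3-stage LFSR.
print(linear_complexity([1, 0, 0, 1, 0, 1, 1]))  # -> 3
```

    A low result such as this one signals that the entire sequence can be reconstructed from a short snapshot, which is exactly why cryptologic applications demand large linear complexities.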

    Table 1.1 Examples of application areas of pseudo random signals.

    It is evident from the above brief discussions that the rich area of pseudo random signal processing is influenced by many different mathematical and engineering areas. Some of the major relationships of pseudo random signal processing to other fields are shown in the overview given in Table 1.2. Several of these different connections between pseudo random signal processing and other fields will be revealed in more detail throughout the succeeding chapters of the book.

    Table 1.2 Relationships of pseudo random signal processing to other fields.

    Over the past four decades, the theoretical understanding of pseudo random signals and sequences has very much matured. There exists a considerable body of mathematical contributions reporting on many discrete random-like sequences, their properties, and the various means of sequence design [53, 65, 96, 111, 161]. On the other hand, it can be observed that engineering-oriented publications mainly concentrate on an in-depth coverage of a particular application. For example, the number of texts entirely devoted to the subject of spread spectrum systems continues to grow and the interested reader may be directed to [54, 67, 143, 222, 248, 272]. The same applies to the coverage of the even more specific spread spectrum-based mobile radio systems [33, 57, 117, 143, 175, 181, 227] and wireless networks [28, 180], for which a tremendous body of literature exists. Modern navigation systems are covered in textbooks such as [100, 114, 242], scrambling techniques and their applications are the focus of [139, 142], and in-depth treatments of cryptology can be found in [216, 232] and many other excellent texts. However, an effective attempt that brings together the underlying theoretical concepts, the powerful signal processing techniques, the general practical aspects, and the numerous application fields of pseudo random signal processing seems to be missing.

    1.3 OUTLINE OF THE BOOK

    This book will provide a transition from covering the engineering and mathematical foundations to conveying the powerful signal processing principles, which serve as the connecting link between theory and application. Here, the practical applications not only are drawn from the fields of communications but also span the boundaries of several technical disciplines including examples from test, information, and computer systems. The four key segments of the book can be broadly described as foundations, designs, realizations, and applications of pseudo random signal processing.

    Foundations

    The engineering foundations cover the basic signal processing concepts associated with pseudo random signals and sequences. Special attention is given to describing and discussing important signal characteristics such as correlation measures and power spectral density. The insights offered will enable the reader not only to identify design objectives but also to assess processing techniques in view of their ability to cope with the constraints imposed by practical applications. The mathematical foundations are mainly concerned with the relevant topics of abstract algebra such as algebraic structures, finite fields, and the corresponding arithmetic. This introduces the feature of strong mathematical structure to the area of pseudo random signal processing as required for the systematic design of signal and sequence sets and for the efficient realization of advanced processing techniques.

    Designs

    Equipped with solid insights into the theoretical foundations of pseudo random signal processing, the reader is then introduced to a discussion of prominent designs of conventional binary sequences as well as modern classes of nonbinary and complex-valued sequences. This segment provides sound knowledge of design objectives, design methodologies, and properties of the respective sequence designs. It inherently points to means of sequence generation and to potential areas of their application. It also indicates the general trend from simple to more advanced approaches. The presented designs give a thorough overview of sequences and related processing techniques that are used in the traditional and contemporary applications as well as the directions the technology may support in the future. However, it is not intended to provide an exhaustive survey of the vast number of sequence designs which have been proposed over the previous decades.

    Realizations

    This segment of the book is dedicated to the technical realizations of pseudo random signal processing. The standard circuits that are used in practice for implementing the respective processing algorithms are presented together with case studies of typical building blocks and realizations. In this context, some important properties of the underlying logic are emphasized. In addition to these hard-wired realizations, the more flexible system architectures based on microprocessors, digital signal processors, memory units, and software suites are presented.

    Applications

    The fourth segment of the book provides a survey and a discussion of important applications of pseudo random signal processing in modern communication, information, and computer technologies as well as several applications in other specialized fields. The background to each of the different applications is provided first, to the degree required for understanding the relationship to pseudo random signal processing. This is then followed by a more detailed coverage of the representative practical realizations, standards, and systems in the considered areas. In this way, interconnections and potential synergies among the different fields are revealed. In particular, the considered applications are drawn from the fields of spread spectrum systems, ranging and navigation systems, scrambling, automatic test and system verification, cryptology, and other applications.

    These four segments are accommodated within the succeeding six chapters of the book. Chapters 2 and 3 contain the engineering and mathematical foundations of pseudo random signal processing, respectively. Binary sequence designs and nonbinary sequence designs are presented in Chapters 4 and 5, respectively. In Chapter 6, the realizations together with implementation issues of pseudo random signals and generators are covered. Chapter 7 is dedicated to the enormous range of applications and as such also details several signal processing techniques.

    2

    Characterization of signals and sequences

    This chapter provides an overview of concepts and characteristics associated with signals and sequences. Formal definitions of continuous-time signals, discrete-time signals, continuous-valued signals, and discrete-valued signals will be given to establish the underpinning mathematical framework of pseudo random signal processing and to define the term sequence more precisely. On this basis, periodic and aperiodic correlation measures as well as average mean-square correlation measures are specified. These measures in conjunction with some useful operations on sequences can be used to gain quantitative insights into the structure contained in sets of sequences. Correlation measures are also widely accepted for guiding the classification, design, and implementation of sequences. The power spectral density is derived as the important frequency-domain counterpart to the time-domain characterization of signals and sequences. Finally, several generic criteria for assessing the pseudo randomness of signals are outlined and discussed.

    2.1 CLASSIFICATION OF SIGNALS AND SEQUENCES

    A signal can be considered as a carrier of information or energy. The signals with which we are concerned in this book relate mainly to technical, physical, or biological processes or events that progress over time. The characteristics of those signals can therefore comprise amplitude, waveform, time duration, and other properties. A signal can be formulated as a function x(t) of time t or may be given in graphical form. A classification of signals may be performed with respect to morphological, energy, and phenomenological criteria. In addition, signals can also be classified in the frequency-domain by the shape of their frequency distribution or spectrum. As we are interested primarily in signals as a function of one independent variable, dimensional classifications in terms of the number of independent variables are not considered.

    2.1.1 Morphological classification

    This type of classification relates to signal properties such as whether the signals are continuous or discrete in the time variable and/or the amplitude value. Accordingly, signals can be grouped into the following four classes:

    Continuous-time and continuous-valued signals. These types of signals are defined for every time value t and may assume any amplitude value in a given continuous open interval:

    (2.1)

    Continuous-time and continuous-valued signals are also called analog signals.

    Continuous-time and discrete-valued signals. These types of signals are defined for every time value t but can take on only discrete amplitude values:

    (2.2)

    where the set of allowed amplitude values has finite cardinality. In practical systems, quantization of amplitudes is often performed using equidistant increments to form finite sets of allowed amplitude values.

    Discrete-time and continuous-valued signals. If the signal is defined only at discrete-time instants t = iΔt, it is said to be sampled. For a discrete-time, continuous-valued signal, we have

    (2.3)

    presuming that the signal is equidistantly sampled at intervals of spacing Δt.

    Discrete-time and discrete-valued signals. These types of signals are defined for discrete-time instants t = iΔt, i ∈ ℤ, and assume only discrete amplitude values:

    (2.4)

    Discrete-time and discrete-valued signals are referred to as digital signals. As signals are often to be processed by digital computers and digital signal processors, suitable digital signal formats must be provided.
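    The four morphological classes can be illustrated by sampling and quantizing an analog signal. The following sketch assumes a 1 Hz sine as the underlying analog signal, a sampling interval Δt = 0.125 s, and a quantizer step of 0.25; all of these values, and the helper names, are hypothetical choices for illustration:

```python
import math

# The underlying analog signal: continuous-time, continuous-valued.
def x_analog(t):
    return math.sin(2 * math.pi * t)  # a 1 Hz sine

def quantize(value, step=0.25):
    """Map an amplitude to the nearest multiple of `step`; applied for
    every t this yields a continuous-time, discrete-valued signal."""
    return round(value / step) * step

dt = 0.125  # sampling interval (delta t)

# Discrete-time, continuous-valued: sampling at t = i*dt.
sampled = [x_analog(i * dt) for i in range(8)]

# Discrete-time, discrete-valued (digital): sampling plus quantization.
digital = [quantize(v) for v in sampled]
print(digital)  # -> [0.0, 0.75, 1.0, 0.75, 0.0, -0.75, -1.0, -0.75]
```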

    Another morphological characteristic of a signal relates to whether the signal is aperiodic (non-periodic) or periodic. A periodic continuous-time signal x(t) fulfills the condition

    (2.5)  x(t) = x(t + T)

    where T is referred to as the period of the continuous-time signal. Similarly, a periodic discrete-time signal x(iΔt) fulfills the condition

    (2.6)  x(iΔt) = x((i + N)Δt)

    where N denotes the period of the discrete-time signal.

    In the field of pseudo random signal processing, we are mainly concerned with discrete-time signals while amplitudes of the signals may be either continuous-valued or discrete-valued. We will therefore use the term sequence to emphasize more strongly the discrete-time nature of the considered signals. Furthermore, a sequence will be denoted by

    (2.7)

    Table 2.1 Sequence types and notations.

    where xi represents the element of the sequence {xi} at discrete-time instant i. If a sequence is of finite length or if it is periodic, we may use alternative notations

    (2.8)

    where the parameter N is called the length or the period of the sequence, respectively. Some examples of frequently used sequence types are listed in Table 2.1 together with the notation that will be applied to differentiate among the various cases. For example, if the elements ui of the sequence {ui} are taken from the set of complex numbers, we call it a complex sequence. In the digital communications context, the bipolar binary signal format is also known as antipodal signaling.

    2.1.2 Phenomenological classification

    Signals can be classified either as deterministic or as random. In the case when the signal is deterministic but shows certain characteristics of a random signal, we classify it as a pseudo random signal. Several criteria that may be used to assess whether a signal qualifies for pseudo randomness are given in Section 2.5.

    Deterministic signals

    As far as deterministic signals are concerned, there is no uncertainty with respect to their amplitude values at any time. Deterministic signals are sometimes called waveforms and can be described by an explicit mathematical expression. Examples of waveforms that will be used throughout the book are listed in Table 2.2.

    Random signals

    In contrast to the deterministic signals, there is some degree of uncertainty about the amplitude values of random signals prior to the time when the signal actually occurs. Random signals are also referred to as random processes and may exhibit certain regularities. These regularities can be exposed by either monitoring the random process over a sufficiently long time interval or considering a sufficiently large set of representative random signals of the particular process. As the random signal cannot be modeled by an explicit mathematical expression, the inherent regularities of a random process are commonly described by concepts such as probabilities, probability distributions, probability densities, and statistical averages.

    Random variables

    Prior to introducing the aforementioned statistical averages for random processes, it is instructive to first develop the framework and characteristics related to random variables. In doing so, we assume that an outcome of an experiment can take on elements a from a sample space. A function X(a) that maps the so-called samples a of the process to a real number x ∈ ℝ will be called a random variable and be denoted simply as X. Given the probability density function (PDF) of the random variable X as

    Table 2.2 Examples of deterministic signals.

    (2.9)

    then the mean or expected value, the kth moment, and the kth central moment of the random variable X are defined as

    (2.10)  m = E{X} = ∫_{−∞}^{∞} x f(x) dx

    (2.11)  E{X^k} = ∫_{−∞}^{∞} x^k f(x) dx

    (2.12)  E{(X − m)^k} = ∫_{−∞}^{∞} (x − m)^k f(x) dx

    respectively, where operator E{·} denotes expectation or statistical averaging. When k = 2, the second central moment is called the variance of X and is given by

    (2.13)  σ² = E{(X − m)²} = E{X²} − m²
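    These statistical averages can be estimated by averaging over samples. A minimal sketch (function names hypothetical), using a fair die whose exact mean is 3.5 and whose exact variance is 35/12:

```python
def moment(samples, k=1):
    """k-th moment E{X^k} estimated by statistical averaging."""
    return sum(x ** k for x in samples) / len(samples)

def central_moment(samples, k=2):
    """k-th central moment E{(X - m)^k}; k = 2 gives the variance."""
    m = moment(samples, 1)
    return sum((x - m) ** k for x in samples) / len(samples)

# A fair die takes each face with equal probability, so averaging over
# one full set of faces reproduces the exact mean and variance.
faces = [1, 2, 3, 4, 5, 6]
print(moment(faces, 1))          # -> 3.5
print(central_moment(faces, 2))  # close to 35/12, about 2.9167
```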

    Given two random variables Xi, i = 1, 2, …, M, and Xj, j = 1, 2, …, M, of a set of random variables and provided that their joint PDF is known as f(xi, xj), then the joint moment and joint central moment can be defined as

    (2.14)  E{Xi^k Xj^l} = ∫∫ xi^k xj^l f(xi, xj) dxi dxj

    (2.15)  E{(Xi − mi)^k (Xj − mj)^l} = ∫∫ (xi − mi)^k (xj − mj)^l f(xi, xj) dxi dxj

    respectively, where E{Xi} = mi and E{Xj} = mj. In the context of pseudo random signal processing, we will almost exclusively be concerned with the case k = l = 1. Then, the joint moment is called correlation ρij and the joint central moment is called covariance μij. We have

    (2.16)  ρij = E{Xi Xj}

    (2.17)  μij = E{(Xi − mi)(Xj − mj)} = ρij − mi mj
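    Correlation and covariance of two random variables can likewise be estimated from paired samples, using the standard identity that the covariance equals the correlation minus the product of the means (helper names and data hypothetical):

```python
def correlation(xs, ys):
    """Joint moment E{X Y} with k = l = 1, estimated from paired samples."""
    return sum(x * y for x, y in zip(xs, ys)) / len(xs)

def covariance(xs, ys):
    """Joint central moment E{(X - mX)(Y - mY)} = E{X Y} - mX * mY."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return correlation(xs, ys) - mx * my

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # ys = 2 * xs, a perfectly dependent pair
print(correlation(xs, ys))  # -> 15.0
print(covariance(xs, ys))   # -> 2.5
```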

    Random processes

    As random phenomena observed in nature are usually functions of time, we shall now return to random processes. A single realization x(t) of a random process X(t) is called a sample function. As such, a random process can be defined as the ensemble of all possible sample functions. Let us now consider two random processes X(t) and Y(t) at a number N of different discrete-time instants t1 < … < ti < … < tN, which in turn results in two sets of N random variables, X(t1), …, X(tN) and Y(t1), …, Y(tN). Given that the joint PDFs of these random variables are known, ensemble averaging can be performed. Here, we will mainly be concerned with the autocorrelation function (ACF) and the crosscorrelation function (CCF) of two random processes X(t) and Y(t) as given by

    (2.18)  RX,X(ti, tj) = E{X(ti) X(tj)}

    (2.19)  RX,Y(ti, tj) = E{X(ti) Y(tj)}

    respectively, where only the two discrete-time instants ti and tj are considered.

    In practical systems, one often has to deal with random processes that can be regarded as wide-sense stationary (WSS). A WSS random process X(t) is characterized by the property that neither the mean nor the ACF of the process is affected by a shift in the time origin, thus

    (2.20)  E{X(t)} = mx = constant

    (2.21)  RX,X(ti, tj) = RX,X(tj − ti) = RX,X(τ)

    In order to compute statistics such as the mean mx and the ACF Rx,x(τ) of a random process X(t), averaging must be performed across all possible sample functions x(t) and complete knowledge of joint PDFs is required. This extensive knowledge is often not available in practice. A more practical approach would be to compute the time average over a single sample function of the random process instead, which can be expressed as

    (2.22)  mx = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) dt

    (2.23)  Rx,x(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t) x(t + τ) dt

    It is important to note that this approach can only be applied to the special class of ergodic processes which have the property that the time average equals the ensemble average. Although the property of ergodicity may intuitively apply to a great variety of random processes observed in communication and information systems, its verification in practice is usually very difficult.
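    For an ergodic process, the time average over one sample function stands in for the ensemble average. The sketch below uses a synthetic zero-mean, unit-variance white Gaussian sample function; the sequence length, seed, and function name are arbitrary illustrative choices:

```python
import random

def time_average_acf(x, tau):
    """Estimate the ACF R(tau) of a WSS process from a single sample
    function by time averaging (valid for ergodic processes)."""
    n = len(x) - tau
    return sum(x[i] * x[i + tau] for i in range(n)) / n

# One sample function of a zero-mean, unit-variance white process: the
# time average should be close to 1 (the variance) at tau = 0 and close
# to 0 at any other shift.
random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(100_000)]
print(round(time_average_acf(x, 0), 2))  # close to 1
print(round(time_average_acf(x, 5), 2))  # close to 0
```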

    The interested readers can further familiarize themselves with the phenomenological properties of signals through most of the introductory textbooks about probability theory and random processes [45, 112, 149, 188, 189] and can find more about the relationship to communication systems in [196, 223].

    2.1.3 Energy classification

    The concept of correlation is closely related to the energy and the power of a signal as will be seen in Section 2.3. In particular, the following classification needs to be considered.

    A deterministic signal is classified as an energy signal if and only if its energy is nonzero but finite, i.e.,

    (2.24)  0 < E = ∫_{−∞}^{∞} x²(t) dt < ∞

    This characteristic applies to deterministic signals of finite duration, which have finite energy but zero average power.

    Signals with infinite duration such as periodic waveforms and random signals usually contain infinite energy. This type of signal can be classified as a power signal if and only if its power is nonzero but finite, i.e.,

    (2.25)  0 < P = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x²(t) dt < ∞
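    The energy and power classifications can be checked numerically with Riemann sums. A sketch, assuming a rectangular pulse of duration 0.25 s and a 10 Hz cosine observed over one second (all parameters and names are illustrative):

```python
import math

def energy(x, dt):
    """Riemann-sum approximation of the energy integral of x(t)^2."""
    return sum(v * v for v in x) * dt

def average_power(x, dt, duration):
    """Average power over the observation window: energy divided by time."""
    return energy(x, dt) / duration

dt = 0.001
t = [i * dt for i in range(1000)]  # one second of observation

# Finite-duration rectangular pulse: nonzero, finite energy (about 0.25),
# hence an energy signal.
pulse = [1.0 if ti < 0.25 else 0.0 for ti in t]
print(energy(pulse, dt))

# Periodic 10 Hz cosine: infinite energy over all time, but nonzero,
# finite average power (about 0.5), hence a power signal.
tone = [math.cos(2 * math.pi * 10 * ti) for ti in t]
print(average_power(tone, dt, 1.0))
```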

    2.1.4 Spectral classification

    The spectrum of a signal is a frequency-domain description that specifies the signal’s amplitude and phase as a function of frequency. It provides insights into fundamental signal characteristics such as the distribution of average power or energy at the various frequencies and the bandwidth occupied by the signal. In technical, physical, and biological systems we often encounter energy signals, i.e., aperiodic signals of finite duration. The time-frequency relationship for these continuous-time aperiodic signals can be conveniently characterized by the Fourier transform and the inverse Fourier transform, respectively, which are defined as

    (2.26)  X(f) = ℱ{x(t)} = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt

    (2.27)  x(t) = ℱ⁻¹{X(f)} = ∫_{−∞}^{∞} X(f) e^{j2πft} df

    where f denotes frequency. The operators ℱ{·} and ℱ⁻¹{·} denote the Fourier transform and inverse Fourier transform, respectively. In order for this transform to hold, the signal must satisfy certain conditions such as having a finite number of maxima and minima, a finite number of finite discontinuities, and being absolutely integrable. Table 2.3 shows the Fourier transforms of some common signals. The correspondence between the time domain and the frequency domain, i.e., the relationship between a signal x(t) and its spectrum X(f), may be indicated symbolically as follows:

    (2.28)  x(t) ⟷ X(f) = |X(f)| e^{jθ(f)}

    where the spectrum X(f) is specified in terms of magnitude |X(f)| and phase θ(f).
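    The Fourier transform of an energy signal can be approximated numerically. The sketch below assumes a unit rectangular pulse of duration T = 1 s, whose magnitude spectrum is the familiar sinc shape with nulls at integer multiples of 1/T; the function name and step size are hypothetical:

```python
import cmath

def fourier_transform(x, dt, f):
    """Riemann-sum approximation of X(f) = integral of x(t) e^{-j2*pi*f*t} dt
    for a signal given as samples x[i] = x(i*dt)."""
    return sum(v * cmath.exp(-2j * cmath.pi * f * i * dt)
               for i, v in enumerate(x)) * dt

# Unit rectangular pulse of duration T = 1 s: the magnitude spectrum is
# sinc-shaped, with nulls at integer multiples of 1/T.
dt = 0.001
pulse = [1.0] * 1000  # x(t) = 1 for 0 <= t < 1

print(abs(fourier_transform(pulse, dt, 0.0)))  # close to 1 (the pulse area)
print(abs(fourier_transform(pulse, dt, 1.0)))  # close to 0 (first spectral null)
```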

    Table 2.3 Fourier transforms of some common signals.

    Depending on the morphological classification of the signal, another three Fourier transform pairs can be formulated in addition to Equations (2.26) and (2.27). That is, for continuous-time periodic signals of period T = 1/f0, discrete-time periodic signals of period N, and discrete-time aperiodic signals, respectively, the pairs of continuous and discrete Fourier transforms are defined as

    (2.29)

    (2.30)

    (2.31)

    A comprehensive treatment of the most important mathematical transforms and their applications can be found in [106, 187, 194].

    2.2 TRANSFORMATIONS OF SIGNALS AND SEQUENCES

    As signals and sequences are typically used in relation to systems and are therefore processed in some way, a brief background on fundamental transformations is provided here.

    2.2.1 Basic transformations

    Common transformations that can be found, for example, in communication, information, and signal processing systems, include the following functions [196, 223]:

    Source encoding and decoding

    Channel encoding and decoding

    Encryption and decryption

    Spreading and despreading

    Line encoding and decoding

    Modulation and demodulation

    The various encoding, encryption, spreading, and modulation functions are performed before the signal stimulates the particular technical, physical, or biological system. The inverse operations such as demodulation, despreading, and various decodings are performed on the response of the system to the stimulus. The objectives associated with the individual processing steps are to remove redundant information, to make signals less vulnerable to such impairments as noise and interference, to avert unauthorized interception, and to transform signal characteristics into a form that is compatible with the system it corresponds to.

    Fourier operations

    Many of the above transformations can be modeled as a linear time-invariant (LTI) system and can mathematically be described by Fourier transforms. Figure 2.1 shows the abstract model of an LTI system. The input and output of the system are given by the signals x(t) and y(t) or their spectra X(f) and Y(f), respectively. The LTI system itself is described by an impulse response h(t) that is observed when a Dirac impulse δ(t) is used as input. The frequency-domain equivalent of the impulse response is called the transfer function H(f) of the system.

    The system response to an arbitrary input signal can basically be found as a superposition of a series of impulse responses. This operation is mathematically formulated by the convolution integral

    y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(\lambda)\, h(t - \lambda)\, d\lambda   (2.32)

    where operator * denotes convolution. The frequency-domain representation can be developed using Fourier transforms and the system’s transfer function. Then, we obtain the correspondence

    y(t) = x(t) * h(t) \;\longleftrightarrow\; Y(f) = X(f)\, H(f)   (2.33)

    Table 2.4 shows examples of Fourier operations that are often used in system theory.
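    The correspondence between time-domain convolution and frequency-domain multiplication in Equations (2.32) and (2.33) can be illustrated numerically. In the sketch below (NumPy, with arbitrary short sequences), the linear convolution computed in the time domain matches the inverse transform of the product of the spectra, provided the sequences are zero-padded so that the circular convolution implied by the DFT coincides with the linear one:

    ```python
    import numpy as np

    x = np.array([1.0, 2.0, 3.0])                    # arbitrary input sequence
    h = np.array([0.5, -1.0, 0.25, 2.0])             # arbitrary impulse response

    # Time domain: linear convolution y = x * h
    y_time = np.convolve(x, h)

    # Frequency domain: Y(f) = X(f) H(f); zero-padding to length
    # len(x) + len(h) - 1 makes the DFT's circular convolution linear
    N = len(x) + len(h) - 1
    y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

    assert np.allclose(y_time, y_freq)
    ```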

    Figure 2.1 Components of an LTI system model.

    Table 2.4 Examples of Fourier operations.

    Algebraic operations

    Given discrete-time signals or even digital signals, it is convenient to describe transformations on these sequences using the language of algebra [156, 157] as follows.

    An n-tuple or vector is an ordered set of elements and will be denoted as

    x = (x_1, x_2, \ldots, x_n)   (2.34)

    The elements xi, i = 1, 2, …, n, of vector x will take on values from an algebraic structure such as those associated with the natural numbers ℕ, integer numbers ℤ, real numbers ℝ, or complex numbers ℂ, or may assume elements of a finite field GF(q).

    Due to their importance for pseudo random signal processing, let us focus on vectors that are comprised of elements taken from a finite field GF(q) (see Chapter 3). Then, an n-dimensional vector space V over GF(q) can be defined as the set

    V = \{ x = (x_1, x_2, \ldots, x_n) : x_i \in \mathrm{GF}(q),\; i = 1, 2, \ldots, n \}   (2.35)

    The following algebraic operations must hold for all vectors x, y ∈ V and scalar elements a, b ∈ GF(q):

    x + y \in V   (2.36)

    a \cdot x \in V   (2.37)

    a \cdot (x + y) = a \cdot x + a \cdot y   (2.38)

    (a + b) \cdot x = a \cdot x + b \cdot x   (2.39)

    (a \cdot b) \cdot x = a \cdot (b \cdot x)   (2.40)

    A linear combination of n vectors hi, i = 1, 2, …, n, of length n can be formulated as

    y = \sum_{i=1}^{n} x_i h_i = x H   (2.41)

    where y = (y_1, y_2, \ldots, y_n) is also a vector of length n and H = [h_{ij}]_{n \times n} is called an n-dimensional matrix. In view of Equation (2.41), an algebraic system model can be adopted as shown in Figure 2.2. The structure of a system is contained in matrix H, which determines the way in which input x is transformed into output y.
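    As a small illustration of the algebraic system model y = xH, the sketch below assumes the binary field GF(2) and a hypothetical 3 × 3 system matrix; the transformation is an ordinary vector-matrix product reduced modulo 2:

    ```python
    import numpy as np

    # y = x H over GF(2): an ordinary vector-matrix product reduced modulo 2
    def gf2_transform(x, H):
        return (np.array(x) @ np.array(H)) % 2

    # Hypothetical 3 x 3 binary system matrix (for illustration only)
    H = [[1, 0, 1],
         [1, 1, 0],
         [0, 1, 1]]

    x = [1, 1, 0]
    y = gf2_transform(x, H)
    assert list(y) == [0, 1, 1]   # sum of the first two rows of H, mod 2
    ```

    Because the input selects and adds rows of H over GF(2), this is exactly the linear-combination view of Equation (2.41).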

    Similar to the Fourier transform with corresponding time-domain and frequency-domain representations, an algebraic system model can be translated into another domain by applying a so-called similarity transformation. The various relationships encountered with a similarity transformation are illustrated in Figure 2.3 and can be formulated mathematically as

    y = x H   (2.42)

    \tilde{x} = x T   (2.43)

    \tilde{y} = y T   (2.44)

    \tilde{H} = T^{-1} H T   (2.45)

    where T and T−1 denote the transform matrix and its inverse, respectively.
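    A numerical check of the similarity transformation (assuming the row-vector convention y = xH of Equation (2.41) and an arbitrary invertible transform matrix T) confirms that the transformed model maps the transformed input to the transformed output:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H = rng.standard_normal((3, 3))     # arbitrary system matrix
    T = rng.standard_normal((3, 3))     # transform matrix, assumed invertible
    x = rng.standard_normal(3)          # arbitrary input vector

    y = x @ H                           # original model: y = x H
    H_tilde = np.linalg.inv(T) @ H @ T  # transformed model

    # The transformed model maps x T to y T, since (x T)(T^-1 H T) = (x H) T
    assert np.allclose((x @ T) @ H_tilde, y @ T)
    ```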

    Figure 2.2 Components of an algebraic system model.

    Figure 2.3 Similarity transformation.

    2.3 CORRELATION MEASURES

    In technical, physical, and biological applications, it is of great importance to be able to mathematically quantify the degree of similarity between different signals. This knowledge is beneficial not only for the development of advanced signal processing algorithms but also for the development of powerful techniques for signal analysis and efficient methods for signal design. Signal sets are therefore needed that possess a certain structure, i.e., that show either one or both of the following characteristics:

    Each signal in a signal set can easily be distinguished from a shifted version of itself.

    Each signal in a signal set can easily be distinguished from every other signal in that signal set as well as from shifted versions of these signals.

    A first quantification of the degree of similarity between pairs of signals in a given signal set can be obtained by considering simple distance measures. One suitable choice would be to calculate the average power of the difference signal

    \Delta(t) = x(t) - y(t + \tau)   (2.46)

    where τ denotes the shift variable. Then, the average power of the difference signal is defined as

    P_\Delta(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \Delta^2(t)\, dt = P_x + P_y - 2\, R_{x,y}(\tau)   (2.47)

    where Px and Py denote the power of the signals x(t) and y(t), respectively, as defined in Equation (2.25). The third term in Equation (2.47) is called the crosscorrelation function (see Section 2.3.1) and is given by

    R_{x,y}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, y(t + \tau)\, dt   (2.48)

    In order to reveal the close relationship between similarity and correlation measures, it is instructive to consider the average power of the difference signal

    \Delta(t) = x(t) - y(t)   (2.49)

    Then, we obtain the average power for zero shift between the involved signals as

    P_\Delta = P_x + P_y - 2\, P_{x,y}   (2.50)

    where the third term is called average cross-power and is given by

    P_{x,y} = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, y(t)\, dt   (2.51)

    It can be concluded from Equations (2.50) and (2.51) that the average power PΔ of a difference signal Δ(t) vanishes if and only if the two signals x(t) and y(t) are identical. Therefore, two signals may be considered as having a maximum degree of similarity if the average power of their difference signal is zero. However, this statement relates to the absolute similarity between signals and does not reflect their relative similarity, as would be needed for signals that differ only by a constant factor. A quantification of relative similarity can be obtained by rephrasing the above arguments in terms of the difference signal for normalized signals:

    \vartheta(t) = \frac{x(t)}{\sqrt{P_x}} - \frac{y(t)}{\sqrt{P_y}}   (2.52)

    Then, the average power of the difference signal between the two normalized signals can be written as

    P_\vartheta = 2 \left( 1 - \frac{P_{x,y}}{\sqrt{P_x P_y}} \right)   (2.53)

    It can be seen from Equation (2.53) that the average power of the normalized difference signal ϑ(t) depends only on the normalized average cross-power, which for this reason is usually called the normalized crosscorrelation coefficient

    \rho_{x,y} = \frac{P_{x,y}}{\sqrt{P_x P_y}}   (2.54)

    The above ideas can readily be translated to energy signals or may be exploited for the sum x(t) + y(t + τ) of two signals. For all these variants and their normalized versions, it turns out that eventually the product signals x(t)y(t) and x(t)y(t + τ) provide the essential key for measuring similarity between signals.
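    As a discrete-time illustration of the normalized crosscorrelation coefficient of Equation (2.54), the sketch below estimates the powers as time averages over sample sequences; the sinusoid and the comparison signals are hypothetical test cases:

    ```python
    import numpy as np

    # Normalized crosscorrelation coefficient rho = P_xy / sqrt(Px * Py),
    # with the powers estimated as time averages over the sample sequences
    def rho(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        return np.mean(x * y) / np.sqrt(np.mean(x * x) * np.mean(y * y))

    t = np.linspace(0.0, 1.0, 1000, endpoint=False)
    x = np.sin(2 * np.pi * 5 * t)        # hypothetical test signal

    # A scaled copy differs in absolute power but is maximally similar
    assert np.isclose(rho(x, 3.0 * x), 1.0)
    # A sign-inverted copy is maximally dissimilar
    assert np.isclose(rho(x, -x), -1.0)
    ```

    The normalization removes the dependence on absolute power, which is exactly the relative-similarity behavior motivated above.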

    2.3.1 Autocorrelation and crosscorrelation functions

    In view of the motivation given above, the degree of similarity between signals in a signal set can be characterized analytically by correlation functions. In particular, the ACF and CCF can be formulated in several ways and are known as fundamental signal processing operations for a great variety of technical applications [161]. Although correlation functions can be defined for analog and digital signals, pseudo random signal processing is first and foremost concerned with the digital type of signals. However, some relationships and background applicable to both analog and discrete signals will be introduced first before concentrating on the digital case.

    Correlation functions for analog power signals

    Let us consider real-valued analog signals x(t) ∈ ℝ and y(t) ∈ ℝ along with their shifted versions x(t + τ) ∈ ℝ and y(t + τ) ∈ ℝ. In view of Equation (2.48), the formal measure of similarity between two signals is referred to as the CCF and is defined by

    R_{x,y}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, y(t + \tau)\, dt   (2.55)

    In order to differentiate between the correlation functions of random signals and those of pseudo random signals, we use lower case indices for the correlation functions of the pseudo random signal types as a means of indicating the time-averaging nature of the operation.

    Similarly, a correlation measure can be defined between the real-valued analog signal x(t) and its shifted version x(t + τ). The respective average over time becomes an ACF and is given by

    R_{x,x}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, x(t + \tau)\, dt   (2.56)

    The ACF for an analog signal has several important properties including the following:

    R_{x,x}(\tau) = R_{x,x}(-\tau)   (2.57)

    R_{x,x}(0) \geq |R_{x,x}(\tau)| \quad \text{for all } \tau   (2.58)

    This means that the ACF is an even function with respect to the shift variable τ. In addition, the ACF value obtained for the zero shift represents the average power Px = Rx,x(0) that is contained in the signal x(t). This peak value is also called the in-phase value of the ACF, while the out-of-phase values of the ACF for nonzero shifts are always smaller than, or at best equal to, the in-phase value.
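    The symmetry and peak properties of Equations (2.57) and (2.58) can be verified numerically. The sketch below estimates the ACF of a discrete-time sequence, with the shift taken cyclically over the record length (an assumed finite-record substitute for the infinite time average):

    ```python
    import numpy as np

    # ACF estimate for a discrete-time sequence, with the shift taken
    # cyclically over the record of length N
    def acf(x):
        x = np.asarray(x, float)
        return np.array([np.mean(x * np.roll(x, -tau)) for tau in range(len(x))])

    rng = np.random.default_rng(1)
    x = rng.standard_normal(256)      # arbitrary test sequence
    R = acf(x)

    # Even symmetry: R[tau] = R[-tau] (cyclically, R[tau] = R[N - tau])
    assert np.allclose(R[1:], R[1:][::-1])
    # The in-phase value R[0] equals the average power and bounds all shifts
    assert np.isclose(R[0], np.mean(x ** 2))
    assert np.all(np.abs(R[1:]) <= R[0] + 1e-12)
    ```

    The bound on the out-of-phase values follows from the Cauchy-Schwarz inequality, which is why it holds for any sequence, not just this random example.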

    In order to support easy comparison between autocorrelation properties of different signals, the ACF may be represented in normalized form with respect to the in-phase value, which leads to the correlation coefficient

    \rho_{x,x}(\tau) = \frac{R_{x,x}(\tau)}{R_{x,x}(0)}   (2.59)

    So far we have considered correlation measures only for continuous-time and real-valued power signals. However, the transition to continuous-time and complex-valued power signals is straightforward. For example, if the signals u(t) and v(t) are continuous-time and complex-valued functions of time t, integrands x(t)y(t + τ) and x(t)x(t + τ) in Equations (2.55) and (2.56), respectively, have to be replaced by u(t)v*(t + τ) and u(t)u*(t + τ) giving

    R_{u,v}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} u(t)\, v^*(t + \tau)\, dt   (2.60)

    R_{u,u}(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} u(t)\, u^*(t + \tau)\, dt   (2.61)

    where a*(t) denotes the complex conjugate of a(t).

    Correlation functions for analog periodic signals

    If the considered real-valued analog signals x(t) ∈ ℝ and y(t) ∈ ℝ are periodic with periods given by Tx and Ty, respectively, we have

    x(t) = x(t + T_x)   (2.62)

    y(t) = y(t + T_y)   (2.63)

    Then, the ACF of an analog periodic signal x(t) as defined by Equation (2.56) has the additional property of

    R_{x,x}(\tau) = R_{x,x}(\tau + T_x)   (2.64)

    As a consequence, the ACF of an analog periodic signal x(t) can be expressed as

    R_{x,x}(\tau) = \frac{1}{T_x} \int_{t_0}^{t_0 + T_x} x(t)\, x(t + \tau)\, dt   (2.65)

    where the initial condition t0 can take on any constant value.

    A similar argument applies to the CCF between an analog periodic signal x(t) of period Tx and an analog periodic signal y(t) of period Ty. Then, we can write

    R_{x,y}(\tau) = \frac{1}{T} \int_{t_0}^{t_0 + T} x(t)\, y(t + \tau)\, dt, \qquad T = \mathrm{lcm}(T_x, T_y)   (2.66)

    where operator lcm(a, b) denotes the least common multiple of the numbers a and b. If x(t) and y(t) have the same period, then the overall period is given by T = Tx = Ty.

    The time-averaging operation described by Equations (2.65) and (2.66) may also be performed over only a fraction of the period. The respective correlation functions are then called partial ACF and partial CCF. The outcomes of these partial correlations generally depend on the initial condition t0. On the other hand, if the integration time is chosen to be equal to a period or a multiple of the period, then the initial condition may be set to t0 = 0. Another important factor when choosing the integration time is the required noise reduction, which can be influenced by averaging.
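    As a discrete-time illustration of periodic correlation, the sketch below evaluates the periodic ACF of a length-7 binary ±1 sequence over exactly one period. The chosen sequence is a maximal-length (m-) sequence of the kind treated in later chapters, whose periodic ACF is two-valued:

    ```python
    import numpy as np

    # Periodic ACF evaluated over exactly one period of a discrete sequence
    def periodic_acf(x):
        x = np.asarray(x, float)
        return np.array([np.sum(x * np.roll(x, -tau)) for tau in range(len(x))])

    # Length-7 m-sequence, mapped from bits to +/-1 via b -> 1 - 2b
    bits = np.array([1, 1, 1, 0, 1, 0, 0])
    s = 1 - 2 * bits

    R = periodic_acf(s)
    # Two-valued periodic ACF: N = 7 in phase, -1 for every other shift
    assert R[0] == 7
    assert np.all(R[1:] == -1)
    ```

    The sharp in-phase peak against uniformly small out-of-phase values is precisely the structure that makes such sequences easy to distinguish from shifted versions of themselves.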

    Correlation functions for analog energy signals

    Correlation measures for analog energy signals can be obtained accordingly. In the general case of continuous-time and complex-valued signals, for instance, we have

    R_{u,v}(\tau) = \int_{-\infty}^{\infty} u(t)\, v^*(t + \tau)\, dt   (2.67)

    R_{u,u}(\tau) = \int_{-\infty}^{\infty} u(t)\, u^*(t + \tau)\, dt   (2.68)

    Example 2.1 Let the signal x(t) be a continuous-time and real-valued energy signal given by the pulse train shown in Figure 2.4(a). The signal x(t)
