Digital Communications 2: Digital Modulations
By Mylène Pischella and Didier Le Ruyet
About this ebook
This second volume covers the following blocks of the communication chain: baseband and transposed-band (carrier) modulation, synchronization and channel estimation, and detection. Variants of these blocks, namely multi-carrier modulations and coded modulations, are used in current and future systems.
Introduction
The fundamental objective of a communication system is to reproduce at a destination point, either exactly or approximately, a message selected at another point. This was theorized by Claude Shannon in 1948 [SHA 48].
The communication chain is composed of a source (also called transmitter) and a destination (also called receiver). They are separated by a transmission channel, which may, for instance, be a wired cable if we consider asymmetric digital subscriber line (ADSL) transmission, an optical fiber, a wireless mobile channel between a base station and a mobile terminal or between a satellite and its receiver, a hard drive and so forth. The latter example indicates that when we refer to a point, we may consider either a location or a time. The main issue faced by communication systems is that the channel is subject to additive noise, and may also introduce some distortions on the transmitted signal. Consequently, advanced techniques must be implemented in order to decrease the impact of noise and distortions on the performance as much as possible, so that the received signal may be as similar as possible to the transmitted signal.
The performance of a transmission system is evaluated by either computing or measuring the error probability per received information bit at the receiver, also called the bit error rate. The other major characteristics of a communication system are its complexity, its bandwidth, its consumed and transmitted power and the useful data rate that it can transmit. The bandwidth of many communication systems is limited; it is thus highly important to maximize the spectral efficiency, which is defined as the ratio between the binary data rate and the bandwidth. Nevertheless, this should be done without increasing the bit error rate.
This book consists of two volumes. It aims at detailing all the steps of the communication chain, represented in Figure I.1. Source and channel coding are studied in depth in the first volume: Digital Communications 1: Source and Channel Coding. Even though both volumes can be read independently, we will sometimes refer to notions developed in the first volume. The present volume focuses on the following blocks: modulation, synchronization and equalization. Once source and channel coding have been performed, data have first been compressed, and then encoded by adding some well-chosen redundancy that protects the transmitted bits from the channel’s disruptions. In this second volume, we thus start at the modulator’s input. The modulator takes the binary data at the output of the channel encoder and prepares them for transmission on the channel.
Figure I.1. Block diagram of the communication chain
The coded bits are then associated with symbols that will be transmitted during a given symbol period. They are shaped by a transmit filter before being sent on the channel. At this stage, transmission techniques vary, depending on whether baseband transmission (when the signal’s spectrum is located around the null frequency) or sine waveform transmission takes place. In the latter case, the carrier frequency is far larger than the bandwidth. In the former case, symbols are carried by the specific value of a line code, whereas in the latter case, they are carried by the signal’s amplitude, phase or carrier frequency value. Baseband transmissions are detailed in Chapter 2. Digital modulations on sine waveforms are studied in Chapter 3. Furthermore, advanced digital modulation techniques, called coded modulations, are presented in Chapter 6. They associate modulations and channel encoding in order to increase the spectral efficiency. Chapter 2 introduces some fundamental notions such as optimum detection on the additive white Gaussian noise channel, and Chapter 3 explains the connection between baseband transmissions and sine waveform transmissions. The modulations’ performance is determined in terms of bit error rate, depending on the signal-to-noise ratio and on the bandwidth.
The demodulator aims at extracting samples while maximizing a given criterion (signal-to-noise ratio, bit error rate, etc.). The detector’s objective is to determine the most likely transmitted symbol. If channel decoding takes hard inputs, the detector directly provides the estimated binary sequence. If channel decoding takes soft inputs, then the detector provides soft information on the bits, generally in the form of log-likelihood ratios.
In order to have an efficient detection, the input samples should be as close as possible to the transmitted ones. Yet, the channel may generate distortions on the transmitted signal, as well as delays, which may result in a loss of symbol timing, or frequency shifts. Demodulation and detection must consequently be preceded by a synchronization of the receiver with the transmitted signal in order to correct these shifts. Moreover, if the channel has introduced distortions that lead to intersymbol interference, an equalization step must be added in order to estimate the transmitted symbols. These two steps are presented in Chapter 4. We can notice that synchronization may be performed blindly, without any prior knowledge, in which case its performance is far lower than when it relies on a training sequence. Similarly, equalization requires channel estimation, which must be implemented beforehand.
In order not to use equalization at the receiver, most recent communication systems split the original wideband channel into several subchannels, where each subchannel only produces a constant signal attenuation but no distortion. This step modifies the modulation block and is thus located before transmission on the channel. Multi-carrier modulations are detailed in Chapter 5.
Finally, we can notice that some useful mathematics and digital signal processing background are provided in Chapter 1.
The summary of the contributions of this book shows that we have tried to cover a large spectrum of the digital communications domain, without claiming to be exhaustive. Some topics have not been considered due to lack of space. For instance, the wireless mobile channel’s characteristics and the multiple-access techniques on this channel have not been addressed, nor have multiple-input, multiple-output (MIMO) antenna techniques. Moreover, we have focused on the most modern techniques, only considering older techniques with an educational objective, when they allow us to better understand more complex modern techniques.
Most of the chapters detail fundamental notions of digital communications and are thus necessary to comprehend the whole communication chain. They provide several degrees of understanding, by giving an introduction to these techniques while still providing some more advanced details. In addition, they are illustrated by examples of implementations in current communication systems. Some exercises are proposed at the end of each chapter, so that readers may put the presented notions into practice.
We can nevertheless notice that Chapter 6 proposes an opening to advanced modulation and coding techniques. It may be read as a complement in order to strengthen knowledge.
1
Background
1.1. Introduction
This chapter provides some background necessary for the remainder of this book. The common operations and functions are presented first. The common transforms required for calculations are detailed in section 1.3. Then, some background on discrete and continuous probabilities is provided in section 1.4. Finally, some elements of digital signal processing are recalled in section 1.5.
1.2. Common operations and functions
1.2.1. Convolution
The convolution between two signals s(t) and h(t) is defined as:
[1.1] (s ∗ h)(t) = ∫_{−∞}^{+∞} s(τ) h(t − τ) dτ
With a slight abuse of notation, we will also write it as (s ∗ h)(t) = s(t) ∗ h(t).
The convolution is linear and time-invariant. Consequently, for any continuous signals s1(t) and s2(t), any complex values α1, α2 and any time delays t1, t2, we can write:
[1.2] (α1 s1(t − t1) + α2 s2(t − t2)) ∗ h(t) = α1 (s1 ∗ h)(t − t1) + α2 (s2 ∗ h)(t − t2)
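These two properties can be checked numerically on sampled signals. The sketch below is illustrative, not from the book: it uses NumPy's discrete convolution as a stand-in for the continuous integral [1.1], with the delays expressed in samples.

```python
import numpy as np

rng = np.random.default_rng(0)
s1 = rng.standard_normal(32)
s2 = rng.standard_normal(32)
h = rng.standard_normal(16)
a1, a2 = 2.0, -0.5

# Linearity: (a1*s1 + a2*s2) * h == a1*(s1*h) + a2*(s2*h)
lhs = np.convolve(a1 * s1 + a2 * s2, h)
rhs = a1 * np.convolve(s1, h) + a2 * np.convolve(s2, h)
assert np.allclose(lhs, rhs)

# Time invariance: delaying the input by d samples delays the output by d
d = 5
s1_delayed = np.concatenate([np.zeros(d), s1])
out_delayed = np.convolve(s1_delayed, h)
out = np.convolve(s1, h)
assert np.allclose(out_delayed[d:d + len(out)], out)
```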
1.2.2. Scalar product
The scalar product between two continuous signals s(t) and r(t) is defined as:
[1.3] ⟨s, r⟩ = ∫_{−∞}^{+∞} s(t) r*(t) dt
For any continuous signals s1(t), s2(t), r1(t) and r2(t) and any complex values α1, α2, the following linearity properties hold:
[1.4] ⟨α1 s1 + α2 s2, r⟩ = α1 ⟨s1, r⟩ + α2 ⟨s2, r⟩ and ⟨s, α1 r1 + α2 r2⟩ = α1* ⟨s, r1⟩ + α2* ⟨s, r2⟩
For vectors of size m × 1, s = [s0, s1, …, sm − 1]T and r = [r0, r1, …, rm − 1]T, the scalar product is similarly defined:
[1.5] ⟨s, r⟩ = Σ_{i=0}^{m−1} si ri*
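As an illustration (ours, not the book's), the vector scalar product can be computed with NumPy. We assume the convention of [1.3] in which the second argument is conjugated; note that `np.vdot` conjugates its first argument, so the arguments must be swapped.

```python
import numpy as np

s = np.array([1 + 1j, 2 + 0j, -1j])
r = np.array([1 + 0j, 1j, 1 - 1j])

# <s, r> = sum_i s_i conj(r_i), equation [1.5]
sp = np.sum(s * np.conj(r))

# np.vdot conjugates its FIRST argument, so <s, r> = np.vdot(r, s)
assert np.isclose(sp, np.vdot(r, s))
assert np.isclose(sp, 2 - 2j)
```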
1.2.3. Dirac function, Dirac impulse and Kronecker’s symbol
The continuous Dirac function, denoted as δ(t), is defined in the following way: for any finite-energy signal s(t) and any real value τ,
[1.6] ∫_{−∞}^{+∞} s(t) δ(t − τ) dt = s(τ)
From [1.6] and [1.1], the convolution of any function s(t) by a Dirac function delayed by τ is equal to the function s(t) delayed by τ:
[1.7] s(t) ∗ δ(t − τ) = s(t − τ)
The Dirac impulse is defined for discrete signals as follows:
[1.8] δ(n) = 1 if n = 0, and δ(n) = 0 if n ≠ 0
The Kronecker symbol is a function of two variables which is equal to 1 if both variables are equal, and to 0 otherwise:
[1.9] δ(n, n0) = 1 if n = n0, and δ(n, n0) = 0 if n ≠ n0
As a result, the Dirac impulse is equal to the Kronecker symbol when n0 = 0.
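A small numerical sketch of the delay property (our illustration, with the impulse treated as a discrete sequence): convolving a sampled signal with a Dirac impulse delayed by k samples shifts the signal by k samples, the discrete counterpart of equation [1.7].

```python
import numpy as np

def dirac(n):
    """Discrete Dirac impulse of equation [1.8]."""
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(8)
x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 0.0, 0.0, 0.0])

# Convolving with an impulse delayed by k samples shifts the signal by k
k = 2
y = np.convolve(x, dirac(n - k))[:len(x)]
assert np.allclose(y, np.concatenate([np.zeros(k), x[:-k]]))
```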
1.2.4. Step function
The step function is a function of the continuous variable t which is equal to 0 when its argument is negative, and equal to 1 when its argument is positive:
[1.10] u(t) = 0 if t < 0, and u(t) = 1 if t > 0
1.2.5. Rectangular function
The rectangular function is a function of the continuous variable t which is equal to 0 outside of a time interval of duration T starting at t = 0:
[1.11] rectT(t) = 1 if 0 ≤ t ≤ T, and rectT(t) = 0 otherwise
1.3. Common transforms
1.3.1. Fourier transform
1.3.1.1. Fourier transform of a continuous signal
The Fourier transform is a particularly important tool in the field of digital communications. It allows us to study a signal no longer in the time domain, but in the frequency domain. The spectral properties of a signal are often more relevant than its time properties to characterize it. For instance, from its spectral properties, a signal can be identified as baseband or passband, its bandwidth can be characterized and so on.
The Fourier transform of a continuous signal s(t) is:
[1.12] S(f) = ∫_{−∞}^{+∞} s(t) e^{−j2πft} dt
It is often denoted by S(f) = TF[s](f).
The inverse Fourier transform allows us to recover the time signal s(t) from its Fourier transform S(f):
[1.13] s(t) = ∫_{−∞}^{+∞} S(f) e^{j2πft} df
The Fourier transform is linear. In digital communications, the following property is of particular interest: the Fourier transform of a convolution product is equal to the product of both Fourier transforms. The converse property also holds:
[1.14] s1(t) ∗ s2(t) ⇔ S1(f) S2(f) and s1(t) s2(t) ⇔ S1(f) ∗ S2(f)
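The convolution-product property can be verified numerically with the discrete Fourier transform. This sketch (ours, not the book's) zero-pads both signals so that the circular convolution computed through the DFT coincides with the linear convolution.

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = rng.standard_normal(64)
s2 = rng.standard_normal(64)

# Zero-pad to length N = len(s1) + len(s2) - 1 so that the circular
# convolution computed through the DFT equals the linear convolution,
# then check [1.14]: convolution in time <=> product in frequency.
N = len(s1) + len(s2) - 1
S1 = np.fft.fft(s1, N)
S2 = np.fft.fft(s2, N)
conv_freq = np.fft.ifft(S1 * S2).real
conv_time = np.convolve(s1, s2)
assert np.allclose(conv_freq, conv_time)
```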
We list here the Fourier transform of some functions that are useful in digital communications and signal processing:
– the Fourier transform of the Dirac function is a constant: s(t) = δ(t) ⇔ S(f) = 1;
– the Fourier transform of the rectangular function between −T/2 and T/2 is a cardinal sine function: s(t) = 1 for |t| ≤ T/2 ⇔ S(f) = T sin(πfT)/(πfT);
– the Fourier transform of a cosine is a sum of two Dirac functions: s(t) = cos(2πf0t) ⇔ S(f) = (1/2)[δ(f − f0) + δ(f + f0)];
– the Fourier transform of a sine is a difference between two Dirac functions: s(t) = sin(2πf0t) ⇔ S(f) = (1/2j)[δ(f − f0) − δ(f + f0)].
1.3.1.2. Discrete Fourier transform
Let x be a discrete signal composed of N samples, x = [x0, x1, …, xN − 1], with sampling frequency Fe and sampling period Te = 1/Fe; xn is the sample corresponding to time nTe.
Its Fourier transform is defined as:
[1.15] X(f) = Σ_{n=0}^{N−1} xn e^{−j2πfnTe}
The discrete Fourier transform is generally determined for some discrete frequency values:
[1.16] Xk = X(fk) = Σ_{n=0}^{N−1} xn e^{−j2πkn/N}, with fk = kFe/N, k = 0, 1, …, N − 1
The N samples of the Fourier transform are denoted as X = [X0, X1, …, XN − 1], with Xk = X(kFe/N).
The inverse discrete Fourier transform is similarly defined:
[1.17] xn = (1/N) Σ_{k=0}^{N−1} Xk e^{j2πkn/N}
Energy is maintained between the time sample’s vector x and the frequency sample vector X. This is proven by Parseval’s equality:
[1.18] Σ_{n=0}^{N−1} |xn|² = (1/N) Σ_{k=0}^{N−1} |Xk|²
The discrete Fourier transform and its inverse can be implemented with low complexity by using the fast Fourier transform (FFT). This recursive algorithm was established by Cooley and Tukey in 1965. For N samples, the FFT requires N log2(N) operations, whereas a direct application of equation [1.16] would require N² operations.
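The complexity gap can be illustrated by computing the same transform twice (an illustrative sketch, not from the book): the direct computation builds the N × N DFT matrix, making the N² multiplications explicit, while `np.fft.fft` uses the Cooley-Tukey algorithm. Parseval's equality [1.18] is checked as well.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Direct DFT, equation [1.16]: X_k = sum_n x_n exp(-j 2 pi k n / N).
# Building the N x N matrix makes the N^2 operation count explicit.
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
X_direct = W @ x

# The FFT computes the same N samples in about N log2(N) operations.
X_fft = np.fft.fft(x)
assert np.allclose(X_direct, X_fft)

# Parseval's equality [1.18]: sum |x_n|^2 = (1/N) sum |X_k|^2
assert np.isclose(np.sum(np.abs(x) ** 2), np.sum(np.abs(X_fft) ** 2) / N)
```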
1.3.2. The z transform
The z transform is used in the fields of signal processing and digital communications to model filtering operations, and especially time delays.
It is applied on discrete sampled signals x = [x0, x1, …, xN − 1], and is defined as follows:
[1.19] X(z) = Σ_{n=0}^{N−1} xn z^{−n}
We can see that the z transform is equal, up to a proportionality factor, to the discrete Fourier transform [1.15] when z = e^{j2πfTe}.
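This relation can be sketched numerically (our illustration): evaluating X(z) on the unit circle at z = e^{j2πk/N}, i.e. fTe = k/N, gives back the discrete Fourier transform samples Xk.

```python
import numpy as np

x = np.array([1.0, -0.5, 0.25, 2.0])
N = len(x)

def z_transform(x, z):
    """X(z) = sum_n x_n z^(-n), equation [1.19], for a finite sequence."""
    return np.sum(x * z ** (-np.arange(len(x))))

# On the unit circle, at z = exp(j 2 pi k / N) (i.e. f Te = k / N), the
# z transform coincides with the discrete Fourier transform samples X_k.
X_fft = np.fft.fft(x)
for k in range(N):
    zk = np.exp(2j * np.pi * k / N)
    assert np.isclose(z_transform(x, zk), X_fft[k])
```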
1.4. Probability background
Probability theory is a mathematical domain that describes and models random processes. In this section, we present a summary of this theory. We recommend for further reading [PAP 02] and [DUR 10].
Let X be an experiment or an observation that can be repeated several times under similar circumstances. At each repetition, the result of this observation is an event denoted by x, which can take several possible outcomes. The set of these outcomes is denoted by AX.
The result X = x of this observation is not known before it takes place. X is consequently called a random variable. It is modeled by the frequency of appearance of all its outcomes.
Two classes of random variables can be distinguished:
– discrete random variables, when the set of outcomes is discrete;
– continuous random variables, when their cumulative distribution functions are continuous.
1.4.1. Discrete random variables
A discrete random variable X takes its values in a discrete set, called its alphabet AX. This alphabet may be infinite (for instance, if AX = N) or finite with a size n if AX = {x1, x2, …, xn}. Each outcome is associated with a probability of occurrence PX = {p1, p2, …, pn}:
[1.20] Pr(X = xi) = pi, i = 1, …, n
For discrete random variables, the probability density fX(x) is defined by:
[1.21] fX(x) = Σ_{i=1}^{n} pi δ(x − xi)
where δ(u) is the Dirac function.
1.4.1.1. Joint probability
Let X and Y be two discrete random variables with respective sets of possible outcomes AX = {x1, x2, …, xn} and AY = {y1, y2, …, ym}.
Pr(X = xi, Y = yj) is called the joint probability of the events X = xi and Y = yj. Of course, the following property is verified:
[1.22] Σ_{i=1}^{n} Σ_{j=1}^{m} Pr(X = xi, Y = yj) = 1
1.4.1.2. Marginal probability
The probability Pr(X = xi) can be computed from the set of joint probabilities Pr(X = xi, Y = yj):
[1.23] Pr(X = xi) = Σ_{j=1}^{m} Pr(X = xi, Y = yj)
1.4.1.3. Conditional probability
The conditional probability of the event X = xi given the event Y = yj is defined as:
[1.24] Pr(X = xi|Y = yj) = Pr(X = xi, Y = yj) / Pr(Y = yj)
Similarly, we can write:
[1.25] Pr(Y = yj|X = xi) = Pr(X = xi, Y = yj) / Pr(X = xi)
As a result, the following relation stands:
[1.26] Pr(X = xi|Y = yj) Pr(Y = yj) = Pr(Y = yj|X = xi) Pr(X = xi)
which can be further developed to:
[1.27] Pr(X = xi|Y = yj) = Pr(Y = yj|X = xi) Pr(X = xi) / Pr(Y = yj)
Equation [1.27] is called the Bayes law. From this equation, we can see that Pr(X = xi|Y = yj) is the a posteriori probability, whereas Pr(X = xi) is the a priori probability.
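The relations [1.22] to [1.27] can be checked on a toy joint distribution (the numbers below are illustrative, e.g. a noisy binary channel; they are not from the book):

```python
import numpy as np

# Toy joint distribution of two binary random variables X and Y
joint = np.array([[0.45, 0.05],    # Pr(X=x1, Y=y1), Pr(X=x1, Y=y2)
                  [0.10, 0.40]])   # Pr(X=x2, Y=y1), Pr(X=x2, Y=y2)
assert np.isclose(joint.sum(), 1.0)        # equation [1.22]

p_x = joint.sum(axis=1)                    # marginals, equation [1.23]
p_y = joint.sum(axis=0)

p_x_given_y = joint / p_y                  # [1.24]: Pr(X=x_i | Y=y_j)
p_y_given_x = joint / p_x[:, None]         # [1.25]: Pr(Y=y_j | X=x_i)

# Bayes law [1.27]: Pr(X=x_i|Y=y_j) = Pr(Y=y_j|X=x_i) Pr(X=x_i) / Pr(Y=y_j)
bayes = p_y_given_x * p_x[:, None] / p_y
assert np.allclose(p_x_given_y, bayes)
```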
1.4.1.4. Independence
If two discrete random variables X and Y are independent, then:
[1.28] Pr(X = xi, Y = yj) = Pr(X = xi) Pr(Y = yj)
and
[1.29] Pr(X = xi|Y = yj) = Pr(X = xi)
1.4.2. Continuous random variables
The random variable X is continuous if its cumulative distribution function FX(x) is continuous. FX(x) is related to the probability density in the following way:
[1.30] FX(x) = Pr(X ≤ x) = ∫_{−∞}^{x} fX(u) du
The random variable mean is defined as:
[1.31] mX = E[X] = ∫_{−∞}^{+∞} x fX(x) dx
Its Nth moment is equal to:
[1.32] E[X^N] = ∫_{−∞}^{+∞} x^N fX(x) dx
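The mean and moments [1.31]-[1.32] can be approximated by numerical integration. The sketch below (illustrative, not from the book) uses a standard Gaussian density, whose moments are known in closed form: 0, σ², 3σ⁴, …

```python
import numpy as np

# Moments of a Gaussian density by numerical integration of [1.31]-[1.32]
sigma = 1.0
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

mean = np.sum(x * f) * dx           # [1.31]: expected to be 0
second = np.sum(x ** 2 * f) * dx    # [1.32] with N = 2: expected sigma^2
fourth = np.sum(x ** 4 * f) * dx    # [1.32] with N = 4: expected 3 sigma^4

assert abs(mean) < 1e-9
assert np.isclose(second, sigma ** 2, atol=1e-6)
assert np.isclose(fourth, 3 * sigma ** 4, atol=1e-5)
```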
1.4.3. Jensen’s inequality
Let us first recall that a function f(x) is convex if, for any x, y and 0 < λ < 1, the following inequality stands:
[1.33] f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y)
Let f be a convex function, [x1, …, xn] a real n-tuple belonging to the definition set of f and [p1, …, pn] a positive real n-tuple such that Σ_{i=1}^{n} pi = 1. Thus:
[1.34] f(Σ_{i=1}^{n} pi xi) ≤ Σ_{i=1}^{n} pi f(xi)
Jensen’s inequality is obtained by interpreting the pi terms as probabilities: if f is convex, then for any real discrete random variable X:
[1.35] f(E[X]) ≤ E[f(X)]
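A quick numerical check of Jensen's inequality (our illustration) with the convex function f(x) = x²; in this particular case the gap E[f(X)] − f(E[X]) is exactly the variance of X.

```python
import numpy as np

rng = np.random.default_rng(3)

# Jensen's inequality [1.35] for the convex function f(x) = x^2,
# with a randomly drawn discrete random variable (x_i, p_i).
x = rng.standard_normal(10)
p = rng.random(10)
p /= p.sum()                      # probabilities summing to 1

f = lambda u: u ** 2
lhs = f(np.sum(p * x))            # f(E[X])
rhs = np.sum(p * f(x))            # E[f(X)]
assert lhs <= rhs

# For f(x) = x^2, the gap E[f(X)] - f(E[X]) is the variance of X.
assert np.isclose(rhs - lhs, np.sum(p * x ** 2) - np.sum(p * x) ** 2)
```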
1.4.4. Random signals
The signals used in digital communications depend on time t.
Signal x(t) is deterministic if the function t ↦ x(t) is perfectly known. If, on the contrary, the values taken by x(t) are unknown, the signal follows a random process. At time t, the random variable is denoted by X(t), and an outcome of this random variable is denoted as x(t). The set of all signal values x(t), for any t in the definition domain, is a given outcome of the random process X.
A random process is defined by its probability density and statistical moments. The probability density is equal to:
[1.36] fX(x, t) = ∂FX(x, t)/∂x, where FX(x, t) = Pr(X(t) ≤ x)
The random process is stationary if its probability density is independent of time: fX(x, t) = fX(x) ∀t. As a result, all of its statistical properties are independent of t. Its probability density can thus be obtained from the cumulative distribution function, as in equation [1.30], in the following way:
[1.37] fX(x) = dFX(x)/dx
mx(t), the mean of the random variable X(t) from the random process X, is defined as:
[1.38] mx(t) = E[X(t)] = ∫_{−∞}^{+∞} x fX(x, t) dx
The autocorrelation function of a random process is:
[1.39] Rxx(t1, t2) = E[X(t1)X*(t2)]
The random process X is second-order stationary or wide-sense stationary if, for any random signal x(t):
– its mean mx(t) is independent of t;
– its autocorrelation function verifies Rxx(t1, t2) = Rxx(t1 + t, t2 + t) ∀t.
It then only depends on the time difference τ = t1 − t2, and can simply be denoted as:
[1.40] Rxx(τ) = E[X(t)X*(t − τ)]
In this case, the power spectrum density γxx(f) is obtained by applying the Fourier transform on the autocorrelation function:
[1.41] γxx(f) = ∫_{−∞}^{+∞} Rxx(τ) e^{−j2πfτ} dτ
Reciprocally, the autocorrelation function Rxx(τ) is determined from the power spectrum density as follows:
[1.42] Rxx(τ) = ∫_{−∞}^{+∞} γxx(f) e^{j2πfτ} df
Generally, the mean and autocorrelation function of a stationary random process are estimated from a set of outcomes of the signal X(t). When the time average tends to the random process’s statistical mean, the random process is ergodic: a single realization of the random process X is then sufficient to evaluate its mean and autocorrelation function. Most random processes that are considered in digital communications are second-order stationary and ergodic.
For discrete signals (for instance, signals that have been sampled from a continuous random signal x(t) at frequency Fe = 1/Te), the autocorrelation function Rxx(τ) is only defined at discrete times τ = nTe, and the power spectrum density becomes:
[1.43] γxx(f) = Σ_{n=−∞}^{+∞} Rxx(nTe) e^{−j2πfnTe}
The power spectrum density can be estimated with the periodogram. When N samples are available, it is equal to:
[1.44] γ̂xx(f) = (1/N) |Σ_{n=0}^{N−1} xn e^{−j2πfnTe}|²
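The periodogram can be sketched for a white noise process (an illustration of ours, under the assumption Te = 1 so that the DFT samples the frequency axis): since Rxx(nTe) = σ²δ(n), the power spectrum density [1.43] is flat and equal to σ², and averaging the periodogram over many realizations converges to that value.

```python
import numpy as np

rng = np.random.default_rng(4)
N, trials, sigma2 = 256, 2000, 2.0

# White noise with R_xx(nTe) = sigma2 * delta(n): by [1.43] its power
# spectrum density is flat, equal to sigma2. Averaging the periodogram
# [1.44] over many realizations converges to that value (Te = 1 here).
acc = np.zeros(N)
for _ in range(trials):
    x = rng.normal(scale=np.sqrt(sigma2), size=N)
    X = np.fft.fft(x)              # samples of sum_n x_n e^{-j2pi f n}
    acc += np.abs(X) ** 2 / N      # periodogram at the DFT frequencies
psd_estimate = acc / trials

assert np.isclose(psd_estimate.mean(), sigma2, rtol=0.05)
assert psd_estimate.std() < 0.2 * sigma2   # roughly flat in frequency
```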
1.4.4.1. Power
The power of x(t) is defined as:
[1.45] Px = E[|x(t)|²] = Rxx(0)
For discrete signals, it is equal to:
[1.46] Px = E[|xn|²]
1.4.4.2. Energy
The energy of a random signal x(t) with finite energy is:
[1.47] Ex = ∫_{−∞}^{+∞} |x(t)|² dt
For discrete signals, it is Ex = Σ_{n=−∞}^{+∞} |xn|².