
Signal Processing for Cognitive Radios

Ebook · 1,344 pages · 13 hours


About this ebook

This book examines signal processing techniques for cognitive radios. The book is divided into three parts:

Part I is an introduction to cognitive radios: it presents a history of the cognitive radio (CR) and introduces its architecture, functionalities, ideal aspects, hardware platforms, and state-of-the-art developments. Dr. Jayaweera also introduces the specific type of CR that has gained the most research attention in recent years: the CR for Dynamic Spectrum Access (DSA).

Part II of the book, Theoretical Foundations, guides the reader from classical to modern theories on statistical signal processing and inference. The author addresses detection and estimation theory, power spectrum estimation, classification, adaptive algorithms (machine learning), and inference and decision processes. Applications to the signal processing, inference and learning problems encountered in cognitive radios are interspersed throughout with concrete and accessible examples.

Part III of the book, Signal Processing in Radios, identifies the key signal processing, inference, and learning tasks to be performed by wideband autonomous cognitive radios. The author provides signal processing solutions to each task by relating the tasks to materials covered in Part II. Specialized chapters then discuss specific signal processing algorithms required for DSA and DSS cognitive radios.

Language: English
Publisher: Wiley
Release date: Nov 19, 2014
ISBN: 9781118986769

    Signal Processing for Cognitive Radios - Sudharman K. Jayaweera

    PREFACE

    Arguably, it is signal processing that makes a cognitive radio cognitive. Its predecessor, software-defined radio (SDR) technology, already provides a software-reconfigurable device platform by implementing most baseband radio operations in software instead of in hardware. Cognitive radios are meant to be SDRs that are cognitive and intelligent. Acquisition of knowledge through learning is a common aspect of both cognition and intelligence, while self-awareness and reasoning (application of acquired knowledge) are perhaps distinctive features of cognition and intelligence, respectively. A cognitive radio, thus, is supposed to possess all these features: self-awareness, learning, and reasoning. Clearly, these are attributes that a radio can possess mostly through signal processing. It is the signal processing algorithms, implemented on an SDR platform, that will endow a radio with self-awareness, learning, and reasoning abilities.

    There are many books devoted to cognitive radios. However, none are devoted to signal processing in cognitive radios. This book is an attempt to highlight the fundamental role of signal processing in cognitive radios. One may identify two types of signal processing within a cognitive radio: signal processing for gaining spectrum awareness and signal processing for achieving efficient communications. Many processing algorithms that fall under the latter are already present in all wireless communications systems and devices. However, signal processing for spectrum awareness is unique to cognitive radios. It is aimed at providing the cognitive radio with self-awareness, a necessary ingredient of cognition. Attempting to comprehensively cover signal processing of both these types in a single book is perhaps an unrealistic goal. Moreover, it is unnecessary given the fact that there are many excellent books devoted to signal processing in wireless communications systems. Hence, the focus of this book is on signal processing that is unique to a cognitive radio. Not surprisingly, signal processing for gaining spectrum awareness constitutes the bulk of this focus. Still, there are certain advanced signal processing techniques aimed at achieving efficient communications that are realistically well suited for implementation on sophisticated devices such as cognitive radios, and thus deserve to be included in a book on signal processing for cognitive radios. An example is the set of advanced cooperative communications and processing techniques.

    Any radio that is simply cognitive and intelligent can be considered a cognitive radio. Hence, the notion of cognitive radios is a broad concept. Depending on the application context and the type of communications network in question, cognition and intelligence in a cognitive radio may be directed at achieving different objectives. One such widely pursued objective is the development of radios that may coexist with licensed spectrum users through dynamic spectrum sharing (DSS). However, there are other useful objectives as well, including, among others, multiband/multimode operation and antijamming. In general, such cognitive radios may be taken to be wideband radios in the sense that they may be able to operate over a wide span of non-contiguous spectrum. By default, the cognitive radios in this book are such wideband radios whose cognition and intelligence are aimed at arbitrary performance objectives. The DSS cognitive radios are treated as an important special case.

    I would like to thank my former graduate students Mario Bkassiny, Yang Li, and Tianming Li, whose pursuit of graduate research on the topic of cognitive radios led to many of the new results in this book. In particular, this book would not have been possible without the help of Mario, Yang, and Ding Li (another former graduate student of mine), who helped me in preparing many diagrams and figures for the book. I cannot thank them enough. I would also like to thank Prof. Christos Christodoulou, my colleague at the ECE Department of UNM, for the opportunity to collaborate with him on cognitive radio research as well as his encouragement during this book writing project. I am grateful to Prof. Carlos Mosquera of the University of Vigo, Spain, and his wife Vicky for their friendship and support during the times that I spent in Vigo. You gave me much needed distraction to get through the tough times while still keeping the interest in working on CR research. I must mention Prof. Hyuck Kwon's help with some of the proofreading and my friend Thusitha Liyanage for checking on me from time to time when I was too absorbed in work to keep in touch with any friends.

    I would like to thank Simone Taylor and Kari Capone at Wiley for getting me on board to do this book with Wiley in the first place. From there, I am thankful to Brett Kurzman and Ho Kin Yunn at Wiley USA and Singapore, respectively, for handling the book project in such a speedy manner while making the whole process painless to me. I must also mention Alex Castro at Wiley. Finally, I would like to thank Jayapriya Purushothaman at SPi Global for her timely management of the copyediting, typesetting, and proofreading stages.

    SUDHARMAN K. JAYAWEERA

    Albuquerque, NM, United States

    05/12/2014

    PART I

    INTRODUCTION TO COGNITIVE RADIOS

    1

    INTRODUCTION

    1.1 INTRODUCTION

    Cognitive radios (CRs) have been one of the most researched topics in wireless communications since the introduction of the concept in 1999 [1]. The original definition of the CR concept was ambitious and visionary [1]: CRs have the ability for "automated reasoning about the needs of the user," and they are "radio-domain-aware intelligent agents that search out ways to deliver the services the user wants even if that user does not know how to obtain them" [1]. Maybe this definition and our intuitive understanding of the word cognitive are enough for now. We can postpone defining CRs more formally until the next chapter.

    In recent years, there have been many efforts toward developing CRs that are, being true to the above definition, user- and environment-aware. Intuitively, cognitive abilities of a radio must emerge from being able to interpret and react to its RF environment and its user's needs (the performance goals of the radio). Moreover, these cognitive abilities should lead to effective learning. But, of course, we are talking about a radio device. Just as human perception of the external world is gained through the five sensory organs (nose, smell; ears, hearing; eyes, sight; skin, touch; and mouth, taste), a radio's perception of its external world has to be gained through the single sensory organ a radio is equipped with, that is, its antenna. The sensory input is essentially signals or, in general, some form of electromagnetic radiation, as in Figure 1.1. Hence, the ability to interpret signals to comprehend its environment and acquire knowledge, followed by reasoning, decision-making, and learning from such acquired knowledge, is at the heart of a radio's cognition. In that sense, arguably, all cognitive abilities of a radio must emerge from the advanced signal processing techniques it possesses: be they for interpreting the RF environment, making decisions on how to communicate, or learning new knowledge. The focus of this book, thus, is on signal processing techniques for CRs.

    FIGURE 1.1 RF signals and the antenna are the sensations and sensory organ of a cognitive radio.

    1.2 SIGNAL PROCESSING AND COGNITIVE RADIOS

    Mathematically, a signal is simply a function. Depending on the domain in which we view the signal, the independent variable of the function can be time, frequency, space, or even a combination of them. Since a signal as a function of time is perhaps the most familiar, let us think of it as such for now. Signals are, of course, fundamental to all communications and can take various physical forms. But for our purposes, a signal may be thought of as an electrical current/voltage or an RF waveform.

    Generating, processing (or manipulating), transforming, transmitting, and detecting signals are involved in all communications systems and devices. By signal processing in a (digital) communications receiver, we essentially mean the processing involved from the reception at the antenna until the detection of binary bits, as in Figure 1.2. In a communications receiver, information bits are extracted from a received RF signal by converting it into several different forms in several steps: from an analog RF waveform to an intermediate frequency (IF) waveform, then to a discrete-time sequence, then to symbol values, and finally to bits. We may consider processing up to this point as signal processing. There may be further decoding of error control coding (ECC) based on these bits, which we may not necessarily treat as signal processing. Detected binary bits may need to be processed again before they can be in a form that is useful for a human user, but we may skip treating such processing as part of the communications receiver.
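    To make the last steps of this chain concrete, the following is a minimal sketch in Python/NumPy (all signal parameters are hypothetical choices for illustration, not a prescribed design): an already down-converted, sampled BPSK waveform is matched-filtered, sampled at the symbol rate, and sliced into bits.

        import numpy as np

        # Minimal sketch of the final receiver steps for BPSK. The received
        # signal is assumed already down-converted and sampled, so only
        # matched filtering and symbol detection remain.
        sps = 8                                   # samples per symbol (hypothetical)
        rng = np.random.default_rng(0)

        bits_tx = rng.integers(0, 2, 100)         # test bits
        symbols = 2.0 * bits_tx - 1.0             # BPSK mapping: 0 -> -1, 1 -> +1
        waveform = np.repeat(symbols, sps)        # rectangular-pulse baseband waveform
        received = waveform + 0.4 * rng.standard_normal(waveform.size)  # add noise

        pulse = np.ones(sps)                      # matched filter for the pulse
        matched = np.convolve(received, pulse[::-1], mode="full")
        samples = matched[sps - 1::sps][:bits_tx.size]   # symbol-rate samples
        bits_rx = (samples > 0).astype(int)              # sign decision -> bits

        print("bit errors:", int(np.sum(bits_rx != bits_tx)))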

    FIGURE 1.2 Signal processing in a receiver.

    Similarly, signal processing in a transmitter can include the processing involved from the output of an error control coder until the generation of an analog waveform that is ready to be radiated from an antenna, as in Figure 1.3. We may loosely treat processing of bits as coding (or ECC) and not as part of signal processing. Of course, this is not completely precise. For example, both trellis coding and space–time coding can make coding and signal processing overlap. In general, this is true of coded modulation techniques that combine coding and modulation together.

    FIGURE 1.3 Signal processing in a transmitter.

    Signal processing can be either analog or digital. Early communications systems and devices mainly used analog signal processing implemented in hardware. Thus, all processing of signals was performed directly in the analog domain on the analog waveforms using analog circuits. For example, in an old AM or FM radio, the message waveform m(t) would be filtered using an analog filtering circuit and modulated using a mixer circuit. On the other hand, in digital communications systems, part of the system (or the radio device) holds the signals in digital format. The interface between the analog and digital worlds is usually a pair of analog-to-digital (A/D) and digital-to-analog (D/A) converters. Processing a signal in digital form is called digital signal processing (DSP). DSP can, however, be implemented in either hardware or software. But usually, even if the processing is in hardware, it is essentially a software algorithm specifically implemented on a hardware circuit for speed and efficiency. Hence, in a certain sense, we may regard all DSP as essentially implemented in software. Of course, to implement signal processing algorithms as software, one needs a computer or a processor. In the case of a communications device, this could be a microprocessor, a general-purpose processor (GPP), a DSP chip, an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA). All of these essentially provide digital hardware platforms to implement and run the software programs that implement DSP algorithms.
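    As a simple illustration of this hardware-to-software shift, the filter-and-mix chain of the old AM radio mentioned above can be expressed in a few lines of software. The sketch below (Python/NumPy; the sampling rate, filter, and carrier frequency are hypothetical) low-pass filters a sampled message and mixes it up to a carrier, standing in for the analog filter and mixer circuits.

        import numpy as np

        # DSP counterpart of the analog AM chain: filter the message, then mix.
        fs = 48_000                       # sampling rate in Hz (hypothetical)
        t = np.arange(0, 0.01, 1 / fs)    # 10 ms of signal
        m = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 9_000 * t)

        # A moving-average low-pass filter replacing the analog filter circuit.
        taps = np.ones(15) / 15
        m_filtered = np.convolve(m, taps, mode="same")

        # A digital multiplier replacing the analog mixer circuit (DSB-AM).
        fc = 12_000                       # carrier frequency in Hz (hypothetical)
        s = m_filtered * np.cos(2 * np.pi * fc * t)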

    The importance of signal processing in modern digital communications cannot be overemphasized. In fact, without advanced signal processing techniques, many of today's sophisticated and high-performance communications systems may not have been possible. For instance, over the last two decades, cellular communications systems have seen tremendous worldwide growth and expansion. This was mostly due to the fact that each generation of new cellular systems rapidly evolved into providing breathtaking quality improvements compared to the previous generation. Many of these gains, in part, were due to advanced signal processing techniques that facilitated overcoming various channel impairments and interference issues. For example, the cellular communications concept relies on dividing the geographical area into a set of cells and reusing the same set of carrier frequencies in nonadjacent cells. The frequency reuse pattern and the cell sizes are carefully optimized so that signals attenuate sufficiently beyond their own cells. It is, however, still unavoidable that same-frequency signals from one cell will spill over and interfere with those of another cell. This is called cochannel interference (CCI). Earlier cellular systems used the so-called matched-filter-based single-user detection at the receiver. Later generations of cellular systems, on the other hand, improved on the performance of the simple matched-filter receiver by incorporating sophisticated CCI suppression algorithms. There are many such examples of the use of advanced signal processing algorithms in modern communications systems.

    For a CR, signal processing is even more important because it is fundamental to the radio’s identity as a cognitive device. If the processing in the brain (brain functions) is the source of cognition and intelligence of a biological entity, then signal processing algorithms are the source of cognition and intelligence in a CR. In other words, signal processing algorithms are the brain functions of a CR.

    1.3 SOFTWARE-DEFINED RADIOS

    The theory of evolution states that intelligence in biological entities develops and evolves out of necessity. Cognition and intelligence in radios seem to be no different. The concept of CRs evolved from its primitive ancestors, early hardware-defined radios, as the application domains of radios expanded, just as life on earth moved from oceans to land followed by various land migrations.

    Early wireless devices were hardware defined in the sense that they were developed to communicate using a single waveform format, as permitted by their hardware. By the early 1990s, it had been noted that keeping up with rapid technology advancements in wireless communications was a challenge with these traditional hardware-defined radios. To take advantage of new communications technologies, one had to redesign the hardware.

    Moreover, different wireless systems were designed to achieve different performance objectives. Hence, they were designed to operate in the various spectrum bands available, or best suited, for the application in mind. The RF characteristics, air interfaces, and signaling waveforms/protocols used were different from one system to another. A hardware-defined radio designed to operate according to one scheme could not communicate with a device using another scheme. The final result was that one system could not interact with another. As wireless systems proliferated, this lack of interoperability of hardware-defined radios emerged as a clear limitation.

    These were particularly significant limitations in situations in which it is important for users on different systems to be able to easily interact with each other. A group particularly interested in interoperability and rapid technology advancement was the US military [2]. In many cases, different service branches within the US military had developed their own systems to meet specific demands and performance goals. In fact, even within the same service, there can be multiple radio systems based on multiple legacy standards. In addition, it is also important to be able to communicate with friendly forces while denying interception by enemies. For the US military, the limitations of hardware-defined radios were on full display in 1991 during the First Gulf War (Operation Desert Storm). One of the major lessons learned from the war was the inadequate interoperability of the joint forces' communications technologies [3]: The systems were mostly incompatible with each other, and thus communications among different services, or with friendly forces, that were using different radio systems were almost impossible. In addition, taking advantage of rapid technological advancements made by the commercial industry was not possible in a cost-effective way when one had to replace or redesign the system hardware.

    It was against this backdrop that the notion of software radios emerged in the early 1990s, which later in 1995 became known as software-defined radios (SDRs) [4]. According to Wikipedia,¹ however, the origin of the term software radio goes back to 1984. A team at the Garland, Texas, Division of E-Systems Inc. (now part of Raytheon) had used the term software radio to refer to a digital baseband receiver in a company newsletter in 1985 [5]. However, it was [4] that used the term in the context of a radio that also included a transmitter, and it is widely credited for coining the term in the sense it is used today.

    Software radios are radios that perform some, or all, baseband operations in software. Fixed, or hardware-defined, radios are limited in their operational capabilities by their hardware configurations. If, however, some or all of this hardware can be controlled by software, then at least to some degree these limitations can be overcome. In this sense, the concept of SDR promised a path toward interoperability of radio systems. Indeed, if all systems were to follow a common SDR architecture, then, at least in theory, the interoperability issue could be resolved. Moreover, with software radios, it is much easier and more cost-effective to incorporate new waveforms and technology advancements into existing radios without having to redesign or replace the hardware components.

    SDR was also motivated by the popularity and proliferation of consumer wireless systems, in particular cellular and WLAN networks. Indeed, during the 1980s and the early 1990s, the telecommunications industry players were aware of the pending potential for rapid worldwide growth in demand and popularity of wireless systems. The competitive pressure to develop better, faster, and improved wireless technologies was enormous. There was a lot of money to be made. As a result, a large number of new signal processing and coding algorithms were being pumped out of an ever-increasing number of research groups spread all over the world. The SDR technology provided a flexible approach to integrate these algorithms into wireless systems and then to switch or choose among them in real time.

    Moreover, in the same geographical area, one may usually find several different wireless networks. A wireless device that is hardwired to operate on one particular network (following a certain wireless standard) can only transmit and receive signals from that network. For example, an early analog Advanced Mobile Phone System (AMPS) cell phone could only transmit and receive analog AMPS signals. As wireless networks proliferated, users needed to carry several different devices, each designed to communicate over a different network, to take advantage of the different networks and services available at different locations. The reason is that when signal generation and processing are done in hardware, the same circuit cannot generate or process a different signaling waveform. With the rapid growth of computers and software, it was clear that if, instead of hardware, software programs could be used to generate and process signals in these communications devices, then things could be much simpler: Instead of having to have several different circuits to generate and process different waveforms, all we need is several different software program instructions residing in a microprocessor's memory. When a certain type of signaling waveform is needed, all one needs to do is give a command to activate that portion of the program. Everything can be done in software, provided signals are represented in the digital domain. The transition between the analog and digital domains can be achieved by using A/D and D/A converters. Hence, when a signal needs to be transmitted (say), all we need to do is construct an analog waveform corresponding to the software-defined digital signal and feed it to an antenna.
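    A toy sketch of this idea follows (Python; both waveform choices are hypothetical stand-ins for real signaling standards): several waveform generators reside in memory at once, and the radio activates one with a simple command, with no hardware change.

        import numpy as np

        # Several waveform definitions reside in software at the same time;
        # switching waveforms is just a matter of calling a different function.
        def bpsk(bits, sps=8):
            return np.repeat(2.0 * np.asarray(bits, dtype=float) - 1.0, sps)

        def ook(bits, sps=8):
            return np.repeat(np.asarray(bits, dtype=float), sps)

        WAVEFORMS = {"BPSK": bpsk, "OOK": ook}    # resident program portions

        def transmit(bits, scheme):
            samples = WAVEFORMS[scheme](bits)     # activate the requested waveform
            return samples                        # these samples would feed the D/A

        tx = transmit([1, 0, 1, 1], scheme="BPSK")   # switch schemes by command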

    Example 1.1 (Analog, Digital, and Discrete-Time Signals)

    An analog signal is a function of the continuous-time variable t that can take a continuum of values, as in Figure 1.4(a). Hence, when we say x(t) is an analog signal, we mean that as a function both its domain and range are the set of real values: That is, if x(t) is an analog signal, then x: ℝ → ℝ. On the other hand, a discrete-time signal is a function whose domain is the set of integers but whose range is still the set of reals: That is, if x[n] is a discrete-time signal, then x: ℤ → ℝ. In other words, a discrete-time signal can be considered as a signal that is temporally quantized (or sampled), as shown in Figure 1.4(b).

    FIGURE 1.4 Signals involved in the process of analog-to-digital conversion. (a) An analog signal. (b) A sampled discrete-time signal. (c) A digital signal.

    A digital signal is a discrete-time signal whose range is constrained to a finite set of finite values A. Hence, if x[n] is a digital signal, then x: ℤ → A. For example, a digital signal whose amplitude is only allowed to take the two values A and −A will have A = {A, −A}, as in Figure 1.4(c). Comparing with a discrete-time signal, we see that a digital signal is a signal whose amplitude and time are both quantized.■

    Hence, an ideal SDR (receiver) architecture may directly feed the antenna output to an A/D converter, which then interfaces with what we may call the SDR platform. The SDR platform is essentially a software-based digital radio platform, as shown in Figure 1.5. Sometimes, this is also called the reconfigurable digital radio [6].

    FIGURE 1.5 The ideal software-defined radio concept.

    The ideal SDR architecture of Figure 1.5, of course, is not realistic. To start with, one needs at least a BPF/LPF before feeding the antenna output to the A/D converter to avoid aliasing (since all A/D converters have to have a finite sampling rate). We will also need at least some amplification of the RF output of the antenna, since it might be too weak for any further processing. Hence, some filtering and power amplification may be needed to get a clean and sufficiently strong signal for later digital processing. These components must then be in an analog RF stage. This forces us down to the somewhat more realistic SDR architecture of Figure 1.6, which consists of a software-based digital radio platform connected to a software-controllable analog RF front-end and an (RF) antenna.

    FIGURE 1.6 A feasible software-defined radio architecture.

    The A/D converter plays the key role of gateway between the digital and analog domains. Sampling theory, the underlying theory assuring the equivalence between an analog signal and its A/D-converted digital counterpart, places a fundamental demand on the A/D converter: The sampling rate must be at least twice the analog signal's bandwidth (or highest frequency) to avoid aliasing and to ensure accurate equivalence between the two signals (analog and discrete time). Of course, the output from the antenna may not have a clear cutoff frequency. Hence, it is customary to use an antialiasing filter before the A/D converter to avoid aliasing.
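    The following sketch (Python/NumPy; the tone and rate choices are hypothetical) demonstrates the aliasing this requirement guards against: a 7 kHz tone sampled at 10 kHz, below its Nyquist rate of 14 kHz, produces exactly the same samples as a 3 kHz tone.

        import numpy as np

        # Undersampling a 7 kHz tone at fs = 10 kHz (< 2 x 7 kHz) aliases it
        # down to |7 - 10| = 3 kHz: the two tones become indistinguishable.
        fs = 10_000                                   # sampling rate in Hz
        n = np.arange(32)                             # sample indices
        x_high = np.cos(2 * np.pi * 7_000 * n / fs)   # 7 kHz tone, undersampled
        x_alias = np.cos(2 * np.pi * 3_000 * n / fs)  # 3 kHz tone, properly sampled

        print(np.allclose(x_high, x_alias))           # True: identical samples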

    It is, however, not possible to have arbitrarily large sampling rates and arbitrarily fine resolutions in an A/D converter for physical reasons such as power consumption, electronics, and memory requirements. These essentially boil down to a maximum possible bit rate for a given A/D converter output. As Example 1.2 shows, in an A/D converter, each sample is quantized and represented by a binary string of a certain number of bits. Hence, the higher the sampling rate, the lower the resolution will be to keep the final (maximum) bit rate fixed, since we have to reduce the number of bits used to represent each sample. The lower the number of bits per sample, the smaller the number of levels that can be used in the quantizer, leading to crude quantization of analog sample values. This distortion is called the quantization noise.

    Example 1.2 (A/D Conversion)

    A/D conversion turns an analog (i.e., continuous-valued, continuous-time) signal, or a waveform, into a digital signal. Hence, A/D conversion involves the following steps:

    Time quantization (sampling): Given an analog signal x(t), the time quantization step produces a discrete-time signal. Of course, this is the familiar process called sampling. Since the objective is to obtain a digital signal that is equivalent to the original analog signal x(t), we would like to ensure that the discrete-time signal produced by the sampling process, say x[n], is equivalent to the original signal x(t). We know from the sampling theorem (see Appendix A for details) that this is guaranteed by sampling at a rate not less than twice the bandwidth (or the maximum frequency) of the signal. Hence, if the analog signal is a baseband signal with highest frequency fm, then to ensure that the time-quantized signal is equivalent to the original signal, we need a sampling rate of at least fs = 2fm, since the signal bandwidth is W = fm. As long as we satisfy this condition, the time quantization step is fully reversible. Indeed, as shown in Appendix A, it is possible to reconstruct the original analog signal x(t) from the discrete-time sampled sequence x[n]. However, when the signal of interest is centered at a certain (high) frequency fc with a bandwidth W much narrower than the center frequency, then it is sufficient to have a sampling rate of fs = 2W rather than fs = 2(fc + W/2). In this case, we do have frequency aliasing. However, as long as we use a suitable band-pass antialiasing filter before sampling, it is possible to recover the signal from the sampled sequence. In practice, it is customary to choose an IF that is larger than fs = 2W and perform the conversion to baseband directly in the digital domain.

    Amplitude quantization (quantization): The output from the sampling stage is a discrete-time signal x[n]. These sample values, however, can take any arbitrary real value (assuming real-valued signals). The amplitude quantization step, often simply referred to as the quantization step, limits the amplitude values to a finite set of finite values. Once the number of allowed amplitude levels and their values (called the quantizer levels) are determined, the simplest form of quantization can be achieved by rounding off the arbitrary amplitude values of the discrete-time signal to the nearest allowed quantizer levels. For a chosen number of allowed quantizer levels L, if the quantizer levels (amplitude values) are chosen by equally dividing the range of the original signal into L levels and then rounding off to the nearest level as aforementioned, then we obtain what is called a uniform quantizer. One may choose more elaborate quantization schemes than uniform quantization in order to obtain better performance in specific situations. Unlike sampling, amplitude quantization is not reversible.

    Binary encoding: The output from the quantization step is a digital signal that assumes L possible amplitude levels. These quantizer amplitude levels are each assigned a unique bit-sequence (bit-string) label. Then, rather than the actual quantizer amplitude level, the A/D converter outputs the bit-sequence label associated with each amplitude level. If the quantizer is to have L amplitude levels, in order to assign a unique bit-sequence label to each of them, the minimum length of the bit-sequence label must be log2 L bits.
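    Putting the three steps together, here is a minimal sketch of a uniform A/D converter (Python/NumPy; the test signal, sampling rate, and quantizer size are hypothetical): sample the signal, round each sample to the nearest of L levels, and emit a log2 L-bit label per sample.

        import numpy as np

        # Minimal uniform A/D converter sketch: sample, quantize, encode.
        fs = 8_000                                  # sampling rate in Hz
        L = 8                                       # quantizer levels (3 bits/sample)
        t = np.arange(0, 0.002, 1 / fs)             # step 1: sampling instants
        x = np.sin(2 * np.pi * 1_000 * t)           # samples of the "analog" signal

        # Step 2: uniform quantization over the signal range [-1, 1].
        levels = np.linspace(-1.0, 1.0, L)          # allowed amplitude values
        indices = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        x_quantized = levels[indices]               # nearest-level rounding

        # Step 3: binary encoding, log2(L) = 3 bits per sample.
        bits_per_sample = int(np.ceil(np.log2(L)))
        labels = [format(int(i), f"0{bits_per_sample}b") for i in indices]

        print(labels[:4])                           # four 3-bit labels, one per sample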

    From the earlier discussion, we can determine the output bit rate of an A/D converter that employs a sampling rate of fs (samples/second). If the quantizer has L levels, each sample must be represented by at least log2 L bits. Hence, the output bit rate of the A/D converter is

    (1.1) Rb = fs log2 L bits/s

    This points out the relationship between the signal bandwidth, the quantization noise, and the output bit rate of the A/D converter. If the original analog signal has a large bandwidth, then fs has to be chosen to be large, as required by the sampling theorem. Hence, we need an A/D converter with a higher output bit rate. Also, if smaller quantization noise (distortion) is desired, then a quantizer with finer levels is needed. This implies a larger value of L, leading again to an A/D converter with a higher output bit rate.

    Example 1.3 (Digital Voice)

    Voice signals have a bandwidth of about 4 kHz. Hence, voice is digitized by sampling at a rate of fs = 8 kHz. Each sample is quantized with an L = 256-level quantizer, resulting in an output bit rate of

    (1.2) Rb = fs log2 L = 8000 × log2 256 = 64 kbps

    In practice, this is too large a bit rate for supporting voice communications, especially with wireless systems. However, it is quite common that the output bit rates of A/D converters can be reduced by removing some of the bits without losing information or with very small distortion. This is called compression or source coding, which relies on the fact that information-bearing signals, such as voice signals, have a significant amount of redundancies that can be removed without losing information. Many cellular phones, for example, use a technique called linear predictive coding (LPC) to compress the 64 kbps toll-quality voice by a factor somewhere between 4 and 8, resulting in effective bit rates of 8–16 kbps.

    The redundancies in an information signal imply that successive signal samples are not independent of each other. They are correlated. The idea of LPC is to exploit this correlation to predict the next signal sample value as a linear combination of some of the past signal sample values. Then, instead of the exact next sample value, an LPC speech coder quantizes the difference between the exact and the predicted sample values. Since this prediction error can have a much smaller dynamic range than the actual voice sample value itself, the quantizer needs only a smaller number of levels to maintain the same resolution.
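    As a rough illustration of this idea, the sketch below (Python/NumPy) uses a fixed two-tap predictor with hypothetical coefficients, whereas a real speech coder adapts its predictor to the signal; each sample is predicted from the previous two reconstructed samples, and only the small prediction residual is quantized.

        import numpy as np

        # Toy linear predictive coding: quantize the prediction residual,
        # whose dynamic range is much smaller than the signal's own.
        rng = np.random.default_rng(1)
        n = np.arange(200)
        x = np.sin(2 * np.pi * 0.02 * n) + 0.01 * rng.standard_normal(n.size)

        a = np.array([1.8, -0.9])        # fixed 2-tap predictor (hypothetical)
        step = 0.05                      # residual quantizer step size
        x_hat = np.zeros_like(x)
        x_hat[:2] = x[:2]                # first samples sent directly

        for k in range(2, x.size):
            pred = a[0] * x_hat[k - 1] + a[1] * x_hat[k - 2]  # predict from past
            residual = x[k] - pred                            # small-range error
            q_residual = step * np.round(residual / step)     # coarse quantizer
            x_hat[k] = pred + q_residual                      # decoder reconstruction

        print("max reconstruction error:", float(np.max(np.abs(x - x_hat))))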

    The required preconditioning of the signal before sampling (e.g., filtering by a suitable antialiasing filter) also shows why it is not practical to have an SDR that directly attaches an A/D converter (or a D/A converter) to the (RF) antenna itself as implied in Figure 1.5. Instead, as shown in the feasible SDR architecture of Figure 1.6, there will be a certain amount of processing still performed in analog hardware before the A/D converter (in an SDR receiver) and after the D/A converter (in an SDR transmitter). The analog circuitry that performs this processing is called the (RF) front-end in Figure 1.6.

    In practice, it is not just an RF antialiasing filter that may make up the front-end. It is advantageous to perform the A/D conversion at an IF rather than directly in baseband. Sampling at an IF does require the sampling rate to be higher than what it would have been in baseband. However, it makes the design of frequency-tunable components more manageable and allows all hardware after the IF stage to be essentially fixed. The signal that is sampled at the IF, for example, is brought to baseband by direct digital down-conversion, which is achieved essentially in software. Hence, the analog front-end in a practical SDR may include both RF and IF stages.
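    A minimal sketch of such a software digital down-conversion (Python/NumPy; the IF, rates, and filter are hypothetical) multiplies the IF samples by a complex exponential at the IF and low-pass filters the product to obtain baseband I/Q samples.

        import numpy as np

        # Digital down-conversion (DDC) in software: mix the sampled IF
        # signal to baseband with a complex exponential, then low-pass filter.
        fs = 200_000                     # A/D sampling rate in Hz (hypothetical)
        f_if = 40_000                    # intermediate frequency in Hz (hypothetical)
        t = np.arange(0, 0.001, 1 / fs)

        # Test IF signal: a 2 kHz message riding on the 40 kHz IF carrier.
        message = np.cos(2 * np.pi * 2_000 * t)
        x_if = message * np.cos(2 * np.pi * f_if * t)

        lo = np.exp(-2j * np.pi * f_if * t)     # software local oscillator
        mixed = x_if * lo                       # message now centered at 0 Hz
        taps = np.ones(25) / 25                 # crude moving-average LPF
        baseband_iq = np.convolve(mixed, taps, mode="same")  # complex I/Q output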

    The A/D and D/A converters form the interface between the software-controllable analog RF and IF front-end and the software-based digital radio platform as in Figure 1.7. A practical SDR goal is to shrink this RF/IF front-end as much as possible so that the A/D converter and the D/A converter become as close to the RF antenna as possible.

    FIGURE 1.7 A realistic software-defined radio architecture.

    In the SDR architecture of Figure 1.7, the digital radio platform operates in digital baseband. The software-controllable analog front-end may be either a homodyne or a superheterodyne receiver but with components that are controllable/tunable by software commands from the digital radio platform. A simple homodyne receiver design in which sampling is performed in baseband may look as in Figure 1.8.

    FIGURE 1.8 A realistic software-defined radio (SDR) receiver (homodyne) with baseband I/Q channels.

    As mentioned earlier, early communications systems were designed in such a way that all components of the radio were hardwired. On the other hand, in a software-defined radio as in Figure 1.7, the waveforms to be communicated are generated in the software-based digital radio platform. Essentially, the type of waveform to be generated is written into a software program. If there is a need to change the waveform, all one has to do is change the software program residing in the platform. Alternatively, the software program may contain definitions (or code) to produce several different types of waveforms at the same time, and we simply switch among them as needed. Of course, depending on the type of waveform (modulated signal) one wants to transmit, there may be a need to modify some of the analog front-end components. The software-based digital radio platform is also responsible for generating these control signals, as can be seen in Figure 1.7.

    1.3.1 Software-Defined Radio Platforms

    Implementing an SDR as described earlier requires analog RF components that are controllable by software commands. These include RF and IF stage amplifiers and filters. Over the last two decades, there have been significant advances in developing software-controllable tunable RF front-end components including filters, power amplifiers, reconfigurable antennas, and duplexers, among others. A discussion on some of these can be found in [6].

    On the other hand, software-based digital radio platforms are nothing but digital computers, perhaps especially designed to handle specific tasks required in a communications device. Indeed, in early SDRs, the software-based digital radio platform was based on common microprocessors or GPPs. Digitally processing a signal, however, involves certain computations that are highly specialized (such as computing the FFT) and commonly needed. This led to digital signal processors, which are processors with certain built-in capabilities that facilitate efficient computation of specific signal processing steps. Although DSPs provided a significant improvement over GPPs, they alone were not sufficient as research led to complex signal processing algorithms that required faster and more complex computations. This led to another class of platforms called ASICs, which especially addressed the issue of processing speed of DSPs. ASICs offered better performance than DSPs in terms of speed, but as their name suggests, they were designed and programmed for specific applications and thus were not easily reconfigurable like DSPs. Many software-based digital radio platforms of SDRs may thus use a combination of DSPs and ASICs to implement different parts. For example, tasks that are highly specialized for a particular radio and need to run fast may be implemented in ASICs, since they may not need to be changed much, while algorithms that may need a certain level of modification later may be implemented on DSPs. Currently emerging platforms, however, tend to use FPGAs, which offer the high performance of ASICs with the additional advantage of high reconfigurability, even better than that of DSPs. The drawback of FPGAs, however, is their high power consumption. As a result, they tend to be larger in size and more expensive than both DSPs and ASICs.

    In this book, we may use the terms software-based digital radio platform and SDR platform interchangeably. An SDR system contains not only the SDR platform but also the front-end hardware as in Figure 1.6.

    1.3.2 Software-Defined Radio Systems

    Over the years, several SDR systems have been developed by both the commercial industry and the US military. In the following, we briefly review some of these systems.

    1.3.2.1 Military-Use SDR Systems

    Speakeasy I: In the early 1990s, especially after the First Gulf War, the US military realized the limitations of its then-existing communications systems. In addition to perhaps obvious desirables such as improvements in communications speed, jammer/interference resistance, and low probability of interception/detection (LPI/LPD), the lack of interoperability among radios of joint forces was exposed as a critical issue [3]. The Speakeasy program, managed by the Air Force Research Laboratory (AFRL), was a response to these challenges that aimed at developing a communications system with waveform reconfigurability by implementing waveforms and their associated processing in software on programmable signal processors [2]. The primary objective of the Speakeasy I (1992–1995) SDR project was to develop and demonstrate in a field test a modular, reprogrammable modem with an open architecture. The secondary objective was to develop a generic software architecture that would facilitate the addition of new waveforms [7]. The initial goal was to demonstrate a multiband multimode radio (MBMMR) that could support any waveform in the 2 MHz–2 GHz range.

    In 1995, the Speakeasy I project demonstrated the feasibility of a military SDR modem in which rapid waveform reconfigurability was achieved by implementing waveforms in software rather than in analog circuits. DSPs were used to develop the required modular and programmable architecture for software implementation of waveforms [2]. However, the phase I demonstration only covered waveforms in the frequency range from 30 to 400 MHz. Indeed, it had been determined that the 2 MHz–2 GHz range could not be covered by a single RF front-end with the then state-of-the-art technology. Hence, the radio was to have three RF sections covering the ranges from 2 to 30 MHz, 30 to 400 MHz, and 400 MHz to 2 GHz [7]. Although the basic goal was achieved, the modem software, the user interface, and the waveform development platform of Speakeasy I lacked ease of use, which was also one of the objectives of the program.

    Speakeasy II: The 4-year Speakeasy II program began in 1995, immediately following the success of the Speakeasy I project. The main objective of Speakeasy II was to expand the software-based, open, modular, and reprogrammable architectural implementation of the modem, which was achieved during Speakeasy I, to include the entire radio system or the network [7]. Hence, Speakeasy II was to be an open-architecture, networked, secure, and software-programmable radio system with simultaneous multichannel, multiband, and multimode (MBMM) capabilities to operate from 2 MHz to 2 GHz [8]. An additional goal was to encourage commercial off-the-shelf (COTS) modules and commercial standards in the development [7].

    The Speakeasy II program successfully demonstrated the ability to implement certain radio waveform functions in software modules that can be developed to be independent of the underlying hardware [8]. It also demonstrated that interoperability among radios can be achieved by using such SDRs. Indeed, in a US Army field demonstration in March 1997, Speakeasy II was able to successfully bridge voice channels between an Air Force aircraft using HAVE QUICK UHF systems and soldiers using commercial handheld land mobile radios (LMRs) as well as Army Single Channel Ground and Airborne Radio System (SINCGARS) VHF radios [7]. The implementation included a specially developed RF homodyne transceiver and DSPs augmented with FPGAs. Although the waveforms involved were limited to AM and FM, and the frequency range covered by the specially developed transceiver was 4–400 MHz, the performance of this field demonstration was so impressive that the system was determined to be ready to enter production without even having to wait until all the original objectives were achieved [7].

    The portable SDRs produced from the Speakeasy program were not small or flexible enough for handheld use. Indeed, the initial Speakeasy II platforms were about the size of old PCs [8]. Still, the hardware-independent software module implementation of waveform functions and programmable radios based on such modules achieved by the Speakeasy program have to be considered a great success.

    Joint Tactical Radio System (JTRS): The success of the Speakeasy program paved the way for the JTRS program, arguably the most ambitious SDR development project undertaken to date. Until 2011, it was managed by the Joint Program Executive Office (JPEO) for the Joint Tactical Radio System (JPEO JTRS). In 2012, it was restructured under the Joint Tactical Networking Center (JTNC). The goal of the JTRS project was to achieve real-time interoperable multimedia (voice, data, video) communications among radios of different branches of the US military as well as those of friendly forces. Additionally, compatibility with newer systems, reduced time to adopt technology advancements, and affordability were expected.

    The JTRS program aims to achieve these objectives by ensuring software reuse and portability [9]. It recognizes the fact that while SDRs provide a means to achieve the aforementioned goals of interoperability, technology enhancement, and affordability, unless the code used to implement originally analog, circuit-based functions in software is portable and reusable, the software itself may become the limiting factor in achieving the stated goals. As a result, all JTRS software is required to be compliant with the open SDR reference architecture called the Software Communications Architecture (SCA).

    The SCA was developed collaboratively by a broad coalition of stakeholders in SDR development, including the US, Japan, the UK, Sweden, and Canada, in response to the need for a common standard for software development for SDRs. As different groups pursued SDR platform and system development, it had quickly become clear that without a common standard for software development, SDR systems would run into the same interoperability issues that plagued the earlier hardwired systems. The SCA specifies a set of baseline requirements for any software developed for SDRs in order to achieve interoperability, reconfigurability, and portability [10]. According to the 2006 report from the JPEO of the JTRS [10], the SCA is essentially a set of rules that guides software developers to achieve the following goals, independent of the particular implementation:

    Portability of application software

    Reduced developmental cost

    Reuse of design modules

    Leveraging of commercial architectures

    Adhering to the SCA can obviously lead to compatibility among different manufacturers and suppliers, a desirable objective for the military but also important for manufacturers. It also helps promote competition among developers, which may ultimately lead to reduced costs for the users.

    Thus, the JTRS is not a single system, but a family of software-programmable, modular, MBMM communications systems that maximizes the software and hardware commonality and reusability. The commonality and the compliance with the SCA lead to interoperability, while reusability ensures both affordability and compatibility with legacy systems as well as newer systems that may become available due to technological advances.

    Since 1998, it has been mandatory that all new radio procurements by US Department of Defense (DoD) agencies be JTRS compliant, with the hope that ultimately JTRS communications devices would replace the old single-function, hardware-intensive communications systems with software-centric systems that satisfy the requirements of the user domains while being interoperable with legacy communications systems and supporting growth for new waveforms [11]. In addition, the JTRS system was to be scalable to match the communications requirements of different users, extendible to support growth and change, affordable, and usable through open systems standards and technology that would serve as the baseline for evolution [11].

    It is an ambitious project that aims to develop an open radio architecture that will allow affordable, interoperable operation among perhaps hundreds of different systems used by various branches of the military services. While JTRS itself has perhaps turned out to be more and more complex and inflexible, one of its lasting impacts seems to be the development of the SCA, which essentially provides a set of rules for SDR software developers to ensure compatibility with each other.

    1.3.2.2 Commercial-Use SDR Systems

    The commercial telecommunications industry has been a driving force of the advancement of wireless technologies including the SDR. In fact, one of the objectives of the initial military SDR programs was to be able to take advantage of the rapid technological advancements made by the commercial wireless industry by being able to quickly, and cost-effectively, incorporate them into military systems. Similarly, the commercial telecommunications industry was also quick to realize the potential of SDR technology. Indeed, by the end of the 1990s, there were already several successful commercial-use SDR systems by various industry players (for more details, see [12]):

    Motorola Horizon 3G: Motorola's Horizon 3G SDR was aimed at supporting the then-existing GSM- and CDMA-based cellular systems.

    Vanu Inc.’s Anywave: One of the successful commercial SDR products was the Vanu Inc.’s Anywave SDR cellular base station that supported several cellular standards such as GSM and cdma2000 [13]. Compared to other competing cellular base stations that used specialized protocols on the wire-line part of the network, the Anywave SDR used VoIP for both internal and external traffic.

    AirNet Communications Corp.’s AdaptaCell: In 1997, AirNet Communications Corp. deployed the first commercial SDR cellular base station. This broadband multicarrier system was designed to support several standards including GSM, GPRS, and EDGE. Fully adaptive later versions of AirNet SDR base transceiver subsystems, named AdaptaCell BTS, can be configured to support the emerging 802.16e mobile Worldwide Interoperability for Microwave Access (WiMAX) standard, using single carrier, orthogonal frequency division multiplexing (OFDM), and/or orthogonal frequency division multiple access (OFDMA) waveforms.

    It is noteworthy that many commercial SDR platforms were SDR base stations rather than SDR handsets. One reason for this was the increased power consumption of SDR platforms; another was that the objective, perhaps, was to achieve base station interoperability among many different standards.

    1.4 FROM SOFTWARE-DEFINED RADIOS TO COGNITIVE RADIOS

    SDRs were concerned with a radio device implementation that provided flexibility in changing the signal waveforms to be received or transmitted. How and when to switch from one signaling waveform to another was not a focus of SDR. When interoperability is desired, perhaps an operator may switch the radio to the desired mode. The ability to change the radio's physical communications mode by flipping a switch, or giving a command, was still a great achievement. Toward the end of the 1990s, however, SDRs were ready to evolve to their logical next level: Rather than having an external entity make the decisions of when and how to change the communications mode and configure operating characteristics, maybe the radio itself could make these decisions based on its RF environment conditions and performance objectives.

    This naturally implies that the radio needs to have some sort of self-awareness, or consciousness, built into it. Hence, such self-aware SDRs were termed CRs in [1]. This evolution of wireless communications from analog hardware-based implementations to SDRs and from there to CRs is shown in Figure 1.9.

    FIGURE 1.9 The evolution of wireless devices from conventional to SDR to cognitive radio. (a) A conventional radio. (b) A software-defined radio. (c) A cognitive radio.

    Arguably, the CR concept is one of the most important innovations in wireless communications of the last two decades. CRs offer the potential to radically change the future of wireless and mobile communications. However, that discussion must wait while we consider another development.

    1.4.1 The Spectrum Scarcity Problem

    The primary objective of a communications system designer is to be efficient in power budget and spectrum usage. This is motivated by the commonly accepted fact that these two resources, power and bandwidth, are both scarce. While transmit power seems, at least, a somewhat less fundamental concern, it is usually observed that almost all spectrum that can possibly be useful, or suitable, for communications seems to be already assigned for particular services or systems. Moreover, according to the traditional way of assigning spectrum, if a certain frequency band is assigned for a particular type of service, or licensed for a particular system, then no other system or service can operate in that frequency band. Thus, while in the 1980s and 1990s mobile wireless communications were gaining wide popularity all over the world and the possible applications seemed to be limitless, the only spoiler seemed to be the radio spectrum: There was not much unassigned spectrum to go around.

    During the late 1990s, several research groups, as well as the Federal Communications Commission (FCC) in the US, measured the utilization of spectrum in the range most desirable for mobile wireless communications (roughly from a few tens of MHz to a few GHz). The results were eye-opening: Most of the allocated radio spectrum was heavily underutilized. Specifically, most of the frequencies traditionally allocated for certain services (e.g., broadcast TV) were not used most of the time in many places. It appeared that, after all, the perception of overcrowded spectrum was not exactly accurate. The spectrum was overcrowded on paper. But in practice, the license holders of certain spectrum bands were not really using them in many places and/or much of the time. Another finding from these measurement campaigns was that compared to exclusively allocated frequency bands, the so-called industrial, scientific, and medical (ISM) bands were highly crowded and better utilized. The 2.4–2.4835 GHz ISM band in the US, for example, was found to be highly crowded with WiFi WLAN, Bluetooth, and other wireless signals.

    These observations led to a rethinking of spectrum allocation policies practiced by governmental agencies. The realization that the perceived scarcity of radio spectrum is mainly due to the inefficiency of traditional spectrum allocation policies led the FCC to recommend several broad solutions to improve the spectrum utilization in its 2002 Spectrum Policy Task Force Report [14]. One important theme of these recommendations was permitting unlicensed users and services to access the already allocated spectrum bands if and when they are not used by the license holder. Although this was a reasonable approach to better utilize the spectrum, achieving this would require new technological breakthroughs.

    The concept of CRs emerged against the backdrop of these new observations on the inefficient spectrum utilization and the growing demand for various wireless services. It did not take too long for many to realize that the concept of CRs can provide a promising solution to the problem of underutilization of the allocated spectrum: If a radio can be aware of its RF environment, as was envisioned with CRs, then it may be able to take advantage of unused spectrum opportunities. Self-awareness of a CR combined with the reconfigurability of an SDR can possibly facilitate dynamically sharing the spectrum with a legacy licensed system while not harming its performance.

    Hence, a key driver of such dynamic spectrum sharing (DSS) approaches for improving spectrum utilization was the CR. This in turn made DSS a key driver for the development of CR technology. The two became so intertwined that it was almost difficult to separate them. As a consequence, during its initial phase of development, CRs were interpreted as synonymous with DSS or dynamic spectrum access [15], almost replacing the original vision and definition of [1] of CRs as autonomous, intelligent, and self-aware radio devices with a much broader scope.

    1.4.2 Emergence of CRs

    A CR is an evolution of the concept of SDRs in the sense that the radio has built-in intelligence and self-awareness to manage its own operations in response to the perceived state of the RF environment and the radio's performance objectives. Hence, obviously, an SDR is the host platform of a CR. Indeed, the CR was initially introduced in [1] as an evolution that would make SDRs more personal. Since the motivation for SDR was interoperability, affordability, and compatibility with future systems, it follows that these much broader objectives are also the motivations for CRs. Indeed, it appears that the MBMMR objective of the Speakeasy II SDR program, for example, had been retained as an objective of CRs. This is clear from the way the successor program to Speakeasy II, the JTRS, has naturally embraced CR technology. A CR, defined as a self-aware SDR with decision-making and learning abilities, promised great potential in realizing the goals of interoperability, affordability, and compatibility with future systems. Naturally, one may expect such CRs to be wideband, multichannel, and MBMM.

    On the other hand, CRs pursued for achieving DSS did not have interoperability or compatibility with future systems as a primary motivation. There, the key driver was spectrum coexistence, in particular with license holders of the spectrum, so as to improve spectrum utilization. It is, however, clear that the objective of spectrum coexistence can easily be met by being cognitive, wideband, and MBMM. Hence, while reinterpreting CRs as a technology specifically aimed at achieving DSS may have helped CR technology progress faster in that direction, the same objectives might still have been achieved had the focus stayed on a general evolution of SDR with the original SDR objectives intact, as envisioned in [1]. The latter has broader implications for the future of wireless communications: Self-aware SDRs with autonomous decision-making ability are robotic radios, or Radiobots [16]. Such robotic radios promise the potential to transform today's wireless telecommunications in almost limitless and exciting ways. Ultimately, CRs may lead to autonomous and universal communications devices that attend to human needs without us even having to be aware of them.

    CR is still a developing technology. While CRs explicitly aimed at achieving DSS seem more mature in their development, CRs with a broader application scope are certainly still at a very early stage. It is conceivable, however, that the full promise of CRs will be realized only when the technology evolves into intelligent and self-aware autonomous radios ready to take up any application, not just DSS. For this reason, our main focus in this book is on emphasizing this more general vision of CRs. However, given that CRs for DSS are already more developed and in many cases provide the foundation for the more general CRs, we will treat many of their aspects specifically as well.

    1.5 WHAT THIS BOOK IS ABOUT

    This book is about signal processing techniques for CRs. While the implementation of traditionally analog, circuit-based signal processing functions in open-standard, modular software is the key idea behind SDR technology, the key idea of CR technology is the addition of powerful, advanced signal processing algorithms that perform newly defined cognitive and intelligent tasks. As will become clear from the next chapter, by a CR we necessarily mean a wideband MBMMR with self-awareness and intelligence. CRs specifically aimed at DSS are treated as a special case.

    Cognition implies that the radio has the ability to make its own decisions based on its interpretation of the RF environment. Hence, we may organize signal processing within a CR under two broad themes: signal processing for spectrum awareness and signal processing for communications. Signal processing for communications is already present in all wireless communications systems and devices. It refers to the processing of information-bearing signals to achieve communications. Signal processing for spectrum awareness, however, is unique to CRs. This is indeed what makes CRs cognitive, or self-aware. The purpose of signal processing algorithms for spectrum awareness is to allow the radio to gain an understanding of what is present and happening in its RF environment as well as of user needs. In the context of DSS networks, the unique signal processing task aimed at gaining spectrum awareness is called spectrum sensing. As we will see later in this book, the spectrum sensing problem defined in the context of DSS CRs does not go far enough to allow acquiring the complete spectrum knowledge that may be required by a wideband MBMM CR. To make the distinction clear, we may term the signal processing involved in gaining spectrum awareness in such a CR spectrum knowledge acquisition.
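    To give a concrete flavor of spectrum sensing, consider the classical energy detector, which decides whether a channel is occupied by comparing the received signal energy against a threshold. The sketch below is an illustrative example of ours rather than an algorithm taken from the text; it assumes complex Gaussian noise of known variance and a chi-square rule for setting the threshold. The formal treatment of such detection problems follows in Part II.

```python
import numpy as np
from scipy.stats import chi2

def energy_detect(x, noise_var, p_fa=0.05):
    """Classical energy detector for a single narrowband channel.

    Decides "signal present" (H1) if the noise-normalized energy of
    the received samples x exceeds a threshold set for a target
    false-alarm probability p_fa under the noise-only hypothesis H0.
    Assumes x contains N complex samples that, under H0, are
    circularly symmetric Gaussian with known variance noise_var.
    """
    N = len(x)
    # Under H0, 2 * sum(|x|^2) / noise_var is chi-square with 2N
    # degrees of freedom (each complex sample contributes two real
    # Gaussian components, each of variance noise_var / 2).
    T = 2.0 * np.sum(np.abs(x) ** 2) / noise_var
    threshold = chi2.ppf(1.0 - p_fa, df=2 * N)
    return T > threshold

# Example: pure noise should trigger a detection only ~5% of the time.
rng = np.random.default_rng(0)
noise = np.sqrt(0.5) * (rng.standard_normal(128)
                        + 1j * rng.standard_normal(128))
print(energy_detect(noise, noise_var=1.0))
```

    Under these assumptions the threshold meets the target false-alarm probability exactly; in practice, uncertainty in the noise variance erodes this guarantee, which is one reason more sophisticated sensing algorithms are needed.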

    Signal processing algorithms for communications and for spectrum awareness are not completely isolated from one another within a CR. Indeed, spectrum knowledge gained through the spectrum awareness signal processing algorithms provides additional side information to the signal processing algorithms for communications, and vice versa. Still, this broad classification allows us to focus in this book on signal processing that is unique to a CR without delving too much into signal processing tasks that are common to traditional radios. This is necessary not only to emphasize and highlight the unique aspects of signal processing in CRs but also to keep the scope of the book at a manageable level. Note that signal processing for communications is covered in much more detail in many other books dedicated to signal processing for wireless communications, including [17–19]. Almost all signal processing techniques that have been developed for achieving better wireless communications can conceivably be integrated into CRs with little modification.

    The remainder of Part I of the book is dedicated to introducing the concept of the CR. First, in Chapter 2, we will formally define the concept of CRs. As already mentioned, our emphasis will be on CRs as self-aware, intelligent, autonomous radio devices with broad applications. In Chapter 3, we will treat in detail the special type of CRs aimed at achieving DSS. Thus, in Chapter 3, we will also discuss the DSS problem and various approaches for achieving DSS.

    All signal processing encountered in CRs, as well as in wireless communications in general, is an application of the theories of detection, estimation, and inference. These underlying theories thus form the basis for understanding all signal processing algorithms we discuss throughout this book. Hence, Part II of the book is devoted to introducing the theoretical foundations of detection, estimation, and inference, to make the book self-contained. In Chapter 4, we will begin with a brief introduction to the theory of signal detection in discrete time. Note that throughout this book we will mostly work in discrete time, since all signal processing in CRs is performed in discrete time on sampled and digitized signals. Chapter 4 will present both parametric and nonparametric detection theories. This will be followed by an introduction to the theory of parameter estimation in Chapter 5. We will first discuss Bayesian parameter estimation, with special attention to minimum mean-squared error (MMSE) parameter estimation. Next, we will introduce several approaches for estimating parameters that are not modeled as random. We will briefly discuss the theory of minimum variance unbiased estimation (MVUE), followed by the widely used maximum likelihood (ML) estimation.
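    As a simple worked example connecting these estimation criteria (our illustration, not an example reproduced from later chapters), consider estimating a constant amplitude $A$ from $N$ noisy samples $x[n] = A + w[n]$, where $w[n]$ is zero-mean white Gaussian noise with variance $\sigma^2$. The ML estimate is the sample mean,

$$\hat{A}_{\mathrm{ML}} = \bar{x} = \frac{1}{N}\sum_{n=0}^{N-1} x[n],$$

    which for this model is also the MVU estimator. If $A$ is instead modeled as a Gaussian random variable with prior mean $\mu_A$ and variance $\sigma_A^2$, the MMSE estimate is the posterior mean,

$$\hat{A}_{\mathrm{MMSE}} = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2/N}\,\bar{x} + \frac{\sigma^2/N}{\sigma_A^2 + \sigma^2/N}\,\mu_A,$$

    which shrinks the sample mean toward the prior mean, with the prior's influence vanishing as $N$ grows.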

    A special estimation problem of interest in many communications and signal processing applications, and in particular in wideband CRs, is power spectrum estimation. This refers to estimating the power spectral density (PSD) of a signal from a finite number of signal samples. Broadly, these techniques can be classified into nonparametric and parametric spectrum estimation approaches. Parametric approaches employ knowledge of the specific structure of the signals whose spectrum is to be estimated, whereas nonparametric approaches do not assume such knowledge. The literature on PSD estimation is broad, and it is perhaps not possible to cover it in detail in a single chapter. Hence, in Chapter 6, we limit our discussion to nonparametric spectrum estimation approaches. This is justified because, in wideband spectrum knowledge acquisition, a CR must mostly rely on nonparametric spectrum estimation. Chapter 6 will also discuss spectrum estimation when the signals of interest are cyclostationary.
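    As a preview of the nonparametric approach, the sketch below (ours, with assumed signal parameters and the standard SciPy routine rather than anything specific to later chapters) estimates the PSD of a noisy two-tone signal by Welch's method, which averages windowed periodograms of overlapping segments to reduce the variance of the estimate:

```python
import numpy as np
from scipy.signal import welch

# Hypothetical wideband snapshot: two tones in white noise, sampled
# at rate fs (all parameter values here are illustrative assumptions).
fs = 1e6
t = np.arange(4096) / fs
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 1e5 * t)
     + 0.5 * np.sin(2 * np.pi * 3e5 * t)
     + rng.standard_normal(t.size))

# Welch's method: split the record into overlapping windowed segments,
# compute a modified periodogram of each, and average. This trades
# frequency resolution for a lower-variance PSD estimate.
f, Pxx = welch(x, fs=fs, window='hann', nperseg=512, noverlap=256)

# Occupied bands could then be flagged by comparing the estimate
# against an estimated noise floor, e.g., its median level.
occupied = Pxx > 10 * np.median(Pxx)
print(f[occupied])
```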

    An important feature of CRs is their ability to make decisions based on knowledge of their own state, the environmental conditions, and user needs. In Chapter 7, we introduce one approach to making optimal decisions. Specifically, Chapter 7 deals with decision-making in Markov environments. We will introduce the theory of Markov decision processes (MDPs), followed by the so-called partially observable MDPs (POMDPs). MDPs can be solved using well-known dynamic programming principles. Solving POMDPs, however, can be considerably more difficult due to computational complexity. In Chapter 7, we will discuss several algorithms that have been proposed to compute optimal decision policies for POMDPs.
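    To illustrate the dynamic programming principle mentioned above, the following minimal sketch (ours; the tensor layout, discount factor, and tolerance are illustrative choices) computes an optimal policy for a finite MDP by value iteration, repeatedly applying the Bellman optimality backup until the value function converges:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Dynamic-programming solution of a finite MDP.

    P: transition tensor with P[a, s, t] = Pr(next state t | state s, action a)
    R: reward matrix with R[s, a] = expected immediate reward
    Returns the optimal value function and a greedy (optimal) policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Bellman optimality backup for every (action, state) pair:
        # Q[a, s] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R.T + gamma * np.einsum('ast,t->as', P, V)
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    policy = Q.argmax(axis=0)  # best action in each state
    return V, policy
```

    A POMDP can in principle be solved the same way by treating the belief (the posterior distribution over states) as the state, but the belief space is continuous, which is the source of the computational difficulty noted above.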

    We will conclude our discussion of theoretical foundations with Bayesian nonparametric classification in Chapter 8. Signal processing in CRs can involve various classification and clustering problems. One specifically important situation is spectrum knowledge acquisition, in which the radio may need to classify detected signals. As we will see later in the book, this problem is better modeled as a Bayesian nonparametric classification problem. Unlike many commonly used classification and clustering algorithms, the Bayesian nonparametric approach to classification allows the number of clusters to be unknown and unbounded. Chapter 8 will introduce the theory behind Bayesian nonparametric classification and specifically discuss an approach based on the so-called Dirichlet process mixture model (DPMM).
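    For a flavor of how the number of clusters can be left unknown and unbounded, the sketch below (ours, purely illustrative) draws a random partition from the Chinese restaurant process, the partition prior underlying the DPMM; a full DPMM classifier would combine this prior with per-cluster likelihoods and posterior inference, as developed in Chapter 8:

```python
import numpy as np

def crp_assignments(n_items, alpha, rng=None):
    """Draw cluster assignments from a Chinese restaurant process.

    Item n joins an existing cluster k with probability proportional
    to the cluster's current size, or opens a new cluster with
    probability proportional to the concentration parameter alpha.
    The number of clusters is therefore random, unknown a priori,
    and grows (slowly) without bound as n_items increases.
    """
    rng = np.random.default_rng() if rng is None else rng
    assignments = []
    counts = []  # counts[k] = number of items currently in cluster k
    for _ in range(n_items):
        # Existing clusters weighted by size; one extra slot for a
        # brand-new cluster weighted by alpha.
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)   # open a new cluster
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(crp_assignments(20, alpha=1.0, rng=np.random.default_rng(0)))
```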

    Part III of the book is devoted to signal processing problems and techniques in CRs. In Chapter 9, we begin by introducing the wideband spectrum knowledge acquisition problem in a wideband MBMM CR. As we will see, the general wideband spectrum knowledge acquisition problem consists of several subproblems, some of which are not encountered in the spectrum sensing problem addressed in the context
