Error-Correction on Non-Standard Communication Channels
Error-Correction on Non-Standard Communication Channels
Edward A. Ratzer
9781447614739
ABSTRACT
Many communication systems are poorly modelled by the standard channels assumed in the information theory literature, such as the binary symmetric channel or the additive white Gaussian noise channel. Real systems suffer from additional problems including time-varying noise, cross-talk, synchronization errors and latency constraints. In this thesis, low-density parity-check codes and codes related to them are applied to non-standard channels.
First, we look at time-varying noise modelled by a Markov channel. A low-density parity-check code decoder is modified to give an improvement of over 1 dB.
Secondly, novel codes based on low-density parity-check codes are introduced which produce transmissions with Pr(bit = 1) ≠ Pr(bit = 0). These non-linear codes are shown to be good candidates for multi-user channels with crosstalk, such as optical channels.
Thirdly, a channel with synchronization errors is modelled by random uncorrelated insertion or deletion events at unknown positions. Marker codes formed from low-density parity-check codewords with regular markers inserted within them are studied. It is shown that a marker code with iterative decoding has performance close to the bounds on the channel capacity, significantly outperforming other known codes.
Finally, coding for a system with latency constraints is studied. For example, if a telemetry system involves a slow channel some error correction is often needed quickly whilst the code should be able to correct remaining errors later. A new code is formed from the intersection of a convolutional code with a high rate low-density parity-check code. The convolutional code has good early decoding performance and the high rate low-density parity-check code efficiently cleans up remaining errors after receiving the entire block. Simulations of the block code show a gain of 1.5 dB over a standard NASA code.
ACKNOWLEDGEMENTS
My interest in error correction started in my last undergraduate year in Cambridge while attending David MacKay’s course Information Theory, Pattern Recognition and Neural Networks and being supervised by him for my MSci project. For the last three years David MacKay has been a PhD supervisor always full of ideas and encouragement.
EPSRC and Schlumberger have generously funded me for the duration of my studies. Their funding has allowed me to attend several conferences which have kept me up-to-date with the academic side of the field. Discussions with Astrium, Azea, IBM and Schlumberger have been invaluable in keeping in touch with industrial applications. Work in Appendix H was done while visiting the IBM Zurich Research Laboratory.
I would like to thank all the members of my research group in Cambridge: Seb Wills, Sanjoy Mahajan, David Ward, James Miskin, Phil Cowans, John Winn, John Barry and Hanna Wallach, for being ready to bounce ideas off and for looking over draft papers. The members of Cambridge URNU also deserve thanks for being willing volunteers for experimental work.
LIST OF PUBLICATIONS
Edward A. Ratzer. Convolutional code performance as a function of decoding delay. In 6th International Symposium on Communication Theory and Applications, July 2001.
Edward A. Ratzer. Low-density parity-check codes on Markov channels. In Second IMA Conference on Mathematics in Communications, December 2002.
Edward A. Ratzer and David J.C. MacKay. Sparse low-density parity-check codes for channels with cross-talk. In IEEE Information Theory Workshop, pages 127–130, April 2003.
Edward A. Ratzer. Intersected low-density parity-check and convolutional codes. In Postgraduate Research Conference in Electronics, Photonics, Communications and Software, April 2003.
Edward A. Ratzer. Sparse data blocks and multi-user channels. In International Symposium on Information Theory, July 2003.
Edward A. Ratzer. Intersected low-density parity-check and convolutional codes. In 7th International Symposium on Communication Theory and Applications, July 2003.
Edward A. Ratzer. Marker codes for channels with insertions and deletions. In 3rd International Symposium on Turbo Codes and Related Topics, September 2003.
Edward A. Ratzer. Marker codes for channels with insertions and deletions. Annals of Telecommunications, January 2005.
Table of Contents
Title Page
Copyright Page
ABSTRACT
ACKNOWLEDGEMENTS
LIST OF PUBLICATIONS
CHAPTER 1 - INTRODUCTION
CHAPTER 2 - LOW-DENSITY PARITY-CHECK CODES
CHAPTER 3 - MARKOV CHANNELS
CHAPTER 4 - MULTI-USER CHANNELS
CHAPTER 5 - SPARSE-DENSE CODES FOR MULTI-USER CHANNELS
CHAPTER 6 - SPARSE LDPC CODES FOR MULTI-USER CHANNELS
CHAPTER 7 - INSERTION-DELETION CHANNELS
CHAPTER 8 - CONVOLUTIONAL CODES
CHAPTER 9 - INTERSECTED LDPC AND CONVOLUTIONAL CODES
CHAPTER 10 - CONCLUSION
APPENDIX A - INFORMATION THEORY
APPENDIX B - SIMULATIONS
APPENDIX C - FINITE FIELD ARITHMETIC
APPENDIX D - LINEAR BLOCK CODES
APPENDIX E - CONVOLUTIONAL CODES
APPENDIX F - FACTOR GRAPHS
APPENDIX G - THE HAT PROBLEM
APPENDIX H - COMPLEXITY OF LDPC BELIEF PROPAGATION DECODING
APPENDIX I - EXIT CHARTS WITH A GAUSSIAN APPROXIMATION
BIBLIOGRAPHY
CHAPTER 1
INTRODUCTION
1.1 Forward Error Correction
We are all familiar with noise on the radio. If the noise level gets so high that we miss something, we cannot get back what we missed. This is a common problem in many areas of communication and has led to the development of forward error-correction or channel coding. In forward error-correction we add extra redundancy to our original signal and then hope we can reconstruct the original transmission from this new signal when it is corrupted by noise. In 1948 Shannon [93] showed that there was a theoretical solution that would allow an arbitrarily small probability of error under certain constraints. This thesis will look at situations where these constraints do not hold or are not suitable.
We will look at the field of binary communication, where we transmit a sequence of 0s and 1s over a system like that illustrated in figure 1.1. For example, on a fibre-optic channel 1s could correspond to the presence of light and 0s to no light. This is one possible type of modulation; many modulation schemes exist for different types of channel [81]. The receiver observes the original sequence corrupted by noise. There are two main classes of receivers. Hard-decision receivers report a best guess of each transmitted symbol (for the fibre-optic channel this could be obtained from a threshold of the observed light level). Soft-decision receivers report analogue information, ideally Pr(received signal | transmitted bit = 1) and Pr(received signal | transmitted bit = 0). For the fibre-optic channel this could be obtained by observing the light intensity and using a knowledge of the channel characteristics. In Appendix B.1 simple standard channel models that approximate some real systems are presented.
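The two receiver classes can be sketched for a hypothetical fibre-optic channel. The intensity means (0.0 and 1.0) and the noise level are illustrative assumptions, not values from the thesis; the channel is modelled here as additive Gaussian noise on the light level.

```python
import math

# Hypothetical intensity model: bit 0 -> mean 0.0, bit 1 -> mean 1.0,
# corrupted by additive Gaussian noise of standard deviation SIGMA.
SIGMA = 0.4

def gaussian_pdf(y, mean, sigma=SIGMA):
    """Likelihood Pr(received signal | transmitted level = mean)."""
    return math.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def hard_decision(y):
    """Best guess of the transmitted bit: threshold at the midpoint."""
    return 1 if y > 0.5 else 0

def soft_decision(y):
    """Analogue information: the pair (Pr(y | bit = 0), Pr(y | bit = 1))."""
    return gaussian_pdf(y, 0.0), gaussian_pdf(y, 1.0)

y = 0.62                      # an observed light intensity
print(hard_decision(y))       # 1
p0, p1 = soft_decision(y)
print(p1 > p0)                # True: bit 1 is the more likely explanation
```

The soft-decision output keeps the relative likelihoods rather than collapsing them to a single guess, which is what iterative decoders later in the thesis exploit.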
Figure 1.1: Schematic of a communication system
In forward error-correction we are interested in forming an error-correcting code: a set of allowed sequences, E. Redundancy for forward error-correction can then be obtained as not all sequences are allowed. We define a one-to-one mapping from input sequences s to transmitted sequences x ∈ E. In this thesis we will look at codes where there is a fixed ratio between the lengths of these sequences, R = length(s)/length(x). R is a measure of redundancy called the code rate and corresponds to the number of information bits carried per symbol transmitted on the channel.
There are two main classes of error-correcting codes. The first are block codes, for which the lengths of s and x are fixed. The length of x is the blocksize of the code, commonly given the symbol N. The second class are stream or convolutional codes, for which only R is fixed and there are no block boundaries. Commonly stream codes are implemented by an encoder which, for example, outputs 2 bits for every 1 bit of input. This would then define a stream code of R = 1/2.
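A stream encoder of the kind just described can be sketched as follows. This is a toy illustration, not an encoder from the thesis: it emits two output bits per input bit (the bit itself and its XOR with the previous input bit), giving R = 1/2 with one bit of encoder memory.

```python
def rate_half_stream_encoder(bits):
    """Toy rate-1/2 stream encoder: for each input bit u, emit u itself
    and u XOR the previous input bit (one bit of encoder memory)."""
    prev = 0
    out = []
    for u in bits:
        out.append(u)            # systematic bit: the input passed through
        out.append(u ^ prev)     # parity bit computed from the memory
        prev = u
    return out

coded = rate_half_stream_encoder([1, 0, 1, 1])
print(len(coded))   # 8: two output bits per input bit, so R = 1/2
```

Because only the ratio of output to input bits is fixed, the encoder can run indefinitely on a stream with no block boundaries, which is the defining property of this class.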
1.2 Random codes
Shannon [93] showed that on a channel with stationary ergodic noise of a fixed power there exists a rate C called the channel capacity. For R < C transmission at an arbitarily small error probability is possible with a large random block code (meaning that the set of codewords is chosen at random). For R > C arbitrarily reliable transmission is not possible.
To get a particular error probability with a random block code one needs to choose the blocksize N. To get a smaller error probability, N needs to be larger. For a code to be practical we would like an O(N) complexity encoder and decoder so we can choose a larger blocksize without a penalty on the amount of computation per bit. Random block codes are not practical. One could imagine a simple encoder and decoder based on the set of codewords. Such an encoder and decoder are both O(Ne^N) in space as they need to store all the codewords (there are O(e^N) codewords, each of length N). The maximum-likelihood decoder is O(Ne^N) in time: the decoder looks through every codeword and evaluates the probability of the received signal having been produced by the channel given that codeword. More elaborate algorithms could be used, for example with decision trees, but no algorithm is known that is better than exponential [69].
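The brute-force maximum-likelihood decoder described above can be sketched directly. The four-codeword codebook below is a made-up example (minimum distance 4), chosen only so the decoding outcome is easy to check; the point is the cost, which scales as N times the codebook size.

```python
import math

N = 8
# A hypothetical codebook of four length-8 codewords (minimum distance 4).
codebook = [(0,) * 8, (1,) * 8, (0, 1) * 4, (1, 0) * 4]

def ml_decode_bsc(received, pf=0.1):
    """Brute-force maximum-likelihood decoding on a binary symmetric
    channel: every codeword is scored against the received word, so the
    cost grows as N * |codebook|, exponential in N once the codebook is."""
    def log_likelihood(cw):
        flips = sum(a != b for a, b in zip(cw, received))
        return flips * math.log(pf) + (N - flips) * math.log(1 - pf)
    return max(codebook, key=log_likelihood)

sent = (0, 1) * 4
received = list(sent)
received[3] ^= 1                                # one bit flipped by the channel
print(ml_decode_bsc(tuple(received)) == sent)   # True
```

With pf < 1/2, maximising the likelihood is the same as minimising the number of flips, so the decoder picks the nearest codeword; the exhaustive search over the codebook is exactly what makes random codes impractical at large N.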
So instead more structured codes with tractable encoders and decoders were studied.
1.3 Algebraic coding theory
Algebraic coding theory mainly deals with noise from the binary symmetric channel (as defined in appendix B.1), which flips bits with probability p_f. The simplest family of error-correcting codes for such a channel are the repetition codes R_N. For the binary case R_N consists of two codewords: the codeword of N 1s and the codeword of N 0s. The code has R = 1/N. To encode, we take one input bit and repeat it N times. On receiving an R_N codeword corrupted by a binary symmetric channel, the maximum-likelihood decoder returns the bit that forms the majority of the received noisy codeword (if p_f < 1/2) [71].
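The repetition code and its majority-vote decoder are simple enough to state in a few lines; this sketch follows the description above directly.

```python
def repetition_encode(bit, N):
    """R_N encoder: repeat the single input bit N times (rate 1/N)."""
    return [bit] * N

def repetition_decode(noisy):
    """Maximum-likelihood decoding on a binary symmetric channel with
    p_f < 1/2: return the bit forming the majority of the received word."""
    return 1 if sum(noisy) > len(noisy) / 2 else 0

codeword = repetition_encode(1, 5)
noisy = [1, 0, 1, 1, 0]           # the channel flipped two of the five bits
print(repetition_decode(noisy))   # 1: the majority vote recovers the bit
```

With N = 5 the decoder survives any two flips, but only at the cost of a rate of 1/5, which is why more structured codes are needed.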
In [50] the idea of looking at error-correcting codes from the point of view of the minimum distance, d_min, was introduced. One can define a distance (the Hamming distance) between two codewords as the number of positions where the