Part 2 of 4
In the November 2005 MRT, we defined terms and introduced basic error-control coding concepts. This month we will cover some of the applications and limitations of codes and the theory behind their operation.
The traditional role for error-control coding was to make a troublesome channel acceptable by lowering the frequency of error events. The error events could be bit errors, message errors or undetected errors. While reducing the occurrence of undetected errors was one of the first uses of error-control coding, today's error-detection codes are so effective that the occurrence of undetected errors is, for all practical purposes, eliminated. Today, the role of error-control coding has expanded to include many new applications, including:

Reduce the cost of communications systems: Transmitter power is expensive, especially on satellite transponders. Coding can reduce the satellite's power needs because messages received at close to the thermal noise level can still be recovered correctly.

Eliminate interference: As the electromagnetic spectrum becomes more crowded with man-made signals, error-control coding will mitigate the effects of unintentional interference.
Despite these new uses of error-control coding, there are limits to what coding can do. On the Gaussian noise channel, for example, Shannon's capacity formula sets a lower limit on the signal-to-noise ratio that we must achieve to maintain reliable communications. For strictly power-limited (unlimited bandwidth) channels, Shannon's lower threshold can be expressed as E_{b}/N_{0} = 0.69, or -1.6 decibels (dB) [1]. In other words, we must maintain an E_{b}/N_{0} of at least -1.6 dB to ensure reliable communications, no matter how powerful an error-control code we use.
Many channels, like the land mobile radio channel, are bandwidth limited. For bandwidth-limited channels with Gaussian noise, Shannon's capacity formula can be written as Equation 1:

E_{b}/N_{0} ≥ (2^{r} - 1)/r    (Equation 1)

where r is the spectral bit rate in bits/s/Hz [2].
For example, consider a bandwidth-limited channel operating with uncoded quadrature phase shift keying (2 bits/symbol and a maximum spectral bit rate of r = 2 bits/s/Hz) and a required bit error rate (BER) of 10^{-5}. Without coding, this communications system requires an E_{b}/N_{0} of 9.6 dB [2]. Shannon's formula says that to maintain reliable communications at an arbitrarily low BER, we must maintain (for r = 2 bits/s/Hz) an E_{b}/N_{0} of at least 1.5, or 1.8 dB.
Therefore, if we need to lower the required E_{b}/N_{0} by more than 7.8 dB, coding can't do it. We must resort to other measures, such as increasing transmitter power. In practice, the situation is worse because no practical code achieves Shannon's lower threshold. Until recently, a more realistic coding gain for this example was 3 dB rather than 7.8 dB. The invention of turbo codes in 1993 allows us to nearly achieve Shannon's coding limit, but at the price of delay [3]. Consequently, turbo codes are good choices for data channels, but they are impractical for two-way voice channels.
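The two Shannon thresholds quoted above are easy to check numerically. The following is a minimal sketch (the function name is my own) of the bandwidth-limited bound E_{b}/N_{0} ≥ (2^{r} - 1)/r and the power-limited limit of ln 2:

```python
import math

def shannon_limit_db(r):
    """Minimum Eb/N0 (in dB) for reliable communication at spectral
    bit rate r bits/s/Hz on a bandwidth-limited Gaussian channel."""
    ebno = (2**r - 1) / r          # Shannon bound: Eb/N0 >= (2^r - 1)/r
    return 10 * math.log10(ebno)

# r = 2 bits/s/Hz: the bound is 1.5, or about 1.8 dB
print(round(shannon_limit_db(2), 1))

# Power-limited (unlimited bandwidth) threshold: ln 2 = 0.69, or about -1.6 dB
print(round(10 * math.log10(math.log(2)), 1))
```

The 9.6 dB uncoded requirement minus the 1.8 dB bound gives the 7.8 dB ceiling on coding gain cited above.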
A full understanding of the structure and performance of errorcontrol codes requires a foundation in modern algebra and probability theory, which is beyond the scope of this column. Instead, I'll appeal to your intuition and common sense. Let's begin by showing how the encoder and decoder work for binary block codes.
The block encoder takes a block of k bits and replaces it with an n-bit codeword (n is bigger than k). For a binary code, there are 2^{k} possible codewords in the codebook. The channel introduces errors, and the received word can be any one of 2^{n} n-bit words, of which only 2^{k} are valid codewords. The job of the decoder is to find the codeword that is closest to the received n-bit word. How a practical decoder does this is more than we can cover here, but our examples will use a brute-force lookup table method. The decoding spheres represented in Figure 1 on page 35 will be used to illustrate the decoding process.
In Figure 1, each valid codeword is represented by a point surrounded by a sphere of radius t, where t is the number of errors that the code can correct [2]. Note that codewords A and B of Figure 1 are separated by a distance d_{min}, called the minimum distance of the code. The minimum distance of a code is defined as the smallest number of places that any two codewords in the codebook differ. Usually, codes with large minimum distance are preferred because they can detect and correct more errors.
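For small codebooks, the minimum distance defined above can be computed directly by comparing every pair of codewords. A sketch (the codebooks shown are illustrative):

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of places in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codebook):
    """Smallest Hamming distance between any two distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codebook, 2))

# Toy (5, 2) codebook: d_min = 3, so t = (3 - 1) // 2 = 1 error is correctable
print(minimum_distance(["00000", "01011", "10101", "11110"]))

# (5, 1) repetition codebook: d_min = 5, so t = 2 errors are correctable
print(minimum_distance(["00000", "11111"]))
```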
Let's first consider a decoder that can only detect errors, not correct them.

Error detection only: The minimum distance of a code is a measure of its error-detection capability. An error-control code can be used to detect all patterns of u errors in any codeword as long as d_{min} ≥ u + 1. The code also may detect many error patterns with more than u errors, but it is guaranteed to detect all patterns of u errors or less. We'll assume that the error-detection decoder comprises a lookup table with all 2^{k} valid codewords stored. When an n-bit word is received by the decoder, it checks the lookup table, and if this word is one of the allowable codewords, it flags the n-bit word as error-free and sends the corresponding information bits to the user.
We'll use Figure 1 to illustrate three cases: no errors, a detectable error pattern, and an undetectable error pattern.

No errors: Assume the encoder sends codeword C, and the channel introduces no errors. Then codeword C also will be received, the decoder will find it in the lookup table and decoding will be successful.

Detectable error pattern: This time we send codeword C, and the channel introduces errors such that the n-bit word Y is received. Because Y is not a valid codeword, the decoder will not find it in the table and will therefore flag the received n-bit word as an errored codeword. The decoder does not necessarily know the number or location of the errors, but that's OK because we only asked the decoder to detect errors. Because the decoder properly detected an errored codeword, decoding is successful.

Undetectable error pattern: We send codeword C for the third time, and this time the channel introduces the unlikely (but certainly possible) error pattern that converts codeword C into codeword D. The decoder can't know that codeword C was sent and must assume that codeword D was sent instead. Because codeword D is a valid codeword, the decoder declares the received n-bit word error-free and passes the corresponding information bits on to the user. This is an example of decoder failure.
Naturally, we want the decoder to fail rarely, so we choose codes that have a small probability of undetected error. One of the most popular error-detection codes is the shortened Hamming code, also known as the cyclic redundancy check (CRC).
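The table-lookup detection described above amounts to a simple set-membership test. A sketch of the three cases, using a two-codeword repetition codebook (bit patterns are illustrative):

```python
# Toy codebook for a (5, 1) repetition code
codebook = {"00000", "11111"}

def detect(received):
    """Flag the received n-bit word as error-free only if it is a valid
    codeword; any word outside the codebook is a detected error."""
    return "error-free" if received in codebook else "error detected"

print(detect("00000"))  # no errors: accepted, decoding successful
print(detect("00100"))  # detectable error pattern: flagged
print(detect("11111"))  # C converted into D: undetected error (decoder failure)
```

Note the third case: the word is accepted as error-free even though it is not the codeword that was sent, which is exactly the decoder-failure event described above.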
Comparing the spheres surrounding codewords A and B in Figure 1, we see that the error-correcting capability of a code is given by d_{min} = 2t + 1 (this is the minimum separation that prevents overlapping spheres). In other words, a code with d_{min} = 3 can correct all patterns of 1 error, one with d_{min} = 5 can correct all patterns of 2 errors, and so on. A code can be used to correct t errors and detect v additional errors as long as d_{min} ≥ 2t + v + 1. Now refer to Figure 1 and consider the following error-decoding cases: correct decoding, decoding failure and error detection without correction.

Correct decoding: Assume that codeword C is sent, and the n-bit word Y is received. Because Y is inside C's sphere, the decoder will correct all errors.

Decoding failure: This time we send codeword C, and the channel gives us n-bit word Z. The decoder has no way of knowing that codeword C was sent and must decode to D since Z is in D's sphere. This is an example of error-correction decoder failure.

Error detection without correction: This case shows one way that an error-correction code can be used to also detect errors. We send codeword C and receive n-bit word X. Since X is not inside any sphere, we won't try to correct it. We do, however, recognize that it is an errored codeword and report this information to the user.
In the last example, we could try to correct n-bit word X to the nearest valid codeword, even though X was not inside any codeword's sphere. A decoder that attempts to correct all received n-bit words regardless of whether they are in a decoding sphere is called a complete decoder. On the other hand, a decoder that attempts to correct only n-bit words that lie inside a decoding sphere is called an incomplete, or bounded-distance decoder. Bounded-distance decoders are much more common than complete decoders.
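A bounded-distance decoder follows directly from the sphere picture: correct only when the received word falls inside some codeword's radius-t sphere, and otherwise report a detected but uncorrected error. A sketch (codebook and t are illustrative):

```python
def hamming_distance(a, b):
    """Number of places in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def bounded_distance_decode(received, codebook, t):
    """Return the codeword whose radius-t sphere contains the received
    word, or None if the word lies outside every sphere (detected error,
    no correction attempted)."""
    for cw in codebook:
        if hamming_distance(received, cw) <= t:
            return cw
    return None

codebook = ["00000", "11111"]
print(bounded_distance_decode("00001", codebook, t=1))  # inside a sphere: corrected
print(bounded_distance_decode("00011", codebook, t=1))  # outside all spheres: None
```

A complete decoder would instead always pick the nearest codeword, breaking ties arbitrarily, even for words like the second example that sit outside every sphere.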
Now let's apply what we've learned to a simple error-correction code, known as the repetition code. Consider a (5, 1) repetition code that repeats each bit four times. Figure 2 depicts such an encoder.
The decoder takes 5 bits at a time and counts the number of 1s. If there are three or more, the decoder selects 1 for the decoded bit. Otherwise, the decoder selects 0. The minimum distance of this code is 5, so it can correct all patterns of two errors. To compute the error performance of this code, consider a random error channel with a probability of bit error of p. After decoding, the probability of bit error is simply the probability of three or more bit errors in a 5-bit codeword. This probability is computed for several values of p with results listed in Table 1.
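The entries in Table 1 follow from the binomial probability of three or more errors in five bits. A sketch of that computation (the function name is my own):

```python
from math import comb

def decoded_ber(p, n=5, t=2):
    """Post-decoding bit error rate of an (n, 1) repetition code on a
    random-error channel: the probability of more than t bit errors
    occurring in an n-bit codeword."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(t + 1, n + 1))

# Reproduces the input/output BER pairs of Table 1
for p in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"input BER {p:.0e} -> output BER {decoded_ber(p):.1e}")
```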
Next month: Errorcontrol coding techniques.
Jay Jacobsmeyer is president of Pericle Communications Co., a consulting engineering firm located in Colorado Springs, Colo. He holds bachelor's and master's degrees in electrical engineering from Virginia Tech and Cornell University, respectively, and has more than 20 years' experience as a radio frequency engineer.
References:

[1] C. E. Shannon, “A Mathematical Theory of Communication,” Bell System Technical Journal, vol. 27, pp. 379-423, 1948.

[2] R. E. Blahut, Theory and Practice of Error Control Codes, Reading, Massachusetts: Addison-Wesley Publishing Co., 1983.

[3] C. Berrou, A. Glavieux and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes,” IEEE Transactions on Communications, pp. 1261-1271, October 1996.
Table 1

Input BER    Output BER
10^{-2}      9.9 × 10^{-6}
10^{-3}      1.0 × 10^{-8}
10^{-4}      1.0 × 10^{-11}
10^{-5}      1.0 × 10^{-14}