The idea behind Hamming codes is to intersperse, or append, extra binary digits to a binary code so that errors in transmission of the code over a channel may be detected and corrected. For example, suppose we transmit the code 01101001, and it is received as 01001001. In this transmission, the third most significant bit is received erroneously. Let's define the following “modulo-2 addition” of binary numbers:
0⊕0=0,  0⊕1=1,  1⊕0=1,  1⊕1=0.    (1)
Multiplication in modulo-2 arithmetic is simply 0·0=0·1=1·0=0 and 1·1=1. Then we can say that the error sequence 00100000 is “added” to the transmission 01101001 to produce the erroneous reception:
01101001 transmitted
⊕ 00100000 error
01001001 received. (2)
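Modulo-2 addition of bit sequences is just the bitwise exclusive-or (XOR) found in every programming language. As a minimal illustration (not part of the text's exercises, which use MATLAB), the reception in (2) can be simulated in a few lines of Python; the function name mod2_add is our own:

    def mod2_add(a, b):
        # Add two equal-length binary strings modulo 2, i.e., XOR them bit by bit.
        return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

    print(mod2_add("01101001", "00100000"))   # prints 01001001, the received word in (2)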
Hamming error correcting codes will permit us to receive the erroneous transmission and to detect and correct the error. This is obviously of great value in transmitting and storing information. (Imagine how upset you would be to have the binary code for your checking account confused with that of Mrs. Joan Kroc.)
Choosing the Number of Check Bits. Let's suppose we have N bits of information that we wish to transmit and that we wish to intersperse “check bits” that will enable us to detect and correct any single bit error in the transmission. If we use N information bits and n check bits, then we will transmit a code word containing N+n bits. The n check bits can code 2^n events, and we want these events to indicate whether or not any errors occurred and, if so, where they occurred. Therefore we require

2^n ≥ (N+n) + 1,

where (N+n) is the number of single error events that can occur and +1 is the number of no-error events. For example, when N=4, we require n=3 so that 2^3 ≥ (4+3)+1.
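If you like, the requirement 2^n ≥ (N+n)+1 can be checked by trial in a few lines. Here is a minimal Python sketch (check_bits_needed is our own name for the routine, not anything standard):

    def check_bits_needed(N):
        # Smallest n with 2**n >= N + n + 1 (enough syndromes for every
        # single-error position plus the no-error event).
        n = 1
        while 2 ** n < N + n + 1:
            n += 1
        return n

    print(check_bits_needed(4))   # prints 3, as in the example above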
How many check bits do you require to code seven bits of information for single error correction?
Code Construction. Let's suppose we have constructed an (N,n) Hamming code consisting of N information bits and n check bits (or parity bits). We denote the information bits by x1,x2,...,xN and the check bits by c1,c2,...,cn. These bits may be interspersed. When N=4 and n=3, then a typical array of bits within a code word would be one of the following:
x^T = (c1 c2 x1 c3 x2 x3 x4)    or    x^T = (x1 x2 x3 x4 c1 c2 c3).    (3)
The first ordering is “natural” (as we will see), and the second is “systematic” (a term that is used to describe any code whose head is information and whose tail is check). If a single error occurs in an (N,n) code, then the received code word will be the modulo-2 sum of the code word and an error word that contains a 1 in its ith position; for example, with an error in the third position,

(c1 c2 x1 c3 x2 x3 x4)^T ⊕ (0 0 1 0 0 0 0)^T.    (4)
We would like to operate on this received code word in such a way that the location of the error bit can be determined. If there were no code word, then an obvious solution would be to premultiply the error word by the parity check matrix
A^T = [ 1 0 1 0 1 0 1 ]
      [ 0 1 1 0 0 1 1 ]
      [ 0 0 0 1 1 1 1 ]    (5)
      (columns numbered (1) through (7), left to right)
The ith column of A^T is just the binary code for i. When A^T premultiplies an error word, the error bit picks out the column that codes the error position:
Figure 1: A^T premultiplying an error word; the product is the binary code for the error position.
If the error word contains no error bits, then the product is 0, indicating no errors.
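To make the idea concrete, here is a short Python/numpy sketch (an illustration of the discussion, not a prescribed implementation) that premultiplies an error word by A^T and reads the syndrome as a binary number, least significant bit first:

    import numpy as np

    # Parity check matrix: column i is the binary code for i (row 1 = 1s bit).
    AT = np.array([[1, 0, 1, 0, 1, 0, 1],
                   [0, 1, 1, 0, 0, 1, 1],
                   [0, 0, 0, 1, 1, 1, 1]])

    e = np.zeros(7, dtype=int)
    e[2] = 1                               # a single error in position 3 (1-based)
    s = AT @ e % 2                         # the syndrome
    print(s, s[0] + 2 * s[1] + 4 * s[2])   # prints [1 1 0] 3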
This seems like a good idea, but what about the effect of the code word? In Exercise 1, you are asked to show that the effect of the parity check matrix A^T applied to the modulo-2 sum of a code word x and an error word e is

A^T(x ⊕ e) = A^T x ⊕ A^T e.    (6)
In this equation all sums and products obey the rules of modulo-2 arithmetic.
EXERCISE 1
Let y = x ⊕ e denote the modulo-2 sum of a code word x and an error word e; A^T is a parity check matrix. Show that

A^T y = A^T x ⊕ A^T e.    (7)
We have designed the parity check matrix A^T so that the syndrome A^T e produces a binary code for the error location. (The location of the error is the “syndrome” for the error word.) The product A^T x will interfere with this syndrome unless A^T x = 0. Therefore we will require that the code word x satisfy the constraint

A^T x = 0.    (8)
This constraint actually defines the Hamming code. Let's illustrate this point by applying the constraint to a code word in its “natural format” x^T = (c1 c2 x1 c3 x2 x3 x4).
Natural Codes. When the information bits and the check bits are coded in their natural order (c1 c2 x1 c3 x2 x3 x4), then we may determine the check bits by writing out A^T x = 0 as follows:
[ 1 0 1 0 1 0 1 ]
[ 0 1 1 0 0 1 1 ] (c1 c2 x1 c3 x2 x3 x4)^T = (0 0 0)^T.    (9)
[ 0 0 0 1 1 1 1 ]
We use the rules of modulo-2 arithmetic to write these constraints as
c1⊕x1⊕x2⊕x4=0
c2⊕x1⊕x3⊕x4=0
c3⊕x2⊕x3⊕x4=0.
(10)
Therefore the check bits c1,c2, and c3 are simply the following modulo-2 sums
c1=x1⊕x2⊕x4
c2=x1⊕x3⊕x4
c3=x2⊕x3⊕x4.
(11)
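Because modulo-2 sums are XORs, Equation (11) translates directly into code. The following Python sketch (our own illustration; the names are arbitrary) computes the check bits from the information bits:

    def check_bits(x1, x2, x3, x4):
        # Equation (11): each check bit is a modulo-2 sum (XOR) of information bits.
        c1 = x1 ^ x2 ^ x4
        c2 = x1 ^ x3 ^ x4
        c3 = x2 ^ x3 ^ x4
        return c1, c2, c3

    print(check_bits(0, 0, 0, 1))   # prints (1, 1, 1), so 0001 codes to 1101001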
This finding may be organized into the matrix equation
(c1 c2 x1 c3 x2 x3 x4)^T = [ 1 1 0 1 ]
                           [ 1 0 1 1 ]
                           [ 1 0 0 0 ]
                           [ 0 1 1 1 ] (x1 x2 x3 x4)^T.    (12)
                           [ 0 1 0 0 ]
                           [ 0 0 1 0 ]
                           [ 0 0 0 1 ]
This equation shows how the code word x is built from the information bits (x1,x2,x3,x4). We call the matrix that defines the construction a coder matrix and write it as H:
x = HΘ,  where x^T = (c1 c2 x1 c3 x2 x3 x4) and Θ^T = (x1 x2 x3 x4),    (13)
H = [ 1 1 0 1 ]
    [ 1 0 1 1 ]
    [ 1 0 0 0 ]
    [ 0 1 1 1 ]    (14)
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 1 ]
This summarizes the construction of a Hamming code x.
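Equivalently, the whole code word can be built with a single matrix multiplication reduced modulo 2. Here is a brief Python/numpy sketch of x = HΘ, offered only as an illustration of Equations (13) and (14):

    import numpy as np

    # Coder matrix of Equation (14); all arithmetic is modulo 2.
    H = np.array([[1, 1, 0, 1],
                  [1, 0, 1, 1],
                  [1, 0, 0, 0],
                  [0, 1, 1, 1],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

    theta = np.array([0, 0, 0, 1])   # information bits (x1, x2, x3, x4)
    x = H @ theta % 2                # code word (c1, c2, x1, c3, x2, x3, x4)
    print(x)                         # prints [1 1 0 1 0 0 1]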
EXERCISE 2
Check to see that the product of the parity check matrix A^T and the coder matrix H is A^T H = 0. Interpret this result.
EXERCISE 3
Fill in the following table to show what the Hamming (4,3) code is:
x1 x2 x3 x4 | c1 c2 x1 c3 x2 x3 x4
 0  0  0  0 |  0  0  0  0  0  0  0
 0  0  0  1 |  1  1  0  1  0  0  1
 0  0  1  0 |
 0  0  1  1 |
 0  1  0  0 |
 0  1  0  1 |
 0  1  1  0 |
 0  1  1  1 |
 1  0  0  0 |
 1  0  0  1 |
 1  0  1  0 |
 1  0  1  1 |
 1  1  0  0 |
 1  1  0  1 |
 1  1  1  0 |
 1  1  1  1 |                            (15)
EXERCISE 4
Design a Hamming (11,n) code for coding eleven information bits against single errors. Show your equations for c1, c2, ..., cn and write out the coder matrix H for x = HΘ.
Decoding. To decode a Hamming code, we form the syndrome A^T y for the received (and possibly erroneous) code word y = x ⊕ e. Because A^T x = 0, the syndrome is

s = A^T e.    (16)
Convert this binary number into its corresponding integer location and complement the bit of y at that location. If the location is zero, do nothing. Now read off the information bits. This is the decoding algorithm.
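A Python/numpy sketch of this decoding algorithm for the natural-format (4,3) code is given below; it is one possible rendering of the steps above, not the only one (Exercise 10 asks for a MATLAB version):

    import numpy as np

    AT = np.array([[1, 0, 1, 0, 1, 0, 1],
                   [0, 1, 1, 0, 0, 1, 1],
                   [0, 0, 0, 1, 1, 1, 1]])

    def decode(y):
        # Decode a natural-format (4,3) Hamming code word with at most one bit error.
        y = np.array(y) % 2
        s = AT @ y % 2                      # syndrome
        position = s[0] + 2 * s[1] + 4 * s[2]
        if position != 0:                   # zero syndrome means no error detected
            y[position - 1] ^= 1            # complement the erroneous bit
        return y[[2, 4, 5, 6]]              # information bits x1, x2, x3, x4

    print(decode([1, 1, 0, 1, 0, 1, 1]))    # 1101001 with an error in bit 6: prints [0 0 0 1]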
EXERCISE 5
Use the table of Hamming (4,3) codes from Exercise 3 to construct a table of received codes that contain either no bit errors or exactly one bit error. Apply the decoding algorithm to construct (x1,x2,x3,x4) and show that all received code words with one or fewer errors are correctly decoded.
Digital Hardware. The tables you have constructed in Exercises 3 and 5 for coding and decoding Hamming (4,3) codes may be stored in digital logic chips. Their functionality is illustrated in Figure 2. The coder chip accepts (x1 x2 x3 x4) as its address and generates a coded word. The decoder chip accepts (c1 c2 x1 c3 x2 x3 x4) as its address and generates a decoded word. In your courses on digital logic you will study circuits for implementing coders and decoders.
Figure 2: Digital Logic for Hamming Code
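As a software analogy for the coder chip (a sketch only; a real chip would use combinational logic or read-only memory), the sixteen code words can be precomputed once and stored in a table indexed by the four information bits:

    # Precompute the sixteen code words; coding is then a single table lookup
    # with the information bits (x1 x2 x3 x4) as the address.
    coder_table = {}
    for address in range(16):
        x1, x2, x3, x4 = (address >> 3) & 1, (address >> 2) & 1, (address >> 1) & 1, address & 1
        c1, c2, c3 = x1 ^ x2 ^ x4, x1 ^ x3 ^ x4, x2 ^ x3 ^ x4
        coder_table[address] = (c1, c2, x1, c3, x2, x3, x4)

    print(coder_table[0b0001])   # prints (1, 1, 0, 1, 0, 0, 1)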
EXERCISE 6
Discuss the possibility of detecting a received (4,3) code word that is neither a valid code word nor a code word with a single error. How would you use such a detector?
EXERCISE 7
What fraction of received seven-bit words can be correctly decoded as Hamming (4,3) codes?
Systematic Codes. Systematic Hamming codes are codes whose information bits lead and whose check bits trail. The format for a (4,3) code is then (x1x2x3x4c1c2c3). The construction of a (4,3) code word from the information bits may be written as
(x1 x2 x3 x4 c1 c2 c3)^T = [  1   0   0   0  ]
                           [  0   1   0   0  ]
                           [  0   0   1   0  ]
                           [  0   0   0   1  ] (x1 x2 x3 x4)^T.    (17)
                           [ c11 c12 c13 c14 ]
                           [ c21 c22 c23 c24 ]
                           [ c31 c32 c33 c34 ]
The coder matrix takes the form
H = [  1   0   0   0  ]
    [  0   1   0   0  ]
    [  0   0   1   0  ]   [ I ]
    [  0   0   0   1  ] = [---]    (18)
    [ c11 c12 c13 c14 ]   [ C ]
    [ c21 c22 c23 c24 ]
    [ c31 c32 c33 c34 ]
The problem is to find the matrix C that defines the construction of check bits. The constraint A^T x = 0 produces the constraint A^T H = 0, so that A^T HΘ = 0. The constraints A^T H = 0 may be written out as
[ 1 0 1 0 1 0 1 ] [  1   0   0   0  ]   [ 0 0 0 0 ]
[ 0 1 1 0 0 1 1 ] [  0   1   0   0  ]   [ 0 0 0 0 ]
[ 0 0 0 1 1 1 1 ] [  0   0   1   0  ] = [ 0 0 0 0 ].    (19)
                  [  0   0   0   1  ]
                  [ c11 c12 c13 c14 ]
                  [ c21 c22 c23 c24 ]
                  [ c31 c32 c33 c34 ]
These constraints produce all the equations we need (twelve equations in twelve unknowns) to determine the cij.
EXERCISE 8
Solve Equation 19 for the cij. Show that the coder matrix for a systematic Hamming (4,3) code is
H = [ 1 0 0 0 ]
    [ 0 1 0 0 ]
    [ 0 0 1 0 ]
    [ 0 0 0 1 ]    (20)
    [ 0 1 1 1 ]
    [ 1 0 1 1 ]
    [ 1 1 0 1 ]
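If you solve Exercise 8 numerically, a quick way to confirm the answer is to check the constraint A^T H = 0 directly. A Python/numpy sketch of that check (all arithmetic modulo 2):

    import numpy as np

    AT = np.array([[1, 0, 1, 0, 1, 0, 1],
                   [0, 1, 1, 0, 0, 1, 1],
                   [0, 0, 0, 1, 1, 1, 1]])

    H_systematic = np.array([[1, 0, 0, 0],
                             [0, 1, 0, 0],
                             [0, 0, 1, 0],
                             [0, 0, 0, 1],
                             [0, 1, 1, 1],
                             [1, 0, 1, 1],
                             [1, 1, 0, 1]])

    print(AT @ H_systematic % 2)   # prints a 3x4 matrix of zeros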
EXERCISE 9
Show that the coder matrix of Exercise 8 is a permutation of the coder matrix in Equation 14. (That is, the rows are reordered.)
EXERCISE 10
(MATLAB) Write a MATLAB program that builds Hamming (4,3) codes from information bits (x1x2x3x4) and decodes Hamming (4,3) codes (c1c2x1c3x2x3x4) to obtain information bits (x1x2x3x4). Synthesize all seven-bit binary codes and show that your decoder correctly decodes correct codes and one-bit error codes.