Cyclic Redundancy Checks for a Better Checksum
In 1961, William Wesley Peterson published a proposal for the use of cyclic codes for error detection. We now know these codes commonly as the CRC checksum. In the paper, a cyclic code is represented using polynomials with binary coefficients, where each coefficient corresponds to a bit. For example, the bits 1101 would be represented as (1)x^3 + (1)x^2 + (0)x^1 + (1)x^0, or more simply x^3 + x^2 + 1. With this model, any sequence of information bits can be represented as a polynomial. Using this notation and long division by irreducible polynomials, Peterson demonstrates the impressive error-detecting properties of cyclic codes.

These codes are of particular advantage because of their ability to be implemented in hardware. As Peterson put it, "Encoding and error-correcting procedures for these codes are relatively easily implemented using shift registers with feedback connections" [1]. With a shift register and an XOR gate, the implementation of cyclic codes is elementary. Since the long division is done in the finite field of order two, {0, 1}, the field operations are analogous to XOR operations. For example, (x^2 + x) + (x^3 + x^2 + x + 1) = x^3 + 1, because the addition happens mod 2. Notice that this matches XOR, where 1 XOR 1 is equivalent to 1 + 1 mod 2 = 0. Leveraging these properties, cyclic codes can be implemented seamlessly in hardware, where they can be optimized and computed at speeds software cannot match. Peterson later went on to win the Claude E. Shannon Award and the Japan Prize, both prestigious recognitions of his contributions to mathematics and computer science.
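To make the mechanics concrete, here is a minimal Python sketch of that long division, emulating the shift register with XOR feedback. The generator x^3 + x + 1 is chosen only for illustration; it is not one of the deployed CRC standards.

```python
def crc_remainder(message_bits, generator_bits):
    """Divide message_bits by generator_bits over GF(2), emulating a
    shift register with XOR feedback; the remainder is the check value."""
    k = len(generator_bits) - 1          # degree of the generator polynomial
    bits = list(message_bits) + [0] * k  # append k zero bits for the check
    for i in range(len(message_bits)):
        if bits[i]:                      # feedback tap: XOR in the generator
            for j, g in enumerate(generator_bits):
                bits[i + j] ^= g
    return bits[-k:]                     # the k-bit CRC remainder

# 1101 -> x^3 + x^2 + 1, divided by the generator x^3 + x + 1 (1011)
print(crc_remainder([1, 1, 0, 1], [1, 0, 1, 1]))  # -> [0, 0, 1]
```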
Limitations
Since then, limitations of the CRC related to the length of the generator have become apparent. If a received message M(x) contains an error E(x), the only undetectable errors are those where E(x) is a multiple of the generator polynomial G(x). This gives a rough measure of exposure: for a message of length n bits and a generator of degree k, every multiple of G(x) that fits within the n bits is an undetectable error pattern, so the longer the message relative to the generator, the more ways an error can slip through. Regardless of these limitations, CRCs are still the most widely used error-detection mechanism in our digital landscape.
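To make the undetectable-error condition concrete, here is a small self-contained Python sketch. Polynomials are stored as Python integers, and the tiny 3-bit generator is again illustrative only, chosen so the arithmetic can be followed by hand:

```python
def gf2_mod(dividend, divisor):
    """Remainder of polynomial division over GF(2); polynomials as ints."""
    dl = divisor.bit_length()
    while dividend.bit_length() >= dl:
        dividend ^= divisor << (dividend.bit_length() - dl)
    return dividend

G = 0b1011              # generator G(x) = x^3 + x + 1 (illustrative)
M = 0b1101 << 3         # message 1101, shifted to make room for 3 check bits
codeword = M | gf2_mod(M, G)        # append the remainder: divisible by G
assert gf2_mod(codeword, G) == 0    # a clean transmission checks out

E_caught = 0b0000100    # a single-bit error: remainder becomes nonzero
E_missed = G << 2       # an error that is a multiple of G(x)

print(gf2_mod(codeword ^ E_caught, G))  # nonzero -> detected
print(gf2_mod(codeword ^ E_missed, G))  # zero -> slips through undetected
```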
While these limitations persist in digital communication systems, they are stressed by the world's need to move large amounts of data over the internet. The maximum packet size in the TCP/IP protocol is 65,535 bytes, or 524,280 bits, while the largest widely deployed generator polynomials span only 32 bits. The more multiples of the generator that fit within a message, the more precarious the situation becomes for uncaught errors.
But for small data on Hedera...
The brilliant checksum proposed by Dr. Leemon concerns a small amount of information, not spanning more than several bytes. Employing a cyclic redundancy check with a generator polynomial of sufficient length (say 32 bits) would catch nearly all possible errors over transmission.
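For a sense of what that looks like in practice, Python's standard library ships a 32-bit CRC. The payload below is an illustrative account-ID-like string, not Hedera's actual address encoding:

```python
import zlib

payload = b"0.0.123456"      # a few bytes, e.g. an identifier (illustrative)
check = zlib.crc32(payload)  # standard CRC-32: degree-32 generator
print(f"{check:#010x}")

# A receiver recomputes the CRC and compares; any mismatch flags corruption.
corrupted = b"0.0.123457"
assert zlib.crc32(corrupted) != check  # single-bit errors are always caught
```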
I want to open a discussion on the matter, particularly on comparing the error-detection properties of the two methods. At the very least, it is a fascinating and intellectually stimulating thought exercise.
Works Cited
[1] John E. McNamara. Error Detection. Technical Aspects of Data Communication, pages 125–141, 1988.