
INFOLINK UNIVERSITY COLLEGE

HAWASSA

DEPARTMENT OF COMPUTER SCIENCE

COMPUTER NETWORKING ASSIGNMENT

BY:

1. SOLOMON ABERRA (IHRCS-24516-13)

SUBMITTED TO: Mr. Haile Wolde


Hamming Distance
Hamming distance is a measure of the difference between two strings of equal length. It is
defined as the number of positions at which the corresponding symbols differ. For example,
the Hamming distance between the strings "101010" and "100110" is 2 because there are two
positions where the bits differ.
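The definition above can be sketched as a small Python helper (a minimal illustration, not part of the original assignment):

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance is defined only for equal-length strings")
    return sum(x != y for x, y in zip(a, b))

# The example from the text: "101010" vs "100110" differ in two positions.
print(hamming_distance("101010", "100110"))  # 2
```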

Checksum
A checksum is a value calculated from a data set, which is used to detect errors in data. When
data is sent or stored, a checksum is calculated and sent along with it. When the data is
received or retrieved, a new checksum is calculated and compared with the original
checksum. If they match, the data is considered to be intact; if not, an error is detected.

Example:

 Original data: "Hello, World!"
 Calculate the checksum (e.g., a simple sum of ASCII values): 1129
 Transmit the data together with the checksum.
 Upon receipt, calculate the checksum of the received data.
 Compare the new checksum with the original; a mismatch indicates an error.
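The sum-of-ASCII scheme described above can be sketched in Python (a toy checksum for illustration; real protocols use stronger checks such as CRCs):

```python
def checksum(data: str) -> int:
    """Toy checksum: sum of the character code points."""
    return sum(ord(c) for c in data)

sent = "Hello, World!"
cs = checksum(sent)          # 1129 for this string

received = "Hello, World!"   # pretend this arrived over the network
if checksum(received) == cs:
    print("data intact")
else:
    print("error detected")
```

Note that a simple sum cannot detect every corruption (e.g., two errors that cancel out), which is why it is only an introductory example.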

Hamming Distance for Single Bit Error Detection


Hamming distance is particularly useful for error detection and correction in data
transmission. For single-bit error detection, a minimum Hamming distance of 2 between valid
codewords is required. This means that any single-bit error will change the codeword to one
that is not valid, thus allowing the error to be detected.

Example:
If the valid codewords are "0000" and "1111", the Hamming distance is 4, which is more than
enough to detect a single-bit error.
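The detection rule above can be checked programmatically: a code with minimum Hamming distance d detects up to d − 1 single-bit errors and corrects up to ⌊(d − 1)/2⌋. A small sketch (illustrative helper names, not from the original text):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def min_distance(codewords):
    """Minimum pairwise Hamming distance of a block code."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

d = min_distance(["0000", "1111"])
print(d)                  # 4
print(d - 1)              # detects up to 3 bit errors
print((d - 1) // 2)       # corrects up to 1 bit error
```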

Convolutional Codes for Error Detection


Convolutional codes are a type of error-correcting code used in data communication systems.
They work by passing the input data stream through a sequence of shift registers and
XOR-combining the register contents in a fixed pattern to generate the encoded output.
Because each output bit depends on several input bits, the information is spread across
multiple transmitted bits, allowing errors to be detected and corrected.
Key Features:

 Encoding: The encoder processes the input bit stream using shift registers and
generates the encoded bit stream.
 Trellis Diagram: Used to represent the state transitions and to decode the received
data.
 Viterbi Algorithm: A common decoding algorithm that finds the most likely sequence
of states (and thus the most likely original data) given the received encoded data.

Example:
Input data: "1101"
Encoding with a rate 1/2 convolutional code might produce: "11 10 01 11"
The encoded data is transmitted, and upon receipt, the decoder uses the trellis diagram and
Viterbi algorithm to decode the data and detect/correct errors.
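A rate-1/2 encoder of the kind described above can be sketched as follows. The generator polynomials here are the common (7, 5) octal pair with constraint length 3; this is an assumption, since the text does not specify the generators, so the output bits differ from the illustrative stream "11 10 01 11" shown in the example.

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators g1 = 111 (7 octal) and g2 = 101 (5 octal)."""
    s1 = s2 = 0          # shift register: the two previous input bits
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)  # output of g1 = 1 + D + D^2
        out.append(b ^ s2)       # output of g2 = 1 + D^2
        s1, s2 = b, s1           # shift the register
    return out

# Encode the example input "1101":
encoded = conv_encode([1, 1, 0, 1])
print(encoded)  # [1, 1, 0, 1, 0, 1, 0, 0] -> "11 01 01 00" for this generator pair
```

Decoding with the Viterbi algorithm then searches the trellis of encoder states for the input sequence whose encoding is closest (in Hamming distance) to the received bits.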

References

Hamming, R. W. (1950). Error Detecting and Error Correcting Codes. Bell System Technical
Journal, 29(2), 147-160.

Stallings, W. (2014). Data and Computer Communications (10th ed.). Pearson.

Proakis, J. G., & Salehi, M. (2007). Digital Communications (5th ed.). McGraw-Hill.
