
SYMBOL RATE

In digital communications,

Symbol rate (also known as baud or modulation rate) is the number of symbol changes (waveform
changes or signalling events) made to the transmission medium per second using a
digitally modulated signal or a line code.
The symbol rate is measured in baud (Bd) or symbols per second.
In the case of a line code,
a) the symbol rate is the pulse rate in pulses/second.
b) Each symbol can represent or convey one or several bits of data.
c) The symbol rate is related to, but should not be confused with, the gross bitrate expressed in
bit/second.
Contents

1 Symbols

o 1.1 Relationship to gross bitrate

o 1.2 Modems for passband transmission

o 1.3 Line codes for baseband transmission

o 1.4 Digital television and OFDM example

o 1.5 Relationship to chip rate

o 1.6 Relationship to bit error rate

2 Modulation
o 2.1 Binary Modulation

o 2.2 N-ary Modulation, N greater than 2

o 2.3 Data Rate versus Error Rate

3 Significant condition

4 See also

5 References

6 External links

Symbols

A symbol can be described as either a

pulse (in digital baseband transmission) or a

"tone" (in passband transmission using modems)

representing an integer number of bits.


A theoretical definition of a symbol is

1. a waveform,

2. a state or

3. a significant condition

of the communication channel that persists for a fixed period of time.

A sending device places symbols on the channel at a

fixed and known symbol rate, and

the receiving device has the

job of detecting the sequence of symbols

in order to reconstruct the transmitted data.

There may be

a direct correspondence between a symbol and a small unit of data (for example, each symbol

may encode

one or several binary digits or 'bits') or

the data may be represented

by transitions between symbols or

even by a sequence of many symbols.

The symbol duration time, also known as the unit interval, can be directly measured as the time between
transitions by looking at an eye diagram on an oscilloscope. The symbol duration time Ts can be calculated
as:

Ts = 1/fs

where fs is the symbol rate.

A simple example:

General case: A baud rate of 1 kBd = 1,000 Bd is synonymous with a symbol rate of 1,000

symbols per second.

1. In case of a modem, this corresponds to 1,000 tones per second, and

2. in case of a line code, this corresponds to 1,000 pulses per second.


The symbol duration time is 1/1,000 second = 1 millisecond.
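The reciprocal relation between symbol rate and symbol duration can be sketched in a few lines of Python (the function name here is ours, for illustration only):

```python
# Symbol duration Ts is the reciprocal of the symbol rate fs: Ts = 1/fs.
def symbol_duration(symbol_rate_bd: float) -> float:
    """Return the symbol duration in seconds for a symbol rate in baud."""
    return 1.0 / symbol_rate_bd

# 1 kBd = 1,000 symbols per second -> 1/1,000 s = 1 millisecond per symbol.
print(symbol_duration(1000))  # 0.001
```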

Relationship to gross bitrate

The term baud rate has sometimes incorrectly been used to mean bit rate, since these rates are

the same in old modems as well as in the simplest digital communication links using only one bit

per symbol, such that binary "0" is represented by one symbol, and binary "1" by another symbol.

In more advanced modems and data transmission techniques, a symbol may have more than two

states, so it may represent more than one binary digit (a binary digit always represents exactly two

states). For this reason, the baud rate value will often be lower than the gross bit rate.

Example of use and misuse of "baud rate": It is correct to write "the baud rate of my COM port is

9,600" if we mean that the bit rate is 9,600 bit/s, since there is one bit per symbol in this case.

It is not correct to write "the baud rate of Ethernet is 100 M baud" or "the baud rate of my modem

is 56,000" if we mean bit rate, since there is not one bit per symbol in this case.

See below for more details on these techniques.

If N bits (binary digits) are conveyed per symbol, and the gross bit rate is R, inclusive of

channel coding overhead, the symbol rate can be calculated as:

fs = R/N

Symbol rate = fs
Gross bit rate = R
No. of bits per symbol = N

In that case M = 2^N different symbols are used.

In a modem, these may be sine wave tones with unique combinations of amplitude, phase

and/or frequency. For example, in a 64QAM modem, M=64.

In a line code, these may be M different voltage levels.

By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of

distinct messages M that could be sent,

Hartley[1] constructed a measure of the gross bit rate R as:

R = fs log2(M) = fs N

where fs is the baud rate in symbols/second or pulses/second. (See Hartley's law.)
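Hartley's relation between symbol rate, symbol count and gross bit rate can be checked numerically; a minimal sketch (the function name is ours):

```python
import math

# Hartley's measure: gross bit rate R = fs * log2(M),
# where fs is the symbol rate and M the number of distinct symbols.
def gross_bit_rate(symbol_rate: float, num_symbols: int) -> float:
    return symbol_rate * math.log2(num_symbols)

# A 64QAM modem (M = 64, i.e. N = 6 bits per symbol) at 1,000 Bd:
print(gross_bit_rate(1000, 64))  # 6000.0 bit/s
```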

Modems for passband transmission

Modulation is used in passband filtered channels such as telephone lines, radio

channels and other frequency division multiplex (FDM) channels.

In a digital modulation method provided by a modem, each symbol is typically a sine

wave tone with a certain frequency, amplitude and phase. The baud rate is the

number of transmitted tones per second.

One symbol can carry one or several bits of information. In voiceband modems for the

telephone network, it is common for one symbol to carry up to 7 bits.

Conveying more than one bit per symbol (i.e. pulse or tone) has advantages.

Why ?

It reduces the time required to send a given quantity of data over a limited bandwidth.

A high spectral efficiency in (bit/s)/Hz can be achieved, i.e. a high bit rate in bit/s

although the bandwidth in hertz may be low.

The maximum baud rate for a passband for common modulation methods such

as QAM, PSK and OFDM is approximately equal to the passband bandwidth.

Voiceband modem examples:

A V.22bis modem transmits 2400 bit/s using 1200 Bd (1200 symbol/s), where

each quadrature amplitude modulation symbol carries two bits of information. The

modem can generate M = 2^2 = 4 different symbols. It requires a bandwidth of 1200 Hz

(equal to the baud rate). The carrier frequency (the central frequency of the

generated spectrum) is 1800 Hz, meaning that the lower cutoff frequency is 1800

- 1200/2 = 1200 Hz, and the upper cutoff frequency is 1800 + 1200/2 = 2400 Hz.

A V.34 modem may transmit symbols at a baud rate of 3,420 Bd, and each symbol

can carry up to ten bits, resulting in a gross bit rate of 3420 * 10 = 34,200 bit/s.

However, the modem is said to operate at a net bit rate of 33,800 bit/s, excluding

physical layer overhead.
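The modem figures above all follow from gross bit rate = baud rate × bits per symbol; a quick sanity check in Python:

```python
# Gross bit rate = baud rate * bits per symbol.
def bit_rate(baud: int, bits_per_symbol: int) -> int:
    return baud * bits_per_symbol

# V.22bis: 1200 Bd with 2 bits per QAM symbol -> 2400 bit/s.
assert bit_rate(1200, 2) == 2400
# V.34: 3420 Bd with up to 10 bits per symbol -> 34,200 bit/s gross.
assert bit_rate(3420, 10) == 34200
```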

Line codes for baseband transmission

In case of a baseband channel such as a telegraph line, a serial cable or a Local Area

Network twisted pair cable, data is transferred using line codes, i.e. pulses rather than
sinewave tones. In this case the baud rate is synonymous to the pulse rate in

pulses/second.

The maximum baud rate or pulse rate for a base band channel is called

the Nyquist rate, and is double the bandwidth (double the cut-off frequency).

The simplest digital communication links (such as individual wires on a motherboard or

the RS-232 serial port/COM port) typically have a symbol rate equal to the gross bit

rate.

Common communication links such as 10 Mbit/s Ethernet (10Base-T), USB,

and FireWire typically have a symbol rate slightly higher than the data bit rate, due to the

overhead of extra non-data symbols used for self-synchronizing code and error

detection.

J. M. Emile Baudot (1845-1903) worked out a five-level code (five bits per character) for

telegraphs which was standardized internationally and is commonly called Baudot code.

More than two voltage levels are used in advanced techniques such as FDDI and

100/1000 Mbit/s Ethernet LANs, and others, to achieve high data rates.

1000 Mbit/s Ethernet LAN cables use four wire pairs in full duplex (250 Mbit/s per pair in

both directions simultaneously), and many bits per symbol to encode their data payloads.

Digital television and OFDM example

In digital television transmission the symbol rate calculation is:

symbol rate in symbols per second = (Data rate in bits per second * 204) / (188 * bits per symbol)

The 204 is the number of bytes in a packet including the 16 trailing Reed-

Solomon error checking and correction bytes. The 188 is the number of data bytes

(187 bytes) plus the leading packet sync byte (0x47).

The bits per symbol is the base-2 logarithm of the modulation order multiplied by the forward error correction rate. So,

for example, in 64-QAM modulation, 64 = 2^6, so the bits per symbol before coding is 6.

The Forward Error Correction (FEC) is usually expressed as a fraction, i.e., 1/2,

3/4, etc.

In the case of 3/4 FEC, for every 3 bits of data, you are sending out 4 bits, one of

which is for error correction.

Example:

given bit rate = 18,096,263 bit/s

Modulation type = 64-QAM

FEC = 3/4

then the symbol rate follows from the formula above.
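Evaluating the formula above with these figures gives roughly 4.36 Msymbols/s; a sketch in Python (the function name is ours):

```python
import math

def dvb_symbol_rate(bit_rate: float, modulation_order: int, fec: float) -> float:
    """Symbol rate = (bit rate * 204) / (188 * bits per symbol),
    where bits per symbol = log2(modulation order) * FEC rate."""
    bits_per_symbol = math.log2(modulation_order) * fec
    return bit_rate * 204 / (188 * bits_per_symbol)

# Bit rate 18,096,263 bit/s, 64-QAM (6 raw bits/symbol), FEC 3/4:
print(round(dvb_symbol_rate(18_096_263, 64, 3 / 4)))  # 4363638 symbols/s
```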

In digital terrestrial television (DVB-T, DVB-H and similar

techniques) OFDM modulation is used, i.e. multi-carrier modulation. The above

symbol rate should then be divided by the number of OFDM sub-carriers in

order to obtain the OFDM symbol rate. See the OFDM system comparison

table for further numerical details.

Relationship to chip rate

Some communication links (such as GPS transmissions, CDMA cell phones,

and other spread spectrum links) have a symbol rate much higher than the data

rate: they transmit many symbols, called chips, per data bit.

Representing one bit by a chip sequence of many symbols overcomes co-

channel interference from other transmitters sharing the same frequency

channel, including radio jamming, and is common in military radio and cell

phones.

Despite the fact that using more bandwidth to carry the same bit rate gives

low channel spectral efficiency in (bit/s)/Hz, it allows many simultaneous users,

which results in high system spectral efficiency in (bit/s)/Hz per unit of area.

In these systems, the symbol rate of the physically transmitted high-frequency

signal rate is called chip rate, which also is the pulse rate of the

equivalent base band signal.

However, in spread spectrum systems, the term symbol may also be used at a

higher layer and refer to one information bit, or a block of information bits that

are modulated using for example conventional QAM modulation, before the

CDMA spreading code is applied. Using the latter definition, the symbol rate is

equal to or lower than the bit rate.

Relationship to bit error rate

The disadvantage of conveying many bits per symbol is that the receiver has to

distinguish many signal levels or symbols from each other, which may be
difficult and cause bit errors in case of a poor phone line that suffers from low

signal-to-noise ratio. In that case, a modem or network adapter may

automatically choose a slower and more robust modulation scheme or line

code, using fewer bits per symbol, in order to reduce the bit error rate.

An optimal symbol set design takes into account channel bandwidth, desired

information rate, noise characteristics of the channel and the receiver, and

receiver and decoder complexity.

Modulation

Many data transmission systems operate by the modulation of a carrier signal.

For example, in frequency-shift keying (FSK), the frequency of a tone is varied

among a small, fixed set of possible values. In a synchronous data

transmission system, the tone can only be changed from one frequency to

another at regular and well-defined intervals.

The presence of one particular frequency during one of these intervals

constitutes a symbol. (The concept of symbols does not apply to asynchronous

data transmission systems.)

In a modulated system, the term modulation rate may be used synonymously

with symbol rate.

Binary Modulation

If the carrier signal has only two states, then only one bit of data (i.e., a 0 or 1)

can be transmitted in each symbol.

The bit rate is in this case equal to the symbol rate. For example, a binary FSK

system would allow the carrier to have one of two frequencies, one

representing a 0 and the other a 1. A more practical scheme is differential

binary phase-shift keying, in which the carrier remains at the same frequency,

but can be in one of two phases. During each symbol, the phase either remains

the same, encoding a 0, or jumps by 180°, encoding a 1. Again, only one bit of

data (i.e., a 0 or 1) is transmitted by each symbol. This is an example of data

being encoded in the transitions between symbols (the change in phase),

rather than the symbols themselves (the actual phase). (The reason for this in

phase-shift keying is that it is impractical to know the reference phase of the

transmitter.)
N-ary Modulation, N greater than 2

By increasing the number of states that the carrier signal

can take,

the number of bits encoded in each symbol

can be greater than one.

The bit rate can then be greater than the symbol rate.

For example, a differential phase-shift keying system might allow four possible

jumps in phase between symbols.

Then two bits could be encoded at each symbol interval,

achieving a data rate of double the symbol rate. In a more complex scheme

such as 16-QAM, four bits of data are transmitted in each symbol, resulting in a

bit rate of four times the symbol rate.
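The scaling described above (bit rate = symbol rate × log2 M) can be sketched as follows; the carrier rate of 2400 Bd is a made-up figure for illustration:

```python
import math

# In M-ary modulation, each symbol carries log2(M) bits,
# so bit rate = symbol rate * log2(M).
def mary_bit_rate(symbol_rate: float, m: int) -> float:
    return symbol_rate * math.log2(m)

# Hypothetical 2400 Bd carrier:
assert mary_bit_rate(2400, 4) == 4800.0    # 4-phase DPSK: 2 bits/symbol
assert mary_bit_rate(2400, 16) == 9600.0   # 16-QAM: 4 bits/symbol
```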

Data Rate versus Error Rate

Modulating a carrier increases the frequency range, or bandwidth, it occupies.

Transmission channels are generally limited in the bandwidth they can carry.

The bandwidth depends on the symbol (modulation) rate (not directly on the bit

rate). As the bit rate is the product of the symbol rate and the number of bits

encoded in each symbol, it is clearly advantageous to increase the latter if the

former is fixed. However, for each additional bit encoded in a symbol, the

constellation of symbols (the number of states of the carrier) doubles in size.

This makes the states less distinct from one another which in turn makes it

more difficult for the receiver to detect the symbol correctly in the presence of

disturbances on the channel.

The history of modems is the attempt at increasing the bit rate over a fixed

bandwidth (and therefore a fixed maximum symbol rate), leading to increasing

bits per symbol. For example, the V.29 standard specifies 4 bits per symbol, at a symbol

rate of 2,400 baud, giving an effective bit rate of 9,600 bits per second.

The history of spread spectrum goes in the opposite direction, leading to fewer

and fewer data bits per symbol in order to spread the bandwidth. In the case of
GPS, we have a data rate of 50 bit/s and a symbol rate of 1.023 Mchips/s. If

each chip is considered a symbol, each symbol contains far less than one bit

(50 bit/s / 1,023,000 symbols/s ≈ 0.00005 bits/symbol).
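The GPS figure above is plain division; a one-off check in Python:

```python
# GPS C/A example from the text: a 50 bit/s data rate is spread
# over a chip rate of 1.023 Mchips/s.
data_rate = 50              # bit/s
chip_rate = 1_023_000       # chips per second
bits_per_chip = data_rate / chip_rate
print(f"{bits_per_chip:.7f}")  # about 0.0000489 bits per chip
```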

The complete collection of M possible symbols over a particular channel is

called an M-ary modulation scheme. Most modulation schemes transmit some

integer number of bits per symbol b, requiring the complete collection to

contain M = 2^b different symbols. Most popular modulation schemes can be

described by showing each point on a constellation diagram, although a few

modulation schemes (such as MFSK, DTMF, pulse-position modulation, spread

spectrum modulation) require a different description.

Significant condition

In telecommunication, concerning the modulation of a carrier, a significant

condition is one of the signal's parameters chosen to represent

information.[2]

A significant condition could be an electrical current (voltage, or power level),

an optical power level, a phase value, or a particular frequency or wavelength.

The duration of a significant condition is the time interval between successive

significant instants.[2]

A change from one significant condition to another is called a signal transition.

Information can be transmitted either during the given time interval, or encoded

as the presence or absence of a change in the received signal.[3]

Significant conditions are recognized by an appropriate device called a

receiver, demodulator, or decoder. The decoder translates the actual signal

received into its intended logical value such as a binary digit (0 or 1), an

alphabetic character, a mark, or a space. Each significant instant is determined

when the appropriate device assumes a condition or state usable for

performing a specific function, such as recording, processing, or gating.[2]

BIT RATE
In telecommunications and computing, bit rate (sometimes written bitrate, data rate or as a variable R[1]) is

the number of bits that are conveyed or processed per unit of time.
The bit rate is quantified using the bits per second (bit/s) unit, often in conjunction with an SI prefix such
as kilo- (kbit/s), mega- (Mbit/s), giga- (Gbit/s) or tera- (Tbit/s).
Note that, unlike many other computer-related units, 1 kbit/s is traditionally defined as 1,000 bit/s, not
1,024 bit/s, etc., even before 1999, when SI prefixes were introduced for units of information in the
standard IEC 60027-2.
Uppercase K as in Kbit/s should never be used.
The formal abbreviation for "bits per second" is "bit/s" (not "bits/s", see writing style for SI units).
In less formal contexts the abbreviations "b/s" or "bps" are sometimes used, though this risks confusion with
"bytes per second" ("B/s", "Bps"), and the use of the abbreviation "ps" is also inconsistent with the SI symbol
for picosecond.

1 Byte/s (B/s) corresponds to 8 bit/s.


Contents

1 Protocol layers

o 1.1 Gross bit rate

o 1.2 Information rate

o 1.3 Network throughput

o 1.4 Goodput (data transfer rate)

o 1.5 Multimedia encoding

2 Prefixes

3 Progress trends

4 Multimedia
o 4.1 Audio

4.1.1 MP3

4.1.2 Other audio


o 4.2 Video

o 4.3 Notes

5 See also

6 References

7 External links
PROTOCOL LAYERS

Gross bit rate = useful data rate + protocol overhead rate.

In digital communication systems,


1. the physical layer gross bitrate,[2]
2. raw bitrate,[3]
3. data signaling rate,[4]
4. gross data transfer rate[5] or
5. uncoded transmission rate[3]
(sometimes written as a variable Rb[2][3] or fb[6]) is the total number of physically
transferred bits per second
over a communication link,
including useful data as well as protocol overhead.
In case of serial communications, the gross bit rate is related to the bit transmission time Tb as:

Rb = 1/Tb

The gross bit rate is related to, but should not be confused with, the symbol rate or modulation rate
in baud, symbols/s or pulses/s.

Gross bit rate can be used interchangeably with "baud" only when there are two levels per
symbol, representing 0 and 1 respectively,
meaning that each symbol of a data transmission system carries exactly one bit of data; something
not true for modern modem modulation systems and modern LANs, for example.[citation needed]
For most line codes and modulation methods:

Symbol rate ≤ Gross bit rate


More specifically,
a line code (or baseband transmission scheme)
representing the data using pulse-amplitude modulation
with 2^N different voltage levels,
can transfer N bit/pulse.

A digital modulation method (or passband transmission scheme) using 2^N different symbols, for
example 2^N amplitudes, phases or frequencies, can transfer N bit/symbol. This results in:

Gross bit rate = Symbol rate × N


An exception from the above is some self-synchronizing line codes, for example Manchester
coding and return-to-zero (RTZ) coding,
where each bit is represented by two pulses (signal states), resulting in:

Gross bit rate = Symbol rate/2


A theoretical upper bound for the symbol rate in baud, symbols/s or pulses/s for a certain spectral
bandwidth in hertz is given by
the Nyquist law:

Symbol rate ≤ Nyquist rate = 2 × bandwidth


In practice this upper bound can only be approached for line coding schemes and for so-called vestigial
sideband digital modulation.
Most other digital carrier-modulated schemes, for
example ASK, PSK, QAM and OFDM, can be characterized as double
sideband modulation, resulting in the following relation:

Symbol rate ≤ Bandwidth


In case of parallel communication, the gross bit rate is given by

Rb = sum over i = 1..n of log2(Mi)/Ti

where n is the number of parallel channels,

Mi is the number of symbols or levels of the modulation in the i-th channel, and
Ti is the symbol duration time, expressed in seconds, for the i-th channel.
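The parallel-channel sum can be sketched directly in Python; the channel parameters below are made up purely for illustration:

```python
import math

# Gross bit rate of n parallel channels: sum of log2(Mi) / Ti,
# where Mi is the number of modulation levels and Ti the symbol
# duration (in seconds) of the i-th channel.
def parallel_gross_bit_rate(channels):
    """channels: iterable of (Mi, Ti) pairs."""
    return sum(math.log2(m) / t for m, t in channels)

# Two hypothetical channels, both with 1 ms symbol duration:
# a 4-level channel (2 bits/symbol) and a binary channel (1 bit/symbol).
rate = parallel_gross_bit_rate([(4, 0.001), (2, 0.001)])
print(round(rate))  # 3000 bit/s
```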
INFORMATION RATE
1. The physical layer net bitrate,[7]
2. information rate,[2]
3. useful bit rate,[8]
4. payload rate,[9]
5. net data transfer rate,[5]
6. coded transmission rate,
7. effective data rate[3] or wire speed (informal language) of a digital communication channel is
the capacity excluding
1. the physical layer protocol overhead, for example
(A) time division multiplex (TDM) framing bits,
(B) redundant forward error correction (FEC) codes,
(C) equalizer training symbols and
(D) other channel coding.

2. Error-correcting codes are common especially in


(a) wireless communication systems,
(b) broadband modem standards and
(c) modern copper-based high-speed LANs.

The physical layer net bit rate is the data rate measured at a reference point in the interface
between the data link layer and physical layer, and may consequently include data link and
higher layer overhead.
Peak bit rate: In modems and wireless systems, link adaptation (automatic adaptation of the data
rate and the modulation and/or error coding scheme to the signal quality) is often applied. In that
context, the term peak bitrate denotes the net bitrate of the fastest and least robust transmission
mode, used for example when the distance is very short between sender and receiver.[10]
Some operating systems and network equipment may detect the "connection speed"[11] (informal
language) of a network access technology or communication device, implying the current net bit rate.
Note that the term line rate in some textbooks is defined as gross bit rate,[9] in others as net bit rate.
The relationship between the gross bit rate and net bit rate is affected by the FEC code
rate according to the following.

Net bit rate ≤ Gross bit rate × code rate


The connection speed of a technology that involves forward error correction typically refers to the
physical layer net bit rate in accordance with the above definition.
For example, the net bit rate (and thus the "connection speed") of
an IEEE 802.11a wireless network is between 6 and
54 Mbit/s, while the gross bit rate is between 12 and 72 Mbit/s
inclusive of error-correcting codes.
The net bit rate of ISDN2 Basic Rate Interface (2 B-channels + 1 D-
channel) of 64+64+16 = 144 kbit/s also refers to the payload data
rates, while the D channel signalling rate is 16 kbit/s.
The net bit rate of the Ethernet 100Base-TX physical layer standard is
100 Mbit/s, while the gross bit rate is 125 Mbit/s, due to
the 4B5B (four bit over five bit) encoding. In this case, the gross bit
rate is equal to the symbol rate or pulse rate of 125 Mbaud, due to
the NRZI line code.
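The 100Base-TX relation above can be checked numerically; a minimal sketch:

```python
# 100Base-TX: the 4B5B block code carries 4 data bits in 5 line bits,
# so net bit rate = gross bit rate * 4/5. With the NRZI line code,
# the gross bit rate equals the 125 Mbaud symbol rate.
symbol_rate = 125_000_000        # baud
gross_bit_rate = symbol_rate     # 1 bit per symbol with NRZI
net_bit_rate = gross_bit_rate * 4 / 5
print(net_bit_rate)  # 100000000.0 bit/s
```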

In communications technologies without forward error correction and other physical layer
protocol overhead, there is no distinction between gross bit rate and physical layer net bit rate.
For example, the net as well as gross bit rate of Ethernet 10Base-T is 10 Mbit/s. Due to
the Manchester line code, each bit is represented by two pulses, resulting in a pulse rate of 20
Mbaud.
The "connection speed" of a V.92 voiceband modem typically refers to the gross bit rate, since
there is no additional error-correction code. It can be up to 56,000 bit/s downstream and 48,000
bit/s upstream.
A lower bit rate may be chosen during the connection establishment phase due to adaptive
modulation: slower but more robust modulation schemes are chosen in case of poor signal-to-noise
ratio.
Due to data compression, the actual data transmission rate or throughput (see below) may be
higher.
The channel capacity, also known as the Shannon capacity, is a theoretical upper bound for the
maximum net bit rate, exclusive of forward error correction coding, that is possible without bit
errors for a certain physical analog node-to-node communication link.

Net bit rate ≤ Channel capacity


The channel capacity is proportional to the analog bandwidth in hertz. This proportionality is
called Hartley's law. Consequently the net bit rate is sometimes called digital bandwidth capacity in
bit/s.

Network throughput
Main article: Throughput
The term throughput, essentially the same thing
as digital bandwidth consumption, denotes the
achieved average useful bit rate in a computer
network over a logical or physical communication
link or through a network node, typically measured at a
reference point above the datalink layer.
This implies that the throughput often excludes data
link layer protocol overhead.
The throughput is affected by the traffic load from
the data source in question, as well as from other
sources sharing the same network resources. See
also Measuring network throughput.
Goodput (data transfer rate)
Main article: Goodput
Goodput or data transfer rate refers to the achieved average net bit rate that is delivered to
the application layer, exclusive of all protocol overhead, data packets retransmissions, etc.
For example, in the case of file transfer, the goodput corresponds to the achieved file transfer rate.
The file transfer rate in bit/s can be calculated as the file size (in bytes), divided by the file transfer
time (in seconds), and multiplied by eight.
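The file-transfer calculation just described can be sketched directly (the function name and figures are ours, for illustration):

```python
def file_transfer_rate(file_size_bytes: int, transfer_time_s: float) -> float:
    """Goodput in bit/s: file size in bytes over transfer time, times eight."""
    return file_size_bytes * 8 / transfer_time_s

# Hypothetical example: a 1,000,000-byte file transferred in 4 seconds.
print(file_transfer_rate(1_000_000, 4))  # 2000000.0 bit/s, i.e. 2 Mbit/s
```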
As an example, the goodput or data transfer rate of a V.92
voiceband modem is affected by the modem physical layer and
data link layer protocols. It is sometimes higher than the physical
layer data rate due to V.44 data compression, and sometimes
lower due to bit-errors and automatic repeat
request retransmissions.

If no data compression is provided by the network equipment or protocols, we have the following
relation:

Goodput ≤ Throughput ≤ Maximum throughput ≤ Net bit rate


for a certain communication path.
Difference between throughput and goodput?

Goodput measures the rate of data transmission/reception

at the application layer, without taking into account

retransmissions.

Applications like iperf typically report the

goodput seen by a typical application layer when operated

over TCP. However, if iperf is used with UDP, which is a best

effort protocol, it gives a better idea of the throughput seen on

the channel being measured.

Throughput is typically measured at layer 2/3 of the

network stack, thus including protocol overheads and

any retransmissions.

Hence, in quite a few cases the

useful goodput is much lower than the throughput measured.

For example, in file transmission, the "goodput" corresponds

to the file size (in bits) divided by the file transmission

time.
When data is transferred over a communications medium, such as the Internet or a local area network
(LAN), the average transfer speed is often described as throughput.
This measurement includes all the protocol overhead information, such as packet headers and
other data that is included in the transfer process.
It also includes packets that are retransmitted because of network conflicts or errors.

Goodput, on the other hand, only measures the throughput of the original data.
Goodput can be calculated by
dividing the size of a transmitted file by the time it takes to transfer the file.
Since this calculation does not include the additional information that is transferred between systems,
the goodput measurement will always be less than or equal to the throughput.
For example, the maximum transmission unit (MTU) of an Ethernet connection is 1,500 bytes. Therefore, any
file over 1,500 bytes must be split into multiple packets.
Each packet includes header information (typically 40 bytes), which adds to the total amount of data that
needs to be transferred.
Therefore, the goodput of an Ethernet connection will always be slightly less than the throughput.
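The Ethernet example above can be quantified; the figures below follow the text (real header sizes vary by protocol stack):

```python
# Goodput as a fraction of throughput for the Ethernet example above:
# 1,500 payload bytes per packet plus roughly 40 bytes of headers.
payload = 1500
headers = 40
goodput_fraction = payload / (payload + headers)
print(round(goodput_fraction, 3))  # 0.974: goodput slightly below throughput
```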
While goodput is typically close to the throughput measurement,

several factors can cause the goodput to decrease.

For example, network congestion may cause data collisions, which require packets to be resent. Many
protocols also require acknowledgment that packets have been received on the other end, which adds
additional overhead to the transfer process.
Whenever more overhead is added to a data transfer, it will increase the difference between the throughput
and the goodput.

Multimedia encoding
In digital multimedia, bit rate often refers to the number
of bits used per unit of playback time to represent a
continuous signal such as audio or video after source
coding (data compression).
The encoding bit rate of a multimedia file is the size of a
multimedia file in bytes divided by the playback time of
the recording (in seconds), multiplied by eight.
For real-time streaming multimedia, the encoding bit rate is
the goodput that is required to avoid interruptions:

Encoding bit rate = Required goodput


The term average bitrate is used in case of variable
bitrate multimedia source coding schemes.
In this context, the peak bit rate is the maximum
number of bits required for any short-term block of
compressed data.[12]
A theoretical lower bound for the encoding bit rate
for lossless data compression is the source
information rate, also known as the entropy rate.

Entropy rate ≤ Multimedia bit rate


Prefixes

When quantifying large bit rates, SI prefixes (also
known as metric prefixes or decimal prefixes) are
used, thus:

1,000 bit/s = 1 kbit/s (one kilobit or one thousand bits per second)
1,000,000 bit/s = 1 Mbit/s (one megabit or one million bits per second)
1,000,000,000 bit/s = 1 Gbit/s (one gigabit or one billion bits per second)

Binary prefixes have almost never been used for
bitrates, although they may occasionally be seen
when data rates are expressed in bytes per
second (e.g. 1 kByte/s is sometimes interpreted
as 1000 Byte/s, sometimes as 1024 Byte/s). A
1999 IEC standard (IEC 60027-2) specifies
different abbreviations for binary and decimal (SI)
prefixes (e.g. 1 KiB/s = 1024 Byte/s = 8192 bit/s,
and 1 MiB/s = 1024 KiB/s), but these are still not
very common in the literature, and therefore
it is sometimes necessary to seek clarification of
the units used in a particular context.
Progress trends
These are examples of physical layer net
bit rates in proposed communication
standard interfaces and devices:

WAN modems:
1972: Acoustic coupler 300 baud
1977: 1200 baud Vadic and Bell 212A
1986: ISDN introduced with two 64 kbit/s channels (144 kbit/s gross bit rate)
1990: V.32bis modems: 2400 / 4800 / 9600 / 19200 bit/s
1994: V.34 modems with 28.8 kbit/s
1995: V.90 modems with 56 kbit/s downstream, 33.6 kbit/s upstream
1998: ADSL up to 8 Mbit/s
1999: V.92 modems with 56 kbit/s downstream, 48 kbit/s upstream
2003: ADSL2 up to 12 Mbit/s
2005: ADSL2+ up to 24 Mbit/s

Ethernet LAN:
1975: Experimental Ethernet 2.94 Mbit/s
1981: 10 Mbit/s 10BASE5 (coax)
1990: 10 Mbit/s 10BASE-T (twisted pair)
1995: 100 Mbit/s Fast Ethernet
1999: Gigabit Ethernet
2003: 10 Gigabit Ethernet
2010: 100 Gigabit Ethernet

WiFi WLAN:
1997: 802.11 2 Mbit/s
1999: 802.11b 11 Mbit/s
1999: 802.11a 54 Mbit/s
2003: 802.11g 54 Mbit/s
2007: 802.11n 600 Mbit/s

Mobile data:
1G: 1981: NMT 1200 bit/s
2G: 1991: GSM CSD and D-AMPS 14.4 kbit/s; 2003: GSM EDGE 296 kbit/s down, 118.4 kbit/s up
3G: 2001: UMTS-FDD (WCDMA) 384 kbit/s; 2007: UMTS HSDPA 14.4 Mbit/s; 2008: UMTS HSPA 14.4 Mbit/s down, 5.76 Mbit/s up; 2009: HSPA+ (without MIMO) 28 Mbit/s downstream (56 Mbit/s with 2x2 MIMO), 22 Mbit/s upstream; 2010: CDMA2000 EV-DO Rev. B 14.7 Mbit/s downstream; 2011: HSPA+ accelerated (with MIMO) 42 Mbit/s downstream
Pre-4G: 2007: Mobile WiMAX (IEEE 802.16e) 144 Mbit/s down, 35 Mbit/s up; 2009: LTE 100 Mbit/s downstream (360 Mbit/s with MIMO 2x2), 50 Mbit/s upstream

See also Comparison of mobile phone standards.

For more examples, see List of device bit
rates, Spectral efficiency comparison
table and OFDM system comparison table.
Multimedia

In digital multimedia, bitrate represents the


amount of information, or detail, that is stored per
unit of time of a recording. The bitrate depends on
several factors:

The original material may be sampled at different frequencies
The samples may use different numbers of bits
The data may be encoded by different schemes
The information may be digitally compressed by different algorithms or to different degrees

Generally, choices are made about the above


factors in order to achieve the desired trade-off
between minimizing the bitrate and maximizing
the quality of the material when it is played.
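The first three factors above multiply directly into the bit rate of uncompressed audio. A minimal sketch in Python (the function name is mine, for illustration), reproducing two figures that appear later in this section:

```python
def pcm_bit_rate(sample_rate_hz: int, bits_per_sample: int, channels: int) -> int:
    """Bit rate of uncompressed (linear PCM) audio: samples per second
    times bits per sample times number of channels."""
    return sample_rate_hz * bits_per_sample * channels

# CD audio: 44.1 kHz, 16 bits, stereo -> 1,411,200 bit/s (1,411.2 kbit/s)
cd = pcm_bit_rate(44_100, 16, 2)

# narrowband telephony: 8 kHz, 8 bits, mono -> 64,000 bit/s
phone = pcm_bit_rate(8_000, 8, 1)
```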
If lossy data compression is used on audio or
visual data, differences from the original signal will
be introduced; if the compression is substantial, or
lossy data is decompressed and recompressed,
this may become noticeable in the form
of compression artifacts. Whether these affect the
perceived quality, and if so how much, depends on
the compression scheme, encoder power, the
characteristics of the input data, the listener's
perceptions, the listener's familiarity with artifacts,
and the listening or viewing environment.
The bitrates in this section are approximately
the minimum that the average listener in a typical
listening or viewing environment, when using the
best available compression, would perceive as not
significantly worse than the reference standard:
Audio
MP3
The MP3 audio format uses lossy data compression.
Audio quality improves with increasing bitrate.

32 kbit/s - generally acceptable only for


speech

96 kbit/s - generally used for speech or low-


quality streaming

128 or 160 kbit/s - low-to-standard bitrate quality; the difference from the original can sometimes be obvious

192 kbit/s - a commonly used high-quality


bitrate

320 kbit/s - highest level supported by MP3


standard

Other audio

800 bit/s - minimum necessary for
recognizable speech, using the special-
purpose FS-1015 speech codec.

1400 bit/s lowest bitrate open-source


speech codec Codec2.[13]

2.15 kbit/s minimum bitrate available


through the open-source Speex codec.

8 kbit/s telephone quality using speech


codecs.

32-500 kbit/s lossy audio as used in Ogg


Vorbis.

256 kbit/s - Digital Audio Broadcasting
(DAB) MP2 bit rate required to achieve a
high quality signal.[14]

400 kbit/s - 1,411 kbit/s - lossless audio as
used in formats such as Free Lossless Audio
Codec, WavPack, or Monkey's Audio to
compress CD audio.

1,411.2 kbit/s Linear PCM sound format


of Compact Disc Digital Audio.

5,644.8 kbit/s - DSD, which is a trademarked


implementation of PDM sound format used
on Super Audio CD.[15]

6.144 Mbit/s - E-AC-3 (Dolby Digital Plus),


which is an enhanced coding system based
on the AC-3 codec.

18 Mbit/s - advanced lossless audio codec


based on Meridian Lossless Packing.

Video

16 kbit/s videophone quality (minimum


necessary for a consumer-acceptable "talking
head" picture using various video
compression schemes)

128 - 384 kbit/s - business-
oriented videoconferencing quality using
video compression

1.5 Mbit/s max VCD quality


(using MPEG1 compression)[16]

3.5 Mbit/s typ Standard-definition
television quality (with bit-rate reduction from
MPEG-2 compression)

9.8 Mbit/s max
DVD (using MPEG2 compression)[17]

8 to 15 Mbit/s typ HDTV quality (with bit-
rate reduction from MPEG-4 AVC
compression)

19 Mbit/s approximate HDV 720p
(using MPEG2 compression)[18]

24 Mbit/s max AVCHD (using MPEG4
AVC compression)[19]

25 Mbit/s approximate HDV 1080i
(using MPEG2 compression)[18]

29.4 Mbit/s max HD DVD

40 Mbit/s max Blu-ray
Disc (using MPEG2, AVC or VC-1 compressi
on)[20]

Notes
For technical reasons (hardware/software
protocols, overheads, encoding schemes, etc.)
the actual bitrates used by some of the compared-
to devices may be significantly higher than what is
listed above. For example:

Telephone circuits using μ-law or A-law companding (pulse code modulation) - 64 kbit/s
CDs using CDDA PCM 1.4 Mbit/s

INTERSYMBOL INTERFERENCE (ISI)

In telecommunication, intersymbol interference (ISI) is a form of distortion of a signal in which

one symbol interferes with subsequent symbols.

This is an unwanted phenomenon as the previous symbols have similar effect as noise, thus

making the communication less reliable.

ISI is usually caused by

1. multipath propagation or

2. the inherent non-linear frequency response of a channel causing successive symbols to

"blur" together.

The presence of ISI in the system introduces errors in the decision device at the receiver output.
Therefore, in the design of the transmitting and receiving filters, the objective is
1. To minimize the effects of ISI, and
2. thereby deliver the digital data to its destination with the smallest error rate possible.
Ways to fight intersymbol interference include
1. adaptive equalization and
2. error correcting codes.
Contents

1 Causes

o 1.1 Multipath propagation

o 1.2 Bandlimited channels

2 Effects on eye patterns

3 Countering ISI

4 See also

5 References

6 Further reading

7 External links

Causes

1. Multipath propagation
Main article: Multipath propagation

One of the causes of intersymbol interference is what is known as multipath propagation in which a

wireless signal from a transmitter reaches the receiver via many different paths.

The causes of this include

1. reflection (for instance, the signal may bounce off buildings),

2. refraction (such as through the foliage of a tree) and

3. atmospheric effects such as

(A) atmospheric ducting and

(B) ionospheric reflection.


Since all of these paths are of different lengths, this results in the different versions of the signal
arriving at the receiver at different times.
These delays mean that part or all of a given symbol will be spread into the subsequent symbols,
thereby interfering with the correct detection of those symbols.
Additionally, the various paths often distort the amplitude and/or phase of the signal thereby
causing further interference with the received signal.
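The smearing described above can be sketched numerically: model the channel as a few delayed, attenuated copies of the transmitted symbol stream and superpose them at the receiver (a toy discrete model, with path delays rounded to whole symbol periods; all names are illustrative):

```python
def multipath_receive(symbols, paths):
    """Superpose delayed, attenuated copies of a symbol stream.

    symbols: transmitted symbol amplitudes (e.g. +1/-1)
    paths:   list of (delay_in_symbol_periods, gain) tuples
    """
    length = len(symbols) + max(delay for delay, _ in paths)
    received = [0.0] * length
    for delay, gain in paths:
        for i, s in enumerate(symbols):
            received[i + delay] += gain * s  # echo of symbol i lands on slot i+delay
    return received

tx = [1, -1, 1, 1, -1]
rx = multipath_receive(tx, [(0, 1.0), (1, 0.5)])
# rx[1] = -1 + 0.5*1 = -0.5: half of the previous symbol has leaked
# into this symbol's slot, pulling the received value toward zero
```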

2. Bandlimited channels

Another cause of intersymbol interference is the transmission of a signal through

a bandlimited channel, i.e., one where the frequency response is zero above a certain

frequency (the cutoff frequency).

Passing a signal through such a channel results in the removal of frequency components

above this cutoff frequency; in addition, the amplitude of the frequency components below

the cutoff frequency may also be attenuated by the channel.

This filtering of the transmitted signal affects the shape of the pulse that arrives at the

receiver.

Filtering a rectangular pulse not only changes the shape of the pulse within the

first symbol period, but also spreads it out over the subsequent symbol periods.

When a message is transmitted through such a channel, the spread pulse of each individual

symbol will interfere with following symbols.

Bandlimited channels are present in both wired and wireless communications. The limitation is

often imposed by the desire to operate multiple independent signals through the same

area/cable; due to this, each system is typically allocated a piece of the

total bandwidth available.

For wireless systems, they may be allocated a slice of the electromagnetic spectrum to

transmit in (for example, FM radio is often broadcast in the 87.5 MHz - 108 MHz range). This

allocation is usually administered by a government agency; in the case of the United States this

is the Federal Communications Commission (FCC). In a wired system, such as an optical

fiber cable, the allocation will be decided by the owner of the cable.

The bandlimiting can also be due to the physical properties of the medium - for instance, the

cable being used in a wired system may have a cutoff frequency above which practically none of

the transmitted signal will propagate.

Communication systems that transmit data over bandlimited channels usually

implement pulse shaping to avoid interference caused by the bandwidth limitation.


If the channel frequency response is flat and the shaping filter has a finite bandwidth, it is

possible to communicate with no ISI at all. Often the channel response is not known

beforehand, and an adaptive equalizer is used to compensate the frequency response.


EFFECTS ON EYE PATTERNS

For more details on eye patterns, see eye pattern.

One way to study ISI in a PCM or data transmission system experimentally is to apply the received wave to

the vertical deflection plates of an oscilloscope and to apply a sawtooth wave at the transmitted symbol rate

R (R = 1/T) to the horizontal deflection plates. The resulting display is called an eye pattern because of its

resemblance to the human eye for binary waves. The interior region of the eye pattern is called the eye

opening. An eye pattern provides a great deal of information about the performance of the pertinent system.

1. The width of the eye opening defines the time interval over which the received wave can be

sampled without error from ISI. It is apparent that the preferred time for sampling is the instant of

time at which the eye is open widest.

2. The sensitivity of the system to timing error is determined by the rate of closure of the eye as the

sampling time is varied.

3. The height of the eye opening, at a specified sampling time, defines the margin over noise.

An eye pattern, which overlays many samples of a signal, can give a graphical representation of the signal

characteristics. The first image below is the eye pattern for a binary phase-shift keying (PSK) system in

which a one is represented by an amplitude of -1 and a zero by an amplitude of +1. The current sampling

time is at the center of the image and the previous and next sampling times are at the edges of the image.

The various transitions from one sampling time to another (such as one-to-zero, one-to-one and so forth)

can clearly be seen on the diagram.

The noise margin - the amount of noise required to cause the receiver to get an error - is given by the

distance between the signal and the zero amplitude point at the sampling time; in other words, the further

from zero at the sampling time the signal is the better. For the signal to be correctly interpreted, it must be

sampled somewhere between the two points where the zero-to-one and one-to-zero transitions cross. Again,

the further apart these points are the better, as this means the signal will be less sensitive to errors in the

timing of the samples at the receiver.

The effects of ISI are shown in the second image which is an eye pattern of the same system when

operating over a multipath channel. The effects of receiving delayed and distorted versions of the signal can

be seen in the loss of definition of the signal transitions. It also reduces both the noise margin and the

window in which the signal can be sampled, which shows that the performance of the system will be worse

(i.e. it will have a greater bit error ratio).


The eye diagram of a binary PSK system

The eye diagram of the same system with multipath effects added

Graphical eye pattern showing an example of two power levels in an OOK modulation scheme. Constant
binary 1 and 0 levels are shown, as well as transitions from 0 to 1, 1 to 0, 0 to 1 to 0, and 1 to 0 to 1.

EYE PATTERN
In telecommunication, an eye pattern, also known as an eye diagram, is an oscilloscope display
in which a digital data signal from a receiver is repetitively sampled and applied to the vertical
input,
while the data rate is used to trigger the horizontal sweep.
It is so called because, for several types of coding, the pattern looks like a series of eyes between a
pair of rails.
Several system performance measures can be derived by analyzing the display.
If the signals are
1. too long,
2. too short,
3. poorly synchronized with the system clock,
4. too high,
5. too low,
6. too noisy,
7. too slow to change, or
8. affected by too much undershoot or overshoot,
this can be observed from the eye diagram.
An open eye pattern corresponds to minimal signal distortion.
Distortion of the signal waveform due to intersymbol interference and noise appears as closure of
the eye pattern.[1][2][3]

The eye diagram of a binary PSK system

The eye diagram of the same system with multipath interference effects added

Eye diagram of a 4 level ASK signal


Measurements

There are many measurements that can be obtained from an Eye Diagram[4]:
Amplitude Measurements

Eye Amplitude
Eye Crossing Amplitude
Eye Crossing Percentage
Eye Height
Eye Level
Eye SNR
Quality Factor
Vertical Eye Opening

Time Measurements

Deterministic Jitter
Eye Crossing Time
Eye Delay
Eye Fall Time
Eye Rise Time
Eye Width
Horizontal Eye Opening
Peak-to-Peak Jitter
Random Jitter
RMS Jitter
Total Jitter

Interpreting Measurements

Eye-diagram feature: what it measures
Eye opening (height, peak to peak): additive noise in the signal
Eye overshoot/undershoot: peak distortion due to interruptions in the signal path
Eye width: timing synchronization & jitter effects
Eye closure: intersymbol interference, additive noise
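As a rough illustration of the eye-height measurement, one can overlay many one-symbol-period traces and take the gap between the lowest "one"-level trace and the highest "zero"-level trace at the sampling instant. The sketch below is a toy model with additive noise only (no ISI); all names and the noise figure are my own choices:

```python
import random

random.seed(1)
SAMPLES = 16  # samples per symbol period

def make_trace(level, noise=0.05):
    """One symbol period of a settled binary level plus additive noise."""
    return [level + random.uniform(-noise, noise) for _ in range(SAMPLES)]

def eye_height(traces, idx):
    """Gap between the lowest 'one' trace and the highest 'zero' trace at
    sample index idx; noise closes the eye and shrinks this margin."""
    ones = [t[idx] for t in traces if t[idx] > 0]
    zeros = [t[idx] for t in traces if t[idx] < 0]
    return min(ones) - max(zeros)

traces = [make_trace(+1) for _ in range(50)] + [make_trace(-1) for _ in range(50)]
height = eye_height(traces, SAMPLES // 2)  # close to 2.0, minus twice the noise
```

With noiseless signalling at +/-1 the eye height would be exactly 2; the additive noise closes it slightly, exactly as the table entry for "eye opening" suggests.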

Countering ISI

There are several techniques in telecommunication and data storage that try to work around the

problem of intersymbol interference.

Design systems such that the impulse response is short enough that very little energy from one

symbol smears into the next symbol.


Consecutive raised-cosine impulses, demonstrating zero-ISI property

Separate symbols in time with guard periods.

Apply an equalizer at the receiver, that, broadly speaking, attempts to undo the effect of the

channel by applying an inverse filter.


Apply a sequence detector at the receiver, that attempts to estimate the sequence of

transmitted symbols using the Viterbi algorithm.
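The zero-ISI property of the raised-cosine pulse mentioned above can be checked directly: the pulse equals 1 at its own sampling instant and 0 at every other symbol's sampling instant t = kT, so neighbouring pulses do not disturb each other's samples. A sketch (symbol period T and roll-off beta are my arbitrary choices):

```python
import math

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine impulse: sinc(t/T) * cos(pi*beta*t/T) / (1 - (2*beta*t/T)^2)."""
    if t == 0:
        return 1.0
    x = t / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    if abs(denom) < 1e-12:  # removable singularity at t = +/- T/(2*beta)
        arg = math.pi / (2.0 * beta)
        return (math.pi / 4.0) * math.sin(arg) / arg
    sinc = math.sin(math.pi * x) / (math.pi * x)
    return sinc * math.cos(math.pi * beta * x) / denom

# 1.0 at its own sampling instant, (numerically) zero at all the others
values = [raised_cosine(float(k)) for k in range(5)]
```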


The difference between Hz (Hertz) and bps (bits per second) is both a simple distinction and a
complicated one.
So I'll try to keep it on the simple side: Hz applies to a clock frequency that is used to modulate
the electrical signal on the wire (assuming copper).
The higher the rate of modulation (Hz), the more information that can be transmitted per
second.
It is fundamental to the operation of the communication interface but it doesn't tell the most
useful story.
In the early days of modems, the rate of modulation was typically referred to as "baud."
Baud used to be synonymous with bps; however, encoding techniques have evolved
considerably and the relationship has changed.
Bps is typically different from the modulation rate and is primarily important with respect to data transfer
rates.

Bit rate is a measure of the number of data bits (that's 0's and 1's) transmitted in one second.
A figure of 2400 bits per second means 2400 zeros or ones can be transmitted in one second,
hence the abbreviation 'bps'.
Baud rate by definition means the number of times a signal in a communications channel
changes state.
For example, a 2400 baud rate means that the channel can change states up to 2400 times per
second.
When I say 'change state' I mean that it can change from 0 to 1 up to 2400 times per second.
If you think about this, it's pretty much similar to the bit rate, which in the above example was 2400
bps.
1. Whether you can transmit 2400 zeros or ones in one second (bit rate), or change the
state of a digital signal up to 2400 times per second (baud rate), it is the same thing.
2. So we can conclude that in the above example, the bit rate is the same as the baud rate.
Hence, 1 bit rate = 1 baud rate for this example.
There are cases though where a channel can send 4 bits per baud,

meaning that for every 4 bits, we have one change, and in this case, the baud
rate is 1/4th of the bit rate.

Baud was the prevalent measure for data transmission speed until replaced
by a more accurate term, bps (bits per second).

One baud is one electronic state change per second.

Since a single state change can involve more than a single bit of data, the
bps unit of measurement has replaced it as a better expression of data
transmission speed.
The measure was named after a French engineer, Jean-Maurice-Emile Baudot. It was first used to measure
the speed of telegraph transmissions.
In data communications, bits per second (abbreviated bps or bit/sec) is a common measure of data speed for
computer modems and transmission carriers. As the term implies, the speed in bps is equal to the number of
bits transmitted or received each second.
Larger units are sometimes used to denote high data speeds. One kilobit per second (abbreviated Kbps in
the U.S.; kbps elsewhere) is equal to 1,000 bps. One megabit per second (Mbps) is equal to 1,000,000 bps
or 1,000 Kbps.
Computer modems for twisted pair telephone lines usually operate at 57.6 Kbps or, with Digital Subscriber
Line (DSL) service, at 512 Kbps or faster. So-called "cable modems," designed for use with TV cable
networks, can operate at more than 1.5 Mbps. Fiber optic modems can send and receive data at many
Mbps.
The bandwidth of a signal depends on the speed in bps. With some exceptions, the higher the bps number,
the greater is the nominal signal bandwidth. (Speed and bandwidth are, however, not the same thing.)
Bandwidth is measured in standard frequency units of kHz or MHz.
Data speed used to be specified in terms of baud, which is a measure of the number of times a digital signal
changes state in one second. Baud, sometimes called the "baud rate," is almost always a lower figure than
bps for a given digital signal because some signal modulation techniques allow more than one data bit to be
transmitted per change state.

The Bottom Line: the true measure of modem speed is the number of data bits transmitted per second.
"Baud" refers to changes in state of a modem's signal.

Do you know what bits, baud and bps really mean? Modem transmission speed is the source of no little
confusion, even among otherwise informed computer and modem users. The root of the problem is the fact
that the terms "baud" and "bits per second" are used interchangeably and indiscriminately. I strongly suspect
this is a result of the fact that it's easier to say "baud" than "bits per second," though misinformation has a
hand in it, too.
If you've ever found yourself confused by the relationship between bits and baud rate, or if you think that a
modem's baud rate is the same as the number of bits or characters it transmits per second, please read this
article carefully. I guarantee to clear up the confusion and disabuse you of any false concepts ... and just
maybe make the whole matter of modem speed a little less intimidating.

Computer data format


Before we examine how fast data can be moved by modems, we should take a quick look at just what
computer data is.
We perceive most computer data as letters, numerals, spaces, and other symbols (including graphic
elements). These are referred to as characters or bytes. Your PC sees data as more than that, however.
Within a computer, each character is handled as a group of what are called data bits. A data bit is literally
the smallest discrete data unit, and data bits are the building blocks of all computer data. Your computer, the
programs it runs, storage media, and data transmission devices like modems see each character or byte as
a group of 7 or 8 bits.
The computer sees these data bits as binary digits, represented by digital 0s and 1s. (The word bit is
actually a contraction of Binary digIT.) Depending on the device, the 0 or 1 may be represented by an off
or on state, a positive or negative electrical charge, etc. The important element here is that there are only
two possible states for a bit, represented digitally by 0 or 1.
Thus, all characters or bytes are merely groupings of digital 1s and 0s. The number of possible groupings in
7- or 8-bit strings is exactly what's required to represent all possible letters, numerals, spaces, and other
symbols your PC uses. These groups are what your system's software and your modem deal with--in
different formats, but always representing bytes as groups of digital 1s and 0s. (The letter T at the
beginning of this sentence, for example, is represented digitally by 1010100.)
Given that the basic unit of data is the bit, and your system and modem handle data as bits, it should be no
surprise that the true measure of modem speed is the number of bits transmitted per second.
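The character-to-bits mapping described above can be seen directly in Python: the ASCII code for the letter T is 84, which as a 7-bit binary group is 1010100:

```python
# each character is handled as a small integer; format() shows its
# 7-bit binary group (the zero-padded binary presentation type)
code = ord('T')             # 84
bits = format(code, '07b')  # '1010100'
```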

Bits per second


Bits per second is a measure of the number of data bits transmitted each second by a modem. This is
sometimes referred to as the "bit rate", but more commonly as bps (short for bits per second).
Individual characters (letters, numbers, spaces, symbols, and other characters) are made up of 7 or 8 bits
and are called bytes. Thus, bits are the basic components, or building blocks, of data in your computer.
While a modem's bps rate is tied to its baud rate, the two are not the same, as explained below.

Baud rate
Modems transmit data by changing the signal in a communications link (a telephone line). Changes in the
signal (in strength, frequency, or other elements) represent bits. The nature of the signal and how it changes
aren't important here; what does matter is the fact that the signal changes in some respect to represent each
bit--one sort of signal change can be a digital 0, another can be a digital 1. Thus, a series of changes
represents each byte (letter, numeral, etc.) transmitted. You don't have to worry about how this is done; your
modem and software take care of that.
Each change is referred to as a baud.
Baud rate is a measure of the number of times per second a signal in a communications link changes. One
baud is one such change. Thus, a 300-baud modem's signal changes state 300 times each second, while a
600-baud modem's signal changes state 600 times per second. This does not necessarily mean that a 300-
baud and a 600-baud modem transmit 300 and 600 bits per second, as you'll learn in a few lines.

Determining bits per second


Depending on the modulation technique used, a modem can transmit one bit--or more or less than one bit--
with each baud, or change in state. Or, to put it another way, one change of state can transmit one bit--or
more or less than one bit.
As I mentioned earlier, the number of bits a modem transmits per second is directly related to the number of
bauds that occur each second, but the numbers are not necessarily the same.
To illustrate this, first consider a modem with a baud rate of 300, using a transmission technique called FSK
(Frequency Shift Keying, in which four different frequencies are turned on and off to represent digital 0 and 1
signals from both modems). When FSK is used, each baud (which is, again, a change in state) transmits
one bit; only one change in state is required to send a bit. Thus, the modem's bps rate is also 300:

300 bauds per second X 1 bit per baud = 300 bps

Similarly, if a modem operating at 1200 baud were to use one change in state to send each bit, that
modem's bps rate would be 1200. (There are no 1200 baud modems, by the way--nor are there any 2400-
baud modems. Remember that. This is only a demonstrative and hypothetical example.)
Now, consider a hypothetical 300-baud modem using a modulation technique that requires two changes in
state to send one bit, which can also be viewed as 1/2 bit per baud. Such a modem's bps rate would be 150
bps:

300 bauds per second X 1/2 bit per baud = 150 bps

To look at it another way, bits per second can also be obtained by dividing the modem's baud rate by the
number of changes in state, or bauds, required to send one bit:

300 baud
--------------- = 150 bps
2 bauds per bit

Now let's move away from the hypothetical and into reality, as it exists in the world of modulation.
First, lest you be misled into thinking that "any 1200 baud modem" should be able to operate at 2400 bps
with a two-bits-per-baud modulation technique, remember that I said there are no 1200 baud modems.
Medium- and high-speed modems use baud rates that are lower than their bps rates. Along with this,
however, they use multiple-state modulation to send more than one bit per baud.
For example, 1200 bps modems that conform to the Bell 212A standard (which includes most 1200 bps
modems used in the U.S.) actually operate at 600 baud and use a phase-shift-keying modulation technique
that transmits two bits per baud. Such modems are capable of 1200 bps operation, but not 2400 bps,
because they send only two bits per baud, with 600 bauds per second. So:

600 baud X 2 bits per baud = 1200 bps

or

600 baud
------------------ = 1200 bps
1/2 baud per bit

Similarly, 2400 bps modems that conform to the CCITT V.22bis recommendation (virtually all of them) actually
use a baud rate of 600 when they operate at 2400 bps. However, they also use a modulation technique that
transmits four bits per baud:

600 baud X 4 bits per baud = 2400 bps

or

600 baud
------------------ = 2400 bps
1/4 baud per bit

Thus, a 1200-bps modem is not a 1200-baud modem, nor is a 2400-bps modem a 2400-baud modem.
Now let's take a look at 9600-bps modems. Most of these operate at 2400 baud, but (again) use a
modulation technique that yields four bits per baud. Thus:

2400 baud X 4 bits per baud = 9600 bps

or

2400 baud
------------------ = 9600 bps
1/4 baud per bit
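The relationship used in all the examples above is just a product, so it can be captured in a two-line helper (the function name is mine, for illustration):

```python
def bits_per_second(baud, bits_per_baud):
    """bps = signalling events per second x data bits carried per event."""
    return baud * bits_per_baud

examples = [
    bits_per_second(300, 1),    # FSK, one bit per baud:        300 bps
    bits_per_second(600, 4),    # 600 baud, four bits per baud: 2400 bps
    bits_per_second(2400, 4),   # 2400 baud, four bits per baud: 9600 bps
]
```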

Characters per second


Characters per second (cps) is a measure of the number of characters (letters, numerals, spaces, and
symbols) transmitted over a communications link in one second. Cps is often the bottom line in rating data
transmission speed, and a more convenient way of thinking about data transfer than baud-rate or bps.
Determining the number of characters transmitted per second is easy: simply divide the bps rate by the
number of bits per character. You must of course take into account the fact that more than just the bits that
make up the binary group representing a character are transmitted when a character is sent from one system
to another. In fact, up to 10 bits may be transmitted for each character during ASCII transfer, whether 7 or 8
data bits are used for each character. This is because what are called start- and stop-bits are added to
characters by a sending system to enable the receiving system to determine which groups of bits make up a
character.
In addition, a system usually adds a parity bit during 7-bit ASCII transmission. (The computer's serial port
handles the addition of the extra bits, and all extra bits are stripped out at the receiving end.)
So, the number of bits per character is usually 10 (either 7 data bits, plus a parity bit, plus a start bit and a
stop bit, or 8 data bits plus a start bit and a stop bit). Thus:

9600 bps
----------------------- = 960 characters per second
10 bits per character

14,400 bps
----------------------- = 1,440 characters per second
10 bits per character

28,800 bps
----------------------- = 2,880 characters per second
10 bits per character

and so on.
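The division above, using the usual 10 bits per transmitted character (start bit, 7 or 8 data bits, optional parity, stop bit), as a sketch:

```python
def chars_per_second(bps, bits_per_char=10):
    """Effective character throughput of an asynchronous link."""
    return bps / bits_per_char

rates = {bps: chars_per_second(bps) for bps in (9_600, 14_400, 28_800)}
# {9600: 960.0, 14400: 1440.0, 28800: 2880.0}
```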

CPS and file-transmission times


Remember: The number of characters per second is the same as the number of bytes transmitted each
second. So, you can use this information to estimate the amount of time required to send or receive a given
file. If, for instance, you want to download a file that is 403,200 bytes in size at 14,400 bps, the download
should take about 4-1/2 minutes.
Note that I said the download should take 4-1/2 minutes; in reality, it will take up to half again as much
time. This is because of overhead time your system and the remote system require. Your system has to
open the file, check incoming data, and write the file to your hard disk, while the remote system has to find,
check, and transmit the data--and both systems carry on data-checking communications as the file is
transmitted. So, the ideal time of 4-1/2 minutes is stretched to 6-1/2 minutes, on the average. This overhead
time becomes less of a percentage of the total expected time the larger the file is, fortunately.
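The estimate above can be scripted; note that the overhead multiplier is the rough rule of thumb from the text ("up to half again as much time"), not a protocol constant:

```python
def transfer_time_s(file_bytes, bps, bits_per_char=10):
    """Ideal transfer time in seconds, ignoring protocol overhead:
    one byte on the wire costs bits_per_char bits."""
    return file_bytes / (bps / bits_per_char)

ideal = transfer_time_s(403_200, 14_400)  # 280.0 s, about 4.5 minutes
with_overhead = ideal * 1.5               # up to half again as much, per the rule of thumb
```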
Not incidentally, you may on occasion find that a transmission is taking even longer than you expect, even
allowing for the overhead time. When this is the case, it is usually caused by the fact that your modem and
the remote systems modem have reduced their communication speed due to poor telephone line conditions
(or, in the case of a large download, too many errors during file transmission).
When the difference is enough to be noticeable, it is probably a good idea to log off and reconnect. Quite
often, you can get a bad line and calling back solves the problem. If the problem is persistent, it is probably
due to weather conditions somewhere along the modem link. In that case, all you can do is live with the
slowdown, or wait a few hours and try again.

Baud rate is the number of symbols that can be transmitted over a line in one second. This is similar to bit
rate, except that each symbol usually consists of more than 1 bit. So a system with 3-bit symbols operating
at 1000 baud will have a 3000 bit/s bit rate.

Bandwidth is the difference between the upper and lower frequencies of a given piece of spectrum and is
measured in Hz. This is essentially the amount of space available to transmit data through the air or over a
wire. Think of cable TV; multiple channels are able to be carried over a single wire because each channel
uses its own frequency range, and the width of that frequency range is the channel's bandwidth.

Channel capacity is the maximum data rate that can be carried over a certain medium given several factors
including the amount of bandwidth available. The greater the bandwidth, the greater the channel capacity.

Because of the direct relation between bandwidth and channel capacity, channel capacity and throughput
are often referred to as bandwidth in the computing world.

THROUGHPUT
In communication networks, such as Ethernet or packet radio,

Throughput or network throughput is the average rate of successful message delivery over a
communication channel.
This data may be delivered over a physical or logical link, or pass through a certain network node. The
throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per
second or data packets per time slot.
The system throughput or aggregate throughput is the sum of the data rates that are delivered to all
terminals in a network.
The throughput can be analyzed mathematically by means of queueing theory, where the load in packets
per time unit is denoted arrival rate , and the throughput in packets per time unit is denoted departure rate
.
Throughput is essentially synonymous to digital bandwidth consumption.
In computer networking and computer science, the words bandwidth,[1] network bandwidth,[2] data
bandwidth,[3] or digital bandwidth[4][5] are terms used to refer to various bit-rate measures, representing the
available or consumed data communication resources expressed in bits per second or multiples of it (bit/s,
kbit/s, Mbit/s, Gbit/s, etc.).
Note that in textbooks on signal processing, wireless communications, modem data transmission, digital
communications, electronics, etc., the word 'bandwidth' is used to refer to analog signal
bandwidth measured in hertz. The connection is that according to Hartley's law, the digital data rate limit
(or channel capacity) of a physical communication link is proportional to its bandwidth in hertz.

Contents

1 Network bandwidth capacity
2 Network bandwidth consumption
3 Asymptotic bandwidth
4 Multimedia bandwidth
5 Bandwidth in web hosting
6 Internet connection bandwidths
7 See also
8 References

Network bandwidth capacity

Bandwidth sometimes defines the net bit rate (aka. peak bit rate, information rate or physical layer useful bit
rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital
communication system. For example, bandwidth tests measure the maximum throughput of a computer
network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical
communication link is proportional to its bandwidth in hertz, which is sometimes called frequency
bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.
Network bandwidth consumption

Bandwidth in bit/s may also refer to consumed bandwidth, corresponding to achieved throughput or goodput,
i.e., the average rate of successful data transfer through a communication path. This sense applies to
concepts and technologies such as bandwidth shaping, bandwidth management, bandwidth
throttling, bandwidth cap, bandwidth allocation (for example bandwidth allocation protocol and dynamic
bandwidth allocation), etc. A bit stream's bandwidth is proportional to the average consumed signal
bandwidth in Hertz (the average spectral bandwidth of the analog signal representing the bit stream) during
a studied time interval.
Channel bandwidth may be confused with data throughput. A channel with x bps may not necessarily
transmit data at x rate, since protocols, encryption, and other factors can add appreciable overhead. For
instance, a lot of internet traffic uses the Transmission Control Protocol (TCP), which requires a three-way
handshake for each transaction; though efficient in many modern implementations, this still adds significant
overhead compared to simpler protocols. In general, for any effective digital communication, a framing
protocol is needed; overhead and effective throughput depend on implementation. Actual throughput is at
most the channel capacity, and implementation overhead reduces it further.
Asymptotic bandwidth

The asymptotic bandwidth for a network is the measure of useful throughput, when the packet size
approaches infinity.[6]
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the
network and measuring the end-to-end throughput. As with other bandwidths, the asymptotic bandwidth is
measured in multiples of bits per second.
Multimedia bandwidth

Digital bandwidth may also refer to multimedia bit rate or average bit rate after multimedia data
compression (source coding), defined as the total amount of data divided by the playback time.
Bandwidth in web hosting

In website hosting, the term "bandwidth" is often incorrectly used to describe the amount of data
transferred to or from the website or server within a prescribed period of time, for example bandwidth
consumption accumulated over a month measured in gigabytes per month. The more accurate phrase used
for this meaning of a maximum amount of data transfer each month or given period is monthly data transfer.
Internet connection bandwidths

This table shows the maximum bandwidth (the physical layer net bitrate) of common Internet access
technologies. For more detailed lists see
list of device bandwidths,
bit rate progress trends, and
list of multimedia bit rates.
56 kbit/s Modem / Dialup
1.5 Mbit/s ADSL Lite
1.544 Mbit/s T1/DS1
2.048 Mbit/s E1 / E-carrier
10 Mbit/s Ethernet
11 Mbit/s Wireless 802.11b
44.736 Mbit/s T3/DS3
54 Mbit/s Wireless 802.11g
100 Mbit/s Fast Ethernet
155 Mbit/s OC3
600 Mbit/s Wireless 802.11n
622 Mbit/s OC12
1 Gbit/s Gigabit Ethernet
2.5 Gbit/s OC48
9.6 Gbit/s OC192
10 Gbit/s 10 Gigabit Ethernet
100 Gbit/s 100 Gigabit Ethernet

Contents

1 Maximum throughput
o 1.1 Maximum theoretical throughput
o 1.2 Peak measured throughput
o 1.3 Maximum sustained throughput
2 Channel utilization - Channel efficiency - Normalized throughput
3 Factors affecting throughput
o 3.1 Analog limitations
o 3.2 IC hardware considerations
o 3.3 Multi-user considerations
4 Goodput and overhead
5 Other uses of throughput for data
o 5.1 Integrated Circuits
o 5.2 Wireless and cellular networks
o 5.3 Over analog channels
6 See also
7 Footnotes
8 References

Maximum throughput

See also: Peak Information Rate (PIR)


Users of telecommunications devices, systems designers, and researchers into communication theory
are often interested in knowing the expected performance of a system. From a user perspective, this is
often phrased as either "which device will get my data there most effectively for my needs?", or "which
device will deliver the most data per unit cost?". Systems designers are often interested in selecting the
most effective architecture or design constraints for a system, which drive its final performance. In
most cases, the benchmark of what a system is capable of, or its 'maximum performance' is what the
user or designer is interested in.
When examining throughput, the term 'Maximum Throughput' is frequently used where end-user
maximum throughput tests are discussed in detail.
Maximum throughput is essentially synonymous to digital bandwidth capacity.
Four different values have meaning in the context of "maximum throughput", used in
comparing the 'upper limit' conceptual performance of multiple systems.
They are 'maximum theoretical throughput', 'maximum achievable throughput', 'peak
measured throughput' and 'maximum sustained throughput'.
These represent different quantities and care must be taken that the same definitions are used when
comparing different 'maximum throughput' values. Comparing throughput values is also dependent on
each bit carrying the same amount of information.
Data compression can significantly skew throughput calculations, including generating
values greater than 100%. If the communication is mediated by several links in series with
different bit rates, the maximum throughput of the overall link is lower than or equal to the
lowest bit rate. The lowest value link in the series is referred to as the bottleneck.
Maximum theoretical throughput
This number is closely related to the channel capacity of the system, and is the maximum
possible quantity of data that can be transmitted under ideal circumstances.
In some cases this number is reported as equal to the channel capacity, though this can be
deceptive, as only non-packetized (asynchronous) technologies can achieve this
without data compression.
Maximum theoretical throughput is more accurately reported to take into account format and
specification overhead with best case assumptions. This number, like the closely related term
'maximum achievable throughput' below, is primarily used as a rough calculated value, such as
for determining bounds on possible performance early in a system design phase.
Peak measured throughput
The above values are theoretical or calculated values.
Peak measured throughput is throughput measured by a real, implemented system, or a simulated
system. The value is the throughput measured over a short period of time; mathematically, this is the
limit taken with respect to throughput as time approaches zero. This term is synonymous with
"instantaneous throughput". This number is useful for systems that rely on burst data transmission,
however, for systems with a high duty cycle this is less likely to be a useful measure of system
performance.
Maximum sustained throughput
This value is the throughput averaged or integrated over a long time (sometimes considered infinity).
For high duty cycle networks this is likely to be the most accurate indicator of system performance. The
maximum throughput is defined as the asymptotic throughput when the load (the amount of incoming
data) is very large. In packet switched systems where the load and the throughput always are equal
(where packet loss does not occur), the maximum throughput may be defined as the minimum load in
bit/s that causes the delivery time (the latency) to become unstable and increase towards infinity. This
value can also be used deceptively in relation to peak measured throughput to conceal packet shaping.
Channel utilization - Channel efficiency - Normalized throughput

Throughput is sometimes normalized and measured in percentage, but normalization may
cause confusion regarding what the percentage is related to. Channel utilization, channel
efficiency and packet drop rate in percentage are less ambiguous terms.
The channel efficiency, also known as bandwidth utilization efficiency, in percentage is
the achieved throughput related to the net bitrate in bit/s of a digital communication channel.
For example, if the throughput is 70 Mbit/s in a 100 Mbit/s Ethernet connection, the channel
efficiency is 70%. In this example, 70 Mbit of data are effectively transmitted every second.
Channel utilization is instead a term related to the use of the channel, disregarding the
throughput. It counts not only the data bits but also the overhead that makes use of the
channel.
The transmission overhead consists of
1. preamble sequences,
2. frame headers and
3. acknowledge packets.

The definitions assume a noiseless channel. Otherwise, the throughput would not be associated
only with the nature (efficiency) of the protocol but also with retransmissions resulting from the
quality of the channel.
In a simplistic approach, channel efficiency can be equal to channel utilization assuming that
acknowledge packets are zero-length and that the communications provider will not see any bandwidth
relative to retransmissions or headers. Therefore, certain texts mark a difference between channel
utilization and protocol efficiency.
In a point-to-point or point-to-multipoint communication link, where only one terminal is transmitting, the
maximum throughput is often equivalent to or very near the physical data rate (the channel capacity),
since the channel utilization can be almost 100% in such a network, except for a small inter-frame gap.

For example, in Ethernet the maximum frame size is 1526 bytes (maximum 1500 byte payload + 8 byte
preamble + 14 byte header + 4 byte trailer). An additional minimum interframe gap corresponding to 12
bytes is inserted after each frame. This corresponds to a maximum channel utilization of
1526/(1526+12) × 100% = 99.22%, or a maximum channel use of 99.22 Mbit/s inclusive of Ethernet
datalink layer protocol overhead in a 100 Mbit/s Ethernet connection. The maximum throughput or channel
efficiency is then 1500/(1526+12) × 100 Mbit/s = 97.5 Mbit/s exclusive of Ethernet protocol overhead.
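The Ethernet arithmetic above can be reproduced in a short Python sketch (the variable names are illustrative; the byte counts come from the text):

```python
# Ethernet framing overhead example
payload = 1500                    # maximum payload, bytes
frame = payload + 8 + 14 + 4      # + preamble, header, trailer = 1526 bytes
gap = 12                          # minimum interframe gap, bytes

utilization = frame / (frame + gap)   # fraction of the line carrying frames
efficiency = payload / (frame + gap)  # fraction carrying useful payload

print(f"channel utilization: {utilization:.2%}")            # ~99.22%
print(f"throughput on 100 Mbit/s: {efficiency * 100:.1f} Mbit/s")  # ~97.5
```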


Factors affecting throughput

The throughput of a communication system will be limited by a huge number of factors. Some of these
are described below:
Analog limitations
The maximum achievable throughput (the channel capacity) is affected by the bandwidth in hertz and
signal-to-noise ratio of the analog physical medium.
Despite the conceptual simplicity of digital information, all electrical signals traveling over wires are
analog. The analog limitations of wires or wireless systems inevitably provide an upper bound on the
amount of information that can be sent. The dominant equation here is the Shannon-Hartley theorem,
and analog limitations of this type can be understood as factors that affect either the analog bandwidth
of a signal or as factors that affect the signal to noise ratio. The bandwidth of wired systems can in
fact be surprisingly narrow, with the bandwidth of Ethernet wire limited to approximately 1 GHz, and
PCB traces limited by a similar amount.
Digital systems refer to the 'knee frequency',[2] related to the time required for the digital voltage to
rise from a nominal digital '0' to a nominal digital '1' (the 10% to 90% rise time) or vice versa. The knee
frequency is related to the required bandwidth of a channel, and can be related to the 3 dB bandwidth
of a system by the equation:
F_3dB = K / Tr [3]
Where Tr is the 10% to 90% rise time, and K is a constant of proportionality
related to the pulse shape, equal to 0.35 for exponential rise, and 0.338 for Gaussian rise.
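A minimal sketch of the rise-time relation, assuming the K values quoted above:

```python
def bandwidth_3db(rise_time_s, k=0.35):
    """Approximate 3 dB bandwidth from the 10%-90% rise time.

    k = 0.35 for an exponential (single-pole) edge, 0.338 for a Gaussian edge.
    """
    return k / rise_time_s

# e.g. a 1 ns rise time implies roughly 350 MHz of bandwidth
print(bandwidth_3db(1e-9))
```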


RC losses: wires have an inherent resistance, and an inherent capacitance when measured with
respect to ground. This leads to effects called parasitic capacitance, causing all wires and cables to
act as RC lowpass filters.

Skin effect: As frequency increases, electric charges migrate to the edges of wires or cable. This
reduces the effective cross sectional area available for carrying current, increasing resistance and
reducing the signal to noise ratio. For AWG 24 wire (of the type commonly found in Cat 5e cable),
the skin effect frequency becomes dominant over the inherent resistivity of the wire at 100 kHz. At
1 GHz the resistivity has increased to 0.1 ohms/inch.[4]

Termination and ringing: Long wires (wires longer than 1/6 wavelength can be considered long)
must be modeled as transmission lines with termination taken into account. Unless this is done,
reflected signals will travel back and forth across the wire, positively or negatively interfering with
the information-carrying signal.[5]

Wireless Channel Effects: For wireless systems, all of the effects associated with wireless
transmission limit the SNR and bandwidth of the received signal, and therefore the maximum
number of bits that can be sent.

IC hardware considerations


Computational systems have finite processing power, and can drive finite current. Limited current drive
capability can limit the effective signal to noise ratio for high capacitance links.
Large data loads that require processing impose data processing requirements on hardware (such as
routers). For example, a gateway router supporting a populated class B subnet, handling 10 x 100
Mbit/s Ethernet channels, must examine 16 bits of address to determine the destination port for each
packet. This translates into 81,913 packets per second (assuming maximum data payload per packet);
with a table of 2^16 addresses this requires the router to be able to perform 5.368 billion lookup
operations per second. In a worst-case scenario, where the payloads of each Ethernet packet are
reduced to 100 bytes, this number of operations per second jumps to 520 billion. Such a router would
require a multi-teraflop processing core to be able to handle such a load.

CSMA/CD and CSMA/CA "backoff" waiting time and frame retransmissions after detected
collisions. This may occur in Ethernet bus networks and hub networks, as well as in wireless
networks.
Flow control, for example in the Transmission Control Protocol (TCP), affects the
throughput if the bandwidth-delay product is larger than the TCP window, i.e. the buffer size. In that
case the sending computer must wait for acknowledgement of the data packets before it can send
more packets.
TCP congestion avoidance controls the data rate. So-called "slow start" occurs in the beginning of
a file transfer, and after packet drops caused by router congestion or bit errors in, for example,
wireless links.

Multi-user considerations
Ensuring that multiple users can harmoniously share a single communications link requires some kind
of equitable sharing of the link. If a bottleneck communication link offering data rate R is shared by N
active users (with at least one data packet in queue), every user typically achieves a throughput of
approximately R/N, if fair queuing best-effort communication is assumed.

Packet loss due to Network congestion. Packets may be dropped in switches and routers when the
packet queues are full due to congestion.
Packet loss due to bit errors.
Scheduling algorithms in routers and switches. If fair queuing is not provided, users that send large
packets will get higher bandwidth. Some users may be prioritized in a weighted fair queuing (WFQ)
algorithm if differentiated or guaranteed quality of service (QoS) is provided.
In some communications systems, such as satellite networks, only a finite number of channels may
be available to a given user at a given time. Channels are assigned either through preassignment
or through Demand Assigned Multiple Access (DAMA).[6] In these cases, throughput is quantized
per channel, and unused capacity on partially utilized channels is lost.

Goodput and overhead

Main article: Goodput


The maximum throughput is often an unreliable measurement of perceived bandwidth, for example the
file transmission data rate in bits per second. As pointed out above, the achieved throughput is often
lower than the maximum throughput. Also, the protocol overhead affects the perceived bandwidth.
The throughput is not a well-defined metric when it comes to how to deal with protocol overhead. It is
typically measured at a reference point below the network layer and above the physical layer.
The simplest definition is the number of bits per second that are physically delivered. A typical
example where this definition is practiced is an Ethernet network. In this case the maximum throughput
is the gross bit rate or raw bit rate.
However, in schemes that include forward error correction codes (channel coding), the redundant error
code is normally excluded from the throughput. An example is modem communication, where the
throughput is typically measured at the interface between the Point-to-Point Protocol (PPP) and the
circuit-switched modem connection. In this case the maximum throughput is often called net
bit rate or useful bit rate.
To determine the actual data rate of a network or connection, the "goodput" measurement definition
may be used. For example in file transmission, the "goodput" corresponds to the file size (in bits)
divided by the file transmission time.
The "goodput" is the amount of useful information that is delivered per second to the application
layer protocol. Dropped packets or packet retransmissions as well as protocol overhead are excluded.
Because of that, the "goodput" is lower than the throughput. Technical factors that affect the difference
are presented in the "goodput" article.
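The file-transmission definition of goodput above can be sketched as follows (the numbers are illustrative):

```python
def goodput(file_size_bits, transfer_time_s):
    """Goodput: useful application-layer bits delivered per second."""
    return file_size_bits / transfer_time_s

# e.g. an 80,000,000-bit (10 MB) file delivered in 10 seconds
print(goodput(80_000_000, 10.0))  # 8,000,000 bit/s
```

Because retransmissions and protocol headers consume transfer time without adding useful bits, this value is always at or below the measured throughput.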
Other uses of throughput for data

Integrated Circuits
Often, a block in a data flow diagram has a single input and a single output, and operates on discrete
packets of information. Examples of such blocks are FFT modules or binary multipliers. Because the
units of throughput are the reciprocal of the unit for propagation delay, which is 'seconds per message'
or 'seconds per output', throughput can be used to relate a computational device performing a
dedicated function such as an ASIC or embedded processor to a communications channel, simplifying
system analysis.
Wireless and cellular networks
In wireless networks or cellular systems, the system spectral efficiency in bit/s/Hz/area unit, bit/s/Hz/site
or bit/s/Hz/cell, is the maximum system throughput (aggregate throughput) divided by the analog
bandwidth and some measure of the system coverage area.
Over analog channels
Throughput over analog channels is defined entirely by the modulation scheme, the signal to noise
ratio, and the available bandwidth. Since throughput is normally defined in terms of quantified digital
data, the term 'throughput' is not normally used; the term 'bandwidth' is more often used instead.

Frame synchronization

From Wikipedia, the free encyclopedia



While receiving a stream of framed data, frame synchronization or framing is the process by which
incoming frame alignment signals (i.e., distinctive bit sequences or syncwords) are identified (that is,
distinguished from data bits), permitting the data bits within the frame to be extracted for
decoding or retransmission.
Contents

1 Framing
2 Frame synchronizer
o 2.1 Television
o 2.2 Telemetry
3 See also
4 References
o 4.1 Scientific articles
5 External links

Framing

If the transmission is temporarily interrupted, or a bit slip event occurs, the receiver must re-synchronize.

[Figure: Frame synchronized PCM stream telemetry application]

The transmitter and the receiver must agree ahead of time on which frame synchronization scheme they will
use.

Common frame synchronization schemes are:

Framing bit
A common practice in telecommunications, for example in T-carrier, is to insert, in a dedicated time
slot within the frame, a noninformation bit or framing bit that is used for synchronization of the
incoming data with the receiver. In a bit stream, framing bits indicate the beginning or end of a
frame. They occur at specified positions in the frame, do not carry information, and are usually
repetitive.

Syncword framing
Some systems use a special syncword at the beginning of every frame.

CRC-based framing
Some telecommunications hardware uses CRC-based framing.
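Syncword framing can be sketched in a few lines of Python; the syncword value, bit stream, and function name below are made up purely for illustration:

```python
# Illustrative syncword search over a bit stream represented as a '0'/'1' string.
SYNCWORD = "11101011"  # hypothetical 8-bit sync pattern

def find_frames(bitstream, frame_len):
    """Return the start indices of frames, assuming each frame begins with
    SYNCWORD and is frame_len bits long."""
    starts = []
    i = bitstream.find(SYNCWORD)
    while i != -1:
        starts.append(i)
        # Skip ahead one full frame and look for the next syncword
        i = bitstream.find(SYNCWORD, i + frame_len)
    return starts

# 4 junk bits, then two frames of syncword + 8 data bits each
stream = "0101" + SYNCWORD + "00110011" + SYNCWORD + "10101010"
print(find_frames(stream, len(SYNCWORD) + 8))  # [4, 20]
```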

Frame synchronizer

Television

Further information: Time base corrector


A frame synchronizer is a device used in live television production to match the timing of an
incoming video source to the timing of an existing video system. They are often used to "time
in" consumer video equipment to a professional system but can be used to stabilize any
video. The frame synchronizer essentially takes a picture of each frame of incoming video and
then immediately outputs it with the correct synchronization signals to match an existing video
system. A genlock signal is required to provide a means for video synchronizing with the
house reference.

Telemetry

[Figure: PCM stream prior to frame synchronization]

In telemetry applications, a frame synchronizer is used to frame align a serial pulse code
modulated (PCM) binary stream.

[Figure: Different types of commutation within a frame synchronized PCM stream]

The frame synchronizer immediately follows the bit synchronizer in most telemetry
applications. Without frame synchronization, decommutation is impossible.

[Figure: Frame synchronized PCM stream]

The frame synchronization pattern is a known binary pattern which repeats at a regular interval
within the PCM stream. The frame synchronizer recognizes this pattern and aligns the data
into minor frames or sub-frames. Typically the frame sync pattern is followed by a counter
(Sub-Frame ID) which dictates which minor or sub-frame in the series is being transmitted.
This becomes increasingly important in the decommutation stage where all data is deciphered
as to what attribute was sampled. Different commutations require a constant awareness of
which section of the major frame is being decoded.

In telecommunication, data signaling rate (DSR), also known as gross bit rate, is the aggregate rate at
which data pass a point in the transmission path of a data transmission system.
Notes:

1. The DSR is usually expressed in bits per second.

2. The data signaling rate is given by DSR = sum over i = 1..m of (1/Ti) log2 ni, where m is the number
of parallel channels, ni is the number of significant conditions of the modulation in the i-th channel,
and Ti is the unit interval, expressed in seconds, for the i-th channel.
3. For serial transmission in a single channel, the DSR reduces to (1/T) log2 n; with a two-condition
modulation, i.e. n = 2, the DSR is 1/T, according to Hartley's law.
4. For parallel transmission with equal unit intervals and equal numbers of significant conditions on
each channel, the DSR is (m/T)log2 n; in the case of a two-condition modulation, this reduces
to m/T.
5. The DSR may be expressed in bauds, in which case the factor log2 ni in the above summation
formula should be deleted when calculating bauds.
6. In synchronous binary signaling, the DSR in bits per second may be numerically the same as
the modulation rate expressed in bauds. Signal processors, such as four-phase modems, cannot
change the DSR, but the modulation rate depends on the line modulation scheme, in accordance
with Note 4. For example, in a 2400 bit/s 4-phase sending modem, the signaling rate is 2400 bit/s
on the serial input side, but the modulation rate is only 1200 bauds on the 4-phase output side.
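The summation in Note 2 can be sketched directly in Python (the function name and channel parameters are illustrative):

```python
from math import log2

def dsr(channels):
    """Data signaling rate: sum of (1/Ti) * log2(ni) over parallel channels.

    channels is a list of (Ti, ni) pairs: unit interval in seconds and
    number of significant conditions of modulation.
    """
    return sum((1 / t) * log2(n) for t, n in channels)

# Note 6's modem as a single channel: 1200 baud (T = 1/1200 s) with
# n = 4 significant conditions carries 2400 bit/s.
print(dsr([(1 / 1200, 4)]))  # 2400.0
```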

Contents

1 Maximum rate
2 Transmission Data Rate Terminology
3 Data Rate and Standard
4 See also
5 References

Maximum rate

The maximum user signaling rate, synonymous to gross bit rate or data signaling rate, is the maximum rate,
in bits per second, at which binary information can be transferred in a given direction between users over the
telecommunications system facilities dedicated to a particular information transfer transaction, under
conditions of continuous transmission and no overhead information.
For a single channel, the signaling rate is given by SCSR = (1/T) log2 n, where SCSR is the single-channel
signaling rate in bits per second, T is the minimum time interval in seconds for which each level must be
maintained, and n is the number of significant conditions of modulation of the channel.
In the case where an individual end-to-end telecommunications service is provided by parallel channels, the
parallel-channel signaling rate is given by PCSR = sum over i = 1..m of (1/Ti) log2 ni, where PCSR is the
total signaling rate for m channels, m is the number of parallel channels, Ti is the minimum interval between
significant instants for the i-th channel, and ni is the number of significant conditions of modulation for the
i-th channel.
In the case where an end-to-end telecommunications service is provided by tandem channels, the end-to-
end signaling rate is the lowest signaling rate among the component channels.
Transmission Data Rate Terminology

Data Rate                 Abbreviation  Lower       Upper
Extremely Low Data Rate   ELDR          300 bit/s   3 kbit/s
Very Low Data Rate        VLDR          3 kbit/s    30 kbit/s
Low Data Rate             LDR           30 kbit/s   300 kbit/s
Medium Data Rate          MDR           300 kbit/s  3 Mbit/s
High Data Rate            HDR           3 Mbit/s    30 Mbit/s
Very High Data Rate       VHDR          30 Mbit/s   300 Mbit/s
Ultra High Data Rate      UHDR          300 Mbit/s  3 Gbit/s
Super High Data Rate      SHDR          3 Gbit/s    30 Gbit/s
Extremely High Data Rate  EHDR          30 Gbit/s   300 Gbit/s

Based upon proposal from davisnetworks.com. 1 Mbit/s is defined as 1,000,000 bits per second signal data
rate (OSI Layer 1).
Data Rate and Standard

Data Rate        Standard
155 Mbit/s       OC-3
622 Mbit/s       OC-12
1063 Mbit/s      Fibre Channel
1250 Mbit/s      GbE
2125 Mbit/s      2x Fibre Channel
2488 Mbit/s      OC-48
2500 Mbit/s      2x GbE, InfiniBand
2666 Mbit/s      OC-48 (FEC)
3125 Mbit/s      10 GbE LX-4
4250 Mbit/s      4x Fibre Channel
9.953 Gbit/s     OC-192
10.3125 Gbit/s   10 GbE
10.51875 Gbit/s  10 G Fibre Channel
10.664 Gbit/s    OC-192 (FEC)
10.709 Gbit/s    OC-192 (ITU-T G.709)
11.100 Gbit/s    10 GbE FEC
11.300 Gbit/s    10 G Fibre Channel

Shannon–Hartley theorem


In information theory, the Shannon–Hartley theorem tells the maximum rate at which
information can be transmitted over a communications channel of a specified bandwidth
in the presence of noise. It is an application of the noisy channel coding theorem to the
archetypal case of a continuous-time analog communications channel subject to Gaussian
noise. The theorem establishes Shannon's channel capacity for such a communication link, a
bound on the maximum amount of error-free digital data (that is, information) that can be
transmitted with a specified bandwidth in the presence of the noise interference, assuming
that the signal power is bounded, and that the Gaussian noise process is characterized by a
known power or power spectral density. The law is named after Claude Shannon and Ralph
Hartley.

Contents

1 Statement of the theorem
2 Historical development
o 2.1 Nyquist rate
o 2.2 Hartley's law
o 2.3 Noisy channel coding theorem and capacity
o 2.4 Shannon–Hartley theorem
3 Implications of the theorem
o 3.1 Comparison of Shannon's capacity to Hartley's law
4 Alternative forms
o 4.1 Frequency-dependent (colored noise) case
o 4.2 Approximations
5 Examples
6 See also
7 Notes
8 References
9 External links

Statement of the theorem

Considering all possible multi-level and multi-phase encoding techniques, the Shannon–Hartley
theorem states the channel capacity C, meaning the theoretical tightest upper bound on the
information rate (excluding error-correcting codes) of clean data (or data with arbitrarily low bit
error rate) that can be sent with a given average signal power S through an analog communication
channel subject to additive white Gaussian noise of power N, is:

C = B log2(1 + S/N)

where
C is the channel capacity in bits per second;
B is the bandwidth of the channel in hertz (passband bandwidth in case of a modulated signal);
S is the average received signal power over the bandwidth (in case of a modulated signal, often
denoted C, i.e. modulated carrier), measured in watts (or volts squared);
N is the average noise or interference power over the bandwidth, measured in watts (or volts
squared); and
S/N is the signal-to-noise ratio (SNR) or the carrier-to-noise ratio (CNR) of the communication
signal to the Gaussian noise interference expressed as a linear power ratio (not as
logarithmic decibels).
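A minimal sketch of the capacity formula; the 3000 Hz / 30 dB numbers below are illustrative, roughly those of a voice-grade telephone channel:

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity C = B * log2(1 + S/N) in bit/s.

    snr_linear is S/N as a linear power ratio, not in decibels.
    """
    return bandwidth_hz * log2(1 + snr_linear)

# 3000 Hz of bandwidth with 30 dB SNR (S/N = 1000):
print(round(shannon_capacity(3000, 1000)))  # 29902 bit/s
```

Note that the SNR must be converted from decibels to a linear ratio (30 dB = 10^(30/10) = 1000) before applying the formula.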
Historical development

During the late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental
ideas related to the transmission of information, particularly in the context of the telegraph as
a communications system. At the time, these concepts were powerful breakthroughs
individually, but they were not part of a comprehensive theory. In the 1940s, Claude Shannon
developed the concept of channel capacity, based in part on the ideas of Nyquist and Hartley,
and then formulated a complete theory of information and its transmission.

Nyquist rate

Main article: Nyquist rate

In 1927, Nyquist determined that the number of independent pulses that could be put through
a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbols,

fp <= 2B

where fp is the pulse frequency (in pulses per second) and B is the bandwidth (in hertz).
The quantity 2B later came to be called the Nyquist rate, and transmitting at the limiting pulse
rate of 2B pulses per second as signalling at the Nyquist rate. Nyquist published his results in
1928 as part of his paper "Certain Topics in Telegraph Transmission Theory."

Hartley's law

During that same year, Hartley formulated a way to quantify information and its line rate (also
known as data signalling rate or gross bit rate inclusive of error-correcting code, R, across a
communications channel).[1] This method, later known as Hartley's law, became an important
precursor for Shannon's more sophisticated notion of channel capacity.

Hartley argued that


the maximum number of distinct pulses that can be transmitted

and received reliably over a communications channel is limited

by the dynamic range of the

signal amplitude and the

precision with which the receiver can distinguish amplitude levels.

Specifically, if the amplitude of the transmitted signal is restricted to the range of [−A ... +A] volts, and the precision of the receiver is ±ΔV volts, then the maximum number of distinct pulses M is given by

M = 1 + A/ΔV.

By taking the information per pulse in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley[2] constructed a measure of the line rate R as:

R = fp log2(M)

where fp is the pulse rate, also known as the symbol rate, in

symbols/second or baud.

Hartley then combined the above quantification with Nyquist's

observation that the number of independent pulses that could be

put through a channel of bandwidth B hertz was 2B pulses per

second, to arrive at his quantitative measure for achievable line

rate.
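Hartley's two ingredients can be combined in a few lines (a sketch; the amplitude range and receiver precision are made-up illustrative values):

```python
from math import log2

def hartley_line_rate(bandwidth_hz, amplitude, precision):
    """Achievable line rate R = 2B * log2(M), with M = 1 + A/deltaV
    distinguishable pulse levels (Hartley's law combined with the
    Nyquist pulse rate of 2B pulses per second)."""
    m_levels = 1 + amplitude / precision
    pulse_rate = 2 * bandwidth_hz          # Nyquist rate
    return pulse_rate * log2(m_levels)

# A 3 kHz channel, +/-1 V signal range, 0.25 V receiver precision -> M = 5:
r = hartley_line_rate(3000, 1.0, 0.25)
print(round(r))  # 13932 bit/s
```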

Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, B, in hertz and what today is called the digital bandwidth, R, in bit/s.[3] Other times it is quoted in this more quantitative form, as an achievable line rate of R bits per second:[4]

R ≤ 2B log2(M)

Hartley did not work out exactly how the number M should

depend on the noise statistics of the channel, or how the


communication could be made reliable even when individual

symbol pulses could not be reliably distinguished

to M levels; with Gaussian noise statistics, system

designers had to choose a very conservative value of M to

achieve a low error rate.

The concept of an error-free capacity awaited Claude

Shannon, who built on Hartley's observations about a

logarithmic measure of information and Nyquist's

observations about the effect of bandwidth limitations.

Hartley's rate result can be viewed as the capacity of an

errorless M-ary channel of 2B symbols per second. Some

authors refer to it as a capacity. But such an errorless

channel is an idealization, and the result is necessarily less

than the Shannon capacity of the noisy channel of bandwidth B, which is the Hartley–Shannon result that followed later.

Noisy channel coding theorem and capacity

Main article: noisy-channel coding theorem

Claude Shannon's development of information theory during

World War II provided the next big step in understanding

how much information could be reliably communicated

through noisy channels. Building on Hartley's foundation,

Shannon's noisy channel coding theorem (1948) describes

the maximum possible efficiency of error-correcting

methods versus levels of noise interference and data

corruption.[5][6] The proof of the theorem shows that a

randomly constructed error correcting code is essentially as

good as the best possible code; the theorem is proved

through the statistics of such random codes.

Shannon's theorem shows how to compute a channel

capacity from a statistical description of a channel, and

establishes that given a noisy channel with capacity C and

information transmitted at a line rate R, then if R < C

there exists a coding technique which allows the

probability of error at the receiver to be made arbitrarily

small. This means that theoretically, it is possible to

transmit information nearly without error up to nearly a

limit of C bits per second.

The converse is also important. If R > C,

the probability of error at the receiver increases

without bound as the rate is increased. So no


useful information can be transmitted beyond the

channel capacity. The theorem does not address

the rare situation in which rate and capacity are

equal.

Shannon–Hartley theorem

The Shannon–Hartley theorem establishes what

that channel capacity is for a finite-

bandwidth continuous-time channel subject to

Gaussian noise. It connects Hartley's result with

Shannon's channel capacity theorem in a form

that is equivalent to specifying the M in Hartley's

line rate formula in terms of a signal-to-noise ratio,

but achieving reliability through error-correction

coding rather than through reliably distinguishable

pulse levels.

If there were such a thing as an infinite-bandwidth,

noise-free analog channel, one could transmit

unlimited amounts of error-free data over it per

unit of time. Real channels, however, are subject

to limitations imposed by both finite bandwidth and

nonzero noise.
So how do bandwidth and noise affect the rate at

which information can be transmitted over an

analog channel?

Surprisingly, bandwidth limitations alone do not

impose a cap on maximum information rate. This

is because it is still possible for the signal to take

on an indefinitely large number of different voltage

levels on each symbol pulse, with each slightly

different level being assigned a different meaning

or bit sequence. If we combine both noise and

bandwidth limitations, however, we do find there is

a limit to the amount of information that can be

transferred by a signal of a bounded power, even

when clever multi-level encoding techniques are

used.

In the channel considered by the Shannon-Hartley

theorem, noise and signal are combined by

addition. That is, the receiver measures a signal

that is equal to the sum of the signal encoding the

desired information and a continuous random

variable that represents the noise. This addition

creates uncertainty as to the original signal's

value. If the receiver has some information about

the random process that generates the noise, one

can in principle recover the information in the

original signal by considering all possible states of

the noise process. In the case of the Shannon-

Hartley theorem, the noise is assumed to be

generated by a Gaussian process with a known

variance. Since the variance of a Gaussian

process is equivalent to its power, it is

conventional to call this variance the noise power.

Such a channel is called the Additive White

Gaussian Noise channel, because Gaussian noise

is added to the signal; "white" means equal


amounts of noise at all frequencies within the

channel bandwidth. Such noise can arise both

from random sources of energy and also from

coding and measurement error at the sender and

receiver respectively. Since sums of independent

Gaussian random variables are themselves

Gaussian random variables, this conveniently

simplifies analysis, if one assumes that such error

sources are also Gaussian and independent.

Implications of the theorem

Comparison of Shannon's capacity to Hartley's law

Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels M:[7]

M = sqrt(1 + S/N)

The square root effectively converts the

power ratio back to a voltage ratio, so

the number of levels is approximately

proportional to the ratio of RMS signal amplitude to noise standard deviation.

This similarity in form between

Shannon's capacity and Hartley's law

should not be interpreted to mean

that M pulse levels can be literally sent

without any confusion; more levels are

needed, to allow for redundant coding

and error correction, but the net data

rate that can be approached with coding


is equivalent to using that M in Hartley's

law.
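The effective number of levels M = sqrt(1 + S/N) is easy to compute (a sketch; the SNR value is illustrative):

```python
from math import sqrt

def effective_levels(snr_linear):
    """Effective number of distinguishable levels implied by comparing
    Shannon's capacity C = B log2(1 + S/N) with Hartley's law
    R = 2B log2(M): M = sqrt(1 + S/N)."""
    return sqrt(1 + snr_linear)

# At 20 dB SNR (linear ratio 100), roughly 10 levels are usable after coding:
print(round(effective_levels(100), 2))  # 10.05
```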

Alternative forms

Frequency-dependent (colored noise) case

In the simple version above, the signal

and noise are fully uncorrelated, in which

case S + N is the total power of the

received signal and noise together. A

generalization of the above equation for

the case where the additive noise is not

white (or that the S/N is not constant

with frequency over the bandwidth) is

obtained by treating the channel as

many narrow, independent Gaussian

channels in parallel:

C = ∫0^B log2(1 + S(f)/N(f)) df

where

C is the channel capacity in bits per second;

B is the bandwidth of the channel in Hz;

S(f) is the signal power spectrum

N(f) is the noise power spectrum

f is frequency in Hz.
Note: the theorem only applies to Gaussian stationary process noise. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider a noise process consisting of adding a random wave whose amplitude is 1 or −1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such a noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise was a sum of independent noises in each frequency band.
Approximations

For large or small and constant signal-to-noise ratios, the capacity formula can be approximated:

If S/N >> 1, then

C ≈ B log2(S/N)

Similarly, if S/N << 1, then

C ≈ 1.44 B (S/N)

In this low-SNR approximation, capacity is independent of bandwidth if the noise is white, of spectral density N0 watts per hertz, in which case the total noise power is N = B N0 and the capacity approaches C ≈ 1.44 S/N0.
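Both approximations can be checked against the exact formula (a sketch; the test values are arbitrary):

```python
from math import log2, log

def exact(b, snr):
    # Exact Shannon-Hartley capacity
    return b * log2(1 + snr)

def high_snr(b, snr):
    # C ~ B log2(S/N) for S/N >> 1
    return b * log2(snr)

def low_snr(b, snr):
    # C ~ 1.44 B (S/N) for S/N << 1, since 1/ln 2 ~ 1.443
    return b * snr / log(2)

b = 1000.0
print(exact(b, 1000), high_snr(b, 1000))   # nearly equal at high SNR
print(exact(b, 0.01), low_snr(b, 0.01))    # nearly equal at low SNR
```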


Examples

If the SNR is 20 dB, and the bandwidth available is 4 kHz, which is appropriate for telephone communications, then C = 4 log2(1 + 100) = 4 log2(101) = 26.63 kbit/s. Note that the value of S/N = 100 is equivalent to an SNR of 20 dB.

If the requirement is to transmit at 50 kbit/s, and a bandwidth of 1 MHz is used, then the minimum S/N required is given by 50 = 1000 log2(1 + S/N), so S/N = 2^(C/B) − 1 = 0.035, corresponding to an SNR of −14.5 dB (10 × log10(0.035)).

Let's take the example of W-CDMA (Wideband Code Division Multiple Access): the bandwidth is 5 MHz and the goal is to carry 12.2 kbit/s of data (AMR voice). The required S/N is then given by 2^(12.2/5000) − 1 ≈ 0.0017, corresponding to an SNR of −27.7 dB for a single channel. This shows that it is possible to transmit using signals which are actually much weaker than the background noise level, as in spread-spectrum communications. However, in W-CDMA the required SNR will vary based on design calculations.

As stated above, channel capacity is proportional to the bandwidth of the channel and to the logarithm of SNR. This means channel capacity can be increased linearly either by increasing the channel's bandwidth given a fixed SNR requirement or, with fixed bandwidth, by using higher-order modulations that need a very high SNR to operate. As the modulation rate increases, the spectral efficiency improves, but at the cost of the SNR requirement. Thus, there is an exponential rise in the SNR requirement if one adopts 16-QAM or 64-QAM (see: Quadrature amplitude modulation); however, the spectral efficiency improves.
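The worked examples above can be reproduced directly (a sketch in Python; the numbers match those in the text):

```python
from math import log2, log10

# Telephone channel: 4 kHz bandwidth, 20 dB SNR (linear ratio 100).
# With B expressed in kHz the result is in kbit/s.
c_phone = 4 * log2(1 + 100)
print(round(c_phone, 2))  # 26.63 kbit/s

# Inverse problem: 50 kbit/s over 1 MHz -> minimum S/N
snr_min = 2 ** (50 / 1000) - 1
print(round(snr_min, 3), round(10 * log10(snr_min), 1))  # 0.035, -14.5 dB

# W-CDMA voice: 12.2 kbit/s over 5 MHz
snr_wcdma = 2 ** (12.2 / 5000) - 1
print(round(10 * log10(snr_wcdma), 1))  # -27.7 dB
```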

In computer networking and computer science, the words bandwidth,[1] network bandwidth,[2] data
bandwidth,[3] or digital bandwidth[4][5] are terms used to refer to various bit-rate measures, representing the
available or consumed data communication resources expressed in bits per second or multiples of it (bit/s,
kbit/s, Mbit/s, Gbit/s, etc.).
Note that in textbooks on signal processing, wireless communications, modem data transmission, digital
communications, electronics, etc., the word 'bandwidth' is used to refer to analog signal
bandwidth measured in hertz. The connection is that according to Hartley's law, the digital data rate limit
(or channel capacity) of a physical communication link is proportional to its bandwidth in hertz.

Contents

1 Network bandwidth capacity

2 Network bandwidth

consumption
3 Asymptotic bandwidth

4 Multimedia bandwidth

5 Bandwidth in web hosting

6 Internet connection

bandwidths
7 See also

8 References
Network bandwidth capacity

Bandwidth sometimes defines the net bit rate (aka. peak bit rate, information rate or physical layer useful bit
rate), channel capacity, or the maximum throughput of a logical or physical communication path in a digital
communication system. For example, bandwidth tests measure the maximum throughput of a computer
network. The reason for this usage is that according to Hartley's law, the maximum data rate of a physical
communication link is proportional to its bandwidth in hertz, which is sometimes called frequency
bandwidth, spectral bandwidth, RF bandwidth, signal bandwidth or analog bandwidth.
Network bandwidth consumption

Bandwidth in bit/s may also refer to consumed bandwidth, corresponding to achieved throughput or goodput,
i.e., the average rate of successful data transfer through a communication path. This sense applies to
concepts and technologies such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap, bandwidth allocation (for example bandwidth allocation protocol and dynamic bandwidth allocation), etc. A bit stream's bandwidth is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during
a studied time interval.
Channel bandwidth may be confused with data throughput. A channel with x bps may not necessarily
transmit data at x rate, since protocols, encryption, and other factors can add appreciable overhead. For
instance, a lot of internet traffic uses the transmission control protocol (TCP) which requires a three-way
handshake for each transaction, which, though in many modern implementations is efficient, does add
significant overhead compared to simpler protocols. In general, for any effective digital communication, a framing protocol is needed; overhead and effective throughput depend on implementation. Actual throughput is less than or equal to the channel capacity minus the implementation overhead.
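As a rough illustration of protocol overhead, effective throughput can be estimated from the payload fraction of each frame (a sketch with made-up but typical TCP/IPv4 numbers: 1460-byte payload, 40 bytes of headers):

```python
def effective_throughput(link_rate_bps, payload_bytes, overhead_bytes):
    """Approximate goodput: the link rate scaled by the fraction of each
    packet that is payload rather than header overhead."""
    total = payload_bytes + overhead_bytes
    return link_rate_bps * payload_bytes / total

# 100 Mbit/s link, 1460-byte TCP payload, 40 bytes of TCP/IP headers:
print(effective_throughput(100e6, 1460, 40) / 1e6)  # ~97.3 Mbit/s
```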
Asymptotic bandwidth

The asymptotic bandwidth for a network is the measure of useful throughput, when the packet size
approaches infinity.[6]
Asymptotic bandwidths are usually estimated by sending a number of very large messages through the
network, measuring the end-to-end throughput. As with other bandwidths, the asymptotic bandwidth is measured
in multiples of bits per second.
Multimedia bandwidth

Digital bandwidth may also refer to: multimedia bit rate or average bitrate after multimedia data
compression (source coding), defined as the total amount of data divided by the playback time.
Bandwidth in web hosting

In website hosting, the term "bandwidth" is often[citation needed] incorrectly used to describe the amount of data
transferred to or from the website or server within a prescribed period of time, for example bandwidth
consumption accumulated over a month measured in gigabytes per month. The more accurate phrase used
for this meaning of a maximum amount of data transfer each month or given period is monthly data transfer.
Internet connection bandwidths

This table shows the maximum bandwidth (the physical layer net bitrate) of common Internet access
technologies. For more detailed lists see

list of device bandwidths


bit rate progress trends
list of multimedia bit rates.

56 kbit/s Modem / Dialup

1.5 Mbit/s ADSL Lite

1.544 Mbit/s T1/DS1

2.048 Mbit/s E1 / E-carrier

10 Mbit/s Ethernet

11 Mbit/s Wireless 802.11b

44.736 Mbit/s T3/DS3

54 Mbit/s Wireless 802.11g

100 Mbit/s Fast Ethernet

155 Mbit/s OC3

600 Mbit/s Wireless 802.11n


622 Mbit/s OC12

1 Gbit/s Gigabit Ethernet

2.5 Gbit/s OC48

9.6 Gbit/s OC192

10 Gbit/s 10 Gigabit Ethernet

100 Gbit/s 100 Gigabit Ethernet

Channel capacity

From Wikipedia, the free encyclopedia

In electrical engineering, computer science and information theory, channel capacity is the tightest upper

bound on the amount of information that can be reliably transmitted over a communications channel. By

the noisy-channel coding theorem, the channel capacity of a given channel is the limiting information rate (in

units of information per unit time) that can be achieved with arbitrarily small error probability.[1] [2]

Information theory, developed by Claude E. Shannon during World War II, defines the notion of channel

capacity and provides a mathematical model by which one can compute it. The key result states that the

capacity of the channel, as defined above, is given by the maximum of the mutual information between the

input and output of the channel, where the maximization is with respect to the input distribution.[3]

Contents

1 Formal definition

2 Noisy-channel coding theorem

3 Example application

4 Channel capacity in wireless communications


o 4.1 AWGN channel
o 4.2 Frequency-selective channel

o 4.3 Slow-fading channel

o 4.4 Fast-fading channel

5 See also
o 5.1 Advanced Communication Topics

6 References

Formal definition

Let X represent the space of signals that can be transmitted, and Y the space of signals received, during a

block of time over the channel. Let

p_{Y|X}(y|x)

be the conditional distribution function of Y given X. Treating the channel as a known statistical system, p_{Y|X}(y|x) is an inherent fixed property of the communications channel (representing the nature of the noise in it). Then the joint distribution

p_{X,Y}(x,y)

of X and Y is completely determined by the channel and by the choice of

p_X(x),

the marginal distribution of signals we choose to send over the channel. The joint distribution can be recovered by using the identity

p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x)

Under these constraints, next maximize the amount of information, or the message, that one can communicate over the channel. The appropriate measure for this is the mutual information I(X;Y), and this maximum mutual information is called the channel capacity and is given by

C = sup_{p_X} I(X;Y)

Noisy-channel coding theorem

The noisy-channel coding theorem states that for any ε > 0 and for any rate R less than the channel capacity C, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for a sufficiently long code. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.

Example application

An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem:

C = B log2(1 + S/N)

C is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming B is in hertz; the signal and noise powers S and N are measured in watts or volts², so the signal-to-noise ratio here is expressed as a power ratio, not in decibels (dB); since figures are often cited in dB, a conversion may be needed. For example, 30 dB is a power ratio of 10^(30/10) = 1000.

Channel capacity in wireless communications

This section[4] focuses on the single-antenna, point-to-point scenario. For channel capacity in systems with multiple antennas, see the article on MIMO.

AWGN channel

If the average received power is P [W] and the noise power spectral density is N0 [W/Hz], the AWGN channel capacity is

C = B log2(1 + P/(N0 B)) [bits/s],

where P/(N0 B) is the received signal-to-noise ratio (SNR).

When the SNR is large (SNR >> 0 dB), the

capacity is logarithmic in power and


approximately linear in bandwidth. This is called the bandwidth-limited

regime.

When the SNR is small (SNR << 0 dB), the

capacity is linear in power but insensitive to

bandwidth. This is called the power-limited regime.

The bandwidth-limited regime and power-limited regime are illustrated in

the figure.

[Figure: AWGN channel capacity with the power-limited regime and bandwidth-limited regime indicated.]
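The two regimes can be seen by sweeping the bandwidth at fixed received power (a sketch; the power and noise-density values are arbitrary):

```python
from math import log2, log

def awgn_capacity(b_hz, power_w, n0_w_per_hz):
    """C = B log2(1 + P/(N0*B)) for the AWGN channel."""
    return b_hz * log2(1 + power_w / (n0_w_per_hz * b_hz))

P, N0 = 1.0, 1e-3
# Capacity grows with bandwidth but saturates at the power-limited
# ceiling P/(N0 ln 2) as B -> infinity:
ceiling = P / (N0 * log(2))
for b in (1e3, 1e4, 1e5, 1e6):
    print(b, round(awgn_capacity(b, P, N0)))
print(round(ceiling))  # 1443 bits/s
```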

Frequency-selective channel

The capacity of the frequency-selective channel is given by so-called water-filling power allocation,

C = Σn log2(1 + Pn |hn|² / N0),

where Pn = max(1/λ − N0/|hn|², 0), |hn|² is the gain of subchannel n, and λ is chosen to meet the power constraint.
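Water-filling can be sketched with a simple bisection on the water level (assumptions: the subchannel gains and the total power budget are illustrative, and N0 is folded into the gains):

```python
from math import log2

def waterfill(gains, total_power, iters=100):
    """Water-filling over parallel subchannels with effective gains g_n
    (i.e. |h_n|^2 / N0).  Allocates p_n = max(mu - 1/g_n, 0) with the
    water level mu chosen by bisection so the powers sum to total_power;
    returns the powers and the capacity sum(log2(1 + p_n g_n))."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    for _ in range(iters):                     # bisection on mu
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    powers = [max(mu - 1.0 / g, 0.0) for g in gains]
    cap = sum(log2(1 + p * g) for p, g in zip(powers, gains))
    return powers, cap

# Two equal subchannels share the power evenly:
powers, cap = waterfill([1.0, 1.0], 2.0)
print(powers, cap)  # powers ~[1.0, 1.0], capacity ~2 bits/s/Hz
```

A stronger subchannel receives more power, which is the defining behaviour of water-filling.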

Slow-fading channel

In a slow-fading channel, where the coherence time is greater than the latency requirement, there is no definite capacity, as the maximum rate of reliable communication supported by the channel, log2(1 + |h|² SNR), depends on the random channel gain |h|². If the transmitter encodes data at rate R [bits/s/Hz], there is a certain probability that the decoding error probability cannot be made arbitrarily small,

p_out = P(log2(1 + |h|² SNR) < R),

in which case the system is said to be in outage. With a non-zero probability that the channel is in deep fade, the capacity of the slow-fading channel in the strict sense is zero. However, it is possible to determine the largest value of R such that the outage probability p_out is less than ε. This value is known as the ε-outage capacity.
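The outage probability can be estimated by Monte Carlo and checked against a closed form, under the standard Rayleigh-fading assumption (|h|² exponentially distributed, an illustrative assumption not stated in the text):

```python
import random
from math import exp

def outage_probability(rate_bpshz, snr_linear, trials=100_000, seed=0):
    """Monte Carlo estimate of p_out = P(log2(1 + |h|^2 SNR) < R) under
    Rayleigh fading, where the channel power gain |h|^2 ~ Exponential(1).
    The outage event is equivalent to |h|^2 < (2^R - 1)/SNR."""
    rng = random.Random(seed)
    threshold = (2 ** rate_bpshz - 1) / snr_linear
    fails = sum(rng.expovariate(1.0) < threshold for _ in range(trials))
    return fails / trials

R, snr = 1.0, 10.0
sim = outage_probability(R, snr)
analytic = 1 - exp(-(2 ** R - 1) / snr)   # closed form for Exp(1) gain
print(round(sim, 3), round(analytic, 3))  # analytic value ~0.095
```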

Fast-fading channel

In a fast-fading channel, where the latency requirement is greater than the coherence time and the codeword length spans many coherence periods, one can average over many independent channel fades by coding over a large number of coherence time intervals. Thus, it is possible to achieve a reliable rate of communication of

E[log2(1 + |h|² SNR)] [bits/s/Hz],

and it is meaningful to speak of this value as the capacity of the fast-fading channel.
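This ergodic capacity can likewise be estimated by averaging over random fades (Rayleigh fading assumed for illustration); by Jensen's inequality it stays below the non-fading AWGN capacity at the same average SNR:

```python
import random
from math import log2

def ergodic_capacity(snr_linear, trials=100_000, seed=1):
    """Monte Carlo estimate of E[log2(1 + |h|^2 SNR)] with the channel
    power gain |h|^2 ~ Exponential(1) (Rayleigh fading)."""
    rng = random.Random(seed)
    total = sum(log2(1 + rng.expovariate(1.0) * snr_linear)
                for _ in range(trials))
    return total / trials

snr = 10.0
erg = ergodic_capacity(snr)
awgn = log2(1 + snr)           # non-fading capacity at the same SNR
print(round(erg, 2), round(awgn, 2))  # ergodic capacity is the lower value
```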
