Short Notes On The Technical Terms Used


1. CONTINUOUS PILOT & SCATTERED PILOT

Continuous pilots are pilots that occur at the same frequency location in every OFDM symbol.

A receiver recovers data from Orthogonal Frequency Division Multiplexed (OFDM) symbols, each of which comprises a plurality of sub-carrier signals. Some of the sub-carrier signals carry data symbols and some carry pilot symbols; the pilot symbols comprise scattered pilot symbols and continuous pilot symbols. The continuous pilot symbols are distributed across the sub-carrier signals in accordance with a continuous pilot symbol pattern, and the scattered pilot symbols are distributed across the sub-carrier signals in accordance with a scattered pilot symbol pattern.

All active subcarriers, with the exception of pilots, are transmitted with the same average power. Pilots are transmitted boosted by a factor of 2 in amplitude (approximately 6 dB).
Scattered pilots do not occur at the same frequency in every symbol; in some cases scattered
pilots will overlap with continuous pilots. If a scattered pilot overlaps with a continuous pilot,
then that pilot is no longer considered to be a scattered pilot. It is treated as a continuous pilot.
Because the locations of scattered pilots change from one OFDM symbol to another, the
number of overlapping continuous and scattered pilots changes from symbol to symbol. Since
overlapping pilots are treated as continuous pilots, the number of scattered pilots changes from
symbol to symbol.
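
The following is a minimal Python sketch of the overlap rule described above; the subcarrier count, continuous pilot locations, scattered pilot spacing, and per-symbol shift are made-up illustrative values, not parameters from any standard:

# Scattered pilots that land on continuous pilot locations are re-classified as
# continuous pilots, so the scattered pilot count varies from symbol to symbol.
NUM_SUBCARRIERS = 64                        # hypothetical number of active subcarriers
CONTINUOUS_PILOTS = {0, 10, 21, 42, 63}     # hypothetical fixed (continuous) pilot locations
SCATTERED_SPACING = 8                       # hypothetical spacing of the scattered pattern
SCATTERED_SHIFT = 2                         # hypothetical shift of the pattern per symbol

def scattered_pilot_locations(symbol_index):
    """Scattered pilot locations for one OFDM symbol, before overlap removal."""
    offset = (symbol_index * SCATTERED_SHIFT) % SCATTERED_SPACING
    return set(range(offset, NUM_SUBCARRIERS, SCATTERED_SPACING))

for sym in range(4):
    candidates = scattered_pilot_locations(sym)
    effective = candidates - CONTINUOUS_PILOTS   # overlaps are treated as continuous pilots
    print(f"symbol {sym}: {len(candidates)} candidate scattered pilots, "
          f"{len(effective)} counted as scattered after removing overlaps")
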
http://www.ieee802.org/3/bn/public/nov13/zhang_3bn_03_1113.pdf

https://patentscope.wipo.int/search/en/detail.jsf?docId=WO2014140520

http://www.ijsr.net/archive/v3i5/MDIwMTMxOTg3.pdf

2. ENERGY SPREADING

A satellite communications system for dispersing energy over a wide bandwidth includes a
transmitter, a communication link, and a receiver. The transmitter takes a digital data signal and
modulates that signal at a prescribed carrier frequency. The modulated digital data signal is
then spread over M adjacent digital channels (where M ≥ 2 and is an integer multiple of 2), each
channel containing the same information, to disperse the energy over a wide frequency range.
The spectral bandwidth of the adjacent digital channels is chosen with compressed spacing to
conserve bandwidth. Next, the spread modulated data signal is transmitted via the
communication link to the receiver. In particular, a waveform generator at the transmitter
generates a phase-aligned multichannel frequency diversity waveform according to a data clock
at a predetermined phase relationship to the digital data.

At the receiver, the spread modulated data signal received is mixed with a de-spreading
waveform generated in a similar manner to the waveform spectrum generated at the
transmitter to recover the modulated data signal. The de-spreading waveform is generated
according to a symbol clock signal recovered from the received modulated data signal. A
demodulator recovers the original digital data from the modulated data signal. To achieve
higher spreading factors, multichannel frequency diversity may be utilized with known spread
spectrum techniques to achieve high data recovery rates during adverse weather (fading)
conditions at high radio frequencies in the microwave and higher regions of the radio spectrum.

Method and apparatus for providing energy dispersal using frequency diversity in a satellite
communications system

http://www.freepatentsonline.com/5454009.html
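
The following is a minimal, self-contained sketch of the frequency-diversity idea (repeating the same modulated data on M adjacent channels so the energy is dispersed over a wider bandwidth), not the patented method itself; the sample rate, channel spacing, number of channels, and BPSK modulation are all made-up illustrative choices:

import numpy as np

fs = 100_000.0                 # sample rate in Hz (hypothetical)
M = 4                          # number of adjacent channels carrying the same data
spacing = 5_000.0              # channel spacing in Hz (hypothetical)
t = np.arange(0, 0.01, 1 / fs)

# Toy BPSK data signal: 50 symbols of +/-1, each 20 samples long.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=50) * 2 - 1
samples_per_symbol = len(t) // len(bits)
baseband = np.repeat(bits, samples_per_symbol).astype(float)

# Spread: place a copy of the same information on M adjacent carriers.
carriers = spacing * (np.arange(M) + 1)
spread = sum(baseband * np.cos(2 * np.pi * f0 * t) for f0 in carriers) / M

# De-spread: mix with a waveform generated the same way, then average per symbol
# (the averaging acts as a crude low-pass filter that removes the carrier products).
despread = spread * sum(np.cos(2 * np.pi * f0 * t) for f0 in carriers)
recovered = np.sign(despread.reshape(len(bits), samples_per_symbol).mean(axis=1))
print(np.array_equal(recovered.astype(int), bits))   # True: the data is recovered
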

Why an Energy Dispersal Method is Needed


It is clear from studies of frequency sharing between the fixed-satellite service and terrestrial
radio-relay systems and between different fixed-satellite networks that, to ensure that mutual
interference between the systems is kept to a tolerable level, it will be essential in most cases
to use energy dispersal techniques to reduce the spectral energy density of the transmissions of
the fixed-satellite service during periods of light loading. The reduction of the maximum energy
density will also facilitate:

– efficient use of the geostationary-satellite orbit by minimizing the orbital separation needed between satellites using the same frequency band; and
– multiple-carrier operation of broadband transponders.
The amount of energy dispersal required obviously depends on the characteristics of the
systems in each particular case. It is clear, however, that it is desirable that the maximum
energy density under light loading conditions should be kept as close as possible to the value
corresponding to the conditions of busy hour loading.

3. MPEG

MPEG-2 (also known as H.222/H.262 as defined by the ITU, and standardized as ISO/IEC 13818) is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. MPEG-2 is not as efficient as newer standards such as H.264 (MPEG-4 Part 10, Advanced Video Coding, also known as MPEG-4 AVC) and H.265/HEVC.

A note on HEVC: High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video
compression standard, one of several potential successors to the widely used AVC (H.264 or
MPEG-4 Part 10). In comparison to AVC, HEVC offers about double the data compression ratio
at the same level of video quality, or substantially improved video quality at the same bit rate.
It supports resolutions up to 8192×4320, including 8K UHD.

In most ways, HEVC is an extension of the concepts in H.264/MPEG-4 AVC. Both work by comparing different parts of a video frame to find areas that are redundant, both within a single frame and across subsequent frames. These redundant areas are then replaced with a short description instead of the original pixels. The primary changes for HEVC include the expansion of the pattern comparison and difference-coding areas from 16×16 pixels to sizes up
to 64×64, improved variable-block-size segmentation, improved "intra" prediction within the
same picture, improved motion vector prediction and motion region merging, improved motion
compensation filtering, and an additional filtering step called sample-adaptive offset filtering.
Effective use of these improvements requires much more signal processing capability for
compressing the video, but has less impact on the amount of computation needed for
decompression.

Despite this, backwards compatibility with existing hardware and software means that MPEG-2 is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.
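
As a rough illustration of the block-size change described in the HEVC note above, the following arithmetic (a simple tiling count with made-up frame dimensions, not an encoder's actual adaptive partitioning) compares how many 16×16 macroblocks versus 64×64 coding tree units cover a 1920×1080 frame:

# Count how many largest-size blocks tile one frame.
width, height = 1920, 1080

macroblocks_16 = (width // 16) * ((height + 15) // 16)   # H.264-style 16x16 blocks
ctus_64 = ((width + 63) // 64) * ((height + 63) // 64)   # HEVC-style 64x64 blocks

print(macroblocks_16)   # 8160
print(ctus_64)          # 510
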

https://en.wikipedia.org/wiki/MPEG-2

https://en.wikipedia.org/wiki/H.264/MPEG-4_AVC

https://en.wikipedia.org/wiki/High_Efficiency_Video_Coding

3.1. Lossy vs Lossless Compression


3.1.1. LOSSLESS

Lossless compression is a class of data compression algorithms that allows the original data to
be perfectly reconstructed from the compressed data. By contrast, lossy compression permits
reconstruction only of an approximation of the original data, though this usually improves
compression rates (and therefore reduces file sizes).

Lossless data compression is used in many applications. For example, it is used in the ZIP file
format and in the GNU tool gzip.

Lossless compression is used in cases where it is important that the original and the
decompressed data be identical, or where deviations from the original data could be
deleterious. Typical examples are executable programs, text documents, and source code.
Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF
and MNG may use either lossless or lossy methods. Lossless audio formats are most often used
for archiving or production purposes, while smaller lossy audio files are typically used on
portable players and in other cases where storage space is limited or exact replication of the
audio is unnecessary.

Any lossless compression algorithm that makes some files shorter must necessarily make some
files longer, but it is not necessary that those files become very much longer. Most practical
compression algorithms provide an "escape" facility that can turn off the normal coding for files
that would become longer by being encoded. In theory, only a single additional bit is required
to tell the decoder that the normal coding has been turned off for the entire input; however,
most encoding algorithms use at least one full byte (and typically more than one) for this
purpose.
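
A minimal sketch of such an escape mechanism (the one-byte flag format here is an assumption for illustration, not any particular file format; it uses Python's standard zlib module):

import os
import zlib

def pack(data: bytes) -> bytes:
    """Prepend a one-byte flag: 0x01 = compressed payload, 0x00 = stored as-is."""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return b"\x01" + compressed
    return b"\x00" + data              # escape: normal coding turned off, 1 byte overhead

def unpack(blob: bytes) -> bytes:
    flag, payload = blob[:1], blob[1:]
    return zlib.decompress(payload) if flag == b"\x01" else payload

text = b"abab" * 1000                  # highly compressible input
noise = os.urandom(1024)               # incompressible input, likely triggers the escape
assert unpack(pack(text)) == text and unpack(pack(noise)) == noise
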

https://en.wikipedia.org/wiki/Lossless_compression

3.1.2. LOSSY

In information technology, lossy compression or irreversible compression is the class of data encoding methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation produce coarser images as more detail is removed. This is opposed to lossless data compression (reversible data compression), which does not degrade the data.
The amount of data reduction possible using lossy compression is often much higher than
through lossless techniques.

Lossy compression is most commonly used to compress multimedia data (audio, video, and
images), especially in applications such as streaming media and internet telephony. By contrast,
lossless compression is typically required for text and data files, such as bank records and text
articles.

It is possible to compress many types of digital data in a way that reduces the size of a
computer file needed to store it, or the bandwidth needed to transmit it, with no loss of the full
information contained in the original file. A picture, for example, is converted to a digital file by
considering it to be an array of dots and specifying the color and brightness of each dot. If the
picture contains an area of the same color, it can be compressed without loss by saying "200
red dots" instead of "red dot, red dot, ...(197 more times)..., red dot."

In many cases, files or data streams contain more information than is needed for a particular
purpose. For example, a picture may have more detail than the eye can distinguish when
reproduced at the largest size intended; likewise, an audio file does not need a lot of fine detail
during a very loud passage. Developing lossy compression techniques as closely matched to
human perception as possible is a complex task. Sometimes the ideal is a file that provides
exactly the same perception as the original, with as much digital information as possible
removed; other times, perceptible loss of quality is considered a valid trade-off for the reduced
data.

The compression ratio (that is, the size of the compressed file compared to that of the
uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and
still-image equivalents.

– Video can be compressed immensely (e.g. 100:1, as in the rough calculation after this list) with little visible quality loss
– Audio can often be compressed at 10:1 with imperceptible loss of quality
– Still images are often lossily compressed at 10:1, as with audio, but the quality loss is more noticeable, especially on closer inspection
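
A rough back-of-the-envelope calculation behind the ~100:1 video figure, using made-up but typical numbers (1920×1080 pixels, 24 bits per pixel, 30 frames per second, compressed to about 15 Mbit/s):

raw_bits_per_second = 1920 * 1080 * 24 * 30      # ~1.49 Gbit/s uncompressed
compressed_bits_per_second = 15_000_000          # hypothetical encoder output
print(raw_bits_per_second / compressed_bits_per_second)   # ~99.5, i.e. roughly 100:1
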

There are two basic lossy compression schemes:

a) In lossy transform codecs, samples of picture or sound are taken, chopped into small
segments, transformed into a new basis space, and quantized. The resulting quantized
values are then entropy coded.
b) In lossy predictive codecs (portmanteau of coder-decoder), previous and/or subsequent
decoded data is used to predict the current sound sample or image frame. The error
between the predicted data and the real data, together with any extra information
needed to reproduce the prediction, is then quantized and coded.

In some systems the two techniques are combined, with transform codecs being used to
compress the error signals generated by the predictive stage.
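
A minimal sketch of scheme (a), a lossy transform codec, using an 8-sample block, a DCT from SciPy, and an arbitrary quantization step (all values are illustrative assumptions):

import numpy as np
from scipy.fft import dct, idct

block = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=float)   # 8 input samples
step = 10.0                                    # quantization step: coarser = more loss

coefficients = dct(block, norm="ortho")        # transform into a new basis
quantized = np.round(coefficients / step)      # quantization: the irreversible, lossy part
# (a real codec would now entropy-code the small integers in `quantized`)

reconstructed = idct(quantized * step, norm="ortho")
print(np.abs(block - reconstructed).max())     # small but nonzero reconstruction error
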

Lossy methods are most often used for compressing sound, images or videos. This is because
these types of data are intended for human interpretation where the mind can easily "fill in the
blanks" or see past very minor errors or inconsistencies – ideally lossy compression is
transparent (imperceptible), which can be verified via an ABX test.

Transparency: When a user acquires a lossily compressed file, (for example, to reduce
download time) the retrieved file can be quite different from the original at the bit level while
being indistinguishable to the human ear or eye for most practical purposes. Many compression
methods focus on the idiosyncrasies of human physiology, taking into account, for instance,
that the human eye can see only certain wavelengths of light. The psychoacoustic model
describes how sound can be highly compressed without degrading perceived quality. Flaws
caused by lossy compression that are noticeable to the human eye or ear are known as
compression artifacts.
4. MP3 vs AAC

Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieves better sound quality than MP3 at similar bit rates. AAC has been standardized by ISO and IEC as part of the MPEG-2 and MPEG-4 specifications.

AAC
– Algorithm: lossy
– File extensions: .m4a, .m4b, .m4p, .m4v, .m4r, .3gp, .mp4, .aac
– Compatibility: iPhone, iPod, iPad, iTunes, Zune, DivX Plus Web Player and PlayStation 3 (not all media players)
– Popularity: popular among iTunes and iPod users, not as popular as MP3

MP3
– Algorithm: lossy
– File extension: .mp3
– Compatibility: compatible with almost all music devices and players
– Popularity: the de facto standard for audio files

AAC vs MP3: Their Own Advantages and Disadvantages

AAC Advantages: smaller file size and higher quality sound than MP3

AAC is a newer codec that uses more sophisticated techniques than MP3, so it can offer smaller files at slightly higher quality. For example, an AAC file encoded at 160 kbps needs an MP3 file at roughly 256 kbps to reach comparable audio quality. It is true that lossy and lossless audio can be told apart, but beyond a certain point a small difference in kbps becomes indistinguishable to the human ear.

MP3 Advantages: Works with Every Music Player and Mobile Device

As with PC vs Apple, where the PC benefits from its open(ish) standards, MP3 wins over AAC partly because it is the more open(ish) standard as well. The argument is never as cut and dried as which format is best; it is more about which is "good enough" and easiest to work with. On that measure MP3 wins: it works with almost every music player, whether Windows Media Player, VLC, or KMPlayer, and with Apple, Android, and Microsoft devices alike. In terms of audio compatibility, nothing tops MP3.

AAC Disadvantages: weaker compatibility than MP3

Compared with MP3, the most obvious shortcoming of AAC is its limited compatibility: it is not supported by many music players and handheld devices from vendors such as Samsung, Sony, BlackBerry, and Nokia. Apple users are generally fine with AAC audio files, but Android users may be left scratching their heads over playback failures caused by AAC incompatibility.

MP3 Disadvantages: Lower Sound Quality than AAC


Generally speaking, MP3 loses more data than AAC when compressing an audio file, due to its built-in compression rules. So if MP3 is to reach the same quality as AAC, it must encode the audio at a slightly higher bit rate.

https://www.macxdvd.com/mac-dvd-video-converter-how-to/aac-vs-mp3-comparison.htm

Improvements include:

– More sample frequencies (from 8 to 96 kHz) than MP3 (16 to 48 kHz)
– Up to 48 channels (MP3 supports up to two channels in MPEG-1 mode and up to 5.1 channels in MPEG-2 mode)
– Arbitrary bit rates and variable frame length; standardized constant bit rate with bit reservoir
– Higher efficiency and simpler filter bank (rather than MP3's hybrid coding, AAC uses a pure MDCT, the modified discrete cosine transform)
– Higher coding efficiency for stationary signals (AAC uses a block size of 1024 or 960 samples, allowing more efficient coding than MP3's 576-sample blocks)
– Higher coding accuracy for transient signals (AAC uses a block size of 128 or 120 samples, allowing more accurate coding than MP3's 192-sample blocks)
– Can use the Kaiser–Bessel derived window function to eliminate spectral leakage at the expense of widening the main lobe
– Much better handling of audio frequencies above 16 kHz
– More flexible joint stereo (different methods can be used in different frequency ranges)
– Adds additional modules (tools) to increase compression efficiency, such as TNS, backwards prediction, and PNS. (Noise shaping is a technique typically used in digital audio, image, and video processing, usually in combination with dithering, as part of the process of quantization or bit-depth reduction of a digital signal. Its purpose is to increase the apparent signal-to-noise ratio of the resultant signal. It does this by altering the spectral shape of the error introduced by dithering and quantization, so that the noise power is lower in frequency bands where noise is considered more undesirable and correspondingly higher in bands where it is considered less undesirable. Dither is an intentionally applied form of noise used to randomize quantization error, preventing large-scale patterns such as color banding in images.) These modules can be combined to constitute different encoding profiles.

Overall, the AAC format allows developers more flexibility to design codecs than MP3 does, and
corrects many of the design choices made in the original MPEG-1 audio specification. This
increased flexibility often leads to more concurrent encoding strategies and, as a result, to
more efficient compression. However, in terms of whether AAC is better than MP3, the
advantages of AAC are not entirely decisive, and the MP3 specification, although antiquated,
has proven surprisingly robust in spite of considerable flaws. AAC and HE-AAC are better than
MP3 at low bit rates (typically less than 128 kilobits per second). This is especially true at very
low bit rates where the superior stereo coding, pure MDCT, and better transform window sizes
leave MP3 unable to compete.
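
Since both formats are built around the modified discrete cosine transform mentioned above (MP3 inside a hybrid filter bank, AAC as a pure MDCT), here is a minimal, direct O(N²) MDCT sketch; the tiny 16-sample block is an illustrative choice, not AAC's 2048- or 256-sample windows:

import numpy as np

def mdct(block: np.ndarray) -> np.ndarray:
    """Direct MDCT: a 2N-sample block maps to N coefficients (a 50%-overlap transform)."""
    two_n = block.size
    N = two_n // 2
    n = np.arange(two_n)[None, :]
    k = np.arange(N)[:, None]
    basis = np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
    return (basis * block).sum(axis=1)

samples = np.sin(2 * np.pi * 3 * np.arange(16) / 16)   # toy 16-sample block
print(mdct(samples).round(3))                          # 8 MDCT coefficients
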

5. REED SOLOMON

Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S.
Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which
include consumer technologies such as CDs, DVDs, Blu-ray Discs, QR Codes, data transmission
technologies such as DSL and WiMAX, broadcast systems such as DVB and ATSC, and storage
systems such as RAID 6. They are also used in satellite communication.

In coding theory, the Reed–Solomon code belongs to the class of non-binary cyclic error-
correcting codes. The Reed–Solomon code is based on univariate polynomials over finite fields.

It is able to detect and correct multiple symbol errors. By adding t check symbols to the data, a
Reed–Solomon code can detect any combination of up to t erroneous symbols, or correct up to
⌊t/2⌋ symbols. As an erasure code, it can correct up to t known erasures, or it can detect and
correct combinations of errors and erasures. Furthermore, Reed–Solomon codes are suitable as
multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can
affect at most two symbols of size b. The choice of t is up to the designer of the code, and may
be selected within wide limits.
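
A minimal sketch using the third-party reedsolo Python package (an assumption about tooling, not part of the Reed–Solomon definition itself): with t = 10 check symbols the code can correct up to ⌊10/2⌋ = 5 unknown symbol errors:

from reedsolo import RSCodec             # pip install reedsolo (third-party package)

rsc = RSCodec(10)                        # append t = 10 check symbols
codeword = rsc.encode(b"hello world")    # 11 data bytes + 10 check bytes

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF                     # corrupt three symbols (at most 5 are correctable)
corrupted[5] ^= 0x55
corrupted[12] ^= 0x0F

# Recent reedsolo versions return (message, message_with_ecc, errata_positions).
decoded = rsc.decode(bytes(corrupted))[0]
assert decoded == b"hello world"
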

Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors
associated with media defects.

Reed–Solomon coding is a key component of the compact disc. It was the first use of strong
error correction coding in a mass-produced consumer product, and DAT and DVD use similar
schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional
interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC).

Reed–Solomon coding in Bar code

Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and
Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the
bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will
treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar
symbology.

Reed–Solomon coding in Space transmission

One significant application of Reed–Solomon coding was to encode the digital pictures sent
back by the Voyager space probe.

Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.

Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.

6. MULTI -2 SCRAMBLING

MULTI2 is the block cipher used in the ISDB standard for scrambling digital multimedia content.
MULTI2 is used in Japan to secure multimedia broadcasting, including recent applications like
HDTV and mobile TV. It is the only cipher specified in the 2007 Japanese ARIB standard for
conditional access systems.

MULTI2 in ISDB: In ISDB, MULTI2 is mainly used via the B-CAS card for copy control, to ensure that only valid subscribers use the service. (B-CAS, i.e. BS Conditional Access Systems Co., Ltd., is the vendor and operator of the ISDB conditional access system in Japan. All ISDB receiving apparatus such as DTT TVs, tuners, and DVD recorders, except 1seg-only devices, require a B-CAS card under regulation, and B-CAS cards are supplied with most units at purchase; they cannot be purchased separately.) MULTI2 encrypts transport stream packets in CBC or OFB mode. The same system key is used for all conditional-access applications, and another system key is used for other applications (DTV, satellite, etc.). The 64-bit data key is refreshed every second, sent by the broadcaster, and encrypted with another block cipher. Therefore only the data key is really secret, since the system key can be obtained from the receivers.
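
MULTI2 itself is not available in common Python crypto libraries, so the following sketch only illustrates CBC-mode scrambling of a transport stream payload, using AES (via the third-party pycryptodome package) as a stand-in block cipher; the key, IV, and payload are made-up examples (real MULTI2 uses 64-bit blocks, a 64-bit data key, and a 256-bit system key):

from Crypto.Cipher import AES              # pip install pycryptodome (stand-in cipher)
from Crypto.Util.Padding import pad, unpad

data_key = bytes(16)                       # illustrative all-zero key (never reuse in practice)
iv = bytes(16)                             # illustrative all-zero initialization vector
payload = bytes(range(184))                # toy 184-byte transport stream payload

scrambler = AES.new(data_key, AES.MODE_CBC, iv)
scrambled = scrambler.encrypt(pad(payload, AES.block_size))

descrambler = AES.new(data_key, AES.MODE_CBC, iv)
assert unpad(descrambler.decrypt(scrambled), AES.block_size) == payload
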
