MC 10


Multimedia Communication (SW-416)

VIDEO

Dr. Areej Fatemah email: areej.fatemah@faculty.muet.edu.pk


Video
• Of all the multimedia elements, video places the highest performance demand on
your computer and its memory: it delivers more information per second than the
others.
• Analog video is used as a broadcast medium.
◦ The video that you see on TV, on cable TV, or from a VCR is broadcast in analog format.
• It combines brightness, colour, and synchronisation information in one signal.
• This combining can result in loss of quality, especially when the signal is copied repeatedly.
Analog Video
• An analog signal f(t) samples a time-varying image.
• So-called “progressive” scanning traces through a complete picture (a frame)
row-wise for each time interval.
• In TV, and in some monitors and multimedia standards as well, another
system, called “interlaced” scanning, is used:
◦ a) The odd-numbered lines are traced first, and then the even-numbered lines are traced. This
results in “odd” and “even” fields; two fields make up one frame.
◦ b) In fact, the odd lines (starting from 1) end up at the middle of a line at the end of the odd field,
and the even scan starts at a half-way point.
◦ c) First the solid (odd) lines are traced, P to Q, then R to S, etc., ending at T; then the even field starts
at U and ends at V.
◦ d) The jump from Q to R, etc. in Figure 5.1 is called the horizontal retrace, during which the
electron beam in the CRT is blanked. The jump from T to U or from V to P is called the vertical retrace.
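As a rough sketch of how the two fields interleave, the Python snippet below (using NumPy; the frame dimensions are invented for the example) splits a frame into its odd and even fields and re-interleaves them:

```python
import numpy as np

# A toy "frame": 6 scan lines of 8 pixels (sizes invented for the example).
frame = np.arange(48).reshape(6, 8)

# Interlaced scanning: the odd-numbered lines form one field and the
# even-numbered lines the other. With 0-based indexing, rows 0, 2, 4
# correspond to scan lines 1, 3, 5 (the "odd" field).
odd_field = frame[0::2]
even_field = frame[1::2]

# Two fields make up one frame: re-interleave them to rebuild it.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = odd_field
rebuilt[1::2] = even_field

assert (rebuilt == frame).all()
print(odd_field.shape, even_field.shape)  # (3, 8) (3, 8)
```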
Analog Video
• Because of interlacing, the odd and even lines are displaced in time from each other. This is generally not noticeable, except when very fast action is taking place on screen, when blurring may occur.
Analog Video Broadcast Standards
NTSC
• NTSC (National Television System Committee): the first color TV broadcast system,
implemented in the United States in 1953.
• The NTSC standard is used in the US and Japan.
• It displays 30 frames per second (more precisely, 29.97 for color broadcasts). Each frame can contain up to 16 million different colors.
• Each full-screen frame is composed of 525 horizontal lines drawn onto the inside face
of a phosphor-coated picture tube every 1/30th of a second by a fast-moving electron
beam.
Analog Video Broadcast Standards
PAL
• The PAL (Phase Alternating Line) standard was introduced in the early 1960s and
implemented in most European countries (except France), in China, India, and many
other parts of the world.
• The PAL standard utilizes a wider channel bandwidth than NTSC, which allows for
better picture quality.
• PAL runs on 625 lines/frame.
• It displays 25 frames per second.
• It uses a 4:3 aspect ratio and interlaced fields.
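A quick back-of-the-envelope comparison of the two standards is their nominal line rate (lines per frame times frames per second), computed here from the figures quoted above:

```python
# Nominal scan parameters quoted in the slides above.
standards = {
    "NTSC": {"lines_per_frame": 525, "frames_per_second": 30},
    "PAL":  {"lines_per_frame": 625, "frames_per_second": 25},
}

for name, p in standards.items():
    line_rate = p["lines_per_frame"] * p["frames_per_second"]
    print(f"{name}: {line_rate} lines/s")
# NTSC: 15750 lines/s
# PAL:  15625 lines/s
```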
Analog Video Broadcast Standards
PAL
• (a) PAL uses the YUV color model. It uses an 8 MHz channel and allocates a bandwidth
of 5.5 MHz to Y, and 1.8 MHz each to U and V.
◦ The color subcarrier frequency is fsc ≈ 4.43 MHz.
• (b) In order to improve picture quality, chroma signals have alternate signs (e.g., +U
and -U) in successive scan lines, hence the name “Phase Alternating Line”.
• (c) This facilitates the use of a (line-rate) comb filter at the receiver: the signals in
consecutive lines are averaged so as to cancel the chroma signals (which always carry
opposite signs), separating Y and C and obtaining high-quality Y signals.
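A minimal numeric sketch of the idea in (c), assuming consecutive lines carry Y + C and Y − C (the Y and C values are invented purely for illustration):

```python
# Toy luminance (Y) and chroma (C) values for two consecutive scan lines;
# the numbers are arbitrary, chosen only to show the cancellation.
Y, C = 0.6, 0.2

line_n = Y + C         # chroma carried with a + sign on this line
line_n_plus_1 = Y - C  # sign alternates on the next line (PAL)

# Line-rate comb filter: the average cancels C, the difference cancels Y.
recovered_Y = (line_n + line_n_plus_1) / 2
recovered_C = (line_n - line_n_plus_1) / 2

print(round(recovered_Y, 6), round(recovered_C, 6))  # 0.6 0.2
```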
Analog Video Broadcast Standards
SECAM
• The SECAM (Séquentiel Couleur À Mémoire; French for "sequential color with
memory") standard was introduced in the early 1960s and implemented first in
France.
• SECAM uses the same bandwidth as PAL but transmits the color information
sequentially. SECAM runs on 625 lines/frame at 25 frames per second, with a 4:3 aspect
ratio and interlaced fields.

• It is, historically, the first European color television standard.


Analog Video Broadcast
Technical Details
• Just as with the other color standards adopted for broadcast use around the world,
SECAM is a compatible standard, which means that monochrome television receivers
predating its introduction are still able to show the programs.
• Because of this compatibility requirement, color standards add a second signal to the
basic monochrome signal; this second signal carries the color information, called
chrominance (C for short), while the black-and-white information is called
luminance (Y for short).
• Old TV receivers display only the luminance, while color receivers process both
signals.
Analog Video Broadcast
Technical Details
• Additionally, for compatibility, the color signal must use no more bandwidth than the
monochrome signal alone; it has to be inserted into the monochrome signal somehow,
without disturbing it.
• This insertion is possible because the spectrum of the monochrome TV signal is not
continuous; empty space exists which can be utilized.
Digital Video Broadcast
• DVB, short for Digital Video Broadcasting, is a suite of internationally accepted open
standards for digital television.
• DVB standards are maintained by the DVB Project and published by a Joint
Technical Committee (JTC) of the European Telecommunications Standards Institute (ETSI)
and the European Broadcasting Union (EBU).
• Digital video is not subject to generational loss, because each copy is an identical copy of the
original. Its advantages include:
◦ direct access to any part of the video
◦ seamless editing
◦ integration with computer software
◦ transmission via the Internet
◦ digital image manipulation
Digital Video Broadcast
• MPEG-2 was selected for the source coding of audio and video data.
• Multichannel Microwave Distribution Systems (MMDS) are suited for data
distribution to households over microwave links.
• ETSI standards describe MMDS for use at frequencies above 10 GHz (DVB-MS), based on DVB-S (satellite) technology.
• ETS 300 749 is applicable to frequencies below 10 GHz (DVB-MC). This specification is based on DVB-C
(cable) technology.
Digital Video Features
• Frame rate (number of frames per second)
◦ Frame rate, the number of still pictures per unit time of video, ranges from six or
eight frames per second (fps) for old mechanical cameras to 120 or more
frames per second for new professional cameras.
◦ The minimum frame rate needed to achieve the illusion of a moving image is about fifteen
frames per second.

• Bit rate is a measure of the rate of information content in a video stream. It is
quantified in bits per second (bit/s) or megabits per second (Mbit/s). A higher bit
rate allows better video quality. For example, Video CD, with a bit rate of about 1 Mbit/s,
is lower quality than DVD, with a bit rate of about 5 Mbit/s. HD (high-definition digital
video and TV) has still higher quality, with a bit rate of about 20 Mbit/s.
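To put these figures in perspective, a short calculation (using the nominal rates from the bullet above) of the storage needed for one hour of video at each rate:

```python
# Approximate bit rates quoted above, in Mbit/s.
bitrates = {"Video CD": 1, "DVD": 5, "HD": 20}

SECONDS_PER_HOUR = 3600

for name, mbit_s in bitrates.items():
    # bits = rate * time; divide by 8 for bytes, by 1e9 for gigabytes.
    gigabytes = mbit_s * 1e6 * SECONDS_PER_HOUR / 8 / 1e9
    print(f"{name}: ~{gigabytes:.1f} GB per hour")
# Video CD: ~0.5 GB, DVD: ~2.2 GB, HD: ~9.0 GB per hour
```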
Digital Video Features
• Frame size
• Colour depth
◦ 8-bit, 16-bit, or 24-bit colour
• Data rate
◦ lossy compression brings quality issues
• Non-interlaced (progressive) scanning
Video Compression
• Several standard video compression algorithms (codecs) are widely used on
various platforms. Codecs (compressors/decompressors) are software components that convert
audio and video from one format type to another.
• By using codecs to compress audio and video data into smaller packages, network
and multimedia applications can provide richer, fuller content without consuming as
much hard disk space or network bandwidth as uncompressed media.

• File size (bits) = frame rate (fps) × resolution (pixels/frame) × bit depth (bits/pixel) × duration (seconds)
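A worked example of the formula in Python; the clip parameters (30 fps, 640×480, 24-bit colour, 10 s) are hypothetical values chosen for illustration:

```python
def file_size_bits(fps, width, height, bit_depth, seconds):
    """File size (bits) = fps x resolution x bit depth x duration."""
    return fps * width * height * bit_depth * seconds

# Hypothetical uncompressed clip: 30 fps, 640x480 pixels, 24-bit colour, 10 s.
bits = file_size_bits(30, 640, 480, 24, 10)
print(bits / 8 / 1e6, "MB")  # 276.48 MB before any compression
```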


Video Compression
Factors Affecting Compression
• Frames per second (fps)
◦ The maximum playback rate of a system depends on the speed of its components: CPU,
hard drive, and display card.
◦ If the fps rate of the video is higher than the maximum rate of the playback system, the result is jerky motion,
since the system will not display certain frames. This is known as frame dropping (see the sketch after this list).
• Key frames
◦ A key frame is a baseline frame against which the following frames are compared for
differences.
◦ The more frequently a key frame occurs, the better the quality of the video.
• Data rate
◦ Single-speed CD-ROM, data rate: 90-100 KB/s
◦ Double-speed CD-ROM, data rate: 150-200 KB/s
◦ Triple-speed CD-ROM, data rate: 300 KB/s
◦ Quad-speed CD-ROM, data rate: 550-600 KB/s
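A toy model of frame dropping, assuming the player simply shows the nearest source frame at each of its own ticks (the fps numbers and the model itself are invented for illustration):

```python
def dropped_frames(source_fps, playback_fps, n_frames):
    """Indices of source frames a slower playback system never shows.

    Toy model: the player displays the nearest source frame at each of
    its own ticks and skips the rest.
    """
    n_ticks = int(n_frames * playback_fps / source_fps)
    shown = {round(t * source_fps / playback_fps) for t in range(n_ticks)}
    return [i for i in range(n_frames) if i not in shown]

# A 30 fps clip on a system that can only manage 20 fps:
print(dropped_frames(30, 20, 12))  # [1, 5, 7, 11]: one frame in three is skipped
```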
Video Compression
• Microsoft Video 1 codec: this compressor is best used for converting analog video into digital
format. It is a lossy compressor and supports color depths of 8 bits or 16 bits.
• Microsoft RLE codec: this compressor is often used for compressing animation and computer-generated
images. It supports only 256 colors.
• Cinepak codec: this compressor is used for compressing 24-bit video for playback from CD-ROM
disks. Cinepak is asymmetric, so decompression is much faster than compression.
• Intel Indeo Video codec: this compressor is very similar to Cinepak and is used for digitizing
video.
• MPEG-1, MPEG-2, MPEG-4, MPEG-7
• H.261
Picture Frames
• There are three types of pictures (or frames) used in video compression: I-frames, P-frames,
and B-frames.
• An I-frame is an 'Intra-coded picture': in effect a fully specified picture, like a conventional
static image file. P-frames and B-frames hold only part of the image information,
so they need less space to store than an I-frame and thus improve video compression rates.

• A P-frame ('Predicted picture') holds only the changes in the image from the previous frame.
For example, in a scene where a car moves across a stationary background, only the car's
movements need to be encoded. The encoder does not need to store the unchanging
background pixels in the P-frame, thus saving space. P-frames are also known as delta-frames.

• B-frame ('Bi-predictive picture') saves even more space by using differences between the
current frame and both the preceding and following frames to specify its content.
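A minimal sketch of these ideas, assuming grayscale frames stored as NumPy arrays: the P-frame residual is the difference from the previous frame, and a B-frame is predicted from the average of its two neighbours, storing only the prediction error:

```python
import numpy as np

# Toy 4x4 grayscale frames (int16 so differences can go negative).
frame1 = np.arange(16, dtype=np.int16).reshape(4, 4) * 10
frame2 = frame1.copy()
frame2[1, 1:3] += 5      # a small region "moves"; the background is unchanged
frame3 = frame2.copy()   # the following frame, identical here for simplicity

# P-frame idea: store only the change from the previous frame (mostly zeros).
p_residual = frame2 - frame1
print(np.count_nonzero(p_residual), "of", p_residual.size, "samples changed")  # 2 of 16

# B-frame idea: predict the current frame from the average of the preceding
# and following frames, and store only the prediction error.
b_prediction = (frame1 + frame3) // 2
b_residual = frame2 - b_prediction

# Decoder side: prediction + residual reconstructs the frame exactly.
assert ((b_prediction + b_residual) == frame2).all()
```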
Picture Frames
[Figure: a group of pictures along the time axis, in the order I B B P B B I]
Picture Frames
• The MPEG standard allows as many as three B-pictures in a row. I-pictures typically
occur about twice per second, which means that two to five P-pictures are forward-predicted
in sequence before another I-picture is coded.
• I frames (intra-coded pictures) are coded without using information about other frames
(intraframe coding). An I frame is treated as a still image; here MPEG falls back on the results of
JPEG.
• P frames (predictive coded pictures) may follow their reference frame by one, two, or four frames. Decoding
of a P frame requires decompression of the last I frame and any intervening P frames. Its
compression ratio is high compared to an I frame.
• B frames (bidirectionally predictive coded pictures) fill in the skipped frames in between, using both the preceding and the following I or P frame.
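To make the dependency chain concrete, here is a small helper (the GOP string is a made-up example) that lists which frames must be decoded before a given P frame, following the rule above:

```python
def p_frame_dependencies(gop, index):
    """Frames that must be decoded before the P frame at `index`:
    the most recent I frame plus every P frame in between."""
    assert gop[index] == "P"
    deps = []
    for i in range(index - 1, -1, -1):
        if gop[i] == "P":
            deps.append(i)
        elif gop[i] == "I":
            deps.append(i)
            break
    return sorted(deps)

gop = "IBBPBBP"                      # a made-up group-of-pictures pattern
print(p_frame_dependencies(gop, 6))  # [0, 3]: the I frame and the earlier P frame
```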
MPEG
• The Moving Picture Experts Group (MPEG) is a working group of experts that was
formed by ISO and IEC to set standards for audio and video compression and
transmission.

• The MPEG compression methodology is considered asymmetric, as the encoder is
more complex than the decoder. The encoder needs to be algorithmic or adaptive,
whereas the decoder is 'dumb' and carries out fixed actions.
MPEG
• This is considered advantageous in applications such as broadcasting where the
number of expensive complex encoders is small but the number of simple inexpensive
decoders is large.
• MPEG's (ISO's) approach to standardization is novel, because it is not the encoder
that is standardized, but the way a decoder interprets the bitstream. A decoder that
can successfully interpret the bitstream is said to be compliant. The advantage of
standardizing the decoder is that encoding algorithms can improve over time, yet
compliant decoders continue to function with them.
MPEG
Video compression is composed of several basic processes. The first four are essentially preprocessing steps; though they are not considered compression, they can have a dramatic effect on data reduction:
• Filtering
• Color conversion
• Digitizing
• Scaling
The processes that directly result in compression are:
• Data transforms
• Quantization
• Compaction encoding
• Interframe compression using motion compensation and predictive coding
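A minimal sketch of the "data transform plus quantization" steps on a single 8×8 block, using SciPy's DCT; the gradient block and the flat step size of 16 are illustrative choices, not a standard quantization table:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block (a simple gradient); smooth data is where the DCT shines.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16

# Data transform: the 2-D DCT concentrates the block's energy in a few
# low-frequency coefficients.
coeffs = dctn(block, norm="ortho")

# Quantization: divide by a step size and round; most coefficients become
# zero, which is where the compression comes from.
q_step = 16.0
quantized = np.round(coeffs / q_step)
print(np.count_nonzero(quantized), "of 64 coefficients remain nonzero")

# Decoder side: dequantize and inverse-transform (a lossy reconstruction).
reconstructed = idctn(quantized * q_step, norm="ortho")
print("max reconstruction error:", np.abs(reconstructed - block).max())
```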
MPEG-1
• The first MPEG compression standard for audio and video. It was basically designed to allow
moving pictures and sound to be encoded at the bit rate of a Compact Disc. It is used on
Video CD and SVCD and can be used for low-quality video on DVD Video. It was used in digital
satellite/cable TV services before MPEG-2 became widespread.
• Coding of moving pictures and associated audio for digital storage media at up to about 1.5
Mbit/s.
• This was based on CD-ROM video applications, and it is a popular standard for video on the
Internet, transmitted as .mpg files. In addition, Layer 3 of MPEG-1 audio is the most popular standard
for digital compression of audio, known as MP3.
MPEG-2
• Designed for bit rates between 1.5 and 15 Mbit/s.
• Carries video and audio for broadcast-quality television. The MPEG-2 standard was
considerably broader in scope and of wider appeal, supporting interlacing and high
definition.
• MPEG-2 is considered important because it has been chosen as the compression scheme for
over-the-air digital television ATSC, DVB and ISDB, digital satellite TV services like Dish
Network, digital cable television signals, SVCD and DVD Video.
• The most significant enhancement from MPEG-1 is its ability to efficiently compress interlaced
video.
MPEG-4
• Standard for multimedia and Web compression.
• MPEG-4 is based on object-based compression, similar in nature to the Virtual Reality
Modeling Language.
• Individual objects within a scene are tracked and compressed separately, then combined to create an
MPEG-4 file.
• This results in very efficient compression that is very scalable, from low bit rates to very high.
• It also allows developers to control objects independently in a scene, and therefore introduce
interactivity.
MPEG-4
An MPEG-4 scene description makes it possible to:
• Place media objects anywhere in a given coordinate system;
• Apply transforms to change the geometrical or acoustical appearance of a media object;
• Group primitive media objects in order to form compound media objects;
• Apply streamed data to media objects, in order to modify their attributes (e.g. a sound, a
moving texture belonging to an object; animation parameters driving a synthetic face);
• Change, interactively, the user’s viewing and listening points anywhere in the scene.
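A hypothetical Python sketch of the scene-graph idea behind these operations; the classes are invented for illustration and are not the actual MPEG-4 (BIFS/Systems) API:

```python
from dataclasses import dataclass, field

# Hypothetical classes sketching the scene-graph idea; NOT the real MPEG-4 API.
@dataclass
class MediaObject:
    name: str
    position: tuple = (0.0, 0.0)   # placed in a given coordinate system

@dataclass
class Group:
    children: list = field(default_factory=list)  # a compound media object

    def translate(self, dx, dy):
        # A geometric transform applied to the compound object as a whole.
        for obj in self.children:
            obj.position = (obj.position[0] + dx, obj.position[1] + dy)

# Group primitive objects, then transform the compound object.
scene = Group([MediaObject("synthetic_face"), MediaObject("background_video")])
scene.translate(1.0, 0.5)
print([obj.position for obj in scene.children])  # [(1.0, 0.5), (1.0, 0.5)]
```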
MPEG-7
• MPEG-7, the Multimedia Content Description Interface Standard, is the standard for rich
descriptions of multimedia content, enabling highly sophisticated management, search, and
filtering of that content.

• The main tools used to implement MPEG-7 descriptions are the Description Definition
Language (DDL), Description Schemes (DSs), and Descriptors (Ds).
