
VIDEO FORMATS

Compression: Refers to the rearrangement or elimination of redundant picture information for more efficient storage and signal transport.

Codec: An electronic circuit or computer software that encodes and decodes the video file. It also compresses the data according to the given limitations and specifications. Examples: H.262, H.263, Indeo, etc.

Container: The container is like a box that holds your video, audio, and metadata (information such as captions, SEO data, and the details that piece the video together for playback). It is often referred to by its file extension, since containers show up as file-name endings such as .AVI, .MOV, or .MP4.

Intraframe compression: Looks at each frame individually and throws away the video information that is unnecessary for perceiving essentially the same picture as the original. In technical terms, it eliminates spatial redundancy. This approach is primarily designed for still images but can also be applied to individual video frames. The JPEG system, a compression method used mostly for still pictures, employs this intraframe technique.

Interframe compression: Looks for redundancies from one frame to the next, rather than
compressing each frame independently of all the others. Basically, the system compares each
frame with the preceding one and keeps only the pixels that constitute a change. For example, if
you see a cyclist moving against a cloudless blue sky, the system will not bother with repeating
all the information that makes up the blue sky but only with the position change of the cyclist. It
looks for temporal redundancy (change from frame to frame) rather than spatial redundancy
within a single frame.
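To make the idea concrete, here is a minimal sketch of frame differencing in Python/NumPy. It is not how any real codec works internally, and the function names and threshold are only illustrative; it simply stores the pixels that change from one frame to the next and rebuilds the frame from the previous one.

import numpy as np

def encode_delta(prev_frame, next_frame, threshold=8):
    # prev_frame, next_frame: H x W x 3 arrays (e.g. 8-bit RGB frames).
    # Compare the new frame with the preceding one and keep only the
    # pixels whose change exceeds the threshold (temporal redundancy).
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff.max(axis=-1) > threshold      # mask of changed pixels
    return changed, next_frame[changed]          # store only the changes

def decode_delta(prev_frame, changed, values):
    # Rebuild the new frame from the previous frame plus the stored changes.
    frame = prev_frame.copy()
    frame[changed] = values
    return frame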

Chroma subsampling: The reduction of color resolution in a video signal in order to save bandwidth. The color information (chroma) is sampled at a lower rate than the brightness information (luma). Although color information is discarded, the loss is barely noticeable because human eyes are much more sensitive to variations in brightness than to variations in color.
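As a rough illustration (assuming the frame has already been separated into luma and chroma planes, e.g. Y'CbCr), 4:2:0 subsampling keeps luma at full resolution but stores only one chroma sample per 2x2 block of pixels; the sketch below is purely illustrative:

import numpy as np

def subsample_420(y, cb, cr):
    # Luma (y) is kept at full resolution; each chroma plane is reduced
    # to a quarter of its original resolution by averaging 2x2 blocks.
    h, w = cb.shape[0] // 2 * 2, cb.shape[1] // 2 * 2
    cb_sub = cb[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr_sub = cr[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb_sub, cr_sub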

Format: Standardized set of rules for storing the containers, codecs, metadata and sometimes
even folder structure of video files so that it is easy to support them across a large number of
devices and players. All formats can be divided into two groups: Analog and Digital.

● ANALOG

Analog signals use a continuous range of values to represent information. The analog system
directly records the variations of the video and audio signals with the receiving device
interpreting and translating the signal into video and audio on a monitor.
This process can introduce a progressive loss of data, leading to a general loss of video quality, and the signal tends to deteriorate further each time a copy is dubbed. Only tape can be used to record the signal from analog systems.

Some popular analog formats are:

● Open reel tapes

1. 2” Quad

2-inch quadruplex videotape, also called 2″ quad videotape, was the first practical and commercially successful analog videotape recording format. It was developed and released for the broadcast television industry, particularly for in-studio use, by Ampex in 1956.

It was magnetic, so it was reusable and thus cost-effective. It was also ready-made, i.e. it did not require processing as film does, so it was ready to broadcast.

However, it was bulky and therefore was not used for outdoor recordings (16mm film was used for that).

● Cassette tapes

2. U-Matic (3/4")

Developed by Sony in 1971, it emerged as the first successful cassette format because it was safe and portable. As its usage grew, machines became lighter.

U-matic was named after the shape of the tape path when it was threaded around the helical scan video head drum, which resembles the letter U. The helical scan covered more of the tape's surface area, making recording more efficient, so the picture quality also improved over the quadruplex format.

This model ushered in the era of ENG, or electronic news gathering, which eventually made the 16mm film cameras previously used for on-location television news gathering obsolete. Film required developing, which took time, compared with the instantly available playback of videotape, making faster breaking news possible. The portable location cassette could hold up to 20 minutes of video and the desktop cassette 60 minutes.

Outside the industrial market, it was a primary format for many artists, community activists,
academic institutions, and production houses. Many artist and community videos are in this
format; it was a preferred format for edit masters in the 1980s.

3. VHS
The Video Home System format was an analog cassette tape released by Victor Company of
Japan (JVC) in 1976, for the consumer market. The VHS deck was primarily for home viewing
and has been the most popular consumer deck ever produced.

It was initially used as a camera and mastering format, but more recently has been used
primarily for distribution of multiple viewing copies. Until DVDs began to build in popularity, most
video rental stores rented out VHS tapes.

Other variants include S-VHS (Super VHS) and VHS-C (Compact VHS). S-VHS was used as a camera and mastering format and was geared towards the consumer, industrial, and educational markets.

4. Betacam (1/2’’)

Introduced by Sony in 1982, Betacam incorporated timecode and used omega-shaped helical scanning, which resulted in more tape contact and thus better quality.

Later, in 1986, Betacam SP was introduced.

The Betacam and Betacam SP formats were developed for the broadcast, industrial, educational, and professional markets. Betacam SP has been used extensively as a broadcast format, and as a mastering format by commercial and independent producers and by artists.

5. Hi-8 Format (Sony, 1985)

Introduced under the 'Handycam' brand, it was much smaller than VHS. It was highly popular as a subcompact camera format geared towards the consumer, industrial, and educational markets. Usage of Hi8 in the industrial and educational markets has since decreased as use of digital formats (such as MiniDV) has increased.

However, for much of the 1990s, Hi8 was a popular format for artists, community video centers,
the media arts, and colleges/universities. In the consumer market Video8 is the lowest cost
format, followed by Hi8, with digital formats priced higher. This may account for the format’s
continuing popularity.

6. Digi-beta (Sony, 1993)

Although it offered better quality, it was very expensive.

● DIGITAL

A digital signal uses discrete (discontinuous) values. Digital video regularly samples the
waveforms and converts them into binary data of '0's and '1's. On the receiving end of this data
transmission, there is no translation or interpretation, just the delivery of pure data.
This allows many generations of copies to be made without affecting the quality of the image. Digital systems also allow the data to be recorded on media other than tape, such as computer hard disks, flash memory devices, and optical discs (CDs and DVDs), which do not suffer quality loss even after a great number of generations. These media also offer more compatibility, as they are free of the physical restrictions that come with tape.

1. DVCAM

The DVCAM format was developed by Sony for industrial, educational, and professional
markets. It is used extensively for electronic news gathering, cable television, and other field
production. It is also used as a mastering format by artists and independent producers,
especially for long-form programming (such as documentaries), because the maximum tape
length on a single cassette is 184 minutes.

2. Mini DV

This format was originally called DV, but is commonly known as MiniDV. The MiniDV is a
relatively new format that was developed for consumer, industrial, and educational markets. It is
used extensively by artists and community activists, both in the educational sector and in
independent production. Its small size and high visual quality make it popular for field camera
recording.

3. DVCPro

The DVCPro format was developed by Panasonic for industrial, educational, and professional
markets. It is used for electronic news gathering, cable television, and other field production,
including independent production. One of the first small digital formats, it was initially popular,
but more recently has lost ground to other DV products.

4. Digital-8 (Sony, 1999)

It is the digital variant of Sony Hi-8The Digital 8 format was developed for the consumer market,
and is sometimes used in the educational sector.

Digital 8 cameras were marketed primarily to consumers who already had 8mm or Hi-8 tapes, which could be played back on the Digital 8 cameras.

Analog-to-digital conversion process

It is a four-step process (a small code sketch of steps 2 to 4 follows the list):

1. Anti-aliasing: Extreme frequencies of the analog signal that are unnecessary for its proper sampling are filtered out.
2. Sampling: Measures how often the values of the analog video signal are converted into a digital code. The sampling rate of a video signal is usually expressed in megahertz (MHz).

3. Quantizing: It changes the sampling points into discrete values.

4. Coding: Changes the quantization numbers of each step to binary numbers, consisting
of 0’s and 1’s.
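A toy sketch of steps 2 to 4 in Python (the value range, bit depth, and function name are only illustrative, not part of any broadcast standard):

import numpy as np

def digitize(analog_samples, bit_depth=8):
    # analog_samples: values already sampled from the (anti-aliased) signal,
    # assumed here to lie between 0.0 and 1.0.
    levels = 2 ** bit_depth                                  # number of quantizing steps
    q = np.clip(np.round(np.asarray(analog_samples) * (levels - 1)),
                0, levels - 1).astype(int)                   # quantizing
    codes = [format(int(v), f"0{bit_depth}b") for v in q]    # coding to binary
    return q, codes

# Example: three sample values coded at 8 bits each
print(digitize([0.0, 0.5, 1.0]))  # codes: '00000000', '10000000', '11111111'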

Bit depth: Bit depth quantifies how many unique colors are available in an image's color palette
in terms of the number of 0's and 1's, or "bits," which are used to specify each color.
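For example, n bits give 2^n distinct values, so an image with 8 bits per channel can specify 256 levels per channel, or roughly 16.7 million colors across the three RGB channels. A quick check (three-channel RGB assumed):

for bits in (1, 8, 10):
    per_channel = 2 ** bits
    print(f"{bits:>2} bits: {per_channel} levels per channel, "
          f"{per_channel ** 3:,} colors for 3-channel RGB")
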
LINEAR EDITING
Linear editing is basically selecting shots from one tape and copying them in a specific
sequential order onto another tape. It does not allow random access or selection and
arrangement of shots. All tape-based editing systems are therefore called linear, regardless of
whether the tapes contain analog or digital signals.

In linear editing, videotapes are used as the source material and the final edit master tape.

This method of editing is called "linear" because it must be done in a linear fashion; that is,
starting with the first shot and working through to the last shot. If the editor changes their mind
or notices a mistake, it is almost impossible to go back and re-edit an earlier part of the video.

All linear editing systems give the choice between assemble and insert editing mode.

● Assemble editing

In assemble editing the record VTR erases everything (video, audio, control, and timecode
tracks) on the edit master tape to make room for the copied video and audio material supplied
by the source tape.

The record VTR regenerates the control tracks of the copied shots and tries to form a
continuous control track. If the newly assembled control track is not perfectly aligned, the edits
will cause brief video breakups—or sync rolls—at the edit points.

A camcorder edits in the assemble mode each time you press the record button.

● Insert editing

The process of using a continuous control track is called insert editing. The entire control track
is pre recorded continuously on the edit master tape before any editing takes place. It prevents
breakups at the edit points and allows separate video and audio editing.

To prepare the edit master tape for insert editing, you need to first record a continuous control track on it. The simplest way to do this is to record "black," with the video and audio inputs in the off position. This is necessary because insert editing uses the sync pulses of the underlying video; if you try to insert edit over unrecorded videotape, the picture will be unstable. The "blackened" tape has now become an empty edit master, ready to receive the momentous scenes from your source tapes.

You can easily insert new video and/or audio material anywhere in the tape without affecting
anything preceding or following the insert.
All linear editing systems work on the same basic principle: one or several VTRs play back
portions of the tape with the original footage, and another VTR records on its own tape the
selected material from the original tape.

The different tape-based systems fall into three categories:

1. The single-source/cuts-only system

The simplest linear editing system consists of a source VTR and a record VTR. It has the following equipment:

Monitors: To see what is on both the source and edit master tapes while editing.

Source/Play VTR: The machine that plays back the tape with the original footage. It displays
the source material to be edited.

Record/Edit VTR: The machine that copies the selected material/frames at predetermined
points. It displays the edited video portion of the edit master tape.

Source Tape: The videotape with the original footage.

Edit Master Tape: Tape onto which the selected portions are recorded in a specific editing
sequence.

Edit Controller: It is the interface between the source and record VTRs that acts like an editing
assistant. It automates the editing process to an extent.

- Displays elapsed tape time and frames
- Controls source and record VTR rolls
- Stores edit-in and edit-out points and tells the VTRs to locate them on the tape
- Backs up, or "backspaces," both VTRs to precisely the same preroll point
- Offers previewing before the edit and reviewing after the edit
- Simultaneously starts both machines and synchronizes their tape speeds
- Can perform separate edits for video and audio tracks without one affecting the other
- Can produce intelligible sounds at various fast-forward tape speeds

2. The expanded single-source system

The expanded single-source linear system integrates special effects, a video switcher, a CD
player, and an audio mixer. The line-outs from the audio mixer and the video switcher go
directly to the record VTR and not through the edit controller.
Switcher: Enables instantaneous editing by selecting and mixing various video inputs and
creating transitions and special effects.

Audio mixer: Allows you to control the volume of a limited number of sound inputs and mix them into a single mono or stereo output signal.

3. Multiple-source systems

The tape-based multiple-source editing system consists of two or more source VTRs, a record
VTR, computer-assisted edit controller, audio mixer, switcher, and special-effects equipment.

The multiple-source editing system allows you to synchronously run two or more source VTRs
and combine the shots and audio tracks from any of them quickly and effectively through a
variety of transitions or other special effects.

The multiple VTRs supply the source material to the single record VTR. The video output of
both source machines is routed through the switcher for transitions such as dissolves and wipes
and the audio output of both source VTRs is routed through the audio mixer. The whole system
is usually managed by a computer-driven edit controller.

Control track system: Counts the control track pulses and translates this count into elapsed time and frame numbers. This is called the pulse-count system; it is not frame-accurate. Every thirty pulses mark one second of elapsed tape time.

Time code: Gives each television frame a unique address—a number that shows hours,
minutes, seconds, and frames. The time code system is frame-accurate and is used by more
sophisticated linear systems.
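Both systems essentially count frames; as a small illustration, here is how a running frame count maps to an hours:minutes:seconds:frames address, assuming a 30 frames-per-second, non-drop-frame signal (the function name is just illustrative):

def frames_to_timecode(frame_count, fps=30):
    # Convert a running frame count into an HH:MM:SS:FF address
    # (non-drop-frame; every 30 frames mark one second of tape time).
    frames = frame_count % fps
    seconds = (frame_count // fps) % 60
    minutes = (frame_count // (fps * 60)) % 60
    hours = frame_count // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(54030))  # -> 00:30:01:00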

● STEPS OF LINEAR EDITING

Step 1: Use the source VTR to find the exact in- and out-points of the footage you want to copy
to the edit master tape. These edit points are determined with the help of a control track system
that counts the control track pulses and translates this count into elapsed time and frame
numbers. More sophisticated linear systems use time code to accomplish this.

Step 2: Tell the record VTR when to start recording (copying) the source material and when to stop recording by marking the "out," or exit, point through the edit controller.

Step 3: The video output of both source machines is routed through the switcher for transitions
such as dissolves and wipes. The audio output of both source VTRs is routed through an audio
mixer.
Step 4: Add the desired audio track using the audio mixer or other special effects using the
video switcher.

Two techniques can be used while editing footage. AB rolling lets you cut between the two
source VTRs via a switcher as though they were live video sources. It can greatly speed up the
editing process and it allows you to redo the editing a number of times until you are satisfied
with the shot sequence. If you don't like the editing you have just done, you can rewind the
source VTRs and cut the production again. AB rolling works best when the edit points between
the A and B rolls do not have to be too precise.

AB-roll editing lets you use the edit controller to set up the transitions between the A and B
source VTRs. Although this method is considerably slower than AB rolling, it is more precise.
The advantage of AB-roll editing is that you can access the source material from two sources
rather than just one, which allows a great variety of transitions. The A and B rolls do not have to
run in sync from beginning to end, and you can advance either tape to a specific edit point and
copy the material over to the record VTR without having to change videotapes on the source
machine.

Step 5: The record VTR copies all the selected material onto the tape.

NON LINEAR EDITING


Nonlinear editing (NLE) allows you to select and rearrange specific frames and shots in a
random order. The video and audio information is stored in digital form on computer hard disks
or read/write optical discs. All disk-based systems are, therefore, called nonlinear.

Because they are computer-driven, nonlinear systems can operate only with digital signals. They can display two or more frames side by side on a single computer screen so you can see how well the shots will edit together.

Most nonlinear editing systems produce an edit decision list (EDL) and high-quality video and
audio sequences that can be transferred directly to the edit master tape.

● STEPS OF NON LINEAR EDITING

Step 1. Capture all analog and digital source footage onto memory cards, hard disks, flash drives, or optical disc storage systems.

Step 2. Import the footage into the software and save it in a digital format.

Step 3. Preview the clips in the source panel. Trim (clean up) each video segment or clip and
delete the unwanted video frames by marking the In and Out points.
Step 4. Place the clips into the timeline to make a sequence. The timeline is where all the editing functions take place and usually includes multiple tracks of video, audio, and graphics. It allows the editor to view the production and then arrange, delete, or trim the audio and video segments to fit the script and create a flow in the narrative.

Step 5. Add video special effects, graphics, sound, and transitions. Nonlinear edit systems allow
all kinds of effects such as ripple, slow/fast motion, and color correction. Transitions include
dissolves, cuts, and a variety of wipes.

Step 6. Insert additional audio, if desired, at this point. All audio sounds can be adjusted in the
Audio Track Mixer. Music or voice overs may be added at different points in the project.

Step 7. Save and export the final program in the appropriate format to the desired destination.
You can now play the final production on different devices, e-mail it to your friends and family,
upload it on various social media channels or your website.
CUTS
1. Hard/Standard cut

The hard cut is frequently used in filmmaking and television. It is an instant switch of the visual and audio elements in the frame and produces an immediate, visible discontinuity. It gives viewers very little time to process and question the change, so a hard cut in the middle of a conversation appears seamless.

However, if you want to transition to another part of the story, it won't give the viewer much time to acclimate to the new scene. This is why most hard cuts are contained within a scene and usually don't go from scene to scene, and why they are generally avoided when cutting to a different time span in a story. In modern editing this is changing, and the hard cut is increasingly used as a scene-to-scene transition, as in the movie 'Inception'.

2. Jump cut

A jump cut is when a single shot is broken with a cut that makes the subject appear to jump
instantly forward in time. It is a stylistic choice that makes the edit completely visible and does
not give a seamless appearance of time and space to the story.

Eg: Georges Méliès used the technique to create the illusion of magic occurring on-screen. One prominent example is the scene in which the mushrooms bloom on the moon in his movie A Trip to the Moon.

3. J&L cut

They are generally used in conversations and for joining scenes. They help build conversations, since cutting to another character's reaction to what the other person is saying adds to the conversational flow.

J cut: You hear the audio from the next clip before you see its visuals; the audio transition happens before (precedes) the visuals.

L cut: The reverse of the J cut. You see the visuals of the next clip first, while the audio from the first clip overlaps them.

4. Match cut

Uses elements from the previous scene to fluidly bring the viewer through to the next scene. It joins two different clips that share one or more qualities of action, composition, content, or subject matter: the juxtaposed clips, while not directly related, are united by a shared aspect.
There are three types of match cut:

1. Graphic Match Cuts — shapes, colors, compositions
2. Match on Action Cuts — action, movement
3. Sound Bridge — sound effects, dialogue, music

Eg: In the shower scene in Psycho, the shot of the water circling down the drain crossfades into a close-up of the actress's eye, which is roughly the same size and in the same position within the frame as the drain.

Another example is the edit in 2001: A Space Odyssey where the bone thrown by a prehistoric ape cuts to a futuristic space station. This is a graphic match cut.

5. Cross cut

This cut is used in parallel editing. It is a technique in which the editor alternately shows two or more pieces of action that are supposed to be happening simultaneously in different locations or time periods.

It is often used during a phone-conversation sequence so viewers see both characters' facial expressions in response to what is said.

Christopher Nolan uses cross-cutting extensively in films such as Inception in which sequences
depict multiple simultaneous levels of consciousness.

In the movie 'The Birth of a Nation', director D.W. Griffith made extensive use of parallel editing throughout the film to build dramatic tension as well as to develop the relationships of the characters within its world.

6. Cutting on action

It simply means cutting in the middle of your subject's action, whether it's a jump, a punch, or
even someone reaching for a doorknob in one shot and then opening the door in the next.

The cut is made before the action has been completed. Because the audience's attention and eye focus are on the movement, the change of shot goes unnoticed. This helps make your cuts invisible and draws viewers into your story. Eg: In the kung fu training scene in Kill Bill Vol. 2, the cuts take place before the feet touch the ground after a jump.

7. Invisible Cut
An invisible cut gives rise to perfect visual continuity, in that there is no break in what the audience sees. The cuts are made to match two shots so perfectly that the switch is not noticeable.

Eg: The 12-minute fight scene in the movie Extraction was cut in such a way that there was no visible interruption, making it look like a continuous one-take shot. The movie Birdman deploys a similar shooting and editing technique, making it seem like a two-hour-plus single take.

8. Smash Cut

One scene abruptly cuts to another for emotional or narrative purpose. It usually occurs at a
moment in a scene where a cut would not be expected.

9. Axial Cut

An axial cut is a type of jump cut, where the camera suddenly moves closer to or further away
from its subject, along an invisible line drawn straight between the camera and the subject. The
orientation of the camera, however, remains the same. As a result, these two spatially discontinuous shots appear as if they are 'clipped out' from a continuous shot such as a zoom or a dolly.

TRANSITIONS
A transition is a way of joining one shot with another in film or video editing.

They are also used to convey a particular mood, jump between storylines, switch to another
point of view, spice up the narrative, or move backward or forward in time.

There are four basic transitions:

1. Cut

This is the most basic and common type of general-purpose transition. It is an instantaneous switch from one shot to another in the least obtrusive way. This powerful, dynamic transition is the easiest to make and is used for clarification (showing the viewer the event as clearly as possible) and intensification (sharpening the impact of the screen event).

For example, if the subject drops a glass in shock, you would first show the glass in their hand, then intensify the drop by showing a close-up of the broken glass and then the subject's shocked expression.
2. Dissolve

It is the gradual transition from shot to shot with the two images temporarily overlapping. It is a
quiet and restful transition used to provide a smooth bridge for action or to indicate the passage
of time. A slow dissolve indicates a relatively long passage of time (months and years) whereas
a fast dissolve, a short one (minutes and hours). If a dissolve is stopped halfway, the result is a
superimposition.
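Conceptually, a dissolve is just a weighted blend of the outgoing and incoming frames. A minimal sketch (not any particular editor's implementation), where progress runs from 0.0 (all of shot A) to 1.0 (all of shot B):

import numpy as np

def dissolve(frame_a, frame_b, progress):
    # Blend two same-sized frames; at progress = 0.5 the result is a
    # superimposition of the two images.
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    out = (1.0 - progress) * a + progress * b
    return out.astype(frame_a.dtype)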

3. Wipe

In a wipe transition, the incoming shot travels from one side of the frame to the other, replacing the previous scene. It is a relatively novel transition that can take many different shapes, such as expanding diamonds, boxes, or circles. For instance, you may have seen PowerPoint transitions where the top picture is horizontally peeled off a stack of others, or where a diamond expanding from the center of the top picture gradually reveals the one underneath. Wipes are often used to transition between storylines taking place in different locations and to establish tension or conflict. Wipes were used extensively in the original Star Wars trilogy.

4. Fade

A fade is a gradual change between black (or any solid colour) and a video image. Traditionally it has been used to begin or conclude films. It is also used to separate scenes and to convey a longer passage of time. A slow fade suggests the peaceful end of an action; a fast fade is rather like a "gentle cut," used to conclude a scene.

There are two types of fade: Fade in and Fade out.

A fade-in transitions from a solid colour to a video image, whereas a fade-out transitions from a video image to a solid colour.

With the advent of sophisticated nonlinear editing systems, a new category of digital transitions has emerged. This includes an almost endless list of digitally created effects such as paper peel, paper crush, shatter, etc.

To sum up, while the basic purpose of all these transitions is to provide an acceptable link
between shots, they differ in function, that is, how we are to perceive the transition in a shot
sequence.
