
MADHU BALA INSTITUTE OF COMMUNICATION & ELECTRONIC MEDIA
(Guru Gobind Singh Indraprastha University, New Delhi)

BACHELOR OF ARTS (JOURNALISM & MASS COMMUNICATION)

BA (JMC): 2017-18
___________________________________________________________________

UNIT 1 BA (JMC) 209


___________________________________________________________________

UNIT 1: Introduction to Video Editing


___________________________________________________________________

LESSON 1
Video Editing: Evolution, meaning and concept

LESSON 2
Why do we edit? Its objectives and significance

LESSON 3
Different kinds of editing techniques

LESSON 4
Meaning of continuity in editing and its rules

LESSON 5
Video editor as a profession and the vital role he/she plays


Lesson 1 Video Editing: Meaning, how it evolved and its basic concepts

STRUCTURE
1.0 Objectives
1.1 Introduction
1.2 Evolution of editing and its concepts
1.3 Assignments
1.3.1 Class Assignments
1.3.2 Home Assignments
1.4 Summing Up
1.5 Terminal questions
1.6 Suggested further readings
1.7 Keywords


1. Video Editing: Meaning & basic concepts

This lesson deals with understanding the evolution of video editing and its basic concepts.
________________________________________________________________________
1.0 Objectives
After going through this lesson, you should be able to:
 Understand the basic idea of video editing and the significance of the technique
 Understand the evolution of video editing
________________________________________________________________________
1.1 Introduction

“Video editing is the process of manipulating and rearranging video shots to create
a new work. Editing is usually considered to be one part of the post production process —
other post-production tasks include titling, colour correction, sound mixing, etc.”

Shooting in many single-camera or multi-camera productions happens in a non-sequential
manner. Scenes are often shot many times and from many angles. Video editing is about
stringing them together in such a way as to ‘construct’ a story for the viewers.

Almost all programmes you see on television have been edited in some way. Although
editing equipment and techniques change almost from day to day, the basic editing
functions remain the same.

 Video editing is similar to editing and designing a publication to give it shape and form
with good content.

 We write the content (shoot the rushes), edit the copy (arrange video and audio in a
comprehensible manner), design (add special effects, balance and sweeten audio,
add graphics/titles), and print the publication (publish it on tape or a CD/DVD).

Video editing is used to structure and present all video information, including films and
television shows, video advertisements and video essays. Video editing is one of the most
important phases in single-camera post-production. We may have the best shots on tape
or memory card, but unless they are well organised into a programme, they remain
useless.
Actually, the editing process begins even as we are shooting. We shoot different shot sizes
and camera angles, keeping in mind what these shots will convey when joined
together.


There is an inextricable relationship between shooting and editing, which has a vital impact
on the end product and, in turn, on the audience. While live multiple-camera productions
involve a spontaneous and simultaneous selection or switching between shots, non-live
single-camera or multiple-camera productions involve a painstaking choice from the
available material, which is known as post-production editing.

There is always a need for enhancement and fine-tuning during post-production. Video
editing has been dramatically democratized in recent years by editing software available
for personal computers. Video editing includes cutting segments (trimming), re-sequencing
clips, and adding transitions and other special effects.
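
To make these basic operations concrete, here is a minimal illustrative sketch using the open-source Python library MoviePy (assuming the MoviePy 1.x API; the file names and cut points are hypothetical). It trims two takes out of the rushes, re-sequences them, and joins them with a one-second crossfade:

```python
# Minimal sketch of trimming, re-sequencing and a transition,
# assuming the MoviePy 1.x API; file names and times are hypothetical.
from moviepy.editor import VideoFileClip, concatenate_videoclips

source = VideoFileClip("rushes.mp4")

# Trimming: keep only the usable parts of the footage (times in seconds).
take_1 = source.subclip(5, 12)
take_2 = source.subclip(40, 48)

# Re-sequencing: the edited order need not follow the shooting order.
# The second clip fades in over the first for a one-second crossfade.
sequence = [take_2, take_1.crossfadein(1.0)]
final = concatenate_videoclips(sequence, method="compose", padding=-1.0)

final.write_videofile("edited_programme.mp4")
```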

1.2 Evolution of Editing (Background)

Video editing is the process of editing segments of motion video production footage,
special effects and sound recordings in the post-production process. Motion picture film
editing is a predecessor to video editing and, in several ways, video editing simulates
motion picture film editing, both in theory and in the use of linear video editing and of
video editing software on non-linear editing systems (NLE).
Using video, a director can communicate non-fictional and fictional events. The goal of
editing is to manipulate these events to bring the communication closer to the original goal.

1895 – The Lumière brothers invented the Cinématographe, a three-in-one
device that served as camera, printer and projector for motion pictures.

1898 - British film pioneer Robert W. Paul made Come Along, Do!, one of the
first films to feature more than one shot.

1900 – There was still no editing involved: an entire film was shot in the order in which it
would be seen in the theatre, and just one reel of film was played at a time.


1903 – The Great Train Robbery, written, produced and directed by Edwin S. Porter, used
a number of innovative techniques, including cross-cutting, double-exposure composite
editing and camera movement.

1915 - David Wark Griffith, considered to be the father of narrative cinema, refined
techniques like parallel editing, pushing them to higher levels of complexity and depth. His
film ‘The Birth of a Nation’ employed spatial continuity, the 180-degree rule, the
establishing shot and the reverse shot.

1920 - Russian director Lev Kuleshov conducted an experiment demonstrating that the
meaning of a shot depends on the shots around it: he took an old film clip of a head shot of
a noted Russian actor and intercut it with a shot of a bowl of soup, then with a child playing
with a teddy bear, then with a shot of an elderly woman in a casket, and audiences read a
different emotion into the same face each time. His work introduced cross-cutting and the
theory of montage into editing.

1927 - Philo Farnsworth's video camera tube converts images into electrical signals.

1951 - The first video tape recorder captured live images from television cameras by
converting the camera's electrical impulses and saving the information onto magnetic
video tape.

1971 - Sony began selling videocassette recorder (VCR) decks and tapes to the
consumer market.

1985 - Quantel released the “Harry,” the first all-digital video editing and
effects compositing system. Due to technical limitations, it could record and apply effects
to a maximum of 80 seconds of 8-bit uncompressed digital video.

1987: Avid Technology created the Avid/1 Media Composer. It was designed using the
Apple Macintosh II computer platform, as well as proprietary Avid hardware and software.

1989: Avid Technology introduced the Avid/1 Media Composer at NAB. The codec used
for editing on Avid was the Motion JPEG (M-JPEG) codec, which became the primary
video editing codec of the early ’90s. It was not of very high quality, but it worked fine for
offline editing.

1991: Adobe released Premiere 1.0 for the Mac.

1992: The first feature film, Let’s Kill All the Lawyers, was digitally edited using the Avid. Up
until this point, only short-form videos and commercials could be edited because of hard
drive capacity limitations.

1993: Media 100 entered the market as a low-cost digital video editing solution. Media
100 offered steady advancements in compression technology and continued to develop
higher video resolutions, focusing primarily on software innovation rather than hardware.


1995: The DV codec and IEEE-1394 (FireWire 400) brought huge advancements to digital
video recording, capturing and editing.

1996: The English Patient was the first digitally edited film to win an Oscar for Best Editing
(edited by Walter Murch on the Avid).

1999: Apple released Final Cut Pro, which soon became a chief competitor to Avid.

2001: The Rules of Attraction was the first feature film edited using Final Cut Pro.

http://youtu.be/Bzy94vWUitE
http://www.youtube.com/watch?v=KS6pjJ4eiVY

Basic concepts of editing

Video Editing or Non-Linear Video Editing is the process of taking raw video (that is,
untouched or newly recorded footage) and taking away the clips that are not
necessary to the story or point of the video. This taking away of footage is called cutting.
Tools used in Non-Linear Editing include a recorder to initially record the video, a digitizer
to convert the video to a digital format that can be used by a Non-Linear editing program,
the Non-Linear editing program itself, and a computer to run the program.
There are many options to choose from in each of these four categories. The choice of
tools matters only as far as the quality of the final piece is concerned.


We use the word editing to mean any of the following:

• Rearranging, adding and/or removing sections of video clips and/or audio
clips.
• Applying colour correction, filters and other enhancements.
• Creating transitions between clips.

Multiple-camera productions involve a spontaneous and simultaneous selection
between shots, while single-camera productions involve editing script-based shot
material. There are many reasons to edit a video, and your editing approach will
depend on the desired outcome. Before you begin, you must clearly define
your editing goals.

The general definition of video editing, however, can be summarized as follows:
it is the process of manipulating and modifying video images to create something new.
Manipulation and modification include cutting segments, re-sequencing video clips, adding
audio clips, applying enhancements, creating transitions between clips and adding special
effects. It can be as simple as stitching together different scenes and shots with simple
video transitions, and can become as complicated as adding computer-generated
imagery (CGI) and audio and tying together different elements, which may take years,
thousands of man-hours and millions of dollars to accomplish, as is the case with big-
budget motion pictures.

1.3 Assignments
1.3.1 Class Assignments
1. What is the concept of editing?
1.3.2 Home Assignments
1. What is the difference between single-camera and multi-camera editing?

1.4 Summing Up
In this chapter, we learnt about the evolution of editing as a concept.

1.5 Terminal questions

1. What is video editing and its concept?


2. At which stage does video editing come in production and what all does it
constitute?

1.6 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth
2. Vasuki Belavadi, Video Production, Oxford University Press


1.7 Keywords

1. Video Editing: It is the process of manipulating and modifying video images to create
something new. Manipulation and modification include cutting segments, re-sequencing
video clips, adding audio clips, applying enhancements, creating transitions between clips
and adding special effects.

2. Footage: Raw, unedited material as originally filmed by a movie camera or recorded
by a video camera, which typically must be edited to create a motion picture or a video
product.


Lesson 2 Why do we edit? Its objectives and significance.

STRUCTURE
2.0 Objectives
2.1 Introduction
2.2 Need of editing
2.3 Objectives
2.4 Importance
2.5 Assignments
2.5.1 Class Assignments
2.5.2 Home Assignments
2.6 Summing Up
2.7 Terminal questions
2.8 Suggested further readings
2.9 Keywords


2. Why do we edit? Its objectives and significance.

This lesson deals with understanding the need for and importance of video editing.
________________________________________________________________________

2.0 Objectives
After going through this lesson, you should be able to:
 Understand the objectives of editing.
 Understand what can be achieved through the editing process.
___________________________________________________________________
2.1 Introduction

It's normally very difficult (if not entirely impossible) to get perfect video footage in a single
take or shot. The solution is to shoot multiple takes and manipulate them into a cohesive
whole in the editing suite. There are many reasons to edit a video and your editing
approach will depend on the desired outcome.

Editing is the process of selecting and preparing written, visual, audible, and film media
used to convey information. The editing process can involve correction, condensation,
organization, and many other modifications performed with an intention of producing a
correct, consistent, accurate and complete work.

2.2 Need of editing

The editing process often begins with the author's idea for the work itself, continuing as a
collaboration between the author and the editor as the work is created. As such, editing
can involve creative skills, human relations and a precise set of methods.

Shooting in many single-camera or multi-camera productions happens in a non-sequential
manner. Scenes are often shot many times and from many angles. Video editing is about
stringing them together in such a way as to ‘construct’ a story for the viewers.

Almost all programmes you see on television have been edited in some way. Although
editing equipment and techniques change almost from day to day, the basic editing
functions remain the same.


A good video, whether it be a music video, marketing campaign video, corporate video, or
anything else, must go through three important stages:

 Pre-production
 Production
 Post-production

Post-production is primarily the editing stage. But why is video editing important?
Without editing, you’ll be left with a complete mess. It’s true that all roles of video
production are important. Without a good cameraman, your shot is ruined. Without a
good director, your scenes are disorganized and confusing. But without an editor, all of
the other aspects of video production can no longer come together to create the
masterpiece that you’re trying to create.

Video editing is important because it is the key to blending images and sounds to make
us feel emotionally connected and sometimes truly there in the film we’re watching. It’s
a safe assumption to say that video editing is among the most important jobs in the film
industry. With professional video editing you can create an emotion-evoking
masterpiece, and it can make or break your film, which is why it’s just as important to
choose the right video editor as it is to choose the right camera equipment.

Editing, at its most basic, can help you put all your shots into the proper sequence. You
use editing tools to weed out or fix any mistakes made during the production process. It
can be used to trim the video to the length you want, and it can also be used to
communicate the right aesthetic to the audience.

When we refer to editing, we’re not just talking about cutting out unnecessary footage.
We’re also talking about the value-adding aspects of the editing process, such as
including graphics, placing text, or adding a background score.

Since video editing is a cumbersome process, a lot can go wrong with the final output.
Even the most skilled editors are not immune to this.

2.3 Objectives / goals of editing

Video editing is essentially the process of editing segments of motion video production
footage by cutting, trimming and overlaying them, and adding special effects and sound
recordings. Following are some objectives which can be achieved with editing process:

 Remove unwanted footage

This is the simplest and most common task in editing. Many videos can be dramatically
improved by simply getting rid of the flawed or unwanted bits.

 Choose the best footage

It is common to shoot far more footage than you actually need and choose only the best
material for the final edit. Often you will shoot several versions (takes) of a shot and
choose the best one when editing.

 Create a flow

Most videos serve a purpose such as telling a story or providing information. Editing is a
crucial step in making sure the video flows in a way which achieves this goal.

 Add effects, graphics, music, etc.

This is often the "wow" part of editing. You can improve most videos (and have a lot of fun)
by adding extra elements.

 Alter the style, pace or mood of the video

A good editor will be able to create subtle mood prompts in a video. Techniques such as
mood music and visual effects can influence how the audience will react.

 Give the video a particular "angle"

Video can be tailored to support a particular viewpoint, impart a message or serve an
agenda.

2.4 Importance of video editing

What makes a film or television show worth watching? What do you want to watch the film
for? What are your values and needs? What is your emotional state at the time? Are you
looking for distraction or escape, or are you watching for the educational value? All these
questions boil down to one answer: what makes a film worth watching is you, the viewer.
But to get the movie to the point where the viewer will want to see if the production is
worth watching, it must be edited for enhancement.

What makes a TV programme watchable and memorable are its elements: the story, the
performances, the lighting, the choice and quality of film or digital medium, the sound
quality, the choice of music (especially the underlying background mood music), the
colour corrections, and the cuts.


Video editing is one of the most important components of the entire video and film
production industry. It is no coincidence that there is an Academy Award category for
editing in film, and it can drastically alter and improve the quality of the entire project.
Editing is the assembly of filmed clips, placed on the video editing software's timeline in
the order specified by the director, to tell a story. Post-production is very important: it is the
final polish of a process that may have taken a long time to put together. It is the magic
and seamlessness of sound and image bits, shot separately but running simultaneously,
that form the smooth-flowing visual story presented to an audience.

 The Perfect Flow


The video can be really appealing with virtually no errors or room for improvement up until
the post-production process; however, the flow you have in mind is entirely dependent on
the editors working with the director and cinematographer. When an editor receives the
footage, their primary goal is to make it as smooth and organized as possible. Basically,
they want to maintain the flow that you established. It's capturing this flow with cuts, pace,
and sound that makes a stunning film.

 Cutting Your Way to Success


In video editing, one of the most common and most effective ways to achieve the perfect
flow is through cuts. Cutting in and of itself is not difficult: you choose a start and end time
and there you go, it’s gone. But that’s not all that goes into a cut. To make your video’s
flow ideal, you have to cut shots at the perfect time. Timing is critical! Cut it too soon and
you end up with a sudden stop that the audience was not expecting. Cut it too late and you
end up with a shot that seems to drag on forever.

 Switching Gears…or Scenes


Transitions make all the difference to a scene. Sometimes, without a transition you end up
with a jerky, fast-paced mess that nobody wants to watch. Video editing makes these
transitions smooth and elegant. The flow that the editor is trying to preserve is what makes
the video great, and transitions are a way to keep the pace of the film controlled. With the
pace under control, the editor can focus on equally important changes, such as
continuity editing, colouring, layering, and sound editing.

 Sound Editing
Everyone thinks of video editing as the visual part. Although this is true, it’s not the whole
story. Editing a film so that the pictures flow does not necessarily mean the audio flows as
well. Adjusting volume levels and synchronizing audio clips with video clips can be
anything but a walk in the park; however, it can also make the video exactly as it was
intended. Sound editing is how video can set the mood and evoke emotions from your
audience. Replacing a sound track with another, unintended one can really open your
eyes to what sound editing truly does for us.
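
As a small illustration of this kind of adjustment, here is a hedged sketch, again assuming the MoviePy 1.x API with hypothetical file names and gain values. It dips the dialogue level slightly and lays a quiet music bed underneath it:

```python
# Sketch of basic audio sweetening, assuming the MoviePy 1.x API.
# File names and gain factors are hypothetical; the scene is assumed
# to have a dialogue track, and the music bed to be long enough.
from moviepy.editor import VideoFileClip, AudioFileClip, CompositeAudioClip

scene = VideoFileClip("scene.mp4")
dialogue = scene.audio.volumex(0.9)                   # slight dip in dialogue level
music = AudioFileClip("mood_music.mp3").volumex(0.2)  # quiet background bed
music = music.subclip(0, scene.duration)              # trim the bed to the picture

# Mix the two tracks and attach the result to the picture.
scene = scene.set_audio(CompositeAudioClip([dialogue, music]))
scene.write_videofile("scene_sweetened.mp4")
```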


 Colour

What is shot on one camera may not be colour-aligned or balanced properly to match
another camera, or even footage shot on a previous day; therefore an
editor is needed to give it a professional edge. Colour corrections might be as easy as
lightening a frame or two, or they can be a very time-consuming and expensive colour edit,
requiring everything from colour matching, lightening and balancing the background to
stepping up the image and re-digitizing after the bump.

2.5 Assignments

2.5.1 Class Assignments


1. What are the objectives of video editing?
2.5.2 Home Assignments
1. What is the importance of editing in the video-making process?

2.6 Summing Up
In this chapter, we learnt about the objectives or goals of video editing, along with its
significance in bringing finesse to the final product.

2.7 Terminal questions


1. What are the various objectives of the editing process?
2. What are the various steps or factors that make editing important?
What difference can they make to the final product?

2.8 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth
2. Vasuki Belavadi, Video Production, Oxford University Press
3. Ken Dancyger, The Technique of Film and Video Editing: History, Theory, and Practice,
Focal Press

2.9 Keywords
1. Post-production: Post-production is primarily the editing stage.


Lesson 3 Different types of editing

STRUCTURE
3.0 Objectives
3.1 Introduction
3.2 Different editing techniques
3.3 Assignments
3.3.1 Class Assignments
3.3.2 Home Assignments
3.4 Summing Up
3.5 Terminal questions
3.6 Suggested further readings
3.7 Keywords


3. Different types of editing

This lesson discusses various kinds of editing techniques, from those used in the past to those used in the present.
________________________________________________________________________

3.0 Objectives
After going through this lesson, you should be able to:
 Understand the various types of editing machines and software.
 Understand how editing was done in the past and what changes have been brought to
video editing.
_______________________________________________________________________
3.1 Introduction

Video editing is a major part of the video post-production process. Typically in post-
production you capture video from a camera, view your raw footage, trim the clips for the
best parts, sequence them on a timeline, and add transitions, titles, music, sound effects,
and special effects.

3.2 Different Editing Techniques

Although most editors opt for digital non-linear editing for most projects, it makes sense to
have an understanding of how each method works.

a) Film Splicing


https://www.youtube.com/watch?v=bzhST6WdIk0
https://www.youtube.com/watch?v=dUxbfiZ_-9Y

It was the first way to edit moving pictures and conceptually it forms the basis of all video
editing. Traditionally, film is edited by cutting sections of the film and rearranging or
discarding them. The process is very straightforward and mechanical.
In theory a film could be edited with a pair of scissors and some splicing tape, although in
reality a splicing machine is the only practical solution. A splicing machine allows film
footage to be lined up and held in place while it is cut or spliced together.

A film splicer (also called a film joiner, usually in Europe) is a device which can be used
to physically join together lengths of photographic film. It is mostly used for motion
picture film. The units are made in various types depending on the usage: Single-8, Super 8,
9.5 mm, 16 mm, 35 mm and 70 mm. It is used in film editing to make a cut (transition).

Cement splicers join films together by using a chemical called film cement, which is made
of film base dissolved in a solvent. The photographic emulsion is removed from the area to
be joined and the base of the other end is brought into contact with it.
Film editors use a version with a very small overlap at the top and bottom of the picture
frame to edit film negatives, although units with a longer overlap are preferred for
projection release prints.

Tape splicers
Here a piece of thin transparent adhesive tape is used to join the two ends. The tape may
be pre-perforated to match the film perforations, or the splicer may make perforations as
the splice is made (this type of splicer is sometimes referred to as a guillotine splicer).
Tape splicers can be used on most types of film. This is the most popular way to join
polyester prints in theatres.

b) Tape to Tape (Linear)

Linear editing was the original method of editing electronic video tapes, before editing
computers became available in the 1990s. Although it is no longer the preferred option, it
is still used in some situations.


In linear editing, video is selectively copied from one tape to another. It requires at least
two video machines connected together — one acts as the source and the other is the
recorder.
Place the video to be edited in the source machine and a blank tape in the recorder.
Press play on the source machine and record on the recorder.

The basic process is to play the original tape in the source machine and record a new
(edited) version on the record machine.
Begin by choosing which of your VCRs (video cassette recorders) will be your source and
which will be the recorder. In most cases it won't matter which is used for each purpose, but
if one machine is strong in one area it might make a difference. For example, if one
machine has better recording features it will be the obvious choice for the record machine.


Connect the VCRs

To connect the two VCRs together, plug the video and audio outputs of the source
machine into the video and audio inputs of the record machine. Digital video machines
may also have connectors such as Firewire or USB, which are the best quality of all.

Connect the Monitors


Once the tape machines are connected, connect each machine to its own monitor.
Although you can edit with one monitor, it is significantly easier with two. If you absolutely
must stick to one monitor, it will be connected as the record monitor.
Play a tape in the record machine and make sure it appears on the record monitor. Also
check the audio.
Stop the record tape and play a tape in the source machine. It should appear on both the
source and record monitors.

Linear Editing Features and Techniques

Most professional VTRs (video tape recorders) let the editor switch between two major
editing modes.

 Assemble Editing

 Insert Editing

In the assemble mode, the record VTR erases everything on its tape (video, audio,
control, and address tracks) just ahead of copying the material supplied by the source
VTR.
Every time a new scene is recorded, it simply erases what was there before and replaces
it with new audio and video.

The edit process is fast. (Control track: a stream of electronic pulses recorded on the tape,
rather like the ticking of a clock. It is what the VTR uses to control the speed of the tape
and the speed of the video record/playback heads.)

Assemble Editing

A slight mismatch of sync pulses will cause some edits to “tear”, causing a sync roll, which
means that the picture will break up or roll momentarily at the edit point during playback.

Insert Editing

To prepare the edit master tape for insert editing, you need to record a continuous control
track on it. The simplest way is to record “black”, with the video and audio inputs in the off
position. The VTR faithfully lays down a control track in the process and makes the tape ready.
The recording of black happens in real time.


Advantages of insert editing:

 It is a faster way of editing, and all edits are tear-free.
 It is very useful in news broadcasts, since new video or audio can be inserted
anywhere in the tape.
 A shot can be inserted without affecting the existing audio or sound track.

c) Digital/Computer (Non-linear)

A non-linear editing system (NLE) is a video (or audio) editing system
that performs non-destructive editing on source material. The name is in contrast
to 20th-century methods of linear video editing and film editing.
Video footage is recorded (captured) onto a computer hard drive and then edited using
specialized software. Once the editing is complete, the finished product is recorded back
to tape or optical disc.


Non-linear editing is the most natural approach when all assets are available as files on
video servers or hard disks, rather than as recordings on reels or tapes, whereas linear
editing is tied to the need to view video and audio sequentially. Non-linear editing enables
direct access to any video frame in a digital video clip, without needing to play or
scrub/shuttle through adjacent footage to reach it, as is necessary with video tape linear
editing systems. It is now possible to access any frame by directly entering its timecode or
descriptive metadata. An editor can, for example, at the end of a day of the Olympic
Games, easily retrieve all the clips related to the athletes who received a gold medal.
So instead of going in a set order, you are able to work on any segment of the project at
any time, in any order you want. In non-linear video editing, the original source files are not
lost or modified during editing.
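
To make the idea of direct access concrete, here is a small illustrative sketch in Python (not drawn from any particular NLE) of how an "HH:MM:SS:FF" timecode maps to a frame index, which is what lets the software jump straight to a frame instead of shuttling through tape:

```python
# Illustrative sketch: converting an "HH:MM:SS:FF" timecode to a frame index,
# assuming a constant (non-drop-frame) frame rate. Not tied to any specific NLE.
def timecode_to_frame(timecode: str, fps: int = 25) -> int:
    hours, minutes, seconds, frames = (int(part) for part in timecode.split(":"))
    return (hours * 3600 + minutes * 60 + seconds) * fps + frames

# A non-linear system can seek straight to this frame in the captured file.
print(timecode_to_frame("01:02:03:04"))  # -> 93079 at 25 fps
```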

Digital/Computer (Non-linear) Editing Software

 Windows Live Movie Maker (Microsoft)
 Pinnacle Studio (Pinnacle Systems)
 Premiere Pro (Adobe)
 Vegas Pro (Sony Creative Software)
 iMovie (Apple)
 Final Cut Pro (Apple)
 Video Studio Pro (Corel)
 Movie Edit Pro (Magix)


[Figure: basic layout of a video editing software interface, showing the name of the editing
software, the source/preview monitor, the programme monitor, the timecode display, the
project files window with imported files, the audio level monitor, the editing toolbar, and the
timeline with video tracks and audio tracks holding video and audio clips.]

d) Live Editing

Multiple cameras and other video sources are routed through a central mixing console and
edited in real time. Sports events and award functions are a few common examples.


The “live production edit” approach involves mixing all of the camera feeds together (using
a video mixer/selector) and then having a director select shots/edit, in real time, to produce
a complete sequence at the end of the show.

A big advantage of the live-event approach is that it produces an edited sequence of the
show very quickly (it could technically be done at the end of the show). Some
disadvantages of this approach are that it requires special equipment (video
mixers/switchers) to connect and mix/select the video feeds, special skills to
communicate with (and direct) the different cameras to coordinate their shots, and the
ability to make split-second decisions to select the right edits in real time as the show
progresses.

This also requires that all of the cameras be wired back to a central mixer, meaning that
the mobile camera operators need to drag a lot of wire along with their cameras as they
move around the venue. Often, this means hiring another person to manage the wire as
the camera operator scurries around. This approach requires a lot more coordination and
communication between the director and the different cameras – to make sure the desired
shots are available to select when needed and so the director does not decide to cut to a
specific camera right when the operator decides to change shots (to catch a different angle
or zoom in to get a closer shot).
In addition, to capture the most powerful sequences, the director needs to
know the musical arrangement and even specific stage action (like where the performers
will be moving) in great detail to capture the best edit in real time.

http://concertvideo411.com/2011/10/30/live-edit-or-post-production-edit/

3.3 Assignments

3.3.1 Class Assignments


1. What is a film splicer?

3.3.2 Home Assignments


1. What is linear editing?

3.4 Summing Up
In this chapter, we learnt about various editing techniques, from the earliest days to the present.

3.5 Terminal questions

1. What is non-linear editing? What software is available for it?

2. What is the difference between linear editing and live editing?


3. What is the difference between insert and assemble editing?

3.6 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth

2. Vasuki Belavadi, Video Production, Oxford University Press

3. Ken Dancyger, The Technique of Film and Video Editing: History, Theory, and Practice,
Focal Press

3.7 Keywords

1. Film Splicer: A film splicer (also called a film joiner, usually in Europe) is a
device which can be used to physically join together lengths of photographic
film. It is mostly used for motion picture film.

2. Live Edit - The “live production edit” approach involves mixing all of the
camera feeds together (using a video mixer/selector) and then having a
director select shots/edit, in real time, to produce a complete sequence at the
end of the show.


Lesson 4 Meaning of continuity in editing and its rules

STRUCTURE
4.0 Objectives
4.1 Introduction
4.2 Meaning of continuity and its importance
4.3 Rules of continuity
4.4 Assignments
4.4.1 Class Assignments
4.4.2 Home Assignments
4.5 Summing Up
4.6 Terminal questions
4.7 Suggested further readings
4.8 Keywords

26
BA (JMC) 209 Unit 1 Lesson 4

4. Meaning of continuity in editing and its rules

This lesson explains the concept of continuity in editing and its significance.
_______________________________________________________________________

4.0 Objectives
After going through this lesson, you should be able to:
 Understand why continuity is important and must be followed in video editing.
 Understand the application of the various rules of continuity while shooting and
editing.

4.1 Introduction

Continuity in shooting and editing is the practice of ensuring that details in a shot are
consistent from shot to shot within a video film. When there is continuity between shots,
then audiences have a greater suspension of disbelief and will be more engaged in the
video film.

4.2 Meaning of continuity and its importance

Continuity editing is the predominant style of film editing and video editing in the post-
production process of filmmaking of narrative films and television programs. The purpose
of continuity editing is to smooth over the inherent discontinuity of the editing process and
to establish a logical coherence between shots.

Continuity editing is the dominant editing technique found in narrative feature films,
television shows and web content. It is used to unify a series of disconnected shots into a
scene that plays out in a logical fashion. Movies and television are relatively new mediums
of storytelling, completely different from anything we had seen before. Part of what
makes them so unique is that editing allows the viewer to see a wide shot cut to a close-up,
something our eyes don't see in real life. This could make a story hard to follow, but
continuity editing combined with solid narration allows the viewer to easily get immersed in
the story.


4.3 Rules of Continuity


4.3.1 Action Match Cut – One sequence shot from two angles and merged into one so that
it provides a continuous flow of the action and story.

Entire scenes and montages can move through time, but the shots that compose a
scene should have temporal continuity. An individual scene needs to feel as if it is
happening right now in real time. The most common way of maintaining this illusion is to
cut your shots on actions so that they match up to each other.

For example, let's say that we're editing two shots together of a man throwing a football.
We can start with the close up where he begins to throw the ball and then cut to the wide
shot where we see the ball leaving his hand and traveling across the field. We would want
to cut the two shots together so that they meet at a point when the man's arm is in the
same position. This way the action appears to be seamless when edited together.

Cutting on action or matching on action refers to film editing and video editing techniques
where the editor cuts from one shot to another view that matches the first shot's action.
A common example is a man walking up to a door and reaching for the knob. Just as his
hand touches the knob, the scene cuts to a shot of the door opening from the other side.
Although the two shots may have actually been shot hours apart from each other, cutting
on action gives the impression of continuous time when watching the edited film. By
having a subject begin an action in one shot and carry it through to completion in the next,
the editor creates a visual bridge, which distracts the viewer from noticing the cut or
noticing any slight continuity error between the two shots.


4.3.2 180 degree axis - An imaginary line called the axis connects the characters, and
by keeping the camera on one side of this axis for every shot in the scene, the first
character is always frame right of the second character, who is then always frame left of
the first.

The first rule that any filmmaker needs to learn before he picks up his camera is the 180-
degree rule. Adherence to this rule is necessary to maintain continuity in your scene. What
you do is create an imaginary line across your set that you will not cross with the camera.
This way, if the actor is on the left side of the frame and the actress is on the right side in
the master shot, they will stay in those established positions throughout the scene as the
medium shots and close-ups are edited together.

If the camera crossed the line and the actress appeared frame left and the actor frame
right, then this would cause the audience to become disoriented, because the established
spatial continuity had been violated. Once the spatial distance and positions have been
established, you should not violate them if you want to maintain continuity.


4.3.3 Audio Continuity – Maintaining a smooth flow of the audio. It can be ambience
sound or dubbed audio or narration.
Audio Continuity Problems
Audio continuity problems can be caused by a wide range of factors including shot-to-shot
variations in:

 background sound
 sound ambiance (reverberation within a room, mic distance, etc.)
 frequency response of mic or audio equipment
 audio levels

In single-camera production, most of these inconsistencies may not be easy to detect on
location. It's only when the various shots or takes start to be assembled during editing that
you discover the problem. As you cut from one scene to another you may discover that the
talent suddenly seems to move closer to or farther away from the mic, or that the level or
type of background sound changes (passing traffic, the hum of an air conditioner, or
whatever).
Some problems can be helped with the skilled use of graphic equalizers or reverberation
units. Changes in background sound can sometimes be masked by recording a bed of
additional sound in the audio. This could be music or street noise. As in most of life, it's
easier to avoid problems than to fix them (assuming there even is a way to fix them).

 Things to Be Alert For


First, be aware that mics used at different distances reproduce sounds differently. This is
due to changes in surrounding acoustics, as well as the fact that specific frequencies
diminish over distance.
Although some expensive directional mics will minimize the effect of distance, most mics
exhibit proximity or presence effects.
A good pair of padded earphones placed on top of a set of well-trained ears can detect
these differences.
With the increased reliability of wireless mics, many production facilities are equipping
actors with their own personal mics. The distance of the mic (it is generally hidden in the
person's clothes) can't change, and because of the proximity of the mic, background
sounds tend to be eliminated. Some of the things we talked about in using personal mics
should be kept in mind here.
Finally, you need to watch for changes in background sounds. For example, the sound of a
passing car or a motorcycle may abruptly appear or disappear when you cut to a shot that
was recorded at a different time.
Even if an obvious background sound doesn't disappear, its level may change when you
cut from one person to another. This may be due to differences in microphone distance
coupled with the level adjustments needed to compensate for the different strength of
voices.

Continuity Issues in Background Music


Music can smooth the transition between segments and create overall production unity.
Background music should add to the overall mood and effect of the production without
calling attention to itself. The music selected should match the mood, pace and time period
of the production.
Vocals should be avoided when the production contains normal (competing) dialogue.
Ideally, the beginning of a musical selection should coincide with the start of a video
segment and end as the segment ends. In the real world, this almost never happens, at
least without a little production help.
To a limited degree you can electronically speed up and slow down instrumental segments
with digital editing equipment, especially if the music is not well known.

Because a kind of continuity issue arises when music has to be faded out "midstream" to
conclude at the end of a video segment, you can back-time the music.
longer than the video, during editing you can start the music a predetermined amount of
time before starting the video. You can then fade in the music as the video starts. This will
be less noticeable if the segment starts with narration and the music is subtly brought in
behind it.
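
As a concrete illustration of back-timing (with hypothetical durations), the required head start is simply the difference between the music's length and the video segment's length:

```python
# Back-timing sketch with hypothetical durations: start the music early
# so that it ends exactly when the video segment ends.
music_duration = 150.0   # seconds of music
video_duration = 90.0    # seconds of video segment

# Start the music this many seconds before the video begins, fading it in
# as the video starts; the music then concludes as the segment ends.
head_start = music_duration - video_duration
print(head_start)  # -> 60.0 seconds
```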


4.3.4 Storytelling continuity – A series of sequences tells a continuous story without
breaking or disrupting the events or narrative. Most narrative video or film is made with
the goal of constructing a clear and coherent event structure so that the viewer can readily
understand the sequence of events depicted in it. Film continuity is managed in the service
of telling a story, and thus creates a sense of continuity and discontinuity of events within
the story world. To this end, directors rely on formal devices for editing together the distinct
camera shots that make up a film. Some of these devices are quite dramatic, such as the
fade-out, fade-in, or dissolve, and are intended to signal a shift in scenes. However,
approximately 95% of editing boundaries are cuts (Cutting, 1995), which constitute the
splicing together of two camera frames. The juxtaposition of the content of two edited
frames can be jarring and contain little feature overlap, but most often, a cut is intended to
convey a continuous flow of events and be “invisible” to the viewer. Some authors have
argued that cuts are invisible because they correspond to visual interruptions that occur
naturally due to the movements of the eyes, in particular blinks and saccades.

Continuity editing is important for the management of perception of spatial and temporal
ellipsis. Films rarely depict all of the sub-events that make up a larger event. For example,
one shot may show an actor approaching the stairs of a building and the next shot may
show the actor entering that building. In such cases, viewers are intended to perceive
these events as being roughly continuous in space and time. The editing technique most
likely to be used in this situation would be the cut. In contrast, filmmakers may use fade-
ins, fade-outs, or dissolves across shots to indicate that there is a significant amount of
missing narrative time between the two shots.

4.3.5 Light continuity – Lighting for time, date, and location

You may use lighting to indicate time, date, and location as a more subtle but important
consideration when designing and editing a scene. Early morning and late afternoon light
is different from that at high noon. The colours are different (early and late in the day, the
light is warmer and redder), and the angle of the light is lower. Winter sun is bluer and colder;
the light of summer, fall, and spring each has its own colours and contrast levels.
Intercutting scenes from cameras with noticeably different colour characteristics (colour
balance) in a dramatic production will immediately be apparent to viewers.

To alleviate this problem, all cameras should be carefully colour-balanced and
compared before a production. Once cameras are colour-balanced and matched,
an electronic test pattern with all of the primary and secondary colours is often
recorded at the beginning of the recording. This has traditionally been used to
colour-balance the video playback. However, today many systems can
electronically adjust colour from the recording's integrated colour reference signal.


4.4 Assignments

4.4.1 Class Assignments


1. Define continuity in video editing. Why is it significant?

4.4.2 Home Assignments


1. What is the difference between story continuity and the 180-degree axis rule in editing?

4.5 Summing Up
In this chapter, we learnt about maintaining continuity while shooting and editing through
the various rules that are applied.

4.6 Terminal questions

1. What is continuity? Why is it significant in video editing?


2. What are the different rules of continuity in editing? How can they improve the quality of
the video product if applied correctly?


4.7 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth
2. Vasuki Belavadi, Video Production, Oxford University Press
3. Ken Dancyger, The Technique of Film and Video Editing: History, Theory, and Practice,
Focal Press

4.8 Keywords

180 degrees axis - An imaginary line called the axis connects the characters, and by
keeping the camera on one side of this axis for every shot in the scene, the first
character is always frame right of the second character, who is then always frame left
of the first.

Audio continuity - Maintaining a smooth flow of the audio. It can be ambience sound
or dubbed audio or narration.


Lesson 5 Role of the video editor

STRUCTURE
5.0 Objectives
5.1 Introduction
5.2 Video Editor
5.3 Challenges of video editor
5.4 What is the significance of the role of the editor?
5.5 Assignments
5.5.1 Class Assignments
5.5.2 Home Assignments
5.6 Summing Up
5.7 Terminal questions
5.8 Suggested further readings
5.9 Keywords


5. Role of the video editor


___________________________________________________________________
This lesson deals with understanding the role of the video editor in improving the quality of
the final product.
___________________________________________________________________
5.0 Objectives
After going through this lesson, you should be able to:
 Understand the roles and duties of a video editor.
 Understand the challenges an editor faces in improving the quality of the programme.
___________________________________________________________________

5.1 Introduction
Video editing is a major part of the video post-production process. Typically in post-
production you capture video from a camera, view your raw footage, trim the clips for the
best parts, sequence them on a timeline, and add transitions, titles, music, sound effects,
and special effects.

5.2 Video Editor

A film and video editor is a highly skilled film industry employee who edits movies or
videos. The success or ultimate failure of the production lies in their hands. The final
production must be a coherent project that incorporates the storyline and personality of the
starring actors. Many in the industry consider film editing to be an art that often goes
unnoticed and unappreciated, with some dubbing film editing as 'the silent art'. The history
of video editing is a long trek, going back to the early heydays of Hollywood. As technology
grew, the job descriptions of film editors expanded, to include the field of video editors.

5.2.1 Roles and Responsibilities of Video Editor

The job duties of film and video editors are numerous. An employee might find himself
studying scripts to understand the storyline and collaborating with directors, producers,
and film staff regarding the script and director's goals. Throughout the filming, the film
editor will examine tapes for editing purposes, looking for errors, segments that run long or
parts that do not match the story or go with the storyline. He/she will work with others
to add sounds, voices and music that match the script and place them in the appropriate
places.

Film and video editors complete these tasks with digital equipment and computer software
to create high-quality sound effects. Varying camera angles and shots will be looked at
and the best ones added to the reels. The reels will be reviewed several times before the
editor comes up with a final version called the director's cut. During the process, he works


with other staff including sound and lighting technicians, costume designers and makeup
artists, actors, directors and other editors. Making a movie is truly a team effort.

The film and video editor’s job has changed over the years. When movies were black and
white, editing was simple. With computers and advanced technology, a film and video
editor's job became increasingly complex, using computer graphics to aid in editing
films and supplying the necessary elements to create the finished product.

A video editor's typical duties include the following:

 Edit raw footage for film and video production.
 Perform video and audio editing based on story sequence and continuity.
 Use creative techniques in designing graphics.
 Work closely with producers and directors during production.
 Create the voiceover text and other commentary for the video.
 Cut video sequences effectively to ensure the scenes are seamless and flow logically.
 Review the script to better understand the video production requirements.
 Perform all editing work, including inserting music, sound effects, storyboarding, etc.
 Prepare a logical storyboard by combining the most effective scenes.
 Review all edited tapes to identify any issues and recommend changes as necessary.
 Develop superior skills and expertise in handling computer editing equipment, video
switching devices, etc.
 Discuss video layouts and editing styles with directors and producers.
 Collaborate closely with others in audio and visual teams to create a continuous and
complete story.
 Organize and assemble video segments to deliver a continuous and sequential story of
specified length.
 Develop post-production models for films.
 Organize video screenings for directors and producers to get their feedback.

5.2.2 Steps followed by the editor in the editing process

 Choose the best footage - It is common to shoot far more footage than you actually
need and choose only the best material for the final edit. Often you will shoot
several versions (takes) of a shot and choose the best one when editing.
 Create a flow - Most videos serve a purpose such as telling a story or providing
information. Editing is a crucial step in making sure the video flows in a way which
achieves this goal.
 Give the video a particular "angle" - Video can be tailored to support a particular
viewpoint, impart a message or serve an agenda.
 Increase or decrease the duration of the film.
 Reveal information in stages.
 Add effects, graphics, music, etc. - This is often the "wow" part of editing. You can
improve most videos (and have a lot of fun) by adding extra elements.

 Alter the style, pace or mood of the video - A good editor will be able to create
subtle mood prompts in a video. Techniques such as mood music and visual effects
can influence how the audience will react.

5.3 Challenges of video editor



 Editors need to understand and articulate the inner workings of their specialty:
equipment, job pressures, working methods, theories, and instincts.

 The solitary nature of the job stresses organization, discipline, persistence, self-
reliance, and tireless devotion to details. Yet all editors recognize the importance of
collaboration, particularly with the director, and a generally soft-spoken nature can
be an asset to the give and take nature of the business. Editors never forget that
the director must be the unifying mind behind the project, and they remain servant
to that vision. The ideal relationship between editor and director is a close familial
one, in some cases spanning decades of trusted collaboration.

 Editors tend to work beyond the cutting stage, through previews, premieres and
re-releases; sometimes they are even re-engaged for television adaptations and
reconstructions years later. Unfortunately, film editors (like sound editors, who are even
more constrained by last-minute demands) face short budgets, short schedules, and
short tempers. Despite these limitations, editors are uniformly devoted to their
primary responsibility: to make real the director's vision. This duty involves speaking
up when they feel the director is too close to the film to see its flaws. Enter the
editor as "objective eye."

 A film editor is looking at the picture with complete objectivity... they should try very
hard to pretend they're the audience.

 Get Faster - Speed is an essential quality in a good editor. By speed, I simply mean
the time it takes to perform the edits. The danger with opting to be seen as a
fast video editor is that the time for the thoughtfulness and consideration required
to produce quality projects can get squeezed out as schedules get shorter and
expectations get raised. Therefore “getting faster” is really about saving time on
performing the edits to make time to think about your edits. Here are some
suggestions on how to save time in the edit suite:

 Set up blank template folders to keep your projects consistently organized, which
you can copy and paste onto your edit drive with every new project. If you want an
app for that, use the free Post Haste app from Digital Rebellion. (A small
folder-template sketch follows this list.)

 Memorize keyboard shortcuts for your NLE of choice. Print out a cheat sheet and
look at it often. If you find yourself using 2 or 3 strokes to perform one action, set up
a keyboard shortcut that does all of those things in one hit.

 Tidy as you go. Keeping your edits tidy and organized is one of the best ways to
save incremental amounts of time. I try to have my bins, sequences, and folders
laid out in such a way that if another editor had to sit down at my edit with no
outside input, it would be obvious what the latest sequence is, where the title
graphics are, how the project is structured, and so on.

 Watch More Work - Awareness of emerging creative trends is just as important as
knowing the latest tech developments. It also pays to re-watch your own past work
and ask: what was surprisingly good, and what would I definitely change? In going
through this sometimes cringe-inducing process, I hope I'll stop making some of the
same mistakes time and time again and hopefully see that I've grown as a video editor.


 Make Time to Learn - The most obvious way to improve is to make the time to
actively learn new things. Watch some tutorials or learn a new piece of software. It's
also important to try to learn about fields similar to your own, but different enough to
bring a fresh perspective. As an editor, learning about typography, graphic design,
photography, scriptwriting, storytelling, music composition, or painting could all help
me become a more creative, more inspired film editor: a worthwhile investment.

5.4 What is the significance of the role of the editor?

Creating a video project of any kind is a long process with many important phases, and
one of the most crucial is the editing phase, where the final cut of your project takes
shape. Once you have all your footage and all your audio recorded, it is time to edit it
down into something manageable. Here are three reasons why video editing is so
important.

1.) Reviewing all Footage and Audio

The video editing process allows you to look over everything you captured during the
shoot. During filming it is easy to overlook certain shots because of how fast-paced a
film set is. The editing process lets you slow down and carefully review all the
content that you have to work with so that you can formulate a way to put it all together.

2.) Improve or degrade the quality of the product

The editing stage is the stage where you can make or break your video. This is where all
the content you have comes together in a meaningful way. This is where you bring
together disparate clips into a cohesive story. There sometimes are shots you never really
thought would work, but in the editing room, they come alive. This is also where you make
big decisions about colour grading, visual effects, transitions, and more.

3.) Creative Decisions

A well-edited project will be crisp and flow with great precision. All the creative decisions
you have made from pre-production through post-production shine through after it has all
been edited down to its final form. This is the process where you decide the pacing of the
film, and how all the shots work together to create a unified whole.
Video editing is a lengthy process on its own and requires great attention to detail,
which is why many producers entrust it to a professional editor or a dedicated video
editing service.
The artistry of editing depends not only on a knack for creativity but also on an in-depth
knowledge of the capabilities not just of the editor's own non-linear editing software but
also of graphics and animation software, camera operation, and all the techniques
available to expert users of those tools. Editing is often a matter of problem-solving, and
knowing what CAN be done is a prerequisite for deciding what SHOULD be done. If an
editor says 'I don't know how to do that', the next sentence out of his/her mouth should
be 'but I know someone who does', or you're not working with a good editor.

5.5 Assignments

5.5.1 Class Assignments


1. Who is a video editor? What does he/she do?

5.5.2 Home Assignments


1. What are challenges for a video editor?

5.6 Summing Up
In this chapter, we learnt about the roles and responsibilities of a video editor, as well as
the challenges he/she may face while working.

5.7 Terminal questions

1. What are roles and duties of a video editor?


2. Why is the role of a video editor vital in improving the quality of the final product?

5.8 Suggested further readings


1. Herbert Zettl, TV production Handbook, Thomas Wardsworth Publishing
2. Video Production, Vasuki Belavadi, Oxford Publication
3. The Technique of Film and Video Editing: History, Theory, and Practice, Ken Dancyger,
Focal Press


___________________________________________________________________

UNIT 2 BA (JMC) 209


___________________________________________________________________

UNIT 2: Process of Video Editing


___________________________________________________________________

LESSON 1

Video Formats: Analogue and Digital 42

LESSON 2

Linear and Non-linear Video Editing: Equipment and its functions 50

LESSON 3

Steps for Linear and Non-linear Video Editing 57

LESSON 4

Editing Techniques: Types of Cuts and Transitions 66


Lesson 1 Video Formats: Analogue and Digital

_____________________________________________________

STRUCTURE
1.0 Objectives

1.1 Introduction

1.2 Analogue Formats

1.3 Digital Format

1.4 Analogue to Digital conversion process

1.5 Benefits of Digital signals and formats

1.6 Assignments

1.6.1 Class Assignments

1.6.2 Home Assignments

1.7 Summing Up

1.8 Terminal questions

1.9 Suggested further readings

1.10 Keywords


1. Video Formats: Analogue and Digital

In this lesson we will learn about the concept of analogue and digital formats and signals.

___________________________________________________________________

1.0 Objectives
After going through this lesson, you should be able to:
 Understand the basic idea of analogue and digital formats and their characteristics
 Understand how analogue formats work and how they are converted to digital
___________________________________________________________________
1.1 Introduction

Introduction – from tape to computer

Analog is the way of the natural world: infinite detail, infinite resolution. Tape is one
medium that represents analog audio information; vinyl records (gramophone records)
are another example. When something is converted from analog to digital, that infinite
detail is distilled into a finite number of values, so when you digitize analog sounds, you
naturally lose some information. The idea is that the detail lost is so minor you wouldn’t
be able to notice the difference.
Digital audio is a cornerstone of the broader computer revolution that has been going on
since the 1950s. Home recording became truly practical when computers and analog-to-
digital conversion got good enough and cheap enough to be useful to the layman. Tape
used to be what you recorded audio to, but tape is now all but dead. Digital is just too
convenient and inexpensive, and maintaining and calibrating a tape machine is no longer
necessary. So all hail the digital revolution: we lost a little, but the masses gained a lot.
We used to live in a world of analog images and video, where we dealt with photographic
film, analog TV sets, videocassette recorders (VCRs), and camcorders. For video
distribution, we relied on analog TV broadcasts and analog cable TV, which transmitted
predetermined programming at a fixed rate. Analog video, due to its nature, provided a
very limited amount of interactivity, e.g., only channel selection on the TV and fast-forward
search and slow-motion replay on the VCR. Additionally, we had to live with the
NTSC/PAL/SECAM analog signal formats with their well-known artifacts and very low still-
frame image quality. In order to display NTSC signals on computer monitors or European
TV sets, we needed expensive transcoders. In order to display a smaller version of the
NTSC picture in a corner of the monitor, we first had to digitize the whole picture and then
digitally reduce its size. Searching a video archive for particular footage required tedious
visual scanning of a whole bunch of videotapes. Motion pictures were recorded on
photographic film, which is a high-resolution analog medium, or on laser discs as analog
signals using optical technology. Manipulation of analog video is not an easy task, since it
requires digitization of the analog signal into digital form first.

Today almost all video capture, processing, transmission, storage, and search are in digital
form. In this section, we describe the nature of the analog video signal, because an
understanding of the history of video and the limitations of analog video formats is important.
For example, interlaced scanning originates from the history of analog video. We note that
video digitized from analog sources is limited by the resolution and the artifacts of the
respective analog signal.

1.2 Analog Signal

Analog: relating to or using signals or information represented by a continuously
variable physical quantity, such as spatial position or voltage.

[Figure: a graphic representation of the variously shaped waves that make up audio
and video signals.]

Now assume that such waves are quite long, have slightly different shapes (different
frequencies and amplitudes), and are made from garden hoses. These hoses (analog
signals) must now be shipped by truck to different locations (video and audio signal
transport and recording). The requirement is that the original shape of the bent hoses
(signals) cannot be disturbed even to a slight degree (no signal distortion) during
shipping. But even the most expensive shipping company (high-end equipment) cannot
prevent the long hoses from getting some kinks (signal noise) during packing into the
long and cumbersome crates (analog recording) and during transport (signal transport).
When the hoses with the kinks are then used as models for wave duplication (dubbing),
the various distortions from the original curves are not only maintained but often
exacerbated by additional kinks (added signal noise and artifacts).

1.3 Digital Signals

Digital: (of signals or data) expressed as series of the digits 0 and 1, typically
represented by values of a physical quantity such as voltage or magnetic polarization.

Digital usually refers to the binary system in which data are represented in the form of
on/off pulses. At first glance this either/or system of binary digits may seem clumsy, but
the overwhelming advantage of the digital process is that it has great resistance to data
distortion and error. It also permits any number of combinations and reshufflings, an
extremely important feature when manipulating pictures and sound.


Digital System

In the digitizing process, the analog signal is continuously sampled at fixed intervals; the
samples are then quantized (assigned a concrete value) and coded into 0’s and 1’s.

1.4 Analogue to Digital conversion process

It is a four-step process:
 Anti-aliasing
 Sampling
 Quantizing
 Coding

1.4.1 Anti-aliasing
In this step, frequencies of the analog signal that are too extreme to be sampled
properly (those above half the sampling rate, which would cause aliasing) are
filtered out.

1.4.2 Sampling
In the sampling stage, a number of samples (voltages) of the analog video or audio
signal are taken at equally spaced intervals; these points are used to build the digital
signal. The sampling rate of a video signal is usually expressed in megahertz (MHz).
When you take and measure a relatively large number of samples at short intervals,
you have a high sampling rate; when you take relatively few samples at larger
intervals, you have a low sampling rate. The higher the sampling rate, the more
closely the samples resemble the original signal, so a higher sampling rate produces
better signals.

1.4.3 Quantizing
At this stage each sample is checked against a scale, i.e. how high or low each part of
the signal is relative to the available quantizing levels, and assigned the nearest level.
The number of levels depends on the bit depth: an 8-bit system provides 2⁸ = 256
quantizing levels.

1.4.4 Coding
This process changes the quantization numbers of each step to binary numbers,
consisting of 0’s and 1’s.
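
The four-step process can be illustrated in a few lines of code. The following Python
sketch is illustrative only: the sampling rate, the 3-bit depth, and the sine-wave input are
all assumed for the example. It samples one cycle of an analog-style wave at equally
spaced instants, quantizes each sample to one of 2³ = 8 levels, and codes the levels as
binary strings:

    import numpy as np

    SAMPLES_PER_CYCLE = 8   # assumed sampling rate, for illustration only
    BITS = 3                # quantizing depth: 2**3 = 8 levels

    t = np.arange(SAMPLES_PER_CYCLE) / SAMPLES_PER_CYCLE  # equally spaced instants
    samples = np.sin(2 * np.pi * t)                       # sampling the analog wave

    levels = 2 ** BITS
    # Quantizing: assign each sample (range -1..+1) the nearest of the 8 levels
    quantized = np.round((samples + 1) / 2 * (levels - 1)).astype(int)

    # Coding: express each quantized level as a binary number (0's and 1's)
    codes = [format(int(v), "03b") for v in quantized]
    print(codes)  # ['100', '110', '111', '110', '100', '001', '000', '001']

A higher sampling rate (more instants per cycle) and a greater bit depth would
reproduce the wave more faithfully, at the cost of more data.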

1.5 Benefits of Digital signals and formats

An analog signal is an electrical copy of the original stimulus, such as somebody’s
singing into a microphone. The technical definition is that the analog signal fluctuates
exactly like the original stimulus. The analog signal is also continuous, which means
that it never intentionally skips any part of the signal, however small the skip may be.

The digital signal, on the other hand, is purposely discontinuous. It takes the analog
signal and selects points (instances) at equal intervals. These successive points
represent the original signal, a process called sampling.

1.5.1 Quality
Even before the advent of advanced digital formats, picture and sound quality had
been a major concern of equipment manufacturers and production personnel. Digital
signals promise extremely sharp and crisp pictures that not only show a great amount
of fine detail but also improved colour.
Complex editing and the rendering of special effects require many tape generations
(the number of dubs, or copies, away from the original). In analog recording, the
greater the number of copies, the greater the loss of quality. With digital recording,
by contrast, there is hardly any noticeable quality loss even after dozens of generations.
Another important quality factor is that the simple binary code is relatively immune to
extraneous electronic signals. With digital signal processing, electronic noise is held to
a minimum, if not completely eliminated.
There is a downside to these super-clean signals, however, especially when dealing
with sound. Sometimes digital music recordings sound so sharp and clean that they
lack the warmth and texture of the original piece. Higher sampling rates and more
complex digital signal processing attempt to recover that warmth by paying even more
attention to detail.

1.5.2 Computer compatibility and flexibility


Digital signals or data can be transferred directly to a computer without the need
for digitization. Such compatibility is especially important for creating special effects
and computer-generated images. The opening animated title, or the graphical transition
from one story to the next where one picture peels off to reveal the one underneath,
shows the variety and flexibility of digital effects. The multiple screens within the
screen and the various lines of text that run simultaneously on the bottom, sides, or
top of the main television screen are possible only through digital effects. Computer
software that allows the alteration or creation of audio and video images has
become an essential digital production tool.

1.5.3 Signal Transport

In contrast to the extremely wide, irreducible analog bandwidth, digital signals can
be compressed in various ways so that they can travel on the available highway
without causing gridlock and also fit into a reasonably sized storage area, such as a
videotape or hard disk. Even though digital signals are more robust than their analog
counterparts, signal transmission is still a major concern of broadcasters.

1.5.4 Compression

Compression is the temporary rearrangement or elimination of redundant
information for easier storage and signal transmission. Digital information can be
compressed by regrouping the original data without throwing any of it away. Once at
the destination, the data can be restored to their original positions for an output
that is identical to the original input. We do this frequently when “zipping” (on a
Windows platform) large computer texts for storage and transmission and then
“unzipping” them when opening the file. One of the most widely used digital
compression standards for still images is JPEG. A common compression standard
for high-quality video is MPEG-2, which is a lossy technique (some quality is lost in
compression).
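
The “zipping” analogy can be demonstrated with Python’s standard zlib module, which
performs exactly this kind of lossless regrouping. This is a minimal sketch; the sample
data is made deliberately repetitive so that the redundancy is obvious:

    import zlib

    data = b"redundant video data " * 200   # highly redundant input
    packed = zlib.compress(data)            # regroup the data, throw nothing away
    restored = zlib.decompress(packed)      # restore to the original positions

    assert restored == data                 # lossless: identical to the input
    print(len(data), "->", len(packed), "bytes")

JPEG and MPEG-2, by contrast, are lossy: they permanently discard detail the eye is
unlikely to miss, which makes their output smaller still, but no longer identical to the
input.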

A few of the popular analogue tape formats:

 U-matic 3/4" (Sony)
 VCR, VCR-LP, SVR
 Betamax (Sony)
 VHS (JVC)
 Betacam (Sony)
 Hi8 (Sony) (mid-1990s)

VHS Tape

A few of the popular digital tape formats:

 DV (including DVC-Pro)
 HDCAM (Sony)
 HDV
 ProHD (JVC)
 MicroMV
 MiniDV

Mini DV Tape

Digital video has a significantly lower cost than 35 mm film. In comparison with the high
cost of film stock, the tape stock (or other electronic media used for digital video
recording, such as flash memory or hard disk drives) is very inexpensive. Digital video
also allows footage to be viewed on location without the expensive chemical processing
required by film, and physical delivery of tapes for broadcast is no longer necessary.
Digital television (including higher-quality HDTV) started to spread in most developed
countries in the early 2000s. Digital video is also used in modern mobile phones and
video-conferencing systems, and for Internet distribution of media, including streaming
video and peer-to-peer movie distribution. Even within Europe, however, many TV
stations still do not broadcast in HD because of restricted budgets for the new
equipment needed to process HD.
Many types of video compression exist for serving digital video over the Internet and on
optical discs. The file sizes of digital video used for professional editing are generally
not practical for these purposes, and the video requires further compression with
codecs such as Sorenson, H.264 and, more recently, Apple ProRes, especially for HD.
Probably the most widely used formats for delivering video over the Internet are
MPEG-4, QuickTime, Flash and Windows Media, while MPEG-2 is used almost
exclusively for DVDs, providing an excellent image at minimal size but demanding a
high level of CPU power to decompress.

1.6 Assignments

1.6.1 Class Assignments

1. What is the difference between analogue and digital signals?

1.6.2 Home Assignments

1. What are the benefits of digital formats?

2. What is the process of converting analogue signals into digital signals?

1.7 Summing Up

In this chapter, we learnt about the difference between analogue and digital signals and
the various formats used to store them.

1.8 Terminal questions

1. What is the process of analogue-to-digital signal conversion?


2. What are analogue and digital signals and formats?

1.9 Suggested further readings


1. Herbert Zettl, TV production Handbook, Thomas Wardsworth Publishing
2. Video Production, Vasuki Belavadi, Oxford Publication
3. Millerson, G., & Millerson, G. (1999). Television Production. Oxford: Focal
Press, 13th edition.

1.10 Keywords

Analogue: A signal that fluctuates exactly like the original stimulus.

Digital: Pertaining to data in the form of binary digits (on/off pulses).

Binary digit (bit): The smallest amount of information a computer can hold and
process. A charge is either present, represented by a 1, or absent, represented by a
0. One bit can describe two levels, such as on/off or black/white. Two bits can
describe four levels (2²); 3 bits, eight levels (2³); 4 bits, 16 levels (2⁴); and so on.

Codec: Stands for compression-decompression. Can be one of several compression
systems for digital video, graphics, and audio files.

Compression: The temporary rearrangement or elimination of redundant picture
information for easier storage and signal transport.

Digital television (DTV): Digital systems that generally have a higher image
resolution than standard television.

Sampling: Taking a number of samples (voltages) of the analog video or audio
signal at equally spaced intervals.

Quantizing: A step in the digitization of an analogue signal. It changes the sampled
points into discrete numerical values, which are then coded as binary numbers (0’s
and 1’s). Also called quantization.

High-definition video (HDV): A recording system that produces images of the same
resolution as HDTV (720p and 1080i) but with inferior colors. The images are much
more compressed than those of HDTV, resulting in a slightly lower image quality.

High-definition television (HDTV): Includes the 720p, 1080i, and 1080p scanning
systems. Because the 480p system produces high-quality video, it is sometimes,
though erroneously, included in the HDTV category.


Lesson 2 Linear and Non-linear Video Editing: Equipment


and its functions
________________________________________________
STRUCTURE
2.0 Objectives

2.1 Introduction

2.2 Process of linear and Non-linear editing

2.3 Equipment used in Non-linear editing

2.4 Equipment used in Linear editing

2.5 Assignments

2.5.1 Class assignment

2.5.2 Home assignment

2.6 Summing Up

2.7 Terminal Questions

2.8 Suggested Further Readings


2. Linear and Non-linear Video Editing: Equipment


and its functions

In this lesson we learn in detail about the processes and uses of linear and non-linear
editing. We will also focus on the various equipment used in these processes and
their functions.
______________________________________________________________
2.0 Objectives
After going through this lesson, you should be able to:
 Understand the various equipment used in the linear and non-linear video
editing processes
 Understand linear and non-linear video editing
______________________________________________________________
2.1 Introduction
Postproduction editing is the third and final stage of the production process, in
which the various video and audio segments are given structure and meaning.
Editing offers you the final chance to clarify and intensify the intended message.
Assuming that the pre-production and production phases went according to plan,
you can now use your grasp of the program objective and your creativity to build a
program that has clarity and impact.

Editing goals
Basically, editing or postproduction is the process of combining individual shots in a
specific order. It has several purposes:
1. To assemble material in a sequential fashion. The shooting order may differ
from the running order.
2. To correct mistakes by editing them out or by covering them with other
footage.
3. To create, enhance, embellish, and bring to life images and events that were
once captured live. Tools such as visual effects, sound effects, and music can
give the story more drama, thus more impact on the audience.

2.2 Process of Linear Editing and Non-Linear editing

In contrast to linear editing, where you copy a selected clip from one tape to another,
the basic principle of nonlinear editing is digital file management. Each of the files
contains a single frame or, for all practical purposes, a series of frames that make
up a clip (shot). You probably now see why this system is called “nonlinear”: you
can access any one of the files (frames or clips) instantly in any order regardless of
where the information is located on the hard drive. The computer then flags the
selected clips so that they play back in the sequence you specify. Note that the
video files themselves are not moved from where they are stored on the hard drive;
your editing simply tells the computer the order in which to play back the clips.
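
The principle can be sketched as a simple data structure. In the hypothetical Python
fragment below (the file names and frame numbers are invented for illustration), an edit
is nothing more than an ordered list of references into the stored files, and re-ordering
the edit changes only the list, never the media:

    # Each entry flags one clip: (source file on the hard drive, in-frame, out-frame)
    sequence = [
        ("interview.mov",    120, 480),
        ("cutaway_book.mov",   0,  75),
        ("interview.mov",    480, 900),
    ]

    def play(seq):
        # Playback simply follows the list, jumping to any file in any order
        for src, cut_in, cut_out in seq:
            print(f"play {src} frames {cut_in}-{cut_out}")

    # Nonlinear: swapping the first two clips is instant; no file is moved or copied
    sequence[0], sequence[1] = sequence[1], sequence[0]
    play(sequence)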

2.3 Equipment used in the process of non-linear editing

In this lesson you will learn how to set up a computer editing system, record footage
from a camera or VCR onto your hard drive, edit the footage, and record it back to tape
or disc. Editing with a computer can be a complex process. This lesson provides an
overview and general instructions; you may need to consult your manuals or support
forums for some specific tasks related to your software and hardware.

Nonlinear Editing System


To equip your own editing suite, opt for a high-end desktop computer with a high-
capacity hard drive and a high-speed processor. The typical nonlinear editing (NLE)
system must also include two fairly large external monitors (one for the computer output
and the other to show your edited sequences) and two small loudspeakers. It somehow
seems easier to work with two separate screens than with a divided one. If you intend to
control the volume of additional sound sources, or premix several of them before importing
them to the NLE, you will also need a small audio mixer.

A computer used for editing must have the necessary software to accomplish the three
phases of nonlinear editing—capture, the actual editing, and export—as well as additional
special-effects software for creating transitions, graphics, and titles. Most editing software
allows you to import the video and audio data directly from the video recorder inside the
camcorder to the NLE computer. This represents the capture phase.

Once the information is on the hard drive, you can select clips and specify their order of
play. You can also add new information, such as clips or audio segments from another
shoot or source, to heighten the impact of your creation. Special effects software enables
myriad transitions and title possibilities. This is the actual editing phase.

Unless you play your masterpiece only on the NLE system, you need to dub the final
edited version onto an edit master tape or disc. This is the export phase.

Nonlinear Editing Phase 1: Capture/Digitise


Before you can do any nonlinear editing, you need to transfer the content of the
source media to the hard drive of the NLE computer. The source media can be
videotape, hard drives, memory cards, or optical discs.

Digital source tapes: You can transfer digital videotapes directly from the
camcorder to the NLE hard drive. If, however, you intend to select shots from the
source tapes to save space on the NLE hard drive, you should use a stand-alone
videotape recorder (VTR) for the capture. This is where your field log comes in
handy. Don’t bother capturing shots that you marked as definitely no good, such as
the one in which the talent showed the wrong book during an interview with the
author.

Selecting shots always requires extensive tape shuttle, including repeated fast forwarding
and rewinding, which can be very hard on the small camcorder VTR. In this case you
should extract the tape cassette from the camcorder and use it in a sturdier stand-alone
VTR for this selection/capture procedure. The stand-alone digital VTR is well suited for the
job; extensive shuttles are part of its intended use. You will also find your desired shots
much more quickly than with the camcorder VTR.
Once you have inserted the source tape in the sturdier VTR, you can connect it to the NLE
system with RCA phono or S-video cables or, better yet, a FireWire (IEEE 1394) cable.

Analogue source tapes: If you want to edit some of your old analog tapes, you first need
to digitize them before capture by the NLE. To do this you can use RCA phono or S-video
cables to connect the analog camcorder to a converter box, which changes the analogue
content into digital data. A FireWire cable lets you connect the box with the hard drive of
the NLE.

Other digital source media: If the digital camcorder uses recording media other than
videotape, you can transfer the source data directly to the hard drive of the NLE. Connect
the tapeless camcorder via RCA phono, S-video, or FireWire to the NLE system or, if you
use a compatible memory card, insert the card directly into the slot of the NLE system.

2.4 Equipment used in the process of linear editing


1. Two VCRs (video tape machines), preferably with AV (audio and video)
outputs. If you don't have AV outputs, you can use the RF (aerial) outputs instead.


2. At least one video monitor, but preferably two. Professional monitors are best
but you can use televisions if necessary. You will also need connecting cables.
3. Edit Controller.
4. CG (character generator, or graphics generator): adding titles and other effects
in linear editing normally requires specialized equipment.
5. Video tape recorder: A video tape recorder (VTR), is a tape recorder that can
record video material. The video cassette recorder (VCR), where the videotape is
enclosed in a user-friendly videocassette shell, is the most familiar type of VTR
known to consumers. Professionals may use other types of video tapes and
recorders.
6. Recording device: USB drive, hard disc or a chip.

Edit Controller/ Switcher

• This is a device which connects to and controls the source and record
machines.
• The controls on the left (above and including the jog/shuttle ring) control the
source machine.
• The corresponding controls on the right are for the record machine (notice
the addition of a red record button).
• The controls in the middle are for various edit options such as marking in/out
points, etc.


The edit controller is an interface between the source and record VTRs. It displays
elapsed tape time and frames, controls source and record VTR rolls, stores edit-in
and edit-out points and tells the VTRs to locate them on the tape, and offers
previewing before the edit and reviewing after it.

2.5 Assignments

2.5.1 Class Assignments

1. What is the process of Non-Linear video editing?

2.5.2 Home Assignments

1. What equipment is used in linear video editing?

2.6 Summing Up

Postproduction editing involves selecting various shots from the source material and
putting them in a specific sequence. In nonlinear editing, the digital video and audio
material is stored on a computer disk and manipulated using a computer program. Most
nonlinear editing systems produce an edit decision list (EDL) and high-quality
video and audio sequences that can be transferred directly to the edit master tape. In
linear editing, videotape is used as the source material and for the final edit master tape.

2.7 Terminal questions

1. What is non-linear video editing? Describe the process and equipment used.
2.8 Suggested further readings
1. Herbert Zettl, TV production Handbook, Thomas Wardsworth Publishing
2. Video Production, Vasuki Belavadi, Oxford Publication
3. Millerson, G., & Millerson, G. (1999). Television Production. Oxford: Focal
Press, 13th edition.


Lesson 3 Steps for Linear and Non-linear Video Editing


________________________________________________
STRUCTURE
3.0 Objectives
3.1 Introduction
3.2 Linear Editing Steps and systems
3.3 Non-linear editing systems and steps
3.4 Assignments
3.4.1 Class assignment
3.4.2 Home assignment
3.5 Summing Up
3.6 Terminal Questions
3.7 Suggested Readings


3. Steps of Linear and Non-linear editing


_________________________________________________________________
In this lesson we learn the steps of linear and non-linear editing.
Along with that we will also focus on various important aspects of editing.

3.0 Objectives
After going through this lesson you will learn:
• Aspects of editing
• Steps of editing

3.1 Introduction
Editing provides ways of correcting and improving the final production:
1. Sequences can be removed or shortened that are uninteresting, irrelevant, or
repetitious.
2. Errors can be corrected by omitting faulty sections and inserting retakes.
3. The overall duration can be adjusted.

Editing begins with sorting through the available material and doing the following:
1. Selecting the required shots
2. Deciding on the order and duration of each shot
3. Deciding on the cutting point (when one shot is to end and the next to begin)
4. Deciding on the type of transition between shots
5. Creating good continuity.

3.2 Steps and systems of Linear Editing


When you want to locate a shot that is in the middle of the tape, for example, you
need to roll through shots 1 and 2 before reaching shot 3. You cannot simply jump
from shot 1 to shot 3. Because tape-based systems do not allow random access of
shots or frames, all tape-based editing systems are linear.
Linear editing is basically selecting shots from one tape and copying them in a
specific order onto another tape. The operational principle of linear editing is
copying.
One or several VTRs play back portions of the tape with the original footage, and
another VTR records the selected material onto its own tape.

The different tape-based systems fall into three categories:

Single-source
Expanded single-source
Multiple-source systems

Single-source
The basic system that has only one VTR supplying the material to be edited is called
a single-source or cuts-only editing system. The machine that plays back the tape
with the original footage is called the source VTR or the play VTR, and the other one
is called the record VTR. The tapes are called the source videotape and the edit
master tape. For this setup two monitors are required.

You use the source VTR to find the exact in- and out-points of the footage you want to
copy to the edit master tape. The record VTR also has to be told when to start recording
(copying) the source material and when to stop. These edit-in and edit-out cues are
given by an edit controller machine.


Edit Controller
This machine automates editing to a certain extent. It memorizes some of the
commands and executes them with precision and reliability. It controls the VTR search
modes (variable forward and reverse speeds) separately for the source and record
VTRs to locate scenes, reads and displays elapsed time and frame numbers or time
code for accurate cueing, marks and remembers precise edit-in and edit-out points,
and synchronizes the tape speeds of the VTRs.
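
The frame-accurate cueing that the controller performs rests on simple time-code
arithmetic. Here is a minimal Python sketch, assuming a 25 fps PAL tape and invented
time-code values:

    FPS = 25  # PAL frame rate, assumed for this example

    def tc_to_frames(tc):
        # Convert an 'HH:MM:SS:FF' time code to an absolute frame count
        h, m, s, f = (int(x) for x in tc.split(":"))
        return ((h * 60 + m) * 60 + s) * FPS + f

    # Length of the copy between a stored edit-in and edit-out point, in frames
    edit_in, edit_out = "00:01:10:05", "00:01:14:20"
    print(tc_to_frames(edit_out) - tc_to_frames(edit_in))  # -> 115 frames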

Expanded Single-Source System


In a documentary on rush-hour traffic, you may want to add more traffic sounds to
intensify the shots of a downtown gridlock, or put some music under a wedding scene.
Such effects can be achieved with the help of an audio mixer. If titles need to be added,
a character generator is required, along with a switcher that can mix the titles with the
scene from the source tape without the edit master tape undergoing another
generation. The switcher and audio mixer offer a variety of effects and can facilitate a
great variety of transitions, such as cuts, dissolves and wipes.

Multiple Source-System


The tape-based multiple-source editing system consists of two or more source
VTRs, a single record VTR, and a computer-assisted edit controller. The
computerized edit controller directs the functions of the source A and B VTRs, the
C.G. or effects generator, the audio mixer, and finally the edit and record functions
of the record VTR.

3.3 Non-linear editing systems and steps

NLE is a slightly more elaborate process but the most flexible editing system. It is like
putting together a jigsaw puzzle: we arrange and rearrange pieces of scattered footage
into a meaningful film. Editing video on a computer is basically NLE; that is, film or
video can be assembled in any order from beginning to end.

There are many software packages available, both inexpensive and expensive, but all
follow the same basic steps:

 Digitising
 Compressing & storing
 Juxtaposing, rearranging and applying effects to audio and video files
 Copying the edited programme back onto tape, DVD or hard drive


Digitising or capturing
Transferring analogue videotaped information into digital form and storing the
information on the computer hard drive. It happens in real time, but there is also an
option of batch capturing.

Compression
In video, the less compression there is, the higher the quality of the video and sound
will be. The MPEG-2 compression standard generally makes precise, frame-accurate
editing difficult because most of its frames are stored only as differences from
neighbouring frames rather than as complete images.

Juxtaposing and rearranging video and audio files


NLE is comparable to rearranging letters, words, sentences, and paragraphs in
word processing. For editing, we juxtapose images and rearrange video and audio
files. We can juxtapose two frames or a series of frames to see how well they cut
together.


A NLE setup requires a fairly high-end computer that can handle audio, video,
graphics, and a host of audio and video effects. A recorder that also serves as a
player is connected to the CPU of the computer using FireWire cable.

FireWire Cable

The editing software also works as an interface between the recorder/player and the
editor, and can be controlled with the keyboard and mouse. The raw footage on tape is
digitized, that is, captured in digitized form onto the computer’s hard disk. Each shot
needs to be named and arranged in different bins for easy editing.

Capture Window in NLE software

We can import music into another bin and store all graphics/titles in another bin.


Editing timeline layout in NLE software

On the timeline, we can arrange shots side by side at random and rearrange them
to our needs. Our shots and audio are arranged in what are known as audio and
video layers. To apply effects, we go to the effects palette, select an effect and
apply it by dragging to respective video or audio.

Rendering is the process of allowing the computer to implement all of the audio,
video, and digital effects on a frame-by-frame basis. In video editing it is the
computer process of combining your still pictures, video clips, audio clips and other
visual elements into a single digital video frame (see the sketch after the list below).

Rendering is generally required for:


 The use of filters, transitions, generators, or any combination of effects that exceeds
your computer’s real-time playback capabilities.
 High-quality final output. Real-time effects that play back at preview quality must
ultimately be rendered for high-quality video output.
 Video clips using codecs that Premiere Pro can’t play in real time.
 Multiple audio tracks that exceed your real-time playback limit.
 Clips with audio effects that require too much processing power.
 Some nested sequences, which can include layered Photoshop files.
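
As promised above, here is a minimal sketch of what rendering one frame amounts to:
compositing the visual layers, bottom to top, into a single image. It is illustrative only;
NumPy arrays stand in for video frames, and each layer's opacity is reduced to a single
assumed number:

    import numpy as np

    def render_frame(layers):
        # Composite (frame, opacity) layers from bottom to top into one frame
        out = np.zeros_like(layers[0][0], dtype=float)  # start from black
        for frame, opacity in layers:
            out = opacity * frame.astype(float) + (1 - opacity) * out
        return out.astype(np.uint8)

    base  = np.full((480, 720, 3), 40,  dtype=np.uint8)  # background video frame
    title = np.full((480, 720, 3), 255, dtype=np.uint8)  # title graphic layer
    frame = render_frame([(base, 1.0), (title, 0.3)])    # one rendered output frame

A real renderer repeats this (plus filters and transitions) for every frame of the
sequence, which is why effect-heavy timelines take time to render.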

The NLE software also has the ability to adjust the volume of an audio clip and to
fade in, fade out, or cross-fade (cross-dissolve) between audio clips placed on
separate tracks.

3.4 Assignments
3.4.1 Class Assignments

1. Describe the systems and steps for linear video editing systems.

3.4.2 Home Assignments

1. Describe the process of Non-linear video editing.

3.5 Summing Up
NLE is a slightly more elaborate process but the most flexible editing system. It is like
putting together a jigsaw puzzle: we arrange and rearrange pieces of scattered footage
into a meaningful film. Editing video on a computer is basically NLE; that is, film or
video can be assembled in any order from beginning to end.
3.6 Terminal questions
1. What are the various systems for linear video editing? Describe the process of
the linear editing system.
3.7 Suggested further readings
1. Herbert Zettl, TV production Handbook, Thomas Wardsworth Publishing
2. Video Production, Vasuki Belavadi, Oxford Publication


Lesson 4: Editing Techniques: Types of Cuts and Transitions

STRUCTURE
4.0 Objectives
4.1 Introduction
4.2 Basic Transition Devices
4.3 Basic Editing Principles
4.4 Basic Video Effects
4.5 Assignments
4.5.1 Class assignment
4.5.2 Home assignment
4.6 Summing Up
4.7 Terminal Questions
4.8 Suggested Further Readings


4. Editing Techniques: Types of Cuts and


Transitions
_________________________________________________________________
In this lesson we learn the various basic editing techniques using different
transitions to make the videos crisp and appealing.
__________________________________________________________________

4.0 Objectives
After going through this lesson you will learn:
• Basic video transitions and their situational use
• Basic video effects and their situational use

4.1 Introduction
The four basic editing functions are: (1) to combine—to hook various videotaped
pieces together pretty much in the sequence in which they were videotaped; (2) to
shorten—to make the program fit a given time slot and to eliminate extraneous
material; (3) to correct—to cut out bad portions of a scene and replace them with
good ones; and (4) to build—to select and sequence shots that will advance a
specific story.

4.2 Basic Transition Devices


Whenever we put two shots together, we need a transition between them, a device
that implies that the two shots are related. There are four basic transition devices:
(1) the cut, (2) the dissolve, (3) the wipe and (4) the fade.

Although all the four have same basic purpose – to provide an acceptable link from
shot to shot – they differ somewhat in function, that is, how we are to perceive the
transition in a shot sequence.

The Cut
The cut is an instantaneous change from one image (shot) to another. It is the
most common and least obtrusive transition device, assuming that the preceding
and following shots show some continuity. The cut itself is not visible; all you see
are the preceding and following shots. It most closely resembles the changing field
of view of the human eye. Try to look from one object to another located some
distance away. Notice that you do not look at things in between, as you would in a
camera pan, but that your eyes jump ahead to the second position, as in a cut. The
cut, like all other transition devices, is basically used for the clarification and
intensification of an event.


Clarification means that you show the viewer the event as clearly as possible. For
example, in an interview show the guest holds up the book she has written. To help
the viewer identify the book, you cut to a close-up of it.

Intensification means that you sharpen the impact of the screen event. In an
extreme long shot, for example, a football tackle might look quite tame; when seen
as a tight close-up, however, the action reveals its brute force.
By cutting to the close-up, the action has been intensified.

The Dissolve

The dissolve, or lap dissolve, is a gradual transition from shot to shot, with the two images
temporarily overlapping. Whereas the cut itself cannot be seen on-screen, the dissolve is a
clearly visible transition. Dissolves are often used to provide a smooth bridge for action or
to indicate the passage of time. Depending on the overall rhythm of an event, you can use
slow or fast dissolves. A very fast one functions almost like a cut and is therefore called a
soft cut.
For an interesting and smooth transition from a wide shot of a dancer to a close-up, for
instance, simply dissolve from one camera to the other. When you hold the dissolve in the
middle, you create a superimposition, or super.
A slow dissolve will indicate a relatively long passage of time; a fast dissolve, a short one.
Because dissolves are so readily available in NLE software, you may be tempted to use
them more often than necessary or even desirable. A dissolve will inevitably slow down the
transition and, with it, the scene. If dissolves are overused, the presentation will lack
precision and accent and will bore the viewer.
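
Digitally, a dissolve is a weighted blend of the outgoing and incoming frames. The
NumPy sketch below is illustrative only: tail_a and head_b stand for the overlapping
frames of the two shots (lists of equal-sized 8-bit frames at least n frames long), and the
frame rate and duration are assumed:

    import numpy as np

    def dissolve(tail_a, head_b, fps=25, seconds=1.0):
        # Cross-dissolve the tail of shot A into the head of shot B
        n = int(fps * seconds)
        out = []
        for i in range(n):
            t = i / (n - 1)  # blend factor runs from 0.0 to 1.0
            mix = (1 - t) * tail_a[i].astype(float) + t * head_b[i].astype(float)
            out.append(mix.astype(np.uint8))
        return out  # at t = 0.5 the two images form a superimposition

A fade is the same arithmetic with a black frame standing in for one of the two shots.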

The Wipe

There is a great variety of wipes available, the simplest of which is when the base picture
is replaced by another one that moves conspicuously from one screen edge to the other.
Other wipe effects look as though the top picture is peeled off a stack of others, or a
diamond expanding from the centre of the top picture gradually reveals the one underneath.
The wipe is such an unabashed transition device that it is normally classified as a special
effect.
The wipe tells the viewers that they are definitely going to see something else, or it injects
some interest or fun into the shot sequence. Wipes and other such effects are especially
magnified on the large 16 x 9 HDTV screen. As with any other special effect, you should
use discretion; overused or inappropriate wipes easily upstage the shots they are
connecting.

The Fade

In a fade the picture either goes gradually to black (fade-out) or appears gradually
on the screen from black (fade-in). You use the fade to signal a definite beginning
(fade-in) or end (fade-out) of a scene. Like the curtain in a theatre, it defines the
beginning or the end of a portion of a screen event.
As such, the fade is technically not a true transition.
Some directors and editors use the term cross-fade for a quick fade to black
followed immediately by a fade-in to the next image. Here the fade acts as a
transition device, decisively separating the preceding and following images from
each other. The cross-fade is also called a dip to black.
You are certainly familiar with the unfortunate tendency to cut directly from one
commercial to the other without connecting them with some transition device that
would tell us where one commercial ends and the other begins. Such juxtapositions
can easily lead to embarrassing meanings, similar to a montage effect in film, where
two adjoining images are intended to create special meanings.
A brief dip to black could reduce, or even eliminate, this potentially funny or
inappropriate montage effect. That said; do not go to black too often—the program
continuity will be interrupted too many times by fades that all suggest final endings.
The other extreme is the never go- to-black craze: some directors do not dare go to
black for fear of giving the viewer a chance to switch to another channel. If a
constant dribble of program material is the only way to hold a viewer's attention,
however, the program content, rather than the presentation techniques, should be
examined.

4.3 Basic Editing Principles

There are certain established principles in the way one edits, and although like all
“rules” they may occasionally be disregarded, they have been created out of
experience. Here are a few of the most common:
1. Avoid cutting between shots of extremely different size of the same subject
(close-up to long shot). It is jolting for the audience.
2. Do not cut between shots that are similar or even matching (frontal close-up of
one person to a frontal close-up of a second person); it will look as though one
transformed into the other.
3. Do not cut between two shots of the same size (close-up to close-up) of the
same subject. It produces a jump cut.
4. If two subjects are going in the same direction (chasing, following), have them
both going across the screen in the same direction. If their screen directions
are opposite, it suggests that they are meeting or parting.
5. Avoid cutting between still (static) shots and moving images (panning, tilting,
zooming, etc.), except for a specific purpose.
6. If you have to break the continuity of action (deliberately or unavoidably),
introduce a cutaway shot. But try to ensure that this relates meaningfully to
the main action. During a boxing bout, a cutaway to an excited spectator
helps the tension. A cutaway to a bored attendant (just because you happen
to have the unused shot) would be meaningless, although it can be used as a
comment on the main action.
7. Avoid cutting to shots that make a person or object jump from one side of the
screen to the other. When cutting between images of people, avoid the
following distracting effects:
 Mismatched camera angles.
 Changes in headroom.
 Jump cuts.
8. Avoid cutting between shots that are only slightly different in size. The subject
will suddenly appear to jump, shrink, or grow.

4.4 Basic Video Effects

Key
Keying means electronically cutting out portions of a television picture and filling them
in with another image, or a portion of another image.

Track Matte Key effect


The Track Matte Key reveals one clip (background clip) through another (superimposed
clip), using a third file as a matte that creates transparent areas in the superimposed clip.
This effect requires two clips and a matte, each placed on its own track. White areas in the
matte are opaque in the superimposed clip, preventing underlying clips from showing
through. Black areas in the matte are transparent, and gray areas are partially transparent.

A matte containing motion is called a traveling matte or moving matte. This matte consists
of either motion footage, such as a green-screen silhouette, or a still image matte that has
been animated. You can animate a still by applying the Motion effect to the matte. If you
animate a still image, consider making the matte frame size larger than the sequence
frame size so that the edges of the matte don’t come into view when you animate the
matte.
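
The white-opaque/black-transparent rule translates directly into arithmetic: the matte,
scaled to the range 0-1, weights the superimposed clip against the background, pixel by
pixel. A hedged NumPy sketch (the arrays are stand-ins for same-sized video frames):

    import numpy as np

    def track_matte_key(background, superimposed, matte):
        # White (255) in the matte is opaque, black (0) transparent, gray partial
        a = matte.astype(float) / 255.0   # per-pixel opacity, 0..1
        if a.ndim == 2:                   # expand a grayscale matte to 3 channels
            a = a[..., None]
        out = a * superimposed.astype(float) + (1 - a) * background.astype(float)
        return out.astype(np.uint8)

A traveling matte is the same operation with a different matte array supplied for each
frame.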

Chroma Key

It is a special effect that uses a specific colour (chroma), usually blue or green, as the
background for the person or object that is to appear in front of the background
scene. During the key, the green background is replaced by the background video
source without affecting the foreground object. A typical example is the
weathercaster standing in front of a weather map or a satellite picture. During the
chroma key, the computer-generated weather map or satellite image replaces all
green areas, but not the weathercaster.


Points to be remembered:
 Be sure that the chroma-key area is painted evenly.
 It should be lighted evenly.
 Uneven background lighting will prevent a full replacement of the chroma-
colour area by the background video.
 The subject should not wear anything similar to the chroma (green or blue)
colour.
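
In code, a crude chroma key reduces to building a mask of "sufficiently green" pixels
and filling it from the background. The thresholds below are invented for illustration;
real keyers compute soft, feathered mattes rather than this hard yes/no mask:

    import numpy as np

    def chroma_key(foreground, background):
        # Replace green-screen pixels in the foreground with the background
        r = foreground[..., 0].astype(int)
        g = foreground[..., 1].astype(int)
        b = foreground[..., 2].astype(int)
        # A pixel counts as green screen when green clearly dominates red and blue
        mask = (g > 90) & (g * 10 > r * 14) & (g * 10 > b * 14)
        out = foreground.copy()
        out[mask] = background[mask]  # keyed area filled from the background frame
        return out

The sketch also shows why even lighting matters: unevenly lit patches of the green
backdrop fall outside the thresholds and survive the key.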

Computer-Manipulated Effects
Take an existing image (camera-generated video sequence, video frame, photo)
and enhance or change it in some way.
We can manipulate with
 Image Size
 Shape
 Light
 Colour

There is a great variety of effects available for manipulating the size, shape, light and
colour of an image. Many of these change realistic video into a basically graphical
image. Some of the most prominent are:
 Shrinking and expanding
 Stretching
 Positioning
 Perspective
 Mosaic


 Posterization & Solarization

Shrinking and expanding


Shrinking refers to making a picture smaller while keeping the entire picture and its
aspect ratio (width to height) intact; it is not the same as cropping. Because the visual
effect is similar to a zoom-out or zoom-in, this effect is also called a squeeze-zoom.

Stretching
The image is stretched vertically or horizontally by distorting the total image so that
its borders attain a new aspect ratio.

Positioning
The shrunk image can be positioned anywhere in the frame.


Perspective
You can distort an image in such a way that it looks three-dimensional, or like it’s
occupying three-dimensional space.

Mosaic
The video image is broken down into many discrete, equal-sized squares of limited
brightness and colour. The resulting screen image looks like an actual tile mosaic;
such an image actually consists of greatly enlarged pixels. This technique is
sometimes used in interviews to obscure the guest’s identity: the mosaic-like
distortion shows the person’s face but renders the features unrecognizable.
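
A mosaic is simple to sketch: sample the image coarsely, then blow each sample back
up into a square block. The block size is an assumed value, and a NumPy array stands
in for the frame:

    import numpy as np

    def mosaic(frame, block=16):
        # Pixelate a frame into equal-sized squares (greatly enlarged pixels)
        small = frame[::block, ::block]  # keep one pixel per block
        big = np.repeat(np.repeat(small, block, axis=0), block, axis=1)
        return big[:frame.shape[0], :frame.shape[1]]  # crop back to original size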

Posterization and solarisation

In Posterization, the brightness values (luminance) and the shades of the individual
colours are collapsed so that the image is reduced to a few single colours and brightness
steps.


In solarization, the brightness values of the image are gradually changed to their
opposite values. In partial solarization, the lighter areas get darker and the darker
areas lighter.
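
Both effects are one-liners on a pixel array. In this sketch the number of levels and the
threshold are assumed values, and frames are 8-bit NumPy arrays:

    import numpy as np

    def posterize(frame, levels=4):
        # Collapse 256 brightness values per channel into a few coarse steps
        step = 256 // levels
        return (frame // step) * step

    def solarize(frame, threshold=128):
        # Flip brightness values above the threshold to their opposite values
        return np.where(frame >= threshold, 255 - frame, frame).astype(np.uint8)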

Some more popular video effects:

o Freeze frame: Stopping movement in the picture and holding a still frame.
o Strobe: Displaying the action as a series of still images flashed onto the
screen at a variable rate.
o Reverse action: Running the action in reverse.
o Fast or slow motion: Running the action at a faster or slower speed than
normal.

o Mirror: Flipping the picture from left to right or providing a symmetrical split
screen.
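
Treating a clip as an ordered list of frames makes several of these effects plain
re-indexing, as this illustrative Python fragment shows (integers stand in for real
frames):

    frames = list(range(100))  # stand-in for a clip of 100 video frames

    freeze  = [frames[60]] * 50                    # freeze frame: hold frame 60
    reverse = frames[::-1]                         # reverse action
    fast_2x = frames[::2]                          # fast motion: drop every other frame
    slow_2x = [f for f in frames for _ in (0, 1)]  # slow motion: show each frame twice
    strobe  = [frames[i - i % 5] for i in range(len(frames))]  # hold every 5th frame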

4.5 Assignments

4.5.1 Class Assignments

1. What is a video transition? Describe the use of video transitions in video editing by
listing at least four of them with logical and situational uses.

4.5.2 Home Assignments

1. What are video effects? Describe two video effects and their situational use.

4.6 Summing Up


Whenever you intend to use a visual effect or a transition, always ask: is the
effect really necessary? Does it help clarify and intensify the message? Is it
appropriate? It all comes down to editing decisions.

4.7 Terminal questions

1. What are commonly used basic transitions and video effects?

4.8 Suggested further readings


1. Herbert Zettl, TV production Handbook, Thomas Wardsworth Publishing
2. Video Production, Vasuki Belavadi, Oxford Publication


UNIT 3 BA (JMC) 209


___________________________________________________________________

UNIT 3: Mixing and Exporting


___________________________________________________________________

LESSON 1

Sound Design and Editing: Concept and Troubleshooting 77

LESSON 2

Effects and Transitions 96

LESSON 3

Styles of Packaging: News and Non-news 107

LESSON 4

Archiving and File Formats 117


Lesson 1 Sound Design and Editing: Concept and Troubleshooting

_____________________________________________________

STRUCTURE
1.0 Objectives

1.1 Introduction

1.2 Sound design - Concept

1.3 Stages of Sound Design

1.4 Post production audio mixing

1.5 Trouble Shooting

1.6 Assignments

1.6.1 Class Assignments

1.6.2 Home Assignments

1.7 Summing Up

1.8 Terminal questions

1.9 Suggested further readings


1. Sound Design

In this lesson we will learn about the concept and importance of Sound Design.

___________________________________________________________________

1.0 Objectives
After going through this lesson, you should be able to:
 Understand basic idea of Sound Design in video production.
 How troubleshooting can be done in Sound Design
___________________________________________________________________
1.1 Introduction

Sound design is a vast and complex area. The quality of sound often dictates the
difference between amateur and professional productions; things like the atmosphere
of a scene, among others, are strongly influenced by sound. To put it simply, sound
design is the set of processes involved in the creation of the audio elements of a
project (such as movies, the focus here, as well as theatre, concerts, music, games,
etc.).

1.2 Sound design – Concept


Most people think of filmmaking or video production as a visual medium. For many years
it was, during the silent era of film. When talkies first came about in the 1920s, it was
both the greatest and the worst thing that happened to the art. Films became more
immersive and more subtle. Dialogue added layered depth to the story. Sound FX
fleshed out the world, making it more real for audiences. Sound design became
essential to storytelling, and the silent film era all but died out. Nowadays, sound design
might be half of your story, but it takes special attention to detail to get it right. The
biggest difference between a professional and an amateur is in the sound design of
their video, not the visuals.

Sound design is the process of specifying, acquiring, manipulating, controlling or
generating audio elements. It is employed in a variety of disciplines including
filmmaking, television video production, theatre, sound recording and reproduction,
live performance, sound art, post-production, and video game software development.
Sound design is much more than creating a soundtrack: all audio elements which don’t
already exist or weren’t recorded on set must be created. When it comes to animated
movies, for example, everything has to be created from scratch. And the most
important part: all sounds must seem real and flow naturally. Be it feature films,
corporate video, theatre or music production, sound design is extremely important.


The ultimate compliment to sound designers and mixers and editors is when no one
actually notices the work. The reason is that if the work is noticed it’s drawing
attention to itself and making people stop thinking about the video. So, it’s
oftentimes a subtle, supporting character to the image, but it’s also why it is so often
misunderstood. Sound designers are the unsung heroes of video production. Their
work often goes unnoticed and if it does feel natural, like it was recorded at the
same time as the image then it has been done correctly.

Videos are an audiovisual medium. Most of what we take in, the messages or the
emotional connection we get from characters in a video, are not just based on what
we are looking at, but what we are hearing.

Example: A simple scene with a man waking up in his apartment, with no dialogue.
You might think this scene is pretty pointless, but it is telling and developing a story.
Perhaps there is the sound of car horns and people shouting outside the window, or
his neighbour’s television coming in through the wall. These things would imply that
he doesn’t have a lot of money and can’t afford to live in the nicest area of town.
Instead he lives in an apartment by the motorway, where the walls are too thin and his
neighbours just don’t care. These sounds give you an insight into the character’s
environment and lifestyle, an environment that the director wants you to see and
hear. Even if we don’t notice it, we are taking it in. It’s been put there purposefully to
give you a message and put you in a specific environment sought out by the
director.

A good eye can only get us so far in video making — we also need a good ear.
Observation of sound elements is extremely important.
There are birds outside, and traffic, and wind. In the office, the air conditioner hums
and there is a low murmur of workers at their keyboards and telephones. The
factory at first sounds like a cacophony, but if we listen to the individual workers and
machines, each is playing its own part. It is still no symphony, but there is a rhythm
to the place. These sounds make people feel things, and learning to use those
emotional cues effectively, in combination with music, is what great sound design is
all about.

1.3 Stages of Sound Design

Even if the sound is recorded with the best possible care on location, chances are
that more work will need to be done in post-production.

The Role of Sound Design

Whenever we watch a movie or a TV show, play a video game, listen to a radio play or
attend a theatre production, it's easy for us to take for granted each component of the
soundtrack we hear, and simply not think too deeply about how each element came to be.
Behind the scenes, though, a team of sound designers, sound recordists, dialogue editors,
mixers, composers and others will all have worked tirelessly to collect, edit, create, record,
compose and manipulate the sound effects, dialogue and music needed to create that final
soundtrack.
This group of individuals considers each and every element of sound they create; nothing
that is heard in the final mix will have been placed or left there by accident. The work they
produce is crucial in enhancing the quality of the experience, and in creating convincing
worlds and characters whose stories will capture our imagination. Music will provide much
of the emotive undertone; sound effects and Foley will provide a sense of realism and
cement characters into their surroundings. Atmospheric sounds will give us a sense of
place and period in time, and dialogue will give voice and expression to characters as they
engage with one another or directly with the viewer.
Powerful and memorable sound design can be complex and difficult to produce, so the
sound designer must possess an array of skills to create effective content consistently.
Although it is beyond the scope of this lesson to identify each and every skill and method
required, we will focus on some of the key techniques that sound designers use on a daily
basis.

Good sound entails:

Location mixing
During video production, the recording of live sound is handled by the location mixer. This
is considered mixing, because originally, multiple mics were mixed “on-the-fly” to a single
mono or stereo recording device. In modern video or films with digital location recordings,
the mixer tends to record what is really only a mixed reference track for the editors, while
simultaneously recording separate tracks of each isolated microphone to be used in the
actual post production mix.

Dialogue
The first thing many people associate with sound design is dialogue. The spoken word is
certainly a large part of video production, especially in the world of commercial video and
testimonials. Clean recordings of dialogue (or monologue, in many cases) are a must,
without any hums or background noise obscuring the words. It is more difficult than it
seems to capture realistic dialogue that doesn't draw attention to itself.

ADR
Automatic Dialogue Replacement or “looping”. ADR is the recording of replacement
dialogue in sync with the picture. The actors do this while watching their
performance on screen. Sometimes this is done during production and sometimes
during post. ADR will be used when location audio has technical flaws. Sometimes
ADR is also used to record additional dialogue – for instance, when an actor has his
or her back turned. ADR can also be used to record “sanitized” dialogue.

Foley

Named after Jack Foley, a pioneer who developed the practice in the early days of
sound film. Foley is the art of recreating physical sounds: the sounds a character or
subject makes when they move or interact with the world around them. These
sounds include cloth rustling, grabbing, punching, eating, footsteps and an entire
library of other common sounds. Sound designers mostly recreate these effects in
the editing room and sync them with the footage manually. It is the technique used
to recreate audio elements during post-production. All types of sounds are
produced: steps being taken, doors closing, papers flying, glass breaking,
breathing, etc. On the set of a movie, often only the dialogue is deliberately
recorded; by recording all other sound elements in a foley studio, a lot of time is
saved. Sounds are recorded individually while the foley artist watches the movie. By
having the elements recorded separately, it is easier to edit them in post-production.
A basic example of the benefits of this technique: at a certain moment, one might
prefer to have a character's steps heard clearly, but we might also want that sound
to become increasingly subtle, emphasising another audio element.
https://www.youtube.com/watch?v=UO3N_PRIgX0

Room Tone
Room tone is the sound of a 'silent' room. Every space, whether outside or inside,
has a base tone to it; total silence is rare in nature. Every video needs the realism of
room tone to bridge the gaps between foley effects or between lines of dialogue. It
also helps smooth out inconsistencies in the audio.

Sound FX
Unlike foley, sound FX refers to sounds of the environment that are not specifically
caused by human interaction. Sounds like fire, cars, doors and elevators all fall
under this category. During a product demonstration, sound editors might recreate
or enhance the sounds of the electronics whirring to life to sell the effect to the
audience. Sound effects for a film come from a variety of sources, including live
recordings, sound effects libraries and sound synthesizers. Putting this all together
is the role of the sound effects editor(s). Because many editors have elevated the
art by creating very specific senses of place, the term 'sound designer' has come
into vogue. For example, the villain's lair might always feature certain sounds
identifiable with that character – e.g. dripping water, rats squeaking, a distant clock
chiming. These become thematic, just like a character's musical theme. The sound
effects editors are the ones who record, find and place such sound effects.

Music
Music plays an important part in a video's sound design. In Disney's classics, for example,
the music sets the mood and the rhythm with which the action of a scene progresses. Not
only does the melody change, but so do its tones and instruments. This change gives the
scenes the intended atmosphere for that moment. There are two major types of music you
can have in a video. Diegetic music originates from within the video: if an actor or
spokesperson turns on a record player and music starts, that is diegetic. Non-diegetic
music is the type audiences are more familiar with. This is the intro music that might be
put over the opening titles or a company logo animation. Most of the music heard in videos
is non-diegetic, though there are some notable examples that cross the boundary.

Noise vs Silence
When we talk about sound design (and about good sound design), the intention isn't to
flood the audience with all the sounds that could possibly exist in the scene. The
atmosphere must seem real, but it isn't necessary to highlight elements that aren't
important to the narrative. For example, if we're inside a house and someone opens the
door, we expect to hear the sound of the lock. But if someone's arrival at the house is
unexpected, or if it is the scene's climax, the sound will be amplified to give emphasis
to the action.

But increasing the volume isn't a universally correct way (if there is such a thing) to give
emphasis to a moment. Silence is just as important as noise. In reality, it is the intelligent
combination of both that marks the difference.

Laugh tracks
This is usually a part of sitcom TV production and not of feature films. When laugh tracks
are added, the laughs are usually placed by sound effects editors who specialize in adding
laughs. The appropriate laugh tracks are kept separate so they can be added or removed
in the final mix and/or as part of any deliverables.

Re-recording mix
Since location recording is called location mixing, the final, post production mix is
called a re-recording mix. This is the point at which divergent sound elements –
dialogue, ADR, sound effects, Foley and music – all meet and are mixed in sync to
the final picture. These various elements can easily take up 150 or more tracks and
require two or three mixers to man the console. With the introduction of automated
systems and the ability to completely mix “in the box”, using a DAW like Pro Tools,
smaller films may be mixed by one or two mixers. Typically the lead mixer handles
the dialogue tracks and the second and third mixers control sound effects and
music. Mixing most feature films takes one to two weeks, plus the time to output
various deliverable versions (stereo, surround, international, etc.).
The deliverable requirements for most TV shows and features are to create a so-
called composite mix (in several variations), along with separate stems for dialogue,
sound effects and music. A stem is a submix of a group of component items, such
as a stereo stem for only the dialogue. The combination of the stems should equal
the mix. By having stems available, distributors can easily create foreign versions
and trailers.
There are many stages in the audio post-production process. Because audio can
make or break a video, a basic understanding of audio post-production is a must for
everyone on a video project.

Standard Process
The standard process for audio work has evolved over many years of movie and
video production, and helps keep audio quality at its maximum throughout. When
you are part of a larger team, it’s especially important to follow this order, but even if
you are working solo, sticking to this process will make your workflow efficient.

The standard order of operations is:


 Dialogue editing
 Automated dialogue replacement
 Sound design
 Foley
 Music composition and editing
 Mixing

Dialogue Editing

In this phase, the raw recordings are organized and synced to the timeline.
Unwanted noise is removed and the recordings are trimmed down to the necessary
length.

There are two important sub-steps in this stage:

(a) Use of Expander to Efficiently Reduce Background Audio Noise


We always want clean, clear audio in our video recordings. In reality, however,
dealing with environmental noise is a part of virtually every production. Most of the
time, a small amount of noise in the background of the video can slip by unnoticed.
As soon as that noise becomes audible, though, it can really distract the audience.
Although it is best to remove noise at the source, there are dedicated production
tools that can help you clean up a noisy recording. The tool that most people reach
for is a 'noise gate', but gates can quickly become destructive. Instead, you can use
an EXPANDER to reduce the volume of the noise without completely removing it,
which sounds more natural.

Noise Reduction, Not Destruction

The goal of audio noise treatment is not to eliminate noise entirely. The aim is to
reduce the noise until it stops being distracting, without introducing any negative
artefacts or distortions to the original sound.

Use an expander (a plug-in or a separate application) when you have subtle
background noise from recording on location (such as wind or distant traffic) or
background noise from recording in a studio (such as A/C or exterior sounds).

Expanders increase the difference in loudness between the quieter and louder
sections of audio, making quiet sounds quieter and loud sounds louder. Expanding
is useful when you want to increase the dynamic range of the audio – for example,
when you have a noisy recording and want to reduce the volume of the quieter
parts so the noise is less noticeable. A side effect of expanders is that they change
the way sounds decay and can end up silencing quieter parts of the audio that you
want to keep. Expanders are also useful when producing voice-overs and ADR.
Note that an expander isn't effective when the noise level is almost the same as the
dialogue level; in that case, the expander cannot differentiate between the voice
and the noise, and more destructive noise removal tools are needed. Gates and
expanders are very similar tools, and most dedicated audio and video processing
applications combine the gate with the expander.

The main controls on an expander are the attack and release times and the ratio; a minimal code sketch follows the list below.

 The attack time sets how fast the expander responds to signal levels above the threshold.
 The release time sets how fast it reacts when the signal level drops below the threshold.
 The ratio determines how much to turn the volume down. A higher ratio results in the volume being turned down more. A very high ratio of 12:1 or more is considered a noise gate.
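
To make these controls concrete, here is a minimal sketch of a downward expander in Python (numpy is an assumed toolchain; a real plug-in works on streaming buffers, but the envelope follower and gain computation follow the same idea):

import numpy as np

def expander(signal, sr, threshold_db=-40.0, ratio=4.0,
             attack_ms=5.0, release_ms=100.0):
    """Downward expander: turn down samples whose envelope falls below
    the threshold. A very high ratio (12:1 or more) behaves as a gate."""
    x = np.asarray(signal, dtype=float)
    # One-pole smoothing coefficients derived from the attack/release times
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))

    env = 0.0
    out = np.zeros_like(x)
    for i, sample in enumerate(x):
        level = abs(sample)
        coeff = att if level > env else rel   # fast attack, slow release
        env = coeff * env + (1.0 - coeff) * level

        env_db = 20.0 * np.log10(max(env, 1e-9))
        if env_db < threshold_db:
            # Each dB below the threshold is pushed (ratio - 1) dB further down
            gain_db = (env_db - threshold_db) * (ratio - 1.0)
            out[i] = sample * 10.0 ** (gain_db / 20.0)
        else:
            out[i] = sample
    return out

With the ratio raised to 12:1 or beyond, the same function behaves like the noise gate described above.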

(b) Manually Synchronize Audio Tracks on Your Video Editing Timeline

If an external microphone is used to get audio for video, or if there are multiple
cameras set up for a single take, those clips need to be synchronized before the edit.
To sync clips, it is really helpful if a slate (also known as a clapper board) is used
across all audio and video tracks to create a comparison point. A clap of the hands
can also be used for the same effect; a code sketch of the alignment step appears
after the list below.

Key Synchronization Steps

 Line up your clips on different tracks and place them close to where they need to be.
 Zoom in on the audio waveform and look for the clap, or any recognizable waveform shape.
 Nudge clips until the waveforms line up.
 Play back the track and see if an echo is there. You may only be off by a frame or two.
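
Editors usually nudge clips by hand, but the same alignment can also be computed. Below is a minimal sketch that estimates the offset between two recordings of the same clap by cross-correlation (Python with numpy assumed; both tracks must be mono and share one sample rate):

import numpy as np

def find_offset(reference, other, sr):
    """Estimate how many seconds `other` lags behind `reference` by
    locating the peak of their cross-correlation."""
    corr = np.correlate(reference, other, mode="full")
    # Re-centre the peak index so that 0 means "already in sync"
    lag = int(np.argmax(corr)) - (len(other) - 1)
    return lag / sr

Shift the second clip by the returned number of seconds, then play back and nudge frame by frame if a slight echo remains.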

https://www.youtube.com/watch?v=PKPPVKSoB9Q

ADR (Automated Dialogue Replacement)

In most cases, some of the original audio recorded on set will be corrupt, noisy, or simply
missing. Other times, the quality is not up to scratch and the tone of the voices is poor.
ADR is the process of recording new dialogue in a studio environment to sync with the
video. The actor will lip sync to their original performance as closely as possible.

How to Record ADR

Step 1
Import the video and audio into a DAW. Make sure the video is in a large window, so that
you or the actor can easily see the movement of the lips.

Step 2
If you're only replacing certain words or phrases, use markers to highlight the
mistakes. If you are replacing the whole dialogue, skip this step.

Step 3
Loop the first section that you are re-recording and rehearse the dialogue.

Step 4
Record the new dialogue. Record several takes – three or more – for each phrase so
that you have more to work with when editing.

https://www.youtube.com/watch?v=eXRMtlrl0B0

ADR is great for times when the audio in your video is unusable. Spend plenty of
time on the recording phase getting the lip-sync perfect.

Sound Design

Sound design is the process of creating audio effects for the picture. The sound designer
adds wild tracks and new field recordings to create background ambience. Any special
sound effects are created at this point, too.
Various techniques are used to create sounds, including field recording, heavy processing
and electronic synthesis.

https://music.tutsplus.com/tutorials/30-incredible-sound-effects-on-audiojungle--cms-
26060?_ga=2.40828130.1996418701.1506855546-1985308825.1503504919

Foley
Foley is similar to sound design in the sense that it is a process of creating sounds to
enhance the realism of the picture. The difference is that Foley refers to human-based
sound effects. Foley artists will usually re-perform the scene live, replicating footsteps,
rustling clothes and prop movements. These sounds are then edited to match the scene.
https://music.tutsplus.com/tutorials/diy-foley-pit-construction--audio-
9531?_ga=2.35977440.1996418701.1506855546-1985308825.1503504919

Music Composition and Editing

In this step, diegetic music (sound occurring inside a scene) and non-diegetic music (sound
not part of a scene, like soundtracks) are composed and organised. Where applicable,
licensed music is also curated and organised.

Using Background Music in Video


Music is a strong cue to your viewer: with music you can stir an emotion, enhance a mood,
heighten action, or ratchet up the suspense. There is no right or wrong way to use music,
but there is a time and a place to use it, and not all music is created equal.

Types of Background Music

Emotion Evoking
If an actor is angry, music can be used to emphasize the anger. If the actor is happy,
perhaps something light and wonderful. Or maybe both: upbeat music in an otherwise
angry scene can create an upsetting cognitive dissonance in the viewer. There are endless
subtleties to emotion-evoking music.
The purpose of emotion-evoking is to create a subconscious reaction in the viewer. Music
adds a layer of complexity to what is happening on screen, pointing to subtext, interior
motivations, or feelings. Emotion-evoking music creates a space for the viewer to
empathize with characters more deeply.

Scene Setting
The setting of the video is a specific place and should establish a sense of that
place for the viewer. While the establishing shot can create a sense of physical
space, the music establishes a particular cultural or social space.
Perhaps there is a long-shot of a busy city like New York or Toronto. A mellow jazz
tune would create a certain vibe, a punk song another, and a techno beat another
completely different one altogether. The situation might be the same, but music
changes the context substantially. Whether the music is dramatic or relaxed is also
a cue. The tempo, volume, and quality of the music can hint to the viewer what is
about to happen next.

Filler and Bridge


Filler music can be a bit random. It isn't specific and doesn't necessarily have to
add anything to a scene or evoke any type of emotion, but it does help with creative
editing and fills in gaps.
For example, the director might have planned on some key action happening, but
what he got instead was unfortunately a little boring and not very visual, yet still
important to the story. Maybe there is some B-roll (secondary shots). Instead of
leaving the video to play in silence, music can be added to give the film a better
flow. This kind of editing shows up pretty frequently in reality television programmes.

Mixing

Lastly, all of the audio is balanced to create a final audio mix. EQ and compression
are applied when necessary. The individual sounds and the entire mix are
measured for loudness and adjusted for optimal volume levels.
Voice Equalization – How to Apply EQ to Voice Recordings
Equalizers cannot fix the character of a voice-over, but they can emphasise the
good stuff, or remove the bad stuff. It is that simple.
They can be used to change the character of a voice, but only slightly. And they can
be used to improve a voice recording, but only if it is a good recording in the first
place. A short code sketch of these moves follows the three tips below.

Applying EQ to a Voice Recording


(a) Use a High Pass Filter to Cut Everything Below 80Hz
This is a common practice and something that can be used to improve any voice
over. Anything below this frequency will be low end rumble and noise. Remove it,
and it will instantly clean up the voice over. Try going even higher, especially on a
female voice. If your voice recording is sounding a bit too bass heavy, cutting
everything below 100Hz will really help with intelligibility.

(b) Cut 100-300 Hz to Add Clarity


Similar to the last tip, cutting the bass will improve clarity. On the other hand, if the
voice sounds a bit thin, try boosting somewhere in this frequency range.

(c) A Wide Boost Between 2-6 kHz Can Improve Clarity


If cutting some of the bass around 100-300Hz doesn’t add enough clarity, try a
gentle boost across this frequency range.
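
As an illustration of tip (a), here is a minimal sketch of an 80 Hz high-pass filter in Python using scipy (an assumed toolchain; in practice you would use the EQ built into your DAW or editing application):

import numpy as np
from scipy.signal import butter, sosfilt

def high_pass_voice(audio, sr, cutoff_hz=80.0, order=4):
    """Cut low-end rumble below `cutoff_hz` from a mono voice track.
    Try 100 Hz if the recording still sounds too bass-heavy."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfilt(sos, np.asarray(audio, dtype=float))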

https://photography.tutsplus.com/tutorials/understanding-the-phases-of-audio-post-production-for-video--cms-26688

1.4 Post production audio mixing


Post production mixing is normally done in the audio production room. In television, audio
post production involves not only the proper mixing and quality control of the sound track,
but also, if not especially, the synchronization of the sound track with the video portion.
Computerized DAWs play a major role in audio mixing, various phases of audio control,
and especially in audio/video synchronization. But even relatively simple audio
postproduction tasks, such as editing a sound track of a conversation or doing some audio
sweetening – filtering out a hum or supplying narration and background music to a
documentary – can become formidable and time-consuming tasks.

Controlling Sound quality


The control of sound quality is probably the most difficult aspect of audio control. For
example, a sound effect of a police siren may sound too thin on its own, but when mixed
with the traffic sounds, the thin and piercing siren may be perfect for communicating
mounting tension. Before making any final quality judgements, listen to the audio track in
relation to the video. The key to good television audio lies in an optimal original sound
pickup and your sensitive ears; remember, too, that all types of audio design in
post-production mixing take time.

Aesthetic factors

When dealing with television sound, we should pay attention to five basic aesthetic
factors: environment, the figure-ground principle, perspective, continuity and energy.

Environment
Whereas in most studio recordings we try to eliminate as much ambient sound as
possible, in the field these sounds, when heard in the background of the main
sound source, are often important indicators of where the event takes place or even
how it feels. Such sounds help establish the general environment of the event. For
example, when covering a downtown fire, the sirens, the crackling of the fire, the
noise of the fire engines and the pumps, and the tense voices of the firefighters and
onlookers are all important in communicating some of the excitement and
apprehension to the television viewers.

Figure-ground
One important perceptual factor is the figure-ground principle, whereby we tend to
organize our environment into a relatively mobile figure (a person, a car) and a
relatively stable background (a wall, houses, mountains). For example, if you
are looking for someone and finally discover her in a crowd of people, that person
immediately becomes the focus of your attention – the foreground – while the rest of
the people become the background. The same happens in the field of sound. We
have the ability to perceive, within limits, the sounds we want or need to hear (the
'figure') while ignoring to a large extent all other sounds (the 'ground'), even if they
are relatively louder.

Perspective
Sound perspective means that close-up pictures are matched with relatively nearby
sounds, and long shots correspond with sounds that seem to come from farther
away. Close sounds have more presence than distant sounds – a sound quality that
makes us feel in proximity to the sound source. Such desirable variation of sound
presence is virtually eliminated when using lavaliere mics in a drama. Because the
distance between mic and mouth is about the same for each actor, their voices
exhibit the same presence regardless of whether they are seen in a close-up or a
long shot.

Continuity
Sound continuity is especially important in postproduction. You may have noticed
the sound quality of a reporter's voice change depending on whether he or she was
speaking on- or off-camera. When on-camera, the reporter used one type of mic
and was speaking from a remote location. Then the reporter returned to the
acoustically treated studio to narrate the off-camera segments of the videotaped
story, using a high-quality mic. The change in microphones and locales gave the
speech a distinctly different quality. This difference may not be too noticeable during
the actual recordings, but it becomes obvious when the segments are edited
together in the final television programme.
 To avoid such continuity problems, first use identical mics for the on- and off-camera narration.
 Second, try to match the on-camera sound quality through equalization and reverberation.
 Third, if you recorded some ambience of the on-camera location, mix it with the off-camera narration.
Sound is also a chief element in establishing visual continuity. A rhythmically precise
piece of music can make a disparate series of pictures seem continuous. Music and
sound are often the important connecting link among abruptly changing shots and
scenes.

Energy
Energy refers to all the factors in a scene that communicate a certain degree of aesthetic
force and power. Obviously, high-energy scenes, such as a series of close-ups of a rock
band in action, can stand higher-energy sounds than a more tranquil scene, such as lovers
walking through a field of flowers. Good television audio depends a great deal on your
ability to sense the general energy of the pictures or sequences and to adjust the volume
accordingly.

Surround Sound
It is a technology that produces a sound-field in front of, to the sides of, and behind the
listener, enabling you to hear sounds from the front, sides and back. Developed originally
for film reproduction, it is now used for HDTV. The most prevalent system is Dolby 5.1,
which positions three speakers in front and two in the back for sound reproduction.

Proper speaker placement is needed to benefit from surround sound. Good
surround-sound mixing generally restricts on-screen dialogue to the middle front
speaker and laterally spreads action to all three front speakers. But if you are
surrounded by a sound environment – such as when amid downtown traffic, when
playing in an orchestra or when participating in a sporting event – all five speakers
are active.

1.5 Troubleshooting


We've all experienced it: there were problems with load in or set-up, time is short,
the system is set-up with only a few minutes to spare and of course, something
works improperly or not at all. Although the first instinct might be to start checking
plugs, connections, cables, etc. in a random fashion (i.e. “panic”), a tried-and-true
troubleshooting method will almost always find the problem with less effort and in a
shorter amount of time.
The most basic troubleshooting technique is the “Divide and Conquer” method. This
involves identifying the good parts of the system as well as figuring out which parts
have failed. Not only can these working sections be eliminated as the cause of the
problem, but they can also be used to test other parts of the system.
For example, a mic channel at a mixer is dead while others are operating properly.
The good news here is that you can use one of the working channels to isolate the
problem.

It’s no secret that video producers put a lot of effort into shooting and editing videos
that look good. Conscientious shooters pay careful attention to variables like
lighting, lens selection and camera settings when they capture their footage.
Likewise, they are attentive to the potential compromises that compression can
introduce during editing and export, and may invest hours of effort to perfectly hone
their color correction and effects filters to treat their footage in a way that makes it
look like a feature film. Shooting and editing video productions that have a beautiful
aesthetic is a wonderful thing, but even the most visually appealing productions can
come across as amateurish attempts if they are accompanied by a sub-par
soundtrack.
In reality, audio quality is every bit as important as video image quality. In fact, we
might instead consider the postulation that as video image quality increases, audio
quality should increase proportionately, because the expectation of quality has been
raised within the viewer. Furthermore, in the mind of the viewing audience, a video
production will typically be remembered, and therefore judged, based on its
weakest component. So, no matter how good a video production looks, a poor
soundtrack will taint the entire production in the mind of the audience. In light of this,
video producers must raise their awareness of audio and increase their
attentiveness to cleaning up and covering messy audio.
There are two approaches to audio editing: eliminating problems with source sound
that would create distractions, and adding audio effects and elements, like sound
effects, to build a soundscape. Here we will focus on reducing or eliminating
common audio problems like hiss, distortion caused by clipping, and weak, hollow
sound.

Here are five strategies that might help you make the most of a bad sound situation.

1. Mute It
The first step is to determine whether the offending audio track is critical to the success of
the video. If, for instance, a news production about the county fair is driven by voice-over
narration or an on-camera host, and the problematic soundtrack is the ambient audio that
accompanies the B-roll video, eliminating the problem may be as easy as muting the audio
that coincides with the B-roll. While the editor may have preferred to include the sound of
the carnival midway, the integrity of the story would not be compromised by omitting the
track. If the track is plagued by buzz or distortion, it can be cut altogether. Likewise, if the
audio problem occurs in a short section of non-crucial audio delivered by an on-camera
host, but the rest of the delivery is fine, a decision can be made as to the importance of
including that specific line of dialogue. Cutting out problematic sound altogether may be an
acceptable option.

2. Add Music
In the case where audio delivered by an on-camera host or in a voiceover narration
includes a small, but noticeable hiss, a fast and easy solution may be to simply add a
music bed. When the offending audio track is exposed and alone it welcomes a greater
degree of scrutiny. The viewer’s ear may be drawn to the hollow quality of the track.
Adding a music bed gives the ear something else to hear and draws attention away from
minor audio imperfections. Use care to not make the mix too hot, however. The goal is to
blend the music track in such a way that it supports and lifts the primary sound source, not
in a way that competes with it.

3. Add Ambiance
One very common audio problem is introduced in editing when an audio track is chopped
up and the dialogue is spread out on the timeline. When recording voice overs or on-
camera talent, ambient environmental background sound is captured along with voice
track recordings. This contrasts heavily with the ultra silent gaps between sound bites,
resulting in increased awareness of the noisy background sound during the voice
segments. The solution here is to cycle through the source footage to find sections of
ambient “room noise” that may have been captured between takes without the voice. This
room ambience can then be edited into the gaps on the timeline to smooth out the silences
between audio edits. This is such a common occurrence that video professionals make a
regular practice of recording several seconds to a minute of ambient sound on their sets
as a part of their shot lists so that they have all the ambient room sound needed to fill
these aural interludes.

4. Apply Noise Reduction


Another common cause of audio interference is mains hum (a 60-cycle hum in North
America; 50 cycles in India), which may be introduced by power cords that cross over
microphone cables or by bad cables that are poorly grounded. This sounds like a low hum
or buzz. Most of today's video editing applications include built-in filters designed to reduce
or eliminate noise within an audio clip at a specific frequency. There is typically a
drag-and-drop preset option, or a manual slide control that allows the selection of a
targeted range – in this case, the mains frequency. Manual controls also allow the
manipulation of resonance around the selected range, letting the user limit the frequencies
that the filter affects. This lets the editor achieve a narrower but cleaner frequency range
and can effectively reduce unwanted audio buzz.
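
To illustrate what such a filter does under the hood, here is a minimal sketch of a mains-hum notch filter in Python with scipy (an assumed toolchain; the filters built into editing applications expose the same centre-frequency and resonance controls through their interface):

import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hum(audio, sr, mains_hz=50.0, q=30.0):
    """Notch out mains hum (50 Hz in India, 60 Hz in North America).
    A higher Q keeps the notch narrower, touching fewer frequencies.
    Strong hum may also need notches at harmonics (100 Hz, 150 Hz, ...)."""
    b, a = iirnotch(mains_hz, q, fs=sr)
    return filtfilt(b, a, np.asarray(audio, dtype=float))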

5. Software Solutions
Some audio problems cannot be fixed so easily and require the use of highly specialized
audio editing software with a much greater degree of control. Some examples are Sony
SpectraLayers, Adobe Audition and Avid Pro Tools. While audio controls within video
editing applications are limited to frame-level accuracy, these professional audio tools
allow manipulation at the sample level. These programs will allow you to carve out
unwanted sounds, and sample objectionable background noise to remove it for impressive
results. While there may be a learning curve to operate them proficiently, these
professional audio editing software solutions can salvage sound that video editing
applications could not save.
https://www.videomaker.com/article/c4/17125-five-ways-to-fix-messy-audio

1.6 Assignments
1.6.1 Class Assignments

1. What are the stages of sound design?

1.6.2 Home Assignments

1. What constitutes post-production mixing?

2. What is the standard process of sound design?

1.7 Summing Up
In this chapter, we learnt about various stages of sound design and its importance
in a film or television programme.

1.8 Terminal questions


1. What do you understand by sound design concept?

2. Why is sound design important to any TV programme?

1.9 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth

2. Vasuki Belavadi, Video Production, Oxford University Press

3. Millerson, G. (1999). Television Production (13th ed.). Oxford: Focal Press.

Lesson 2 Effects and Transitions


_____________________________________________________

STRUCTURE
2.0 Objectives

2.1 Introduction

2.2 Sound effects

2.3 Audio Effects

2.4 Sound Transitions

2.5 Video Effects

2.6 Video Transitions

2.7 Assignments

2.7.1 Class Assignments

2.7.2 Home Assignments

2.8 Summing Up

2.9 Terminal questions

2.10 Suggested further readings

2. Effects and Transitions

In this lesson, we will learn about the various types of sound and video effects and
transitions used in a video programme.

___________________________________________________________________

2.0 Objectives
After going through this lesson, you should be able to:
 Understand importance of sound effects and transitions.
 Understand importance of video effects and transitions.
___________________________________________________________________
2.1 Introduction

Sound effects (or audio effects) are artificially created or enhanced sounds, or sound
processes used to emphasize artistic or other content of films, television shows, live
performance, animation, video games, music, or other media. In motion picture and
television production, a sound effect is a sound recorded and presented to make a specific
storytelling or creative point without the use of dialogue or music.

2.2 Sound effects


Sound effects are artificially created or enhanced sounds that are used in artistic works to
emphasize or express an action, mood, or feeling. Sound effects were initially used in
radio dramas, but can be observed more often today in podcasts, theater, films, and
television shows. They are often synchronized with certain actions, such as a door's
slamming being accompanied by the appropriate noise. Sound effects may also be used in
the background of a scene to create anticipation or other emotions. A sound effect is any
sound, other than music or speech, artificially reproduced to create an effect in a dramatic
presentation, such as the sound of a storm or a creaking door.

2.2.1 The Purpose of Sound effects


Unsurprisingly, the most realistic sound effects tend to be recordings of actual sounds. To
create these sound effects, the actual sounds that are recorded may also be edited or
enhanced; sometimes, the pitch, intensity, or another aspect of the sound may be altered
using software. Sound effects may also be created entirely digitally by using software or
sound equipment to recreate the intended effect. Such sound effects tend to be less
realistic. Both of these kinds of stock sound effects are widely available and may be found
in sound effect libraries.

Multiple sound effects may be played at once or closely together in films and television
shows. This is often done to establish realism, as sounds are often concurrent in reality.
The use of several subtle sounds such as those of fabric rustling, light footsteps, and quiet
conversation in the background of a film can make otherwise uncomfortable and
unnaturally quiet scenes convincing and lifelike. Sounds can also be layered to create a
new effect altogether. This sort of layering of effects is often done when no existing sound
effects fulfill a producer's needs and none can be easily made. This practice is especially
common in science fiction movies, which often feature various mythical creatures,
imagined monsters, and futuristic innovations and infrastructures that call for sounds that
do not currently exist. The term "Foley effects" refers to the sound effects recorded live for
a specific film and added to the visual footage post-production. Foley effects are largely
created using various props, many of which are used in innovative fashions, but may also
include some stock effects. Some reverberation is added to Foley tracks to make the
sounds more realistic and complete the effects. Foley effects are often subtle background
noises, but may consist of louder and more immediately noticeable sounds as well.

2.3 Audio effects


Audio or sound effects are hardware or software devices that manipulate how
an audio signal sounds. Effects can be controlled via various parameters including
rate, feedback or drive. They are useful when playing live or as studio tools during
the recording process.

Panning
Panning is the distribution of a sound signal in a stereo (or multi-channel) field.
Humans have two ears, and our brain processes the difference in timing between
the left and right ear. This gives us the ability to identify the placement of a sound in
3D space (a survival skill!).
Stereo sound systems have evolved from a single speaker to a set of two, left and
right (L and R), which has allowed us to move from mono to stereo playback.
Panning works by letting more or less of a signal through into each speaker,
creating various spatial effects.
When something is 'hard panned' to one side, you hear it coming from only that
side. When something is panned to the middle, you hear it coming from between
your speakers at the 'phantom center' (you hear it centered even though there is no
speaker in the center).
Play a sound and grab your panning knob. If you turn it gradually from one side to
the other, you will hear the sound travel across your stereo field from one side to the
other. This trick is used a lot for sound effects, such as a car passing by.
Panning is a great way to artificially position a sound in a specific place in your
stereo field. It also lets you prevent muddiness and masking in your mix (when two
sounds cover each other up). Using auto-pan effects lets you sweep a sound across
the stereo field, creating a sense of the sound moving between the left and right.
The center of your mix is usually the busiest. It is common to keep the low-frequency
elements (bassline, drums) and lead elements (vocals) panned to the center
because they ground your mix. Other instruments are panned somewhere to the
right or left. It is important to keep a balance: if you pan an instrument slightly to the
right, pan something with a similar frequency range to the same spot on the left.
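
A common way to implement panning is the constant-power pan law, sketched below in Python (numpy assumed); it keeps the perceived loudness steady as a mono source moves across the stereo field:

import numpy as np

def pan_mono(signal, pan):
    """Constant-power panning of a mono signal into stereo.
    pan = -1.0 is hard left, 0.0 is center, +1.0 is hard right."""
    # Map the pan position onto a quarter circle so that
    # left^2 + right^2 stays constant (no loudness dip in the middle)
    angle = (pan + 1.0) * np.pi / 4.0
    left = signal * np.cos(angle)
    right = signal * np.sin(angle)
    return np.stack([left, right], axis=-1)

An auto-pan effect is simply a pan position that varies over time, for example following a slow sine wave.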

Echo and Delay


Delay is an audio effect that records an audio signal and plays it back a set period of time
after the original signal. Delay can be played back in different ways to achieve sounds such
as echoes that decay over time, or a pronounced repeated doubling effect that adds new
layers to a recording. Delay is one of the most important effects; in fact, it is the foundation
for other effects as well, including chorus and reverb. Today, however, the term delay is
normally used to describe the more pronounced echo effects.

Most delays work by playing back the dry signal while also playing back the wet or
‘delayed’ signal shortly after the original. Early tape delay units featured a recording head
and playback head(s) (also called a tap) a few inches apart. The result was an echoing of
the recording signal shortly after it was played. Delay units with multiple taps and tape
settings eventually gave artists the ability to playback multiple echoes at different intervals
with a greater level of control. More modern solid state and digital effects units use a
recorded buffer to emulate the playback head effect of older delay units. The incoming
signal is stored and played back depending on tweaks to the parameters that control the
echoing effect.
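
The buffer-based approach described above can be sketched in a few lines of Python (numpy assumed; the function name and parameters are illustrative, not any plug-in's actual API):

import numpy as np

def delay_effect(signal, sr, delay_ms=350.0, feedback=0.4, mix=0.5):
    """Feedback delay: the dry signal plus decaying repeats.
    `feedback` (0..1) controls how many echoes are heard; `mix`
    balances the dry signal against the delayed ('wet') signal."""
    x = np.asarray(signal, dtype=float)
    d = int(sr * delay_ms / 1000.0)   # the recorded buffer, in samples
    buf = np.zeros(len(x))
    out = np.zeros(len(x))
    for i in range(len(x)):
        delayed = buf[i - d] if i >= d else 0.0
        buf[i] = x[i] + feedback * delayed   # write back into the "tape loop"
        out[i] = (1.0 - mix) * x[i] + mix * delayed
    return out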

Reverb
Reverb is short for reverberation. Reverb happens around us daily, but we don't always
notice it. When a sound occurs, two things happen: (a) the direct sound hits your ears, and
(b) a bunch of other sound waves bounce off surfaces before reaching your ears. Those
other waves reach your ears later and with less energy (therefore quieter).
Reverb is a bunch of echoes all happening at the same time, so you hear them as one
single effect: reverb. There are different kinds of reverb in many types of spaces. The most
obvious examples of reverberant spaces are tunnels, cathedrals, halls and caves.
Digital reverbs and reverb plugins calculate the needed delay, level, frequency response
and algorithmically generate multiple echoes. Reverb plugins do thousands of calculations
per second. Reverb is echo-y, it makes things sound like they’re in a particular kind of
room. Reverb brings some sustain to a sound and makes it stick around for longer—often
referred to as reverb tails. It gives a dreamy, even solemn quality to your signal (think a
choir in a big cathedral).
Reverb makes things sound further away in the mix if you push the wetness and bring
down a lot of the original dry signal. It can widen the mix and make it sound bigger and
fuller.
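
Real reverb plugins combine many such elements, but the core building block is simple. Below is a single feedback comb filter in Python (numpy assumed), the unit from which classic Schroeder-style reverbs are built by running several in parallel:

import numpy as np

def comb_filter(signal, sr, delay_ms=29.7, decay=0.75):
    """One feedback comb filter: a dense train of decaying echoes.
    Classic Schroeder reverbs sum several of these (with mutually
    prime delay times) and then run the result through allpass filters."""
    x = np.asarray(signal, dtype=float)
    d = int(sr * delay_ms / 1000.0)
    out = np.copy(x)
    for i in range(d, len(x)):
        out[i] += decay * out[i - d]   # each echo spawns the next, quieter
    return out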

Chorus
Chorus is an effect obtained when similar sounds with slight variations in tuning and timing
overlap and are heard as one. It happens naturally when multiple sources making a similar
sound overlap. Think of a real-life choir singing multiple parts at the same time: they all
overlap to form a distinct sound. The chorus effect does the same thing! Chorus adds
complexity and movement. It is also that classic 80s sound!
A stereo chorus widens your stereo image. Guitarists use it to make their tone 'dreamy' or
to thicken their bass guitar sound. It is also very common as an effect on synthesizers,
organs and vocals. For example, use chorus to fatten the sound of a monophonic bass
synth and make it sound 'meaner'.

Equalization
Equalization (or EQ) is the cutting or boosting of a particular frequency (or range of
frequencies) in the frequency spectrum. Humans can hear audio frequencies roughly
between 20 and 20 000 Hertz (Hz). Any sound that human ears perceive sits somewhere
in that frequency spectrum. An Equalizer (EQ) divides that spectrum into sections (called
‘bands’) that you use to cut or boost parts of your sound.
EQing is like sculpting: it shapes the existing frequencies of your sound. It doesn’t add new
frequencies per se. By cutting or boosting certain frequencies, EQ shapes the tone and
character of your sound. EQing also changes the balance between the frequencies that
are already there. EQ changes the character of your sound either very subtly or
dramatically. Cutting high end will make your sound darker. Boosting the high end will
make it brighter. The sound of EQ will vary a lot depending on how much you use and on
what frequencies. EQ is a key tool for good mixing. It gives you the power to carve out
space in the frequency spectrum for each of your sounds to get them sitting right in the
mix. EQ is used to remove undesirable elements from a recording. EQ is also a way to
boost the key elements in your mix.

Compression
Compression is the reduction of dynamic range – the difference between the loudest and
quietest parts of an audio signal. When compression is applied, the louder parts of the
signal are attenuated and, with make-up gain, the quieter parts are effectively brought up.

Compressors reduce the gain of your signal ('GR' stands for Gain Reduction on a DAW
compressor), bringing the loudest and quietest parts of the signal closer together in level.
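
The static gain computation at the heart of a compressor can be sketched directly (Python with numpy assumed; real compressors add attack and release smoothing like the expander shown earlier in this unit):

import numpy as np

def compress(signal, threshold_db=-18.0, ratio=4.0, makeup_db=6.0):
    """Hard-knee compressor: above the threshold, each extra dB of input
    yields only 1/ratio dB of output; make-up gain then lifts the result."""
    x = np.asarray(signal, dtype=float)
    level_db = 20.0 * np.log10(np.abs(x) + 1e-9)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10.0 ** (gain_db / 20.0)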

Tremolo
Tremolo is a modulation effect created by varying the amplitude (volume) of a signal. It
gives a trembling effect – the word 'tremolo' itself is Italian for trembling. It is often confused
with vibrato, which is a modulation of pitch. Many Fender guitars have a misnamed
'tremolo arm'; technically that is a vibrato arm, because it varies the pitch, not the
amplitude. Tremolo gives a sense of movement, tension or drama. It makes a sound more
rhythmic, percussive or stuttering, and is also used to create a pulsating effect. On
acoustic instruments, tremolo is achieved on bowed string instruments by playing back
and forth very fast with the bow (or with the pick, on a guitar).
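
Tremolo is one of the simplest effects to implement: multiply the signal by a slow low-frequency oscillator. A minimal sketch in Python (numpy assumed):

import numpy as np

def tremolo(signal, sr, rate_hz=5.0, depth=0.6):
    """Amplitude modulation: the volume swells between (1 - depth) and 1.0
    at `rate_hz` cycles per second. Vibrato would modulate pitch instead."""
    t = np.arange(len(signal)) / sr
    lfo = 1.0 - depth * 0.5 * (1.0 + np.sin(2.0 * np.pi * rate_hz * t))
    return np.asarray(signal, dtype=float) * lfo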

Audio Filter
An audio filter attenuates (turns down) a set of frequencies above or below a determined
threshold—called the ‘cutoff frequency.’ They’re often found inside of EQs or as standalone
plugins. The most common types of filters are High-Pass Filters (HPF), Low-Pass Filters
(LPF) and Band-Pass Filters (BPF). These are defined by their shape and ‘slope.’ LPFs let
through all the frequencies below the cutoff frequency—and attenuate those above. HPFs
let through all the frequencies above the cutoff frequency—and attenuate those below.
Filters can be either very subtle or very dramatic, depending on how much you cut and
how steep the slope is. They color your sound, making it darker (with an LPF) or brighter
(with an HPF). Filters are used for both corrective and creative reasons. They are a key
tool in the producer's arsenal for carving out space in a mix for certain instruments and
frequency ranges. For example, when you are layering drums, filters help you shape the
tone of your kicks, snares, etc. Many modern VSTs allow completely unique curves for
your filter shapes. Filters are also the DJ's bread and butter for creating exciting build-ups
and transitions – a creative application of filtering that works as a production tool as well.

2.4 Sound transitions


In production, an audio transition is often referred to as a crossfade. It is the sound you
hear when a clip or sequence is transitioning to the next clip or sequence. Adding audio
transitions in post is a technique used to create a smooth transition between two audio
files.
Basically, during an audio transition, one audio source file fades out while the other fades
in, creating a smooth transition of the sound, since the listener can hear both files
simultaneously, even if only for a short period of time.
Oftentimes, the length of an audio transition is limited only by the amount of audio in a
source file. This means that an editor can make the audio transition as long or as short as
the audio file allows.

Difference between Transitions and Effects


It is also important to understand the difference between transitions and effects.
Effects, whether sound or image, refer to the movement and sound that
accompany slides or text as they are introduced into the video frame. Effects can
be used within a clip or sequence to add interest, description and design to the clip
or sequence.

Transitions, on the other hand, are visual or audio effects applied from one slide,
clip or sequence to the next during a video or audio piece. A transition is like an
introduction or bridge that helps fill the gap on the timeline between clips. Both
transitions and effects can be customized to suit your preference.

Some commonly used audio transitions:

While editing in Premiere Pro, go to the Effects tab and click Audio Transitions >
Crossfade. You’ll have the option to apply:

 Constant Gain (a quick fade out and in)
 Constant Power (an even dissolve between both tracks)
 Exponential Fade (a slow fade out and in)

All three choices allow you to tinker with the audio in different ways and generally act
as tools for blending the two separate audio files together in order to avoid a rough
sounding cut. Though not always immediately effective — some volume tweaking and
beat matching is required — the blending capabilities nonetheless can help out a great
deal.
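
The three options differ only in the gain curves applied to the outgoing and incoming clips. Here is a minimal sketch of the first two in Python (numpy assumed; Premiere Pro's exact curves are not published, so these are the textbook linear and equal-power versions):

import numpy as np

def crossfade(clip_a, clip_b, n, equal_power=True):
    """Crossfade the last n samples of clip_a into the first n of clip_b.
    Linear gains ('Constant Gain') dip in loudness at the midpoint;
    the equal-power curve ('Constant Power') keeps the level even."""
    a = np.asarray(clip_a, dtype=float)
    b = np.asarray(clip_b, dtype=float)
    t = np.linspace(0.0, 1.0, n)
    if equal_power:
        fade_out, fade_in = np.cos(t * np.pi / 2.0), np.sin(t * np.pi / 2.0)
    else:
        fade_out, fade_in = 1.0 - t, t
    blended = a[-n:] * fade_out + b[:n] * fade_in
    return np.concatenate([a[:-n], blended, b[n:]])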

Though dragging the effect to the desired clip will work fine, hitting Sequence> Apply
Audio Transition automatically adds Constant Power to the cut closest to your marker.
This can prove to be a faster shortcut if your Effects tab isn’t open and you’re flying
through your edit or performing last looks.

2.5 Video effects


2.5.1 Color Correction effects

Brightness & Contrast effect

The Brightness & Contrast effect adjusts the brightness and contrast of an entire
clip. The default value of 0.0 indicates that no change is made. Using the
Brightness & Contrast effect is the easiest way to make simple adjustments to the
tonal range of the image. It adjusts all pixel values in the image at once—highlights,
shadows, and midtones.
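
Conceptually, the effect applies one linear transform to every pixel. A minimal sketch (Python with numpy assumed; the exact formula any given editing application uses may differ, so this is the generic form):

import numpy as np

def brightness_contrast(frame, brightness=0.0, contrast=0.0):
    """Apply brightness/contrast to a float RGB frame with values in [0, 1].
    Brightness shifts every pixel value; contrast scales values around
    mid-grey (0.5), so highlights, shadows and midtones all move at once."""
    scale = 1.0 + contrast / 100.0
    out = (np.asarray(frame, dtype=float) - 0.5) * scale + 0.5 + brightness / 100.0
    return np.clip(out, 0.0, 1.0)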

Color Balance (HLS) effect


The Color Balance (HLS) effect alters an image’s levels of hue, luminance, and
saturation.
Hue - Specifies the color scheme of the image.
Lightness - Specifies the brightness of the image.
Saturation - Adjusts the image’s color saturation. The default value is 0 which
doesn’t affect the colors. Negative values decrease saturation, with -100 converting
the clip to grayscale. Values greater than 0 produce more saturated colors.

Fast Color Corrector effect


The Fast Color Corrector effect adjusts a clip's color using hue and saturation
controls. This effect also has levels controls for adjusting the intensity of image
shadows, midtones, and highlights. It is recommended for making simple color
corrections that preview quickly in the Program monitor, and for quickly removing a
color cast. The Fast Color Corrector and the Three-Way Color Corrector effects
have controls to quickly balance colors so that the whites, grays, and blacks are
neutral. The adjustment that neutralizes the color cast in a sampled area is applied
to the entire image, which can remove the cast from all colors. For example, if an
image has an undesirable bluish cast, sampling an area that should be white makes
the White Balance control add yellow to neutralize the bluish cast. This yellow
adjustment is added to all the colors in the scene, which should remove the color
cast from the entire scene.

Three-Way Color Corrector effect


The Three-Way Color Corrector effect lets you make subtle corrections by adjusting
a clip’s hue, saturation, and brightness for the shadow, midtones, and highlights.
You can further refine your adjustments by specifying the color range to be
corrected by using the Secondary Color Correction controls.

2.5.2 Blur and Sharpen effects

Camera Blur effect


The Camera Blur effect simulates an image leaving the focal range of the camera,
blurring the clip. For example, by setting keyframes for the blur, you can simulate a
subject coming into or going out of focus, or the accidental bumping of the camera.
Drag the slider to specify a blur amount for the selected keyframe; higher values
increase the blur.

Compound Blur effect


The Compound Blur effect blurs pixels based on the luminance values of a control
clip, also known as a blur layer or blurring map. By default, bright values in the blur
layer correspond to more blurring of the effect clip. Dark values correspond to less
blurring. Select Invert Blur for light values to correspond to less blurring.
This effect is useful for simulating smudges and fingerprints. Also, it can simulate
changes in visibility caused by smoke or heat, especially when used with animated
blur layers.

Directional Blur effect

The Directional Blur effect blurs the image along a specified direction, giving a clip
the illusion of motion.

2.5.3 Mirror effect

The Mirror effect splits the image along a line and reflects one side onto the other.
Reflection Center
The position of the line about which the reflection occurs.
Reflection Angle
The angle of the line about which the reflection occurs. An angle of 0° reflects the
left side onto the right. An angle of 90° reflects the top onto the bottom.

2.5.4 Twirl effect


The Twirl effect distorts an image by rotating a clip around its center. The image is
distorted more sharply in its center than at the edges, causing a whirlpool result at
extreme settings.
Angle
How far to twirl the image. Positive angles twirl the image clockwise; negative
angles twirl it counterclockwise. For a whirlpool result, animate the angle.
Twirl Radius
How far the twirl extends from the twirl center. This value is a percentage of width or
height of the clip, whichever is greater. A value of 50, for example, produces a twirl
that extends to the edges of the clip.

Twirl Center
Sets the position of the center of the twirl.

2.5.5 Crop effect


The Crop effect trims pixels from the edges of a clip. The Left, Top, Right, and
Bottom properties specify what percentage of the image to remove.

2.5.6 Block Dissolve effect


The Block Dissolve effect makes a clip disappear in random blocks. The width and
height of the blocks, in pixels, can be set independently.

2.5.7 Gradient Wipe effect

The Gradient Wipe effect causes pixels in the clip to become transparent based on
the luminance values of corresponding pixels in another video track, called the
gradient layer. Dark pixels in the gradient layer cause the corresponding pixels to
become transparent at a lower Transition Completion value. For example, a simple
grayscale gradient layer that goes from black on the left to white on the right causes
the underlying clip to be revealed from left to right as Transition Completion
increases.
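
The rule is simple enough to sketch per pixel (Python with numpy assumed; real implementations also add a softness band around the threshold so the edge doesn't look hard):

import numpy as np

def gradient_wipe(clip_a, clip_b, gradient_luma, completion):
    """Reveal clip_b wherever the gradient layer's luminance (0..1) is at
    or below the wipe threshold. As `completion` runs 0.0 -> 1.0, dark
    gradient pixels switch first, so a left-black/right-white ramp wipes
    the underlying clip in from left to right."""
    mask = gradient_luma <= completion                 # HxW boolean mask
    return np.where(mask[..., None], clip_b, clip_a)  # broadcast over RGB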

2.6 Video Transitions

2.6.1 Video Dissolve transitions


Cross Dissolve transition
Cross Dissolve fades out clip A while fading in clip B.
Cross dissolve can also work well at the beginning or end of a clip when you want
to fade in or out from black.

Dip To Black transition


Dip To Black fades clip A to black, and then fades from black to clip B.
Note: Using dip to black at the beginning or end of a clip will also affect a video on a
lower track, something not always expected when a simple fade in/out of the
targeted clip is what is wanted. The cross dissolve transition may work better for
this.

Dip To White transition


Dip To White fades clip A to white, and then fades from white to clip B.

Dither Dissolve transition

Dither Dissolve fades clip A to clip B using a dithering algorithm. You can specify
any of the following options:
Border Width – increases the size of the dithering. The default is zero.
Border Color – determines the color used for the dithering. The default is black.

Page Peel transitions

Page Peel transition

Page Turn transition

2.7 Assignments

2.7.1 Class Assignments

1. Describe five video effects used in video editing.

2.7.2 Home Assignments

1. What are sound effects and their purpose?

2. What are audio effects? Describe 3 audio effects with their situational use.

2.8 Summing Up

In this chapter, we learnt about various video effects and transitions along with their
use. Also understood various types of sound effects and audio transitions.

2.9 Terminal questions

1. What do you understand by sound effects?


2. Describe different types of video transitions.

2.10 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth Publishing

2. Vasuki Belavadi, Video Production, Oxford University Press

3. Gerald Millerson, Television Production (13th edition), Focal Press, 1999


Lesson 3 Styles of Packaging: News and Non-news


_____________________________________________________

STRUCTURE
3.0 Objectives

3.1 Introduction

3.2 Packaging in News

3.3 Stages of Video editing in News

3.4 Non-news programme

3.5 Video Editing of Non-News programmes

3.6 Assignments

3.6.1 Class Assignments

3.6.2 Home Assignments

3.7 Summing Up

3.8 Terminal questions

3.9 Suggested further readings


3. Styles of Packaging: News and Non-news


In this lesson we will learn about the difference between the techniques and styles of
editing news and non-news programmes.

_____________________________________________________________

3.0 Objectives
After going through this lesson, you should be able to:
 Understand how news stories are packaged in video.
 Understand how non-news stories are packaged in video.
________________________________________________________________
3.1 Introduction
Reporters tend to classify themselves as either news reporters or feature (non-
news) writers. There is of course no rule stating that a reporter cannot be a feature
writer, or vice-versa, but over time, reporters tend to slot themselves into particular
categories depending on their beats and the strengths they identify in themselves.
For example, how you present a story depends on whether you are sharing timely
information or giving out information that can be read leisurely. If there was a drunken
brawl at a rock show and you are reporting the event as it happens or has just happened,
then use a news story style. If you are showing a general piece about safety measures at
such events and why rock shows need to be organised with greater care, you can take the
news-feature or non-news approach.

3.2 Packaging in news


A news package is a creative, visual and long-form style of storytelling found on
television newscasts. News is conveyed to an audience by packaging together a
story that includes characters, facts, plot twists and a climax.
A package is a self-contained taped news report. Many networks use news
packages to provide innovative newscasts to broad audiences. Alternate ways of
referring to these reports include package, taped package, news package or simply
pack.
These types of newscasts deliver in-depth coverage of news events by investigating
subjects of all kinds. News correspondents probe trends, crimes, conflicts and
issues of interest to present long segments, and sometimes full one- or two-hour
broadcasts. A typical news package runs from 1:15 to 2:00 (one minute fifteen
seconds to two minutes) in length. This type of news presentation is best for
complicated stories or ones that have multiple interviews. In the case of
magazine-style news programming, packages can be 20 minutes or longer.

3.2.1 Structure and Script



Reporters will often spend large amounts of their time researching stories and interviewing
characters to eventually write the scripts for these packages. A common part of a news
package is the appearance of a reporter talking into the camera.
This is called a "standup" because the reporter is often seen standing in front of the
camera on the scene of the story. Usually, the news anchor will read an introduction live,
then the pre-recorded story will be shown.
Most viewers have never seen a script for a news package, as what the audience sees is
the video form of the script.
When a script is created, it often involves many different elements in addition to the exact
wordage of the story that the reporter is going to present, such as:

 Storyline
 Visuals
 Audio
 Timing and Cues
 Tone
 Voiceovers

The writer has to consider both what the viewer sees (visuals) and what they are
going to hear (audio). There is the visual aspect of video production, where images and
videos of the subject matter are presented, while the audio specifies sound bites,
voiceovers and music that may accompany the visuals to help the story along.

Capture (import) the video and take notes on what is to be seen and heard. Taking notes
is called "logging your tape", and it includes noting where things are within the run of the
tape/file, as well as marking a particularly good shot or other parts which can be used.
Timing and specific cues for the editor and post-production team are also important
aspects of script creation for news packages. Indicating the timing and length of a
particular visual on the script can help with weaving sound bites and voiceovers together
with images and storylines. By also indicating the tone and feelings that are to be
conveyed, the emotional component to a newscast can start to take shape. Once the full
package script is complete, the reporter is ready to go into a sound booth and record
voice-overs.

The post production team will then use the script to bring together the whole news
package, to create a newscast that is entertaining, compelling and informative, while
keeping in line with the reporter's overall vision and storyline.
Import your narration voice track into the video editing software along with all the video
clips. Edit the story and export the video. The exported video file should not be too big.

Editing video is really about structuring stories. It’s about establishing a beginning,
middle and end, deciding how scenes will transition into each other, establishing a
rhythm, and building momentum.

Knowing how to trim a clip or sequence a series of shots is important in all forms of
video storytelling. In video journalism, these techniques can help us advance stories
and enhance their journalistic purpose.

3.3 Stages of video editing and packaging a news story

(a) Log sheet


The first thing to do is view your tape and note the time code. Make a list, or log, of the
different shots you see, the times they start and finish (called in and out points) and
what people say (script). If someone can log while you are filming, even better. It
will save you valuable editing time, particularly if you are on a deadline.
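
A log can be as simple as a table of time codes and notes. A hypothetical example (the
shots, times and wording are invented purely for illustration):

Shot  In           Out          Notes / script
01    00:00:12:05  00:00:19:10  WS crowd outside the venue
02    00:01:03:00  00:01:28:14  Interview, organiser: "We never expected..."
03    00:02:10:20  00:02:15:02  CU ticket being scanned (good shot)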

(b) Assembly edit


Now you are ready to do an assembly edit. This is where you look at your
storyboard and cut the footage to match the shots you planned. The log (paper edit)
will help you find these sections easily. Lay them down in order. In most computer
editing packages, this means laying them on a line from left to right, called a
timeline.

(c) Sound edit


Listen to the script and cut the film so that it all makes sense when you listen to it.
Your edited report on the timeline should now look almost finished.

(d) Final Edit


This is where you put the finishing touches to your report.
For example, you can add different bits of video and graphics to make your report
visually interesting.
The final edit usually involves shortening the report or making it more concise. Make sure
your overall editor is happy that you haven't cut out any vital bits, such as the other side of
the argument.

(e) Adding visual interest


If you want to, you can put different video over the sound. For example, if the reporter is
speaking to the camera and you want to keep his voice, but show the location, you can cut
and paste pictures from the original film and overlay them onto the timeline.
As a general rule, these pictures should be two seconds or longer - otherwise, it is too
quick for the audience.
Make sure you reduce any background noise attached to the location shots, so you can
hear the reporter's words clearly. Don't remove the background noise entirely, though; at a
low level, it makes the report sound natural.

(f) Editing interviews


You can cut out the reporter's questions, as long as the interviewee answers in full
sentences. This will make your report more concise. You only need to include one or two
good answers in your report. Listen to all the answers first and select the best ones. It will
save you valuable editing time, particularly if you are on a deadline. Interviews often
involve filming the interviewee's head and shoulders. Keep the sound track you laid down
in the sound edit and overlay the pictures.

(g) Transitions
There's no need to go overboard with these. If you've filmed your report well, you won't
need to add any. Some very common transitions are:
Blurring the edges between one shot and the next, called a cross-dissolve. This is often
used to show that time has passed.
Fading from or to black at the beginning and end of your report, to mark the beginning
and end.

(h) Adding graphics


To make your report look really professional, add name and job titles. Have a look for the
"type tool" or something that does the same job.
Using titles means that the reporter and interviewees don't need to spend time introducing
themselves. When adding titles, keep the font plain and make the words big enough to
read on screen. Make sure you spell names correctly.


3.4 Non-news programmes

News Features
A feature is typically longer than a standard news story. It’s written and edited in a
different style, typically with more detail and background based on more extensive
research than would be required to simply report a news event. Features can vary
widely -- you might write a news feature, an arts feature or a human interest feature.
Although the term implies softer news, a feature is often defined by its length and
style, not necessarily its subject matter.
The style component is important. Features humanize events and issues rather
than make a recitation of facts. Why should your readers care about the event
you're writing about? Explain why they might. You might address this question in
your opening paragraph or paragraphs, hooking your readers, then move on to
more of the nuts and bolts of your topic.
A feature story is not about the news; it's about human interest. In essence, we can
call features 'people stories,' or narratives about the human element of life. It's not
just about facts; feature stories are more about storytelling and weaving an
interesting narrative.

3.5 Video Editing of Non-News programmes

The major steps of video editing are the same as for editing a news story, but using the
same tools a little differently can make the story much more appealing and rich in human
interest.

After the pre-production and production stages, the news feature actually takes shape on
the editing table, that is, in post-production. We now have raw material to process; by
using the video editing software to its full potential, we can make it presentable and
watchable, keeping the target audience in mind.

(i) First Assembly


The first assembly is when the action starts to pick up for the video editor. Using the
storyboard as a foundation, you start making selections. Usable footage is trimmed
and marked, while the bad takes are cast aside. It is not uncommon for you to set
up your timeline to resemble the storyboard. You take all the usable footage, oftentimes
multiple takes of the same shot, and place them in sequence. The result is an
extended video of each potential shot for the final edit. This portion of the process is
for internal purposes, to be shared with the director and producer. Many directors
like to make a paper edit. They take notes while screening the selects of the first
assembly and then write out what clips they like and don’t like, sometimes
rearranging them into a new order.
B-roll (secondary shots) is also separated at this time. Its content is logged and it is
sorted into a hierarchy according to the editor’s needs. This is all contingent on the
project as the organizational needs of the footage are dependent on what’s most
needed for the video.

(ii) Sound edit



The sound edit is done very differently when a news story is packaged compared to a
non-news one. A non-news story is mainly thematic and edited according to the topic. For
example, if we make a news feature on a famous photographer who passed away, the
music under the voice-over, anchor links and titles will be low in volume, subtle and
soothing. Here we can’t use dramatic or loud music. This is called aesthetics in the use
of music: it should support or complement the visuals rather than distract or disturb the
viewers.

(iii) Editing Interviews


A news feature can have interviews of longer duration than news stories, because news
features are made to give details of the topic, and sometimes interviews alone are enough
to do this. There can also be background music under the interviews, which gives a
pleasant environment to the context or topic and sets the mood.

(iv) Transitions and Graphics


These can be used more liberally in a non-news package. A video editor working on
non-news stories has more freedom to experiment with various transitions and graphics,
and more time than a news video editor for trial and error, since there is no concept of
breaking news. There are plenty of options available in non-linear editing software to do
this.

(v) The Rough Cut


The structured organization of the first assembly sets you up to make the rough cut.
The rough cut is the first true edit and is the stage in which you start to display your
craft as more than a technical exercise. At this stage, it is no longer about solely
discovering and organizing footage, it’s about storytelling and crafting a message,
using the footage from production as a foundation to achieve the director’s vision.
Timing is vital to the rough cut. A video editor influences the timing of a video more
than any other person who touches the production. It is during the rough cut that
you start to play with the timing. Whether making fast cuts or extending pauses,
here you begin to create what will become an emotional connection with the
audience. The rough cut is meant to be shared. The video editor works with the
director, producer and a client, if it’s their project. Communication is kept open
between all parties and much dialog takes place, helping to shape the overall edit.
The parties agree on what changes need to take place before the edit moves on to
the final cut.

(vi) The Final Cut


The final cut of an edit is when the cutting and timing of the footage is finalized. It’s
not the final version of the video, ready for release, but it’s awfully close. The edit at
this stage is the one that will be used for several finishing steps, all of which need to
synchronize perfectly with each other. For this reason, the final cut is often known
as picture lock or frame lock, meaning that the frames in the edit will not change in
time from this point moving forward.
The final cut is not just about locking things down, however; it’s also about
elimination. A scene might be finalized, but that doesn’t ensure that it stays in the

production. Once all the scenes are cut and precisely timed, you review them with
the director and producer. If there’s a scene that doesn’t work or doesn’t contribute
to the overall narrative of the video, it’s eliminated.
First, visual effects and graphics are added to the video. Most visual effects are
planned out during preproduction but aren’t generated until post-production
because the visual effects artist needs to know what footage they’re specifically
working with to make the effects seamless. Graphics are also planned in advance
and are fine-tuned to coincide with the final edit.
What the audience will see on screen is only part of their experience. After visual
effects and graphics are added, audio sweetening is performed. The final cut is
round-tripped to an audio editing suite, sometimes on the workstation, for the
placement and editing of sound effects and musical underscores. Dialog is mixed
down with these elements to create an audio track that supports and carries the
accompanying video.

(vii) Color Grading

The final step before deliverables are rendered and shipped is color grading. Color
grading is the stage in which you, or a colorist, manipulates color and tonal qualities
of the video image to craft a unique look that helps set the mood for the video and
visually tell the story. The advent of digital cinema cameras, greater computing
power and more advanced codecs has increased the implementation of color
grading.

3.6 Assignments

3.6.1 Class Assignments

1. Describe the steps of packaging a news programme.

3.6.2 Home Assignments

1. What is the difference between a news and a non-news story?

2. What are various steps to edit a non-news package?

3.7 Summing Up

In this chapter, we learnt about various stages of video editing of a news and a non-
news package.

3.8 Terminal questions

1. What do you understand by packaging a news story?


2. What do you understand by packaging a non-news story?

3.9 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth Publishing

2. Vasuki Belavadi, Video Production, Oxford University Press

3. Gerald Millerson, Television Production (13th edition), Focal Press, 1999


Lesson 4 Archiving and File Formats


_____________________________________________________

STRUCTURE
4.0 Objectives

4.1 Introduction

4.2 Archiving

4.3 Archiving in Premiere Pro using the Project Manager option

4.4 Archive File Formats

4.5 Assignments

4.5.1 Class Assignments

4.6 Summing Up

4.7 Suggested further readings


4. Archiving and File Formats


In this lesson we will learn the archiving process and its significance.

_____________________________________________________________

4.0 Objectives
After going through this lesson, you should be able to:
 Understand how archiving is done and its importance.
 To know about various file formats used for archiving video data and projects.
________________________________________________________________
4.1 Introduction
Video files pose unique organizational challenges. For starters, they're much larger than
most other file types, so where you store them and how frequently you archive
them matters. Second, you probably remember what's in your video files in a very
different way than you remember or think about other kinds of files, such as a
PowerPoint presentation. When you want to find a video clip, you might
recall who is in the video, or the holiday when you shot it, or that it was a scene shot
outdoors.

4.2 Archiving
"Archiving means collecting, organising, describing, preserving, and providing
access to materials of evidential, historical, cultural, or other value".
Archiving is really important as you can be sure that at some point either a
customer will come back and ask for some footage from an old project or you will
find you need to get a hold of some footage and you don't know where it is!
However, good archiving of old material will help make this a far less painful
experience.

The process of storing project data so you can get access to it later is called
archiving.
When you're finished editing your videos, you'll typically export the final file. If you
upload it to Facebook, YouTube, or some other site, that copy can be considered
one of your backups, which is good. You always want a backup of your files. But if
you're posting some videos to YouTube and saving others to a private Dropbox
account, your video files are going to be all over the place.


4.3 Archiving in Premiere Pro using the Project Manager option


You can do a lot with Adobe Premiere’s Project Manager. You can easily collect and copy
all of your assets to pass off to a client or a fellow editor, or consolidate one of your
projects to save space on a full hard drive. Whatever the situation, the Project Manager is
a helpful tool.

Step 1: Choose Your Sequence(s)

You can find the Project Manager at the bottom of the File menu. First, at the top of the
Project Manager dialog box you’ll notice the Sequences area, where you can specify
individual sequences you would like to include in your archive.

Step 2: Select How to Manage the Resulting Project

You have two options under the resulting project section — you can Collect Files and
Copy to a New Location, or you can Consolidate and Transcode. I want to hand off all of
the assets in their original format, so I’ll select the Collect Files and Copy to a New
Location option.

With Consolidate and Transcode, you can choose to actually render out your original
content to a new format. You can transcode Sequences or Individual Clips, and you have
a variety of different format and preset options available to you when going this route. But
again, we’ll stick with the Collect Files and Copy to a New Location option.


Step 3: Customize Your Options

 The Project Manager offers you a number of options when archiving your projects.

 Exclude Unused Clips: Use this feature when you only want to include the media
used in your selected sequences.

 Include Handles: When utilizing the Consolidate and Transcode option, you can
choose to include frame handles on each clip which will provide room to add
transitions or retime clips.

 Include Audio Conform Files: You can choose to include audio conform files or just
re-conform them later on.

 Convert Image Sequences to Clips: A nice feature, the Project Manager can
instantly convert image sequences to clips.

 Include Preview Files: You can choose to include preview files, or re-render them
from your archived project.

 Rename Media Files to Match Clip Names: If you’ve spent time renaming clips in
your project, it’s nice to be able to change the name for your archived project.

 Convert After Effects Compositions to Clips: If you choose to Consolidate and
Transcode, you can have the Project Manager convert After Effects comps to clips.

 Preserve Alpha: Another feature available with the Consolidate and Transcode
method, preserving alpha is important if you’re passing a project on to another
editor or if you’d like to make changes in the future.


Step 4: Destination Path and Disk Space


Our final step includes selecting a location for our archived project. After you select a
destination, the Project Manager will show you the disk space available. You can click on
Calculate to find out the estimated size of your archived project, as well as the size of the
original project.

4.4 Archive File Formats

Due to the large file sizes and the use of numerous codecs, archiving video is a complex
topic. The dependency on specific codecs creates a potential for footage becoming
inaccessible. And project files carry an inherent risk of obsolescence, as NLE software and
the formats it understands are constantly moving targets.


Most users choose to archive their original camera media as well as digital master files of
their completed programs after editing. It is also a good idea to store copies of the project
file that contains the instruction set on how the source files were assembled.

For client work, archive decisions often come down to a matter of cost. Some clients are
willing to pay for archives while others are not. Additionally, many clients assume that the
production company has archived everything, so be sure to clarify assumptions. The use
of longterm media formats such as LTO tape and optical media has made this process
easier, but it is still neither easy nor cheap.

Create a digital master of the finished production


When a project is done, you’ll often make several digital files for delivery. Typically
these files are heavily compressed as they are intended for playback on portable
media devices or the Internet. While you will probably want to archive these files,
you’ll also want to save a digital master file with the least compression that is
practical.

A QuickTime movie using a low-compression scheme is a good format choice for your
digital master file. Popular codec choices include Apple ProRes 422 (HQ), Avid DNxHD,
CineForm, and the Animation codec.
These files may be very large, but they ensure a high-quality digital file that can be
used to make additional digital derivatives.

Archiving a master to tape


It’s very common for people who make or commission video to render and archive a
master copy of the production to some form of tape. This is frequently done even
though there may be multiple copies of the digital files on various hard drives or
other media. A finished master copy written to tape provides a backup version that
may not be subject to the same codec uncertainties that other digital copies have.
If you don’t own a system capable of creating a digital tape, you could send a
master digital copy to a production house and subcontract the process. You could
also write the file out to Blu-ray disc to create a high-quality version that is not so
dependent on installed codecs.

Camera original files


In addition to the finished master version, it will generally be advisable to archive
the camera original files. There are a few key considerations when creating these
archives.
Maintaining file structure: It is a best practice to maintain an exact copy of the file
structure of the original source media files when copying to a backup device. The
additional metadata and folder structures are often needed to assist editing software
in properly importing and interpreting the camera data, particularly for AVCHD.
Redundancy: If you have only one copy, it is not backed up. Be sure that the media
exists in at least two locations and uses at least two formats of backup. While hard
drives are cheap, they are not a permanent solution. Many turn to optical media
such as Blu-ray discs or tape-based archives like LTO, DLT, and AIT.

Use software: Many use software tools to ensure a complete copy of the disk image.
Tools like Final Cut Pro X have a camera archive feature that supports certain formats (not
DSLR cameras, however). Another popular choice is ShotPut Pro, which can automate
copying of tapeless media to up to three separate locations. It also has a great set of tools
for verifying copies and creating additional backups to optical formats.
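
Whatever tool you use, the safeguard underneath is verification: after copying, confirm
that the source and destination files are bit-for-bit identical. A minimal Python sketch of
the idea using checksums (the file paths are placeholders):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in chunks so that large video files fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths; a real archive job would walk the whole card folder
if sha256_of("card/CLIP0001.MXF") == sha256_of("backup/CLIP0001.MXF"):
    print("copy verified")
else:
    print("mismatch - recopy this file")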

Project files
The project file created by your nonlinear editing software truly is intellectual property. If
you ever need to make a change to the video after the creation of the digital master, it will
probably be easiest to do so from the original project file. Of course, this is only so if the
project file can be opened, and all the clip files can be easily reattached. You will want to
take great care to ensure that the project file is archived to multiple locations and multiple
media types.

At the end of a project, it is also a good idea to export additional versions of a project file.
Over time, manufacturers often evolve their project file formats. Some even drop support
altogether (as Apple did in its initial transition from Final Cut Pro 7 to version X).
Most editing tools can export an EDL (edit decision list) and an XML (Extensible Markup
Language) file. Storing a copy of each with the archived project is a good idea.
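
An EDL is just a plain-text list of cut decisions, one event per line, giving source and
record time codes. A minimal, entirely hypothetical example in the widely used CMX3600
style:

TITLE: NEWS_PACKAGE_V1
FCM: NON-DROP FRAME

001  TAPE01  V     C  00:00:10:00 00:00:14:00 01:00:00:00 01:00:04:00
002  TAPE02  AA/V  C  00:01:20:00 00:01:26:00 01:00:04:00 01:00:10:00

Each event names the source reel, the tracks affected (V for video, AA/V for audio plus
video), the transition (C for cut), and the in/out time codes in the source and in the edited
programme.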

Removing unused media from the project file


When you archive a completed project and the associated media, you will almost certainly
have a bunch of stuff that was not used in the finished product. This may include a lot of
unused source footage, and it may also include temporary render files. In some cases, it
will make sense to include all of this with the archived project, and in some cases you may
want to remove some of the files to save space.
There are a number of factors to balance here. These include the total size of the data, the
value of the project, the prospective need to repurpose the project or footage, the need to
revisit the project, and more.
For a high-value project that only takes up a couple hundred gigabytes, it probably makes
sense to archive everything. If the project has a limited audience and a ton of transcoded
footage, then it might make sense to trim off the unused transcoded footage when you
send the project to archive.
In the end, you’ll have to make the decision of what to cut according to your own valuation
of the project in the context of the archiving costs.

Format obsolescence
A major challenge with regard to the preservation of digital video files is the long-term
readability of file formats. This is especially true since there are so many competing
formats and codecs in the video editing market. The sheer number of compressed and raw
formats leads to many potential problems. We have seen rapid evolution with video codecs
as well as camera acquisition formats. Even if workflows exist to recover discontinued
formats, it often means re-compressing or reprocessing the assets, which can lead to a
loss in quality.


4.5 Assignments
4.5.1 Class Assignments

1. Describe the steps followed in NLE software for archiving a project.

4.6 Summing Up
In this chapter, we learnt about the stages of archiving a video editing project
through the Project Manager option, and about various archive file formats.

4.7 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth Publishing
2. Vasuki Belavadi, Video Production, Oxford University Press
3. Gerald Millerson, Television Production (13th edition), Focal Press, 1999


UNIT 4 BA (JMC) 209


___________________________________________________________________

UNIT 4: Multi Camera Editing


___________________________________________________________________

LESSON 1

Control Room and Panel: Use of Switcher, Chroma, Super-impositions 126

LESSON 2

Multi-camera Online Editing: Concept and Process 136

LESSON 3

Live Events: Recording, Editing and Telecasting 143

LESSON 4

Emerging Trends in Multi-camera Video Editing 148


Lesson 1 Control Room and Panel: Use of Switcher, Chroma, Super-impositions

_____________________________________________________

STRUCTURE
1.0 Objectives

1.1 Introduction

1.2 Control Room

1.3 Switcher

1.4 Chroma

1.5 Super-impositions

1.6 Class Assignments

1.6.1 Class Assignments

1.6.2 Home Assignments

1.7 Summing Up

1.8 Terminal questions

1.9 Suggested further readings


1. Control Room and Panel: Use of Switcher, Chroma, Super-impositions

In this lesson we will learn about the functioning, design and layout of the control room

___________________________________________________________________

1.0 Objectives
After going through this lesson, you should be able to:
 Understand the basic functioning of a television control room.
 Know how a control room is designed and learn about the personnel involved.
___________________________________________________________________
1.1 Introduction

The Production Control Room is the most vital part of a TV studio. The basic television
system is considerably expanded when doing a television production in the studio or in the
field, such as a telecast of a sporting event. The expanded system needs equipment and
procedures that allow for the selection of various picture and sound sources; for control
and monitoring of picture and sound quality; and for recording, playback and transmission.

1.2 Control Room - Concept


Multi-camera production is also called online production: the video is completed at the
time of production, and the production and post-production phases merge into one. TV
shows like KBC, Indian Idol or live newscasts from the studios are multi-camera
productions. Cameras placed at various vantage points cover and capture action from
different angles and distances, providing perspective and ambience. In contrast, in
single-camera production only one camera is used for shooting a subject or an event.
The Production Control Room, also called the PCR, is a separate room adjacent to the
studio floor, where all the production activities are coordinated. Here, the
producer/director, production assistant, vision mixer operator, audio technician, and other
production persons sit and take decisions for the live broadcast.

Today’s live TV shows and reality programming require the real-time interactivity and
ultra-fast turnaround first pioneered by live sports and news. Studio and on-site
productions are fast adopting open, tailored entertainment workflows for the flexibility they
provide.


The production control room or studio control room (SCR) is the place in a television studio
in which the composition of the outgoing program takes place.

The production control room is occasionally also called an SCR or a gallery – the latter
name comes from the original placement of the director on an ornately carved bridge
spanning the BBC's first studio at Alexandra Palace, which was once described as being
like a minstrels' gallery. Master control is the technical hub of a broadcast operation common
among most over-the-air television stations and television networks. Master control is
distinct from a PCR in television studios where the activities such as switching from
camera to camera are coordinated. A transmission control room (TCR) is usually smaller in
size and is a scaled-down version of central casting.

1.2.1 Facilities in a production control room include:

 A video monitor wall, with monitors for program, preview, VTRs, cameras, graphics
and other video sources. In some facilities, the monitor wall is a series of racks
containing physical television and computer monitors; in others, the monitor wall
has been replaced with a virtual monitor wall (sometimes called a "glass cockpit"),
one or more large video screens, each capable of displaying multiple sources in a
simulation of a monitor wall.
 A vision mixer, a large control panel used to select the multiple-camera setup and
other various sources to be recorded or seen on air and, in many cases,


in any video monitors on the set. The term "vision mixer" is primarily used in Europe, while
the term "video switcher" is usually used in North America.
 A professional audio mixing console and other audio equipment such as effects
devices.
 A character generator (CG), which creates the majority of the names and full digital
on-screen graphics that are inserted into the lower third portion of the television
screen
 Digital video effects, or DVE, for manipulation of video sources. In newer vision
mixers, the DVE is integrated into the vision mixer; older models without built-in DVE's can
often control external DVE devices, or an external DVE can be manually run by an
operator.
 A still store, or still frame, device for storage of graphics or other images. While the
name suggests that the device is only capable of storing still images, newer still stores can
store moving video clips and motion graphics.
 The technical director's station, with waveform monitors, vectorscopes and the
camera control units (CCU) or remote control panels for the CCUs.
 In some facilities, VTRs may also be located in the PCR, but are also often found in
the central apparatus room.
 Intercom and IFB equipment for communication with talent and television crew
 A signal generator to genlock all of the video equipment to a common reference
signal, such as colorburst.

A model studio layout



Studio Television System

1.3 Switcher

Also called a vision mixer. With a series of inputs to be combined, manipulated, and sent
out on the programme line, a vision mixer is used for the selection and proper sequencing
of images supplied by cameras and other input sources like titling and graphics machines
and VTRs.

The main concept of a vision mixer is the bus: a row of buttons, with each button
representing a video source. Pressing such a button puts that source's video signal on
the bus. Older video mixers had two equivalent buses called the A and B buses; such a
mixer is known as an A/B mixer. Most modern mixers, however, have one bus that is
always the programme bus, the second main bus being the preview bus. Both the preview
and programme buses usually have their own video monitor. A professional video switcher
can handle up to 20-30 inputs.

Another main feature of a vision mixer is the transition lever, which creates a transition
between two buses. Instead of moving the lever by hand, a button (commonly labelled
Mix) can be used, which performs the transition over a user-defined period of time.
Another button, Cut, directly swaps the buses without any transition. Common transitions
include dissolves and wipes.

The third bus on the vision mixer is the key bus. A mixer can actually have more than one
of these, but they usually share only one set of buttons. Here a signal can be selected for
keying into the programme. The image that will be seen in the programme is called the fill,
while the mask used to create the translucence of the key is called the source. Note that
instead of the key bus, other video sources can be selected for the fill signal, but the key
bus is usually the most convenient method for selecting a key fill. Usually a key is turned
on and off the same way a transition is. These three main buses together form the basic
mixer section called Programme/Preset or P/P. Bigger production mixers may have a
number of additional sections of this type, which are called Mix/Effects and are numbered.

Another keying stage is called the downstream keyer. It is mostly used for keying text or
graphics and has its own Cut and Mix buttons. After the downstream keyer is one last
stage that overrides any signal with black, usually called FTB or fade to black. The
switcher can do a host of other functions besides cuts and dissolves: it can be used for
text keying, chroma keying, etc.
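
The program/preview logic described above can be modelled in a few lines of code. A toy
Python sketch, purely conceptual (real switchers do this in dedicated hardware):

class Switcher:
    # Toy model of the program and preview buses of a vision mixer
    def __init__(self, sources):
        self.sources = sources        # e.g. ["CAM1", "CAM2", "VTR", "CG"]
        self.program = sources[0]     # what is on air now
        self.preview = sources[1]     # what goes on air next

    def cut(self):
        # Hard cut: swap the two buses instantly
        self.program, self.preview = self.preview, self.program

    def mix(self, duration_frames=25):
        # Auto transition: dissolve over a set time, then swap the buses
        for f in range(duration_frames):
            t = (f + 1) / duration_frames
            # a real mixer would output (1 - t) * program + t * preview here
        self.cut()

mixer = Switcher(["CAM1", "CAM2", "VTR"])
mixer.cut()    # CAM2 goes to air; CAM1 becomes the preview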

The main functionality of a video switcher is for creating a master output for real-time video
broadcast or recording. They can create different visual effects, ranging from simple mixes
and wipes to elaborate effects. They can also perform keying operations and help in
producing color signals. Video switchers work similarly to audio mixers. They make use of
multiple input sources, then apply the desired effects and produce one or more outputs.

Most video switchers are based around a program bus and a preview bus, each of which
has its own monitor. The program bus is the main output feed, whereas the preview bus is
for choosing and previewing the source that is about to go live. Using the preview bus is
optional; however, it is needed for any visual effects. A modern video switcher has
additional features like the ability to store complex mixer configurations, as well as serial
communications with the ability to make use of proprietary communications protocols. The
use of video switchers in post-production editing is now minimal due to the advent of
computer-based non-linear editing systems.

Capabilities and usage in TV production


Besides hard cuts (switching directly between two input signals), mixers can also
generate a variety of transitions, from simple dissolves to pattern wipes.
Additionally, most vision mixers can perform keying operations and generate color
signals (called mattes in this context). Most vision mixers are targeted at the
professional market, with newer analog models having component video
connections and digital ones using Serial Digital Interface (SDI). They are used in
live television with video tape recording (VTR) and video servers for linear video
editing, even though the use of vision mixers in video editing has been largely


supplanted by computer-based non-linear editing systems (NLE). Older
professional mixers worked with composite video, analog signal inputs. There are
still a number of consumer video switchers with composite video, S-Video or even
FireWire available. These are often used for VJing, presentations, and small
multi-camera productions.

Capabilities of Vision Mixer Operators

Vision Mixers edit programmes live (as they are being transmitted or recorded),
using a variety of transition methods, such as cuts, mixes, wipes and frame
manipulation. They join together images from various visual sources, including
cameras, video tape recorders (VTR Machines), graphic generators and digital
video effects (DVEs). They are the Director's ‘second pair of eyes’ in the gallery.

A vision mixer operator should:

o be able to work on a variety of different vision mixing desks and equipment


o have a good understanding of the language of the transmission
o be able to stay calm and react quickly and accurately under pressure
o have high levels of concentration and stamina
o have the discipline to respond to cues accurately according to predetermined
plans
o have the confidence to take the initiative and deal with unforeseen
circumstances or problems when they arise
o be able to multitask
o have excellent organisational abilities
o pay precise attention to detail
o have excellent verbal and written communication skills, showing diplomacy,
patience and sensitivity
o have effective team working skills
o have advanced IT skills
o have excellent visual and aural awareness, combined with artistic and
aesthetic abilities
o have excellent colour vision
o have good sense of rhythm in order to produce accurate and sensitive
transitions
o be able to read a musical score, or to bar count
o have knowledge of the requirements of the relevant health and safety
legislation and procedures

1.4 Chroma
Chroma keying is the process of electronically replacing the green background with a
picture. Green or blue colours provide good contrast to human skin tones, and can make
the talent stand out better than any other colour can. The background needs to be evenly
lit, and it is important to eliminate all shadows. The subject must be kept at a minimum
distance of 4 feet from the screen to avoid shadows falling on the screen. The vision mixer
has a chroma keying slicer; the slicer must be switched on and the talent superimposed on
the green screen. Superimposition stacks images on top of one another, while matting
completely replaces parts of one image with parts imported from another.
Superimpositions, or "supers", consist of two or more images combined into one through
electronic processing. There are three major uses for superimpositions: transitions,
multiple images and special effects.
The use of chroma keying has become quite popular in recent years, with many
applications of this video effect used for live streaming. Chroma keying is used to remove
the background of a video scene which is then composited over another scene or image.
The most obvious use of this technique is for weather broadcasts, where the presenter is
composited over the top of a weather radar image. In more recent times we’re seeing
video game streamers chroma keying themselves into their live streams.

In a nutshell, the chromakey process designates a single, very narrowly defined color in
one video image and electronically replaces that color with a second image, leaving the
rest of the picture untouched. The foreground and background pictures may be a live
camcorder feed, videotape, computer output, or any combination. To create a chromakey
effect: place a foreground subject in front of a designated chromakey color (usually bright
blue or green) and feed the picture through one channel of a digital switching system.
Feed a background picture through another. Command the system to replace the
designated color in the foreground picture with the same areas of the background.
Some inexpensive A/B-roll video mixers have chromakey capability, and better-quality
prosumer products offer considerable sophistication. The MX-Pro from Videonics
succeeds the MX1, Panasonic offers the WJ-AVE55, and stand-alone switchers are
available from SIMA and Edirol. For nonlinear editors, programs like Adobe Premiere
offer a wide range of chromakey and other superimposition methods.
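
The keying rule itself is easy to state: pixels close to the designated key colour take the
background, and all others keep the foreground. A schematic NumPy sketch (a
simplification; real keyers work on chroma channels and soften the matte edges, and the
key colour shown is just a typical green-screen value):

import numpy as np

def chroma_key(fg, bg, key_color=(0, 177, 64), tolerance=80):
    # fg, bg: (h, w, 3) uint8 frames; pixels within `tolerance`
    # of key_color are replaced by the corresponding background pixels
    dist = np.linalg.norm(fg.astype(float) - np.array(key_color), axis=-1)
    mask = (dist < tolerance)[..., np.newaxis]
    return np.where(mask, bg, fg).astype(np.uint8)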

Simple Chromakey Setups


The most important factor in chromakey backgrounds is uniformity of color, surface, and
lighting. The goal is to produce a backdrop with (very nearly) the same hue, value, and
intensity over every single square inch.

You can solve both surface and color problems in a single bound by buying a roll of
seamless chromakey blue studio backing paper from a photo supply house. To use the
paper, hang it from a simple pipe rack, unroll it, pull it forward until the wall-to-floor
transition is a smooth curve, and place your actors on it.

Alternatively, you can clear a stretch of wall in your basement or garage and paint it
chromakey blue. Take the trouble to tape, mud, spackle and sand as needed to produce
a truly smooth surface. The price of "official" chromakey paint would make your jaw hit
the floor, so fake it instead. Simply take the color swatch printed here to your friendly
neighborhood paint store (preferably one with computer matching) and brew up a gallon
of water-based paint.

For full-length shots the wall or paper roll should be at least 9x12 feet. There are two
reasons for this. First, you need a minimum of six inches on all borders to protect against
shooting off the backing. Second, place the foreground subject as far from that backing
as possible, so as not to spoil the effect by throwing the subject's shadow on the wall.
The greater the distance from subject to backing, the larger that backing must be.

If this is starting to sound like too much construction, try chromakey-lite instead: tack up
a 4x6-foot sheet of half-inch wallboard, paint that puppy blue, and start shooting.
However you've created it, you now have a flat, smooth, evenly colored surface. The
next step is to light it.

1.5 Super-impositions

Super-impositions are titles, video or graphics appearing over an existing video picture,
partially or completely hiding the areas they cover. They are produced by the character
generator (CG) operator, who creates titles/graphics appropriate to a production and
ensures their ‘supering’ on the video during production.

Super-impositions: graphics produced live


1.6 Assignments

1.6.1 Class Assignments

1. How does Production Control Room function?

1.6.2 Home Assignments

1. What is a vision mixer? Explain its functions.

2. What is a chroma key? What factors determine its quality?

1.7 Summing Up

In this chapter, we learnt about TV studio and function of Production Control Room.

1.8 Terminal questions

1. What do you understand by super-impositions? Explain with examples.


2. What is a studio television system? Explain with a diagram.

1.9 Suggested further readings


1. Herbert Zettl, Television Production Handbook, Thomson Wadsworth Publishing
2. Vasuki Belavadi, Video Production, Oxford University Press
3. Gerald Millerson, Television Production (13th edition), Focal Press, 1999


Lesson 2 Multi-camera Online Editing: Concept and Process


_____________________________________________________

STRUCTURE
2.0 Objectives

2.1 Introduction

2.2 Multi-Camera Shoot

2.3 Online Editing – Concept and Process

2.7 Assignments

2.7.1 Class Assignments

2.7.2 Home Assignments

2.8 Summing Up

2.9 Terminal questions

2.10 Suggested further readings


2. Multi-camera Online Editing: Concept and Process
In this lesson we will learn about the concept of multi-camera setup and online editing.

___________________________________________________________________

2.0 Objectives
After going through this lesson, you should be able to:
 Understand concept of multi-camera production and its benefits.
 Understand concept of online editing of live events.
___________________________________________________________________
2.1 Introduction
Capturing a live event can be a formidable task. Whether it's a wedding, concert, or
presentation, using a multiple camera setup can help ensure you get complete coverage,
without missing important moments. In this segment, we talk about the advantages and
disadvantages of a multi-cam setup, as well as gathering the information you need about
the event.

2.2 Multi-cam Shoot


The use of multiple video cameras to cover a scene goes back to the earliest days of
television; three cameras were used to broadcast The Queen's Messenger in 1928, the
first drama performed for television. The BBC routinely used multiple cameras for their live
television shows from 1936 onward.
Use of more than one camera to shoot a production is called Multi-cam production. Multi-
cam helps you record different angles simultaneously and shoot scenes much faster than
with a single camera. It requires great planning and lighting to make sure the footage from
each camera matches, but what you can't control on location you can usually correct in
post. From a two-camera interview to a 26-camera concert special, multicamera
production is being used more now than ever before. And with so many camera types,
codecs and editing workflows for filmmakers to choose from, it’s important to learn about
Multi-cam Production. The multiple-camera setup, multiple-camera mode of production,
multi-camera or simply multicam is a method of filmmaking and video production. Several
cameras—either film or professional video cameras—are employed on the set and
simultaneously record or broadcast a scene. It is often contrasted with single-camera
setup, which uses one camera.
Generally, the two outer cameras shoot close-up shots or "crosses" of the two most active
characters on the set at any given time, while the central camera or cameras shoot a wider
master shot to capture the overall action and establish the geography of the room. In this
way, multiple shots are obtained in a single take without having to start and stop the action.
This is more efficient for programs that are to be shown a short time after being shot as it
reduces the time spent in film or video editing. It is also a virtual necessity for regular, high-
output shows like daily soap operas. Apart from saving editing time, scenes may be shot

far more quickly, as there is no need for re-lighting and the set-up of alternative camera
angles for the scene to be shot again from a different angle. It also reduces the
complexity of tracking continuity issues that crop up when the scene is reshot from
different angles. It is an essential part of live television.

The multiple-camera method gives the director less control over each shot but is faster
and less expensive than a single-camera setup. In television, multiple-camera is
commonly used for sports programs, news programs, soap operas, talk shows, game
shows, and some sitcoms. Before the pre-filmed continuing series became the dominant
dramatic form on American television, the earliest anthology programs (see the Golden
Age of Television) utilized multiple camera methods.
Multiple cameras can take different shots of a live situation as the action unfolds
chronologically and is suitable for shows which require a live audience. For this reason,
multiple camera productions can be filmed or taped much faster than single camera.
Single camera productions are shot in takes and various setups with components of the
action repeated several times and out of sequence; the action is not enacted
chronologically so is unsuitable for viewing by a live audience.

2.2.1 Multi-Camera Production Techniques

Follow the 180-degree rule


Knowing where to put your cameras is one of the biggest challenges for most production
teams moving from a single-camera to a multi-camera setup. Here’s why: In a basketball
game, for example, Team A is going right to left; Team B is going left to right. If you place
two cameras on opposite sides of the court, the teams will be running in the opposite
direction every time you switch cameras—and your viewers will be left dazed and
confused.


All of your shots need to make sense as a whole. The 180-degree rule ensures that all of
your cameras are filming from a singular direction. Think of an imaginary line across the
center of the court. You can place your cameras anywhere behind that line on one side,
choosing a variety of angles to mix up the shots—but not on the other side. That way, all of
your cameras are strategically placed to ensure that directions remain consistent.
Make communication a priority

Multi-camera shooting techniques are complex enough that you need to make open
communication among your team a priority. Each of your camera angles should be
different enough that shot changes are clearly new and purposeful to anyone viewing the
live feed (two shots that are too close in perspective—both following the ball from a slightly
different angle, for example—might actually unsettle viewers).
There are two ways for a producer to manage multiple camera shots:
1. Meet with your camera operators before the event, telling each one what to shoot
throughout. It's somewhat rigid, but it could work if you're just starting out.
2. Use walkie-talkies and headsets, or a full broadcast communication system, to provide
directions on the fly. You'd still probably want to set some parameters ahead of time, but
open communication allows for a greater degree of creativity in a broadcast.

Have a clear vision for your multi-camera production


Graduating to a multi-camera setup for your live stream events is a natural step in a lot of
cases, but it’s only beneficial if you have a clear idea of how you’re going to use all of
those cameras. What are you hoping to achieve? Do you want to bring in specific shots
that you aren’t currently able to get? Once you get started, you’ll likely feel compelled to
splice in multiple camera angles when the action of the scene may not really require them.
As a result, you could end up with a disjointed broadcast. (A well-placed single camera is
better than a poorly orchestrated multi-camera production every time!)

So before you make the leap, talk it through with your production team or your platform
provider. Come up with some loose guidelines as to when and how often to switch—
sometimes it’s best to stick with a certain angle for five or even ten minutes. A producer’s
vision is key to pulling off most of these multi-camera shooting techniques successfully.

2.3 Online Editing


Live switching or editing is something we take for granted every time we watch a TV
program. If it's an episode of serial TV, we expect the camera angles to change while the
action appearing on screen continues in a fluid manner. We experience the action on
screen via a mix of wide shots, close-ups, 2-shots, reaction shots, and so on. Switching
angles is no less significant in our own professional production work, especially when
we're shooting live events for online delivery. And whether we're streaming video live, or
producing it for on-demand online viewing, live switching is either the only way to get the
job done, or—in many instances—simply the most efficient.

Basic Set up


Two (or more) cameras. It's important that the cameras be matched at least in
overall quality and format. Why? Because if you conduct a multi-camera shoot with
a fancy 3-chip DV camera and a single-chip Hi8, your viewers will be able to see a
noticeable difference in the picture quality when you switch from one camera to
another.

A switcher/SEG. This is the heart of the multi-camera shoot, the device that
switches the video signal from one camera to another. SEGs (Special Effects
Generators) also perform other duties like transition effects, chromakey/lumakey,
and triggering other devices like titlers, but their main function is to switch between
two or more video signals. Popular low-cost models include the Focus
Enhancements MX-4 and the Edirol V-4, but a search on eBay might turn up some
inexpensive older models by Panasonic and other manufacturers.
A video monitor. At least one; two, if possible--the primary one to view the
switcher's Monitor output, and another to view the signal as it appears when it goes
to tape, for quality control.
A VCR or Direct to Edit or other recording device. This will be where the final
signal gets recorded, the output of the switcher/SEG. Optimally, it should be a high-
quality device, but this isn't necessary. A spare camcorder will do nicely; a VCR
works well, too.
Crew. At least one camera operator per camera, plus a technical director to operate
the switcher: that is the bare-bones crew. If enough crew is available, you might
consider an overall director to lord it over the entire process. You may also want to
round up some lighting crew, an audio technician, grips...there's always plenty to be
done on a multi-camera shoot.
Wireless microphone headsets. These will make it easier for the director to
communicate with the camera operators quietly while the shoot is in progress.

Fade in/fade out. Most switchers have a way to set up black or another color on
one of the channels. As you start your shoot, begin with black as your main output
instead of one of the cameras, then use a dissolve effect to fade in to your first shot.
At the end of the performance, do the same thing, only in reverse, fading out the
last shot to a black screen.

Transition effects. Many switchers have dozens of built-in transitions, such as wipes,
slides, page turns, and other effects. Often, these are controlled by typing a number into
the switcher before switching to another camera. If the director knows the number of an
effect, he can tell the TD to "prepare 112," for example, at which point the technical
director types the number into the switcher. Then, when the order "Take Camera A" is
given, transition effect number 112 takes place.
Many video switchers have built-in audio mixers, which is a good thing, because when
you're recording the signal from two or more cameras into a separate VCR, you'll need
some way to organize your sound. The best way to do this is to use entirely separate
mikes that are independent of the camcorders themselves. Just plug the microphones into
the mixer, and send the output directly to the VCR you're using to record the live video. In
a live performance like the one described here, you might try using a stereo pair of mikes
to record the whole thing. If the performance audio is going through a PA, consider taking
the audio feed directly off the PA's mixer, if possible.
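
As a simple post-production analogue of the mixing just described, the sketch below,
assuming Python with NumPy and SciPy and two mono WAV recordings (the file names are
hypothetical placeholders), folds a stereo pair of mics into one two-channel track.

    # Combine two mono mic recordings into a single stereo track, a
    # software stand-in for routing a stereo pair through the mixer.
    import numpy as np
    from scipy.io import wavfile

    rate_l, left = wavfile.read('mic_left.wav')     # hypothetical names
    rate_r, right = wavfile.read('mic_right.wav')
    assert rate_l == rate_r, 'record both mics at one sample rate'

    n = min(len(left), len(right))          # trim to a common length
    stereo = np.column_stack((left[:n], right[:n]))
    wavfile.write('performance_stereo.wav', rate_l, stereo)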

The Success of Live Video Depends on Footage Quality


Quality is just as important as content: 90 percent of Facebook Live viewers reportedly
say that the way a video is edited and its visual quality matter more than anything else.
In-person events are growing in popularity as part of a well-executed content marketing
strategy, proving that video is far more than just a trend. From panel discussions and
conferences to product releases and presentations, the options are endless.

Here’s how to edit video from live events, simplified:

Live Edits and Post-Production Edits – First things first, you must know the difference
between a live edit and a post-production edit. You may need to hire another person to
perform a live edit, which creates a fast-paced edited sequence of what is being filmed
as it happens. A post-production edit, on the other hand, involves executing these tasks
once shooting has come to an end.
Project Directory – Once you decide which type of editing suits you, create a project
folder. This folder will eliminate confusion, because it can be used to organize and
access every single file, including photos, graphics, raw footage and sound (see the
sketch after this list).
Make a Copy – Memory cards and hard drives will really come in handy when you are
editing live events. Don't underestimate just how useful a backup copy can be. Dropbox,
OneDrive and other cloud storage services are just a few examples of tools that can be
used for storing copies of your files.
Clean it Up – Trimming is a big part of the editing process, and if you skip this step,
your live video might be too long and a little messy to watch. Keep it organized by
focusing on the sequence (beginning, middle and end), and give thought to your audience
when you are making the edits, because they are the people you need to please!
Video Editing Software – Your editing software is the weapon you will use to transform
live video into a flawless production. Select it based on its features, audio options, and
how well it pairs with your other equipment.
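
As a sketch of the Project Directory and Make a Copy steps above, assuming Python; the
folder layout is only one possible convention, and the card path is a hypothetical
placeholder:

    # Create a project directory and back up the camera card before any
    # editing begins. Folder names and the card path are illustrative.
    import shutil
    from pathlib import Path

    project = Path('live_event_project')
    for sub in ('raw_footage', 'audio', 'graphics', 'photos', 'exports'):
        (project / sub).mkdir(parents=True, exist_ok=True)

    # Duplicate the card into the project (dirs_exist_ok needs
    # Python 3.8+); a second copy should also go to cloud storage.
    card = Path('/Volumes/CAMERA_CARD')
    shutil.copytree(card, project / 'raw_footage' / 'card_backup',
                    dirs_exist_ok=True)

Run once at the start of the project, this gives every file a predictable home and an
untouched duplicate of the card.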
The power of live video cannot be ignored. Including video in your online marketing
strategy is essential because while content is vital for good search engine rankings, high-
quality video content has the potential to encourage positive buying decisions and compel
action.

2.4 Assignments

2.4.1 Class Assignments

1. What are the differences between single-camera and multi-camera productions?

2.4.2 Home Assignments

1. What equipment is involved in a multi-camera production?


2. What are the roles and responsibilities of the various personnel involved in a multi-
camera production?

2.5 Summing Up

In this chapter, we learnt about the multi-camera setup, the equipment it requires and its
benefits.

2.6 Terminal questions

1. What are transition effects?


2. What is online editing?

2.7 Suggested further readings


1. Zettl, H., Television Production Handbook. Thomson Wadsworth.
2. Belavadi, V., Video Production. Oxford University Press.
3. Millerson, G. (1999). Television Production, 13th edition. Oxford: Focal Press.


Lesson 3 Live Events: Recording, Editing and Telecasting


_____________________________________________________

STRUCTURE
3.0 Objectives

3.1 Introduction

3.2 Recording of Live Events

3.3 Editing and Telecasting


3.4 Assignments

3.4.1 Class Assignments

3.4.2 Home Assignments

3.5 Summing Up

3.6 Terminal questions

3.7 Suggested further readings


3. Live Events: Recording, Editing and Telecasting


In this lesson, we will learn about the concepts of recording and editing while telecasting
a video programme.

___________________________________________________________________

3.0 Objectives
After going through this lesson, you should be able to:
 Understand the concept of live recording and editing of an event.
___________________________________________________________________
3.1 Introduction
Recording a live event is interesting and challenging at the same time. A big team of
video professionals is required to do it, be it a cricket match, a musical event or a
political programme.

3.2 Recording of Live Events


A live event is an event which occurs in the present, for example the Prime Minister's
speech on Independence Day or a sporting event. The value of such events is highest at
the time of occurrence, although the recordings can be referred to for several purposes
later too.
Editing a live event requires a team of highly skilled professionals, because news
events, such as a political rally or an award function, can unfold very fast.
To determine whether or not you need to use multicam editing features, there are some
considerations worth making. Many professional video editing applications have built-in
multicam editing functions. At its core, any multi-layer video editing application can
edit multicam footage; however, it is typically cumbersome to do so without more
advanced feature sets. Even with the best tools, it can be tricky, so it is important
that you do the setup correctly. The best way to do that is to have a plan to sync all
your camcorders.

3.3 Editing and Telecasting


Multicam editing is very challenging. Having more than one camcorder at a shoot does
not by itself justify multicam editing; there are many reasons why a two-camera shoot
will be a classic A/B edit rather than a full-blown multicam production.

The editor's first consideration is what the B camera is doing. If the B camera is simply
shooting some cutaways for safety, a classic A/B edit will suffice. Also, if the B
camera is not rolling continuously, that's another sign that multicam video editing is
unnecessary. Multicam editing is best when both camcorders are running continuously,
and you accurately synchronize the action unfolding before the cameras. Typical
situations requiring multicam editing are live events, such as concerts, recitals, plays
and other live performances. Multicam editing is very similar to live editing, although
it's not truly live, as you have the flexibility to stop time whenever needed. That said,
a multicam edit will be a much easier edit if you treat it as if it were truly live.

Precision Control
When you are planning a multicam edit, put a great deal of thought into the setup of
the shoot. Ideally, a technical director will coordinate
camera operation with your camera operators, or the camera operators will shoot
the footage in a way that will allow every frame to be useable in the final production.
The worst-case scenario in a simple multicam edit (e.g., a two-camera edit) is that
both cameras have unusable footage at a particular time during a live performance.
This type of challenge slows down the edit tremendously. So, make sure that you
shoot with a plan before the multicam edit.

Sync Now
The last shooting consideration will also save the multicam editor a great deal of
time. Always plant a sync point at the beginning of the shoot by using a clapboard
or another mechanism, so that the editor can quickly and accurately match the
timing on both camera sources. A clapboard is the ideal tool, as it's both a visual
and aural cue for the editor. The technique is rather simple, and you can do it in a
matter of seconds. Someone holds the clapboard in front of the two cameras,
typically near where the talent will be. Both cameras should start rolling tape and
have a view of the clapboard in the camera. The person holding the clapboard will
typically say "Mark" and then clap the arm down on the clapboard. Done. The shot
has been marked, and the cameras will now continue to roll tape constantly until the
performance has ended or the tape has run out. In the edit bay, the editor will use
the clapboard as a sync point. The editor lines up both the visual cue of the
clapboard arm closing shut and the audio spike of the snap sound of the clapboard
between camera A and camera B. If you do it properly, you will have both cameras
accurately synchronized. If you don't have a clapboard, you can use the production
assistant's arms. Have the assistant hold out his arms, mimicking a clapboard, and
slap the top hand down onto the bottom hand. This will also give a visual cue and a
nice clap sound, which will be just as useful to the editor.

A challenge to this camera-sync technique is the venue. In many cases, you might
be shooting in a crowded and loud auditorium where both camcorders will not
record the clapping of a hand or clapboard. This problem is preventable if you plan
how your camcorders will acquire sound. If camera A is using house sound
and camera B is simply recording from the onboard mic, you can move camera B
very close to the clapboard for the sync. In this way, you can mark the shot near a
microphone so that camera A records the clap through house sound and, with
camera B nearby, both should register the audio spike from the clapboard.
Depending on your situation, you can use a variety of visual and audio cues, some
of which may occur naturally (e.g., a sound technician testing the microphone
before the event, or the sweeping second hand on a classroom clock), or you may
add some for your editor's convenience.


Sync Later

Each video-editing application has a different method for activating the multicam
editing feature, but, in each application, the editor must prepare the clips with every
frame synchronized. Unless your editing application can sync clips automatically from
their audio, you are stuck with doing the tweaks yourself. As the editor, your first
task in a seamless multicam edit is to sync all the
cameras. If you're lucky, the footage will have a clear sync point as we just
discussed. The goal at this point is to do a little maintenance on each video clip so
that you have every clip perfectly synchronized.

The captured footage will rarely ever be naturally synchronized. So, put camera A
and camera B video clips on separate video layers on the timeline. Take a look at
the audio layers and see if you can identify the waveform spike from the clapboard.
Also view the footage to find the clapboard. You can toggle each layer on and off so
you can see the clips on lower video layers as well. Once you've found the
clapboard cue, you'll need to move one clip forward or backward to match the audio
spikes in a single frame. Once you get these points close, play back a few seconds
of the recorded performance. If it's close but not precise, the audio will sound
doubled (i.e., like an echo). At this point, you'll want to magnify the timeline around
the area of the clapboard audio spike and then nudge the clips frame by frame. Test
the results again by playing back part of the recorded performance. If it's frame-
accurate, the audio will sound like a single source of audio. Otherwise, continue to
nudge the clips so that the spike occurs on the same frame. This takes a little
patience as you move from nudging to testing, over and over again.
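
If you would rather compute the offset than find it by trial and error, a
cross-correlation of the two audio tracks does the same job in one step. The sketch
below assumes Python with NumPy and SciPy and that each camera's audio has been exported
to a WAV file at the same sample rate (the file names are hypothetical).

    # Find the sync offset between two cameras by cross-correlating
    # their audio; the clapboard spike gives a sharp correlation peak.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import correlate

    rate_a, audio_a = wavfile.read('camera_a.wav')
    rate_b, audio_b = wavfile.read('camera_b.wav')
    assert rate_a == rate_b, 'export both tracks at one sample rate'

    # Use one channel if the export is stereo, then work in floats.
    a = (audio_a[:, 0] if audio_a.ndim > 1 else audio_a).astype(np.float64)
    b = (audio_b[:, 0] if audio_b.ndim > 1 else audio_b).astype(np.float64)

    corr = correlate(a, b, mode='full', method='fft')
    lag = int(np.argmax(corr)) - (len(b) - 1)

    # lag > 0 means the clap lands later in camera A's track, so trim
    # that many samples from the head of A (or slide B later) to align.
    print(f'offset: {lag} samples ({lag / rate_a:+.3f} s)')

The printed lag is exactly the nudge described above, computed in one pass instead of
frame by frame.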

Once you have all your clips synchronized, you'll trim the excess footage away, so
that each clip starts at the same point. Now your clips are synchronized and
trimmed to the starting point of your edit. From here, each video-editing application
will require a different set of commands to start the multicam edit. Some
applications require you to move each of these clips into a separate sequence, and
others require a couple of clicks on the timeline. Either way, the most important part
of the setup is complete.
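
As a sketch of that trimming step, assuming the moviepy 1.x library and that the clap has
already been located in each clip (the file names and timestamps are hypothetical):

    # Trim both clips so each starts exactly at the clapboard sync point
    # (moviepy 1.x API; file names and timestamps are placeholders).
    from moviepy.editor import VideoFileClip

    clip_a = VideoFileClip('camera_a.mp4')
    clip_b = VideoFileClip('camera_b.mp4')

    synced_a = clip_a.subclip(4.2)   # clap at 4.2 s into camera A
    synced_b = clip_b.subclip(1.7)   # clap at 1.7 s into camera B

    synced_a.write_videofile('camera_a_synced.mp4')
    synced_b.write_videofile('camera_b_synced.mp4')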

Time Saver
Once the synchronizing is finished, multicam editing can be a timesaver, especially
when you're editing long performances with many different viewing angles. For the
advanced multicam editor who is doing a great many live performances, consider
using cameras that have timecode jam-sync capabilities. This nifty feature allows
every camera to run on the same timecode, so that synchronizing footage can
happen in a much more logical fashion. It's so much easier, but this type of
camcorder is generally much more expensive. For the rest of us, plan ahead and
coordinate your shoot with precision.

A live broadcast, also called a live transmission, generally refers to various types of
media that are broadcast without a significant delay.
The most commonly seen example of live transmission is a news programme or news
broadcast.

The first regular television broadcasts started in 1937. Broadcasts can be classified
as "recorded" or "live". The former allows correcting errors, and removing
superfluous or undesired material, rearranging it, applying slow-motion and
repetitions, and other techniques to enhance the program. However, some live
events like sports television can include some of the aspects including slow-motion
clips of important goals/hits, etc., in between the live television telecast.

A broadcast may be distributed through several physical means. If it comes directly
from the studio at a single radio or television station, it is simply sent through
the studio/transmitter link to the transmitter, and from the antenna located on the
mast or tower out to the world. Programming may also
come through a communications satellite, played either live or recorded for later
transmission. Networks of stations may simulcast the same programming at the
same time, originally via microwave link, now usually by satellite. Distribution to
stations or networks may also be through physical media, such as magnetic tape,
compact disc (CD), DVD, and sometimes other formats. Usually these are included
in another broadcast, such as when electronic news gathering (ENG) returns a
story to the station for inclusion on a news programme.

The final leg of broadcast distribution is how the signal gets to the listener or viewer.

3.4 Assignments

3.4.1 Class Assignments

1. What do you understand by live event recording?

3.4.2 Home Assignments

1. What do you understand by live event editing?

3.5 Summing Up

In this chapter, we learnt about the recording of live events, multicam synchronization
and editing, and how a live broadcast is distributed.

3.6 Terminal questions

1. Describe the process of live recording.


2. What is a live event? What is required to edit video of a live event?

3.7 Suggested further readings


1. Zettl, H., Television Production Handbook. Thomson Wadsworth.
2. Belavadi, V., Video Production. Oxford University Press.
3. Millerson, G. (1999). Television Production, 13th edition. Oxford: Focal Press.


Lesson 4 Emerging Trends in Multi-camera Video Editing


_____________________________________________________

STRUCTURE
4.0 Objectives

4.1 Introduction

4.2 Emerging trends

4.3 Assignments

4.3.1 Class Assignments

4.3.2 Home Assignments

4.4 Summing Up

4.5 Terminal questions

4.6 Suggested further readings


4. Emerging Trends in Multi-camera Video Editing
___________________________________________________________________

In this lesson, we will learn about emerging trends in multi-camera video editing.
___________________________________________________________________

4.0 Objectives
After going through this lesson, you should be able to:
 Study emerging and upcoming trends in the industry that are vital for skill upgradation.
___________________________________________________________________
4.1 Introduction
Media is a growing field; it never remains stagnant and is highly dependent on
technology, which changes very fast in both hardware and software. Video editing is a
highly skill- and technology-driven field, so it is very important for a video editor to
remain updated and keep learning new things.

4.2 Emerging Trends


Video editing has come a long way. From the beginning of the 20th century, when film
as a medium began to develop, editing has meant two things at once: the joining of
shots and the manipulation of images. Many of the first films made were realist,
documentary films, such as the Lumière Brothers' "Arrival of a Train," which fascinated
audiences and allowed them to recognize themselves and the places and events around
them. The montage style developed as a counterpoint, where Soviet filmmakers such as
Eisenstein juxtaposed contrasting or even unrelated shots to create new meaning. Rather
than tell a linear story, montage sought to evoke emotion. Montage gave rise to the
formalist tendency, which began to see any form of footage as fodder for creating
illusions, magic tricks and fantastic worlds, a style begun by Georges Méliès and
continued by the Hollywood superhero 3D blockbusters of today.
Before the digital revolution, linear video editing was done with expensive video tape
recorders (VTRs) that did not guarantee quality and were cumbersome. Later inventions
such as the "flying erase-head" and vision mixers made the process easier. But the switch
from tape and celluloid to digital brought a fundamental change in the process. Gone were
the days of handling magnetic tapes, and with the arrival of software such as Final Cut
Pro and Adobe After Effects, digital video editing was here to stay.
Professional video and photo editing software with a multitude of features may become
available on smartphones soon, meaning users can shoot a film, edit it, add special effects
and title cards, and release it to YouTube, all from a smartphone. Before you worry that
your Avid Media Composer skills are wasted, don’t despair: the entertainment industry,
while flexible and able to adapt and absorb new trends like these, will still have need of
professional editors able to apply advanced skill and precision. Phones will not replace post-
production. Instead, digital editors can see this trend as an interesting opportunity to
plug into popular culture and play with emerging new media.
Apps such as Adobe Premiere Clip and WeVideo can be used to make home
videos or presentations. For professionals there are paid options, such as the
powerful Pinnacle Studio Pro developed by Corel, with more sophisticated features.
Live videos are already a thing — whether you're streaming a rock show live on your
Facebook timeline or showcasing a 30-second clip on Instagram. And live video editing
is going to be the next big thing. While it’s still at a nascent stage, with live editors
rushing to apply filters or emoji to recorded content or camera switching in TV, you’ll
soon see innovative developments in this space. The app Lumify, for instance (only
available on iOS), lets you edit video from the moment you start recording, for
example by changing the white balance or the focus and exposure. We're entering an age
where one records and edits simultaneously. Soon, more complex features will
become easily available, particularly ones designed for seamless video transitions that
ensure the audience does not notice the cuts between shots. You can expect
the video editing industry to boom, and as a digital editor you’ll be expected to know
the fundamentals of editing as well as the new trends. Even beyond editing digital
content with film or advertising companies, your skills can apply in many new fields
— from marketing strategies to social media promotion.

4.3 Assignments

4.3.1 Class Assignments

1. What is live streaming of video?

4.3.2 Home Assignments

1. What are new trends in terms of editing software?

4.4 Summing Up

In this chapter, we learnt about new emerging trends in video editing.

4.5 Terminal questions

1. What are new emerging trends in video editing?

4.6 Suggested further readings


1. Zettl, H., Television Production Handbook. Thomson Wadsworth.
2. Belavadi, V., Video Production. Oxford University Press.
3. Millerson, G. (1999). Television Production, 13th edition. Oxford: Focal Press.
