
Concepts of Digital Filmmaking & Visual Fx


Concepts of Digital Filmmaking & Visual Fx

© 2014 Aptech Limited

All rights reserved.

No part of this book may be reproduced or copied in any form or by any means – graphic, electronic, or
mechanical, including photocopying, recording, taping, or storing in an information retrieval system – or sent
or transferred without the prior written permission of the copyright owner, Aptech Limited.

All trademarks acknowledged.

APTECH LIMITED

Contact E-mail: ov-support@onlinevarsity.com

Edition 1 – 2014

Disclaimer: Arena Multimedia is a registered brand of Aptech Ltd.


Preface

Visual arts are now a part of everything from TV and film to live performance, multimedia, distance learning,
training, and corporate videos. Today's high-end, nonlinear digital production requires experience in cinematography,
writing, directing, producing, sound design, animation, special effects, and many other associated skills. Smaller
productions are often staffed by only a few people, so visual producers are finding themselves responsible for more
and more technical and artistic aspects of production. To enter this field you should love to entertain, tell a story,
organize things, and work with high-end technology. Our successful graduates are extremely creative, dedicated,
and organized, and possess excellent communication skills. Be prepared for long hours. If you are willing to learn
all the filmmaking concepts in detail, turn to page one, start reading, and work through each session.

The ARENA Design team has designed this course keeping in mind that motivation coupled with relevant training
and methodology can bring out the best. The team will be glad to receive your feedback, suggestions, and
recommendations for improvement of the book.

ARENA Design Team



Table of Contents

Introduction
    Introduction to Digital Filmmaking
    Process of Filmmaking
    Film Language
    Script/Screenplay/Shooting Script
    Previsualization
    Project Planning
    Summary
    Exercise

Understanding Video Technology
    Introduction
    Videotape Standards
    Video Compressions and Resolutions
    Choosing a Computer System
    Summary
    Exercise

Guidelines for Selecting a DV Camera
    Introduction
    Preparing Our Camera
    Types of Cameras
    Creative Cinematography
    Characterization Through Camera
    Summary
    Exercise

Basics of Lighting and Art Directing
    Introduction
    Lighting for Films
    Interior and Exterior Lighting
    Creative Lighting for Films
    Art Direction Basics
    Summary
    Exercise

Dealing with Audio
    Introduction
    Types of Microphones and Headphones
    Important Terminology and Aspects of Audiography
    Summary
    Exercise

Editing
    Introduction
    Editing Basics
    Editing Equipments
    Creating a Rough Cut
    Fine Cutting
    Transitions
    Sync Editing
    Summary
    Exercise

Color Corrections and Monitor Calibration
    Introduction
    Factors Affecting Color
    Importance of Monitor Calibration
    Summary
    Exercise

Compositing and Rotoscoping
    Introduction
    Titling
    Compositing
    Rotoscoping
    Visual Effects
    Summary
    Exercise

Types of Outputs
    Introduction
    Videotape Masters
    The Final Audio Mix
    Creating a Video CD (VCD)
    DVD Authoring
    Summary
    Exercise

Glossary

Answer Key

Colored Section


Iconography

Hands-on Project

Note

Exercise Answers


1 Introduction

Learning Outcomes
In this session, you will learn to:

• Explain digital filmmaking
• Explain the process of filmmaking
• Explain film language
• Explain script/screenplay/shooting script
• Explain previsualization
• Explain project planning

1.1 Introduction to Digital Filmmaking


The most modern of all the arts, cinema is fittingly the most dependent on science and technology. The twentieth
century’s dominant art form was born out of the nineteenth-century liking for machinery, movement, optical illusion
and public entertainment.

The key scientific principle on which films are based is the illusion of apparent motion, commonly known as
persistence of vision. This happens because of the ability of the retina to retain an image of an object for 1/20 to 1/5 of
a second after the image is removed from the field of vision. The brain has a perception threshold: images presented
faster than this threshold appear continuous, and film's speed of 24 frames per second exceeds it. Hence, in
films we see perceptual illusions generated by a machine.
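The arithmetic behind this claim is easy to check: at 24 frames per second, each frame occupies the screen for about 42 ms, which is shorter than even the low end of the 1/20-second (50 ms) retention window quoted above. A minimal sketch, using only the numbers from the text (the helper function name is our own):

```python
# Frame interval at common playback rates, compared with the
# 1/20 s to 1/5 s retinal retention window quoted in the text.

def frame_interval_ms(fps):
    """Duration one frame stays on screen, in milliseconds."""
    return 1000.0 / fps

retention_low_ms = 1000.0 / 20   # 50 ms  (1/20 s)
retention_high_ms = 1000.0 / 5   # 200 ms (1/5 s)

for fps in (24, 25, 30):
    interval = frame_interval_ms(fps)
    # Motion appears continuous when the gap between frames is
    # shorter than the retina's retention window.
    continuous = interval < retention_low_ms
    print(f"{fps} fps -> {interval:.1f} ms per frame, "
          f"appears continuous: {continuous}")
```

Running this shows that all three common frame rates fall comfortably inside the retention window, which is why projected film reads as continuous motion.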

Filmmaking is a vast topic and covers a lot of ground. As viewers, we all know many basic things related to
films, like acting, makeup, and direction. Apart from these, there are innumerable other things that are important while
creating any film. In this book we are going to cover many concepts related to digital filmmaking, and after reading
it we may be surprised to learn how much we already knew about filmmaking.

The phrase digital video or digital film is very simple to understand, but there are many aspects related to it. For example, a
QuickTime movie downloaded from the Web, an animation generated by a computer program, video transferred
from a home video camera onto our computer, or 35 mm motion picture film scanned into a high-end graphics
workstation with the help of special scanners all result in digital video.

Some people use the term digital video to refer to very specific pieces of equipment, such as a DV camera, while others use
digital video as a broader term that includes any type of digitized video or film. In the broadest definition, we can
say that digital filmmaking is a process in which our source video is digitized at some point so that it can be manipulated and
edited on a computer, and the final output, after these changes, is also produced through a computer.

The process of making a digital video film is similar to analog filmmaking, but the main difference between a
digital and an analog camera is that a digital camera digitizes video while we shoot and stores it on tape in a digital
format, while an analog camera stores video and audio on tape as analog waves. One question that comes to
mind here is: what does the word "digitizing" mean? Let us proceed further and understand this term in depth.

■ What is Digitizing?
We have learned above that a digital video camera digitizes video on board and simply stores the resulting
numbers on tape. As a result, when we transfer video from a DV camera into our computer, we do not technically
digitize the video, because the camera has already done that for us; we are just copying that data from the
tape onto our computer.

If we are using an analog video camera, however, the digitizing process happens inside our computer rather
than inside the camera. For this process we need special hardware, called a video capture board, that
can change the analog video signal from our camera into digital information and store it on our computer's hard
drive.

There are many advantages to using digital video over traditional video editing and production equipment,
the main ones being quality, function, and price.

While creating any film, we have to take the following aspects into consideration at various stages. Each
department of a film needs equal attention and good planning:
• Script
• Camera
• Direction
• Art Direction
• Audiography
• Acting
• Costume and Makeup
• Editing (Special Effects and Compositing, if required)

We will learn about each of these stages as we proceed. When the film is finished, prints are made,
which are copies of the film that can be shown in screenings. Finally, the movie is released or distributed. Movies
are increasingly distributed to a global marketplace, and issues of multiple languages, technologies, and venues
must be dealt with. Many decisions made during the movie's production affect what kind of distribution is
possible, and the filmmaker must try to anticipate distribution goals from the very start of the project.

1.2 Process of Filmmaking


To make any film project successful and profitable, it is very important to plan the project well and execute it in
the best way. The process of filmmaking is divided into three parts:
• Preproduction
• Production
• Postproduction
Each step has its own importance, and there is no way to make a good film by excluding any of them. To
understand the importance of these parts we need to understand each one in detail. Let us start with
preproduction.
• Preproduction: Preproduction is the time for planning and preparation. Fiction projects usually begin with
a script, while unscripted documentaries may start as a written proposal outlining what is to be filmed. In
this stage the filmmaker (or producer) draws up a budget of the estimated cost and arranges for financing.
During this period the crew and locations are selected as well. For some projects, sets may be built in a
studio, and casting is done to choose actors.
• Production: The production period essentially begins when the camera rolls. This is sometimes called the
start of principal photography. Since movie equipment is generally expensive, it is often rented for the
duration of production or only on the days it is needed.
• Postproduction: The postproduction period (often just called post) begins once the principal shooting
is completed. Editing is done to condense what are often hours of film or video footage into a watchable
movie.

Before getting into the actual preproduction work, we must know the terms related to films and understand
them in brief, so that it will be easier to follow the contents of this book. These are key terms that anyone
who wants to work in film media needs to know, as every film house uses them very frequently.

1.3 Film Language


If we have a basic film background or are regular film watchers, we will notice that most of these expressions are
familiar to us, and many times we have used them while describing a film shot or film production. Here we will
also find terms that are not so common among audiences but are very common in production houses. Let us
understand these terminologies.
• Shot: A sequence of frames is called a shot, which is commonly defined as the footage created from the
moment the camera is turned on until it is turned off.
• Setup: If the shot is the basic building block of the film, the setup is the basic component of a film's
production. A setup, also referred to as a camera position or simply an angle, is just what the name
suggests: arranging the camera to execute a shot. Beyond this basic definition, it is important to start
thinking of shots as accomplishing goals, dramatic or otherwise. A shot may show us a necessary piece
of information or help create an atmosphere, and hence does not have to be discussed in a purely narrative
(story) context.
• Proxemics: Proxemics, from proximity, refers to the distance between subject and camera. Essentially there
are three basic positions: long shot, medium shot, and close-up. There are many points in between and
outside these three, such as medium close-ups and extreme long shots, but these alternative positions can
be seen, and will be treated, as variations of the basic three.
• Long Shot: A long shot (LS) is any shot that includes the full human body or more. A shot that includes just
the person from head to toe is alternately called a full-body shot or a full shot. The full-body shot was much
used by filmmakers in the early years of filmmaking but is less common in recent years. Director Charlie Chaplin
shot almost exclusively in full-body shots. Refer to Figure 1.1.

Figure 1.1: Long shot


(Refer to the Colored Section for the colored image.)
• Extreme Long Shot: A long shot in which the subject is exceptionally far away from the camera is called
an extreme long shot. The long shot, and even more so the extreme long shot (ELS), can also be used to diminish
the subject. Presenting a lone figure in a vast landscape will make the figure appear to be overwhelmed by
the surroundings. Refer to Figure 1.2.

Figure 1.2: Extreme long shot. The human form is small, perhaps barely visible. The point of view is
extremely distant, as in aerial shots or other distant views.
(Refer to the Colored Section for the colored image.)
• Medium Shots: The medium shot (MS) represents how we interact with people in life. In this technique
we see a person from the waist up, which gives more detail than a full body. The medium shot literally puts the
viewer on equal footing with the subject being filmed. Refer to Figures 1.3a and 1.3b.

Figure 1.3a: Medium long shot (MLS). Most, if not all, of the actor's body is included, but less of the
surrounding space is visible than in the LS.
(Refer to the Colored Section for the colored image.)

Figure 1.3b: Medium shot (MS). The actor is framed from the thigh or waist up.
(Refer to the Colored Section for the colored image.)

• Close-up: The close-up (CU) is essentially a head shot, usually from the top shirt button up. Anything
closer than that is an extreme close-up (ECU). The medium close-up (MCU), which is from midchest up, is
also frequently employed. The CU is the shot that provides the greatest psychological identification with a
character, as well as amplifying details of actions. It is the point at which the viewer is forced to confront the
subject and form some kind of internal psychological connection. Close-ups are also used to show details.
The interrelationship of these shots can be particularly successful in creating suspense. Refer to Figures
1.4, 1.5, and 1.6.


Figure 1.4: Medium close-up (MCU). The lower chest of the actor is still visible.
(Refer to the Colored Section for the colored image.)

Figure 1.5: Close-up (CU). The actor is framed from his or her chest to just above his or her head.
(Refer to the Colored Section for the colored image.)

Figure 1.6: Extreme close-up (XCU). Any framing closer than a close-up is considered an XCU.
(Refer to the Colored Section for the colored image.)
• Angles: Although the term angle is often used on the set to designate a simple camera position (setup), it
also has a more limited meaning in terms of camera resources, that is, the height and orientation, or level,
of the camera in relation to the subject.
• Low-angle shot: A low-angle shot is one in which the camera is below the subject, angled upward. It has
a tendency to make characters or environments look threatening, powerful, or intimidating. A classic use of
this angle presents the viewer with low-angle shots of skyscrapers looming over an awed onlooker. The
low-angle shot can also give a distorted perspective, showing a world out of balance. This can produce a
sense of both disorientation and foreboding. Refer to Figure 1.7.

Figure 1.7: Low angle in which the camera is lower than the filmed object
(Refer to the Colored Section for the colored image.)
• High-angle shot: The high-angle shot is the opposite of the low-angle, and produces the opposite effects as
well. The camera is placed above the subject, pointing down. It tends to diminish a subject, making it look
intimidated or threatened. This is the conventional way of making characters look insignificant. Refer to
Figure 1.8.


Figure 1.8: High angle in which the camera is higher than the filmed object
(Refer to the Colored Section for the colored image.)
• Eye-level shots: These shots are taken with the camera on or near the eye level of the character or
subject being filmed. Eye-level shots tend to be neutral. Much like the medium shot, an eye-level shot puts
the viewer on equal footing with the subject being filmed. It has none of the diminishing or exaggerating
qualities of the high- and low-angle shots. Refer to Figure 1.9.

Figure 1.9: Eye-level shot


(Refer to the Colored Section for the colored image.)
• Bird's-eye shot: The bird's-eye shot, also called an overhead shot, is actually a variation of the high-angle
shot, but is so extreme that it has an effect all its own. This shot is from directly above and tends to have a
godlike, threatening point of view; people look antlike and insignificant. Refer to Figure 1.10.


Figure 1.10: Bird’s-eye view


(Refer to the Colored Section for the colored image.)
• Oblique shot: In an oblique shot, also called the Dutch angle, the camera is tilted laterally on the tripod so
it is no longer level with the horizon. The oblique shot takes the straight lines of the world and presents
them as diagonals. It is generally used to give an overwhelming sense of the world being unbalanced or out
of kilter. When a technique like the oblique angle is used, we must also recognize that employing a camera
that is level with the world carries certain aesthetic assumptions of its own. Refer to Figure 1.11.

Figure 1.11: Oblique shot or Dutch Angle


(Refer to the Colored Section for the colored image.)
• Pans and Tilts: Camera movement is a critical aspect of the way films work, both in terms of the kinetic
energy it can provide and its employment as a storytelling device. A pan is a shot in which the camera is
simply pivoted horizontally on a tripod. A tilt is similar to a pan except that its movement is vertical. Refer to
Figures 1.12a and 1.12b.


Figure 1.12a: Panning the camera
(Refer to the Colored Section for the colored image.)

Figure 1.12b: Tilting the camera
(Refer to the Colored Section for the colored image.)
• Focus Effects: Occasionally, beginners expect that, as a rule, everything should be in focus. This is,
indeed, the approach of many films. However, focus can be used to create many effects, from those so
subtle that they are rarely noticed by the viewer to others so big that they demand to be interpreted on a
thematic level. As should be expected, there are a number of approaches to using focus as an aesthetic
expression.
• Deep Focus: The approach that keeps all elements in the frame sharp is called deep focus. Refer to Figure
1.13.

Figure 1.13: In deep focus shots, all planes of the image are in focus. In one shot from a commercial
for GEICO insurance, deep focus enables the viewer to see a pregnant woman in a wheelchair in the
background while a young man speaks on the phone in the foreground.
(Refer to the Colored Section for the colored image.)
• Shallow Focus: Shallow focus is an approach in which only a single plane of the image is in sharp focus
while the rest is blurred. This can create a purposefully less realistic image, one that manipulates viewer
attention and separates different planes of action, both literally and figuratively. Refer to Figure 1.14.


Figure 1.14: The shallow focus shot


(Refer to the Colored Section for the colored image.)

1.4 Script/Screenplay/Shooting Script


1.4.1 Script
All film productions begin with a well-written script. It is very important to keep in mind that a bad script can kill a
good story, while a good script can make a story much more interesting and convey its message more dramatically. It is
a general rule that no matter what our finished product will be (feature film, documentary, corporate training video),
we have to start with a written script. A script is more than just a description of what people will say and do during
our shoot. It is also a description of what locations and props we will need, as well as what special effects, graphics,
and sets we will have to create. A finished script is required to start preproduction budgeting and scheduling, and
will serve as a reference point that we can follow all the way through postproduction.

Shooting a sequence requires careful attention to detail in order to ensure that all the elements of a scene
remain consistent from shot to shot. While filming a script, a script supervisor needs to take care of various types
of continuity:
• Action
• Props, Costume, and Makeup
• Historical
• Lighting
• Sound
• Performance
• Spatial

1.4.2 Screenplay


Once we are comfortable with the idea of telling our story visually, we need to work on the structure of the story.
Most screenplays, including Hollywood screenplays, typically have a three-act structure. In the first act, the
characters and major conflicts are introduced. In act two, the conflicts are complicated, and in the last act they are
resolved. In simple words, the story is broken down into a beginning, a middle, and an end. Breaking the story up
this way is helpful for script detailing.

The screenplay format makes it easy to quickly judge the budget and scheduling concerns in a script. Whatever
kind of project we are working on, writing a screenplay will make our production job much easier.

1.4.3 Screen Play Format


It is also known as a Shooting Script. A shooting script is a sequence of lines that uses a simple structure quite
effectively: a series of separate lines, frequently linked by repeated phrases and syntactical forms. Throughout, the
language is direct and spare.

The benefit of the screenplay format is that it makes it easier to determine the length and pacing of our script. If we
follow standard screenplay margins and layouts, our script will run at more or less one minute per page, though in
reality this can vary up to three to five minutes per page. In the traditional screenplay format the script is divided into
scenes defined by slug lines. A slug line tells whether the subsequent scene is interior or exterior, the location of the
scene, and whether the scene takes place during the day or at night. Let us have a look at an example of this kind of
screenplay.

PG 1

SCREEN BLACK

BHARAT (V.O.)                                  [voice-over indication of a character]

People were always asking me, did I know Niti Sharma.

FADE IN: INT. SOCIAL ROOM - TOP FLOOR OF HIGH-RISE - NIGHT          [slug line]

Niti has coffee cups in her hand for Bharat and herself. They struggle
intensely.

They are both around 30-35; Niti is good-looking, smart, with a pleasant
personality; Bharat is a tall guy with a serious look on his face. They are both
sipping coffee now and enjoying the rains. Suddenly they hear a gunshot from
a very close distance.

NITI                                                                   [dialog]

Hey what happened? What was that noise all about?

BHARAT

oor -- ee-ee -- uh -- aa-i --

Bharat falls down on the floor and his face is twisted with pain. NITI      [action]
sees a woman standing with a gun in her hand. The gun is still pointed at
Bharat and there is a lot of anger on her face.

BHARAT (still distorted)

(NITI tries to get the gun. The lady keeps control)

BHARAT (V.O.)

Ohh Sunita, why did you do it? What made you do such a crazy thing? It's all my
fault. I should have told Niti the truth. It's too late now. The pain is terrible, but the
most terrible thing is the way you have treated me. You did not even give
me a chance to explain.

BHARAT

Three minutes.

Though this example is of one scene, there are many pages and scenes, developed in parts and in simple language,
which an actor then speaks while performing suitable movements.


1.5 Previsualization
1.5.1 Storyboarding
Our main tool for preparing for our shoot is the storyboard. Storyboarding is the first step towards serious visualization.
Storyboards are comic-book-like representations of the images in our production. This step of previsualization
forces us to answer some questions that we may not have dealt with during script writing. From selecting
locations to editing the project, storyboarding gives us a clear picture of how our project is going to shape up. At
this stage we make practical decisions about how we will shoot our script.

1.5.2 Location Scouting


Though there is no specific rule for hunting down a location, there are some things we need to remember and some
questions to ask when looking for and selecting a location. Refer to Figure 1.15.

Figure 1.15: Different kinds of locations

Take into consideration the following points when scouting a location:


 Is the location easily available?
 Do we need both the inside and outside of the location?

Figure 1.16: It may not be possible to shoot a film in Kashmir
Figure 1.17: If indoor shots are not required, an outdoor shot of castles will turn out to be much cheaper
 Can we find a similar location in another town that is cheaper?
 Is the location at a practical distance from the rest of our shoot?
 Does the location need improvements or modification?
 Does the location have physical space required for the size of our shoot?
 Is the location too noisy?

Aptech Limited 11
Concepts of Digital Filmmaking & Visual Fx
 What kind of equipment will we need for the chosen location?
 Have we planned for light and sound?
 Is it possible to create a realistic-looking set in the studio itself?
It is very important to talk to our principal crew members when exploring locations. Refer to Figures 1.16 and 1.17.
They are the people who can tell us whether it will be affordable, whether it is possible to shoot in the chosen location,
and whether the dress designs and other clothing accessories will be suitable for that particular location. All these
issues will be sorted out when we actually start the shoot.

1.5.3 Production Design


Creating the look of our movie, from the look of the objects, sets and cast to the way the movie will be shot and cut, is the
process known as production design. A production designer helps to define the look of the movie and to explore how
all the elements at our disposal can be used to strengthen our visual image and our story. The production designer
is the head of the art department. On a big movie, the production designer is responsible for the overall vision, while
the art director implements that vision and manages the art department crew, which includes set designers, set
dressers, prop masters, modelers, set construction workers and production assistants.

Figure 1.18: A storyboard created on a computer (Refer to the Colored Section for the colored image.)
Figure 1.19: A storyboard created for a video (Refer to the Colored Section for the colored image.)

1.5.4 Computer and Video Storyboards


An increasingly important part of the production process is to create an animated storyboard that captures the
essence of character movement and camera angles. The previsualized animated storyboard allows directors and
other creative professionals to iteratively refine the elements of a production before committing to the expensive
production process.
 Computer Storyboards: Programs like Storyboard Artist and Storyboard Quick help us create
storyboards quickly and easily. Though drawings created with these programs do not look as good as
hand-drawn images, they make it easier to pick up objects and move them around, allowing for easier
revisions. Refer to Figure 1.18.
 Video Storyboards: These storyboards can be created with a video camera. To get the video output, we
gather our cast in a rehearsal phase, take the available props and set pieces if we can get them, and begin
to block and stage some scenes. Later we digitize this footage and make rough edits. Our video storyboard
is then ready. Refer to Figure 1.19.


1.6 Project Planning


Scheduling and budgeting are very closely related. Because they are so intertwined, there is a saying in the
entertainment industry: Good, Fast and Cheap: we can have only two of these things. Both need to be planned at
the preproduction stage itself, as the entire production is based on them.

1.6.1 Scheduling
Making a movie takes a lot of time and, unfortunately for the actors, most of that time is spent standing around while
the crew sets up the cameras and lights. To avoid this kind of wasted time, a good schedule can help us get everyone
to the right place at the right time, and ensure everyone knows what we are doing when we get there.

We can schedule our project in whichever way we want, but it is important that our schedule is simple for all the
cast and crew to understand. Let us understand one simple method:
 Create a grid and arrange our scenes: Create our schedule on a big grid, with characters and crew listed
in the rows, and scenes listed along the columns. Next, group all our scenes together according to their
locations. Each scene that needs to be shot at a given location should be planned in its own column. The
advantage of this schedule is that when we get to a particular location, we can shoot all the scenes that
take place at that location, so that we do not have to travel back and forth from one location to another.
Refer to Figure 1.20.

Figure 1.20: Creating a grid and arranging our scenes


 Mark characters and crew needed: Make markings on the grid to indicate which cast and crew members
are required for each scene at that particular location. Refer to Figure 1.21.


Figure 1.21: Marking characters and crew needed


 Rearrange as needed: Re-order the columns as our schedule changes. During actual shooting, some
scenes will take longer than estimated, or sometimes actors will take longer to memorize the dialogs. With
experience we will get better at estimating how long it will take to shoot a scene.

1.6.2 Budgeting
The budget is an accounting of what every aspect of the movie costs. Budgeting is a complicated and time-consuming
process that goes hand in hand with scheduling, as it is difficult to predict how long our shoot is going to take. Before
production begins, the estimated budget plays a key role in getting the project financed and under way. In
the movie-making process, the budget makes it clear to producers exactly where the money is going to be spent. For
feature films, some production managers and others specialize in reading scripts and drawing up budget estimates
based on the use of locations, the size of the cast, the entire unit's accommodation and food expenses, special effects
and the like. Preparing and managing the budget is generally one of the producer's key jobs.

There are many ways to organize a budget. Different types of productions (features, documentaries, corporate
projects, multimedia) call for different budget formats. There are several computer packages (such as Movie Magic
Budgeting) that help us lay out a budget and track expenditure. Refer to Figure 1.22.

Figure 1.22: Budgeting

It often helps to divide the budget chronologically, separating the cost of preproduction (research, casting, scouting,
planning), production (film or tape costs, equipment rental, travel and food for the crew) and postproduction (editing and
the various finishing costs like music, mixing, online costs or negative matching, titles, prints and dubs). Every
producer also has to set aside a budget for publicity of the film once it is finished and ready for release.

1.7 Summary
In this session, Introduction, you learned that:
 A movie begins with the kernel of an idea, an image, and some small piece of a story or character. From there,
the original concept is built into a working story treatment. The treatment is developed into a screenplay and
the screenplay is made into a movie.
 There are a number of steps any film project takes before it shows up at your local theatre. They are:
● Development
● Pre-production
● Production
● Post-production
● Marketing and distribution
 Once the development phase is complete, the project moves into pre-production. This is the preparatory or
primary planning stage.
 It is during pre-production that costumes and sets are designed, the remaining crew hired and locations
scouted and chosen. Shooting schedules are also developed and casting continues. Absolutely everything
that can be done prior to principal photography is considered.
 Production is the “active” process of making the film. The script is put to the camera, in studio and on location,
with actors, full costume, makeup, lights and sound. This is often referred to as “principal photography”.
This stage is expensive, time-consuming and requires extensive financial and logistical planning.
 Post is the compilation phase. The director and/or producer will work with the picture and sound editors
putting together the hundreds of shots and sounds taken during principal photography. This is when special
effects are added and shots are adjusted technically, aesthetically and for greater narrative impact. Dialogue
is fine-tuned. Soundtrack and audio effects are matched to the visual content. Slowly, the film goes from a
rough cut to the finished, polished, final version that audiences will see in theaters.
 Once the picture is completed and approved, it is marketed and distributed.

1.8 Exercise
1. The production period essentially begins with the camera roll.

a. True b. False

2. The postproduction period begins once the principal shooting is completed.

a. True b. False

3. Proxemics refers to the distance between the subject and the camera.

a. True b. False

4. The MCU is the shot that provides the greatest psychological identification with a character as well as amplifies
details of actions.


a. True b. False

5. A tilt is similar to a pan except that its movement is horizontal.

a. True b. False

6. A production designer helps to define the look of the movie and to explore how all the elements at our disposal
can be used to strengthen our visual image and our story.

a. True b. False


2 Understanding Video Technology

Learning Outcomes
In this session, you will learn to -

 Explain videotape standards


 Explain video compressions and resolutions
 Explain the process of choosing a computer system

2.1 Introduction
Film has proven its power to engage us for over 100 years; radio for over 70 years; television for 50; and computer
media, the new kid on the block, is proliferating faster than its predecessors. Media production demands writing and
rewriting, research, group effort, and clarity of thought. Creating a film offers a means for the people involved to talk
to whomever they consider an important audience, and to earn good money.

Digital video (DV) offers a number of advantages over analog. Digital media itself has particular quality characteristics.
Generally, digital media presents a crisper, cleaner product that can be viewed an unlimited number of times. The
largest advantage, however, is that it is a lossless medium: DV can be copied an unlimited number of times and
still retain its quality. This is not true of analog video, which suffers a "generation" effect each time it is transferred.

Digital Video also offers an extended shelf life. So long as the magnetic videotape does not significantly degrade,
video quality will look exactly the same today as years from now. Because of these advantages, users can import
movies into their computer, work on them and export them back to a DV camcorder for storage. Down the road,
they can capture them back to the computer without losing any quality. So in addition to offering good quality, DV
camcorders also act as a good video backup/storage device.

2.2 Videotape Standards


2.2.1 Video Basics
Just like audiotape, videotape works by manipulating magnetic particles that are suspended in a medium applied
to a thin piece of celluloid.

The videotapes that are used for TV are interlaced. In this method, our TV first displays the even-numbered scan
lines, called the even field, from top to bottom, and then goes back and fills in the remaining odd field. The interlacing
technique avoids image flicker on the TV screen. The speed at which the scan lines are drawn on the TV set is
called its frame rate. Different formats have different frame rates, depending on the standard supported by that
country. Refer to Figures 2.1a, 2.1b, 2.1c, and 2.1d.


Figure 2.1a: Original image


(Refer to the Colored Section for the colored image.)

Figure 2.1b: Video out 1 (odd field) (Refer to the Colored Section for the colored image.)
Figure 2.1c: Video out 2 (even field) (Refer to the Colored Section for the colored image.)

Figure 2.1d: Video out 1+2 final image


(Refer to the Colored Section for the colored image.)

Our computer monitor and many new digital television formats use progressive scan. This method draws each line
in order from top to bottom, which results in a cleaner-looking image and usually a higher resolution. Refer to
Figures 2.2a and 2.2b.

Figure 2.2a: Full resolution progressive video frame (Refer to the Colored Section for the colored image.)
Figure 2.2b: Half resolution interlaced video field (Refer to the Colored Section for the colored image.)
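The relationship between a progressive frame and its two interlaced fields can be illustrated with plain list slicing. The six-line "frame" below is a toy stand-in for real scan lines, and treating lines 0, 2, 4 as the even field is an assumption made for illustration; real standards have their own field-numbering conventions.

```python
# A progressive frame is drawn as one ordered set of scan lines; an interlaced
# frame is delivered as two half-resolution fields that are woven together.

frame = ["line0", "line1", "line2", "line3", "line4", "line5"]

even_field = frame[0::2]  # lines 0, 2, 4: one half-resolution field
odd_field = frame[1::2]   # lines 1, 3, 5: the other field

# De-interlacing by weaving: interleave the two fields to rebuild the frame.
rebuilt = [None] * len(frame)
rebuilt[0::2] = even_field
rebuilt[1::2] = odd_field

print(even_field)          # ['line0', 'line2', 'line4']
print(rebuilt == frame)    # True
```

Each field alone has only half the vertical resolution, which is why a single interlaced field (Figure 2.2b) looks coarser than the full progressive frame (Figure 2.2a).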

2.2.2 Video Formats and Standards


Though digital data is stored on the tapes in digital format, all video is converted to analog format when transmitted
to a TV, as a TV understands only those signals. So even when our deck and camera store information in digital
format, that information gets converted to analog signals before being broadcast over the air to TV sets, or when
sent through a wire to our television via a VCR. Refer to Figures 2.2a and 2.2b.

There are many different kinds of video signals, which can be divided into either television or computer types. The
format of television signals varies from country to country. In the United States and Japan, the NTSC format is used.
NTSC stands for National Television Systems Committee, which is the name of the organization that developed
the standard. In Europe, the PAL format is common. PAL (Phase Alternating Line), developed after NTSC, is an
improvement over it. SECAM is used in France, Russia and parts of Asia, and stands for séquentiel couleur à
mémoire (sequential color with memory). It should be noted that there are a total of about 15 different sub-formats
contained within these three general formats. Each of the formats is generally not compatible with the others.
Although they all utilize the same basic scanning system and represent color with a type of phase modulation, they
differ in specific scanning frequencies, number of scan lines, and color modulation techniques, among other things.

Digital TV or DTV has many different formats, specifications and names that fall under the "digital television" category.
DTV includes standards for broadcasting like ATV (Advanced Television), HDTV (High Definition Television) and
SDTV (Standard Definition Television). Refer to Table 2.1. It is possible to convert from one standard to another,
either through expensive tape-to-tape conversions or through special software.

Standard             NTSC        PAL         HDTV            Film
Frame rate           29.97       25          24 or 30        24
Fields               2 fields    2 fields    No fields       No fields
Vertical resolution  525         625         1080            N/A
Scanning method      Interlaced  Interlaced  Progressive     N/A
Aspect ratio         1.33:1      1.33:1      16:9 (1.78:1)   1.85:1 (for 35 mm)

Table 2.1: Formats under the digital television category

2.2.3 Video Timecode
Timecode is important because it identifies individual frames of our video and is used throughout any video editing
system. It’s rather like the page numbers of a book. Timecode systems assign a number to each frame of video
analogously to the way that film is manufactured with edge numbers to allow each frame to be uniquely identified.
Time data is coded in binary coded decimal (BCD) digits in the form HH:MM:SS:FF (Hours: Minutes: Seconds:
Frames), in the range 00:00:00:00 to 23:59:59:29 for 30 Hz frame rate systems. There are timecode variants for
systems having 24, 25, 29.97, and 30 frames per second.

Timecode is fundamental to videotape editing. An edit is denoted by its In Point (the timecode of the first frame to be
recorded) and its Out Point (the timecode of the first frame beyond the recording). An edited tape can be described
by the list of edits used to produce it. Each entry in such an edit decision list (EDL) contains the In and Out points of
the edited tape, and the in and out points of the source tape, along with tape reel number and/or other source and
transition identification.

An edited tape is invariably recorded with continuous "nonbroken" timecode. Nearly all editing equipment treats
the boundary between 23:59:59:29 and 00:00:00:00 as a timecode discontinuity; consequently, it is conventional to
start the main program segment on tape at the code 01:00:00:00.
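For non-drop-frame timecode at a whole-number frame rate, converting between an absolute frame count and HH:MM:SS:FF is simple arithmetic, and it makes both the 24-hour wraparound at 23:59:59:29 and the In/Out duration arithmetic of an edit easy to see. This is a generic sketch for a 30 Hz system; drop-frame 29.97 timecode needs extra correction logic not shown here.

```python
FPS = 30  # non-drop-frame, 30 Hz system

def frames_to_timecode(total_frames):
    """Convert an absolute frame count to HH:MM:SS:FF, wrapping at 24 hours."""
    total_frames %= 24 * 3600 * FPS           # 23:59:59:29 + 1 -> 00:00:00:00
    hh, rest = divmod(total_frames, 3600 * FPS)
    mm, rest = divmod(rest, 60 * FPS)
    ss, ff = divmod(rest, FPS)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def timecode_to_frames(tc):
    """Convert HH:MM:SS:FF back to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * FPS + ff

# The conventional program start:
start = timecode_to_frames("01:00:00:00")
print(frames_to_timecode(start))              # 01:00:00:00

# An edit's length is simply its Out point minus its In point, in frames:
duration = timecode_to_frames("01:00:10:15") - start
print(duration)                               # 315 frames (10.5 s at 30 fps)
```

The same subtraction is what an edit decision list (EDL) entry implies: each In/Out pair on the edited tape and on the source tape reduces to a span of frames.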

It is a frequent requirement to lock a television station’s master timecode generator to its master clock system,
providing accurate time of day information in the timecode signal for video recorders, station automation systems
and other broadcast equipment. Master clock systems provide a central, highly accurate source of time and
timecode for a broad range of time, date and timecode-referenced devices, from
clocks to computers. A choice of off-air and satellite synchronization references
and various device drivers ensures that the master clock can operate reliably
anywhere in the world, providing a very comprehensive and cost-effective
solution.

Figure 2.3: SMPTE Timecode

■ Types of Video Timecode

There are several standards of timecode, and a deck or camera can only read timecode that it has been
designed to understand.
 SMPTE: This timecode is the professional industry standard set up by the Society of Motion Picture and
Television Engineers (SMPTE). All professional equipment can read this timecode.
 DV timecode (DVTC): This timecode was developed for the DV format and is becoming increasingly popular.
 RCTC: Rewriteable Consumer Time Code is a format Sony developed for use with consumer Hi8 equipment,
but unlike DVTC, it is difficult to find support for RCTC outside our own editing system.

Timecode can be stored in a number of physical locations on videotape: on the address track, on an audio track,
or as window burn (or "vis" code), which is a visible counter superimposed over our video. Most digital
video formats use address track timecode, but in special situations we might need to resort to window burn or
audio track timecode. Refer to Figure 2.3.

■ Timecode for Film Sources


Film has different methods for keeping track of time: keycode and Aaton timecode.
 Keycode: This refers to a number printed on each frame of the film itself by the film lab. When the film-to-video
(or telecine) transfer is complete, we can have the keycode numbers added to our videotapes in the
form of window burn. But we must remember that this timecode gets superimposed permanently onto the
videotape, so we should use this format only when we eventually want to go back to film.
 Aaton Timecode: It is electronic timecode for film where electronic pulses are stored on the film itself and
can be added to telecine transfers as window burn timecode.

So far we have learnt about video formats; now it is essential to understand each format's specifications in order to
get the best possible quality throughout our project. While discussing video formats we will come across many
terms like compression, resolution, data rates and so on. Let us understand these terms in detail.

2.2.4 Videotape Features and Formats


Most tape formats are not longer than 90 minutes, so if the length of the project exceeds that, we have to use two
tapes for our master. When a project is broadcast on television, we will have to build commercial breaks into the
master edit, whereas for feature films we need to find a break in our movie to allow for a reel change. There are
various formats available in the market for video, broadcast and film. The formats listed below are the most
popular ones and are accepted worldwide for the best output.

 Video: DV, DVCAM or DVCPro are all great choices for video release as they give good quality to create a
master. Refer to Figure 2.4.

Figure 2.4: Types of videos


 Broadcast: Ideally speaking, BetaSP and Digital Betacam are the most popular broadcast standards, but
apart from those, DV, DVCAM or DVCPro can also be chosen if we are willing to do an expensive transfer
to DigiBeta.

 Film Projection: When we want the best quality in film projection, it is advisable to use DVCAM or
DigiBeta. Refer to Figures 2.5, 2.6, and 2.7.

Figure 2.5: BetaSP Broadcast
Figure 2.6: Digital Betacam
Figure 2.7: Sony DSR-2000P

2.3 Video Compressions and Resolutions


2.3.1 Compression
Video data compression is concerned with reducing the amount of data required to reproduce a digital video. In its
simplest form, video compression takes a series of video images and optimizes them to contain the least amount
of data possible. When video is compressed, data that is considered unneeded is removed from the video image.
Let's say a background image in the film that we have shot doesn't change for 10 seconds. There is no
need to continually redraw that unchanging portion of the image. So rather than including that portion 30 times per
second, it may only need to be rendered, say, 10 times per second. Over a period of 10 seconds, optimizing
that portion of the image saves us from rendering it in 200 frames.
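The arithmetic in the example above can be checked directly: a static region encoded 10 times per second instead of 30, over 10 seconds, skips 200 of the 300 renderings. The figures below are just the ones used in the text.

```python
# Savings from temporal redundancy: a static background region does not need
# to be re-encoded at the full frame rate.

frame_rate = 30       # frames per second
encode_rate = 10      # times per second the static region is actually rendered
duration = 10         # seconds the background stays unchanged

full_renderings = frame_rate * duration     # 300 frames would carry the region
actual_renderings = encode_rate * duration  # 100 renderings are enough
saved = full_renderings - actual_renderings

print(saved)  # 200 frames saved, matching the text
```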

Compression is a key component in facilitating the widespread use of digital video, which is currently held back by
the mismatch between the huge storage and transmission bandwidth requirements of video and the limited capacity
of existing computer systems and communications networks.

This compression can affect the quality of our image to a great extent, hence we need to be very careful while
choosing a compression format. Compression ratios for video can range anywhere from light compression of under
2:1 up to 10:1 or more. Most compression schemes work by reducing unnecessary, redundant color information in
each frame. Because our camera can capture more colors than our eyes can perceive, compression software can
afford to throw out the colors that our eyes are less sensitive to, resulting in less color data and hence a smaller
file size.

The human eye is more sensitive to differences between light and dark (the luminance values of an image) than to
differences in color. When a digital camera samples an image, the degree to which it samples each primary color
is called the Color Sampling Ratio.

DV formats use 4:1:1 color sampling, an amount of color reduction that can be visible to the viewer. In this ratio,
the first number stands for the luma signal and the other two numbers stand for the two color difference (chroma)
components.
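The data savings of 4:1:1 sampling can be worked out from the ratio itself: luma (Y) is kept at full resolution, while each chroma component is sampled once for every four luma samples, halving the data compared with uncompressed 4:4:4. The 720 x 480 frame size and 8-bit samples below are assumptions chosen to match common NTSC DV dimensions, purely for illustration.

```python
# Bytes per frame under different color sampling ratios, assuming 8-bit
# samples and a 720 x 480 frame.

width, height = 720, 480
luma_samples = width * height            # Y is sampled at every pixel

def bytes_per_frame(chroma_fraction):
    """chroma_fraction: chroma samples per luma sample, for EACH chroma channel."""
    chroma_samples = luma_samples * chroma_fraction
    return int(luma_samples + 2 * chroma_samples)   # Y + Cb + Cr

full_444 = bytes_per_frame(1.0)    # 4:4:4 -> 1,036,800 bytes per frame
dv_411 = bytes_per_frame(0.25)     # 4:1:1 -> 518,400 bytes, exactly half

print(full_444, dv_411, dv_411 / full_444)   # 1036800 518400 0.5
```

Halving the raw color data before any further encoding is one reason DV achieves its overall compression while keeping luminance detail intact.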

There are many different kinds of digital video. Each type is encoded in a format designed to compress the video
into a usable form. Some popular compression formats include DV, MPEG-1, MPEG-2, MPEG-4, QuickTime
Video, Sorenson Video, Cinepak, M-JPEG and AVI. Each format has its specialized use, and some are better than
others.

2.3.2 Resolution
Video resolution is measured by the number of vertical lines that fit across the image horizontally. This is also called
Horizontal Line Resolution.

The more scan lines we have, the better the vertical resolution possible. Since each video standard has a fixed
number of scan lines, vertical line resolution is the same (525 lines for NTSC, of which 485 are visible; 625 lines
for PAL, of which 575 are visible) for all TV sets of the same standard, except the poorest-quality, defective, or very
small sets.

Advertised resolution is the horizontal resolution, and it varies with the quality of the TV set. Thin upright lines placed
side by side, whose cross-section is a row of dots, are used to measure horizontal resolution. To resolve horizontal
detail, the electron beam changes to make dots (and dashes) of different colors along each scan line. The finer the
detail we need to reproduce, the more changes the electron beam must be able to make in a short span. Refer to
Figure 2.8.

Figure 2.8: Resolution

22 Aptech Limited
Understanding Video Technology
 Pixel shape
In the case of rectangular (non-square) pixels, which are usual in TV, one has to maintain the aspect ratio when
measuring objects, because the dimensions of the stored frame are not equal to the true dimensions; the resolutions
along the x- and y-axes are not the same. Square pixels are pixels with the same x and y dimensions.
Each monitor breaks images into tiny pixels that display the image. Computer monitor pixels are perfectly
square, while television pixels are rectangular. Titles and images created on a computer can appear
"stretched" when displayed on a television if this difference is not taken into account.
The use of square pixels solves such problems: picture elements are equally arrayed in both directions and
allow easy addressing, so the aspect ratio of the image does not require adjustment. This is needed in
image-processing tasks that require accurate image measurement. Refer to Figures 2.9 and 2.10.

Figure 2.9: Image shot with a rectangular-pixel camera
Figure 2.10: Distorted look of the same image on a square-pixel computer monitor

Many video formats use square pixels. This means that a screen with a resolution of 640 x 480 pixels will have an
aspect ratio of 4:3.
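The relationship can be sketched as: display aspect ratio = (horizontal pixels / vertical pixels) x pixel aspect ratio. With square pixels (pixel aspect ratio 1.0), a 640 x 480 image comes out to exactly 4:3. The non-square value used below for NTSC DV is an approximate, illustrative figure, not an exact standard value.

```python
# Display aspect ratio (DAR) from pixel dimensions and pixel aspect ratio (PAR).

def display_aspect(width_px, height_px, par=1.0):
    """DAR = (width / height) * PAR. PAR is 1.0 for square pixels."""
    return (width_px / height_px) * par

# Square pixels: 640 x 480 is exactly 4:3 (1.333...).
print(display_aspect(640, 480))

# Non-square pixels: 720 x 480 NTSC DV, with an assumed PAR of roughly 0.9,
# also lands near the 4:3 shape the viewer sees.
print(round(display_aspect(720, 480, 0.9), 3))
```

This is why a 720 x 480 frame, whose raw pixel grid is wider than 4:3, still fills a standard television correctly: the rectangular pixels squeeze it back to shape.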

After understanding videotape formats and features, we need to understand the various platforms, video
interfaces and computer systems that can be used to execute the project we are going to start. It is essential to
work on the right kind of computer, as we will be using it for editing, special effects and postproduction.

2.4 Choosing a Computer System


If we already have a computer available, we can upgrade it according to digital video requirements. To do
this we need to check the compatibility of the old components with the new ones. If we are buying a new system,
we must make sure that all the components are guaranteed to be compatible with each other.

Every operating system has its pros and cons, so we have to decide which OS is the most compatible
and suitable for the software packages we are going to use. Let us take a quick look at the most widely used
operating systems and their features from the digital video and film point of view.

2.4.1 Operating Systems


Refer to Table 2.2 to view the list of operating systems and their descriptions.

Aptech Limited 23
Concepts of Digital Filmmaking & Visual Fx

Operating Systems and Descriptions

Macintosh OS
 Widely used in postproduction houses
 Configuring various cameras, interfaces and storage options is much easier
 Built-in FireWire DV support
 Not all higher-end packages are compatible

Windows 95/98
 Less expensive, with good technical support
 Different hardware and software options available
 Low-level OS support for FireWire-based digitizing
 Windows 98 supports multiple monitors

Windows NT
 Used for higher-end 3D software and special effects
 Does not support multiple computer monitors, but third-party solutions are available

Table 2.2: List of operating systems and their descriptions

Apart from these, BeOS, Unix and Linux are also commonly used.

2.4.2 Video Interface


Video interfaces help us get video into and out of our computer. There are two ways to transfer video to a
computer: through an analog digitizing process or through a direct digital transfer. Let us understand these
two methods.

■ Digital Video Interface


As today’s DV formats already store the data into digital formats all we need to do is transfer that data
into the computer to edit it. This process is done with either Firewire or SDI interface. Both interfaces can
transfer video and audio, and provide device control through a single wire.
 Firewire
● High-speed connectivity to mass storage, cameras, scanners and new networks

● Network connectivity up to 64 devices possible

● We can get a transfer speed of up to 100 Mbps (12.5 MB per second)


 SDI
● Stands for Serial Digital Interface

● Works as interface between high-end digital video formats and non-linear editing systems

● We can get a transfer speed up to 200 mbps

● Supports much longer cables than FireWire's 4.5-meter limitation

■ Analog Digitizers
Not all cameras store data in digital formats. Some older formats like Hi8 and Betacam SP do not have
a digital interface, so we cannot plug those cameras into a FireWire port on our computer. Even some DV-format
hardware lacks digital I/O. In such cases we need an Analog Digitizer, which takes the analog signal from our
camcorder or deck and digitizes it using the computer. Targa 2000, Canopus Rex, Media 100 and Avid products
are among the most popular analog digitizers. Refer to Figure 2.11.

Figure 2.11: Analog digitizing system where special hardware installed in the system takes care of
compressing and decompressing video in real time

2.4.3 Important Features of Computers


As we are going to use the system for film and video work, we need a system that can handle a lot of load and has
good storage space. There are a lot of features that need to be up to the mark before starting the work. Good
hardware combined with the right software can provide us with a good, full-featured post-production facility.
Refer to Figure 2.12. Let us take a look at the important hardware components that we need for video editing:
 CPU: The CPU should be very powerful, as we are going to do various tasks like video editing, compositing
and running various special effects software and image editors.
 RAM: Video editing is more disk-intensive than RAM-intensive, but special effects demand a lot of RAM, so
128 MB to 256 MB of RAM is good enough.
 Storage: The storage we need depends on the kind of project we are working on. If we are shooting for a
two-hour final output video, we need storage that can accommodate at least a ten-hour shoot: in a normal
case, a two-hour final output comes from about six hours of footage, and we will need the remaining space
for other applications.
 Monitors: We need to opt for a video card and monitor that can work at higher resolution.

Figure 2.12: Velocity system with dual stream uncompressed video and a SCSI drive array
 Accessories: Apart from the above-mentioned hardware, we can always increase the workability of our
system by installing co-processor cards to accelerate complex effects filters, and by installing additional
special audio editing hardware to improve the audio output. Systems like ProTools provide multichannel,
high-quality digital recording with real-time effects and editing.
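The storage budgeting described above reduces to simple arithmetic. In the sketch below, the 13 GB-per-hour figure for MiniDV footage and the 3:1 shooting ratio are our assumptions for illustration, not fixed rules:

```python
DV_GB_PER_HOUR = 13  # approximate MiniDV data rate (25 Mbps video plus audio)

def project_storage_gb(final_hours, shooting_ratio=3, headroom_gb=20):
    """Rough disk budget: raw footage at the given shooting ratio,
    plus flat headroom for renders, audio and other applications."""
    footage_hours = final_hours * shooting_ratio
    return footage_hours * DV_GB_PER_HOUR + headroom_gb

# A two-hour final cut shot at a 3:1 ratio needs roughly 98 GB:
print(project_storage_gb(2))  # 98
```

Raising the shooting ratio or the headroom scales the estimate accordingly; a documentary shot at 10:1 would need several times this figure.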

2.5 Summary
In this session, Understanding Video Technology, you learned that:
 Digital media itself has particular quality characteristics. Generally, digital media present a crisper, cleaner
product that can be viewed an unlimited number of times. The largest advantage, however, is that it’s a
lossless medium.
 The interlacing technique avoids image flicker on the TV screen.
 There are many different kinds of video signals, which can be divided into either television or computer
types. The format of television signals varies from country to country.
 Timecode systems assign a number to each frame of video analogously to the way that film is manufactured
with edge numbers to allow each frame to be uniquely identified. Time data is coded in binary coded decimal
(BCD) digits in the form HH:MM:SS:FF (Hours: Minutes: Seconds: Frames), in the range 00:00:00:00 to
23:59:59:29 for 30 Hz frame rate systems.
 Most tape formats run no longer than 90 minutes; if the project is longer than that, we have to use two
tapes for our master.
 Compression is a key component in facilitating the widespread use of digital video, which is currently
prevented by the mismatch between the huge storage and transmission bandwidth requirements of video
and the limited capacity of existing computer systems and communications networks.
 Each monitor breaks images into tiny pixels that display the image. Computer monitors’ pixels are perfectly
square while television pixels are rectangular. Titles and images created on a computer can appear
“stretched” when displayed on a television if this difference is not taken into account.
 Every operating system has its pros and cons, so we have to decide which OS is the most
compatible and suitable for the software packages that we are going to use.
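The HH:MM:SS:FF timecode arithmetic summarized above can be sketched in a few lines of code. The 30 fps non-drop-frame count is our simplifying assumption; real SMPTE drop-frame timecode skips certain frame numbers to track the 29.97 fps NTSC rate.

```python
FPS = 30  # non-drop-frame count; real NTSC video runs at 29.97 fps

def frames_to_timecode(total_frames):
    """Convert an absolute frame count to HH:MM:SS:FF timecode."""
    frames = total_frames % FPS
    seconds = (total_frames // FPS) % 60
    minutes = (total_frames // (FPS * 60)) % 60
    hours = (total_frames // (FPS * 3600)) % 24
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

print(frames_to_timecode(0))       # 00:00:00:00
print(frames_to_timecode(107892))  # 00:59:56:12
```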

2.6 Exercise
1. The videotapes that are used for TV are interlaced.

a. True b. False

2. Our computer monitor and many new digital television formats use progressive scan.

a. True b. False

3. DV formats use 4:2:1 color sampling, which is an amount of color reduction that is considered visible to the
viewer.

a. True b. False

4. DTV includes standards for broadcasting like ATV (Advanced Television), HDTV (High Definition Television) and
SDTV (Standard Definition Television).

a. True b. False

5. When a digital camera is sampling an image, the degree to which it samples each primary color is called the Color
Sampling Ratio.

a. True b. False

6. This timecode is the professional industry standard set up by the Society of Motion Picture and Television
Entertainers (SMPTE).

a. True b. False

7. The speed at which the scan lines run on the TV set is called its frame rate.

a. True b. False


3 Guidelines for Selecting a DV Camera

Learning Outcomes
In this session, you will learn to -

 Describe the various features and functions of the cameras


 Explain the guidelines for evaluating and selecting the right camera for the project
 Use the camera

3.1 Introduction
In the previous session we discussed video formats and studied the differences
between analog video and digital video. Though new digital video formats deliver better
quality than most of the old analog formats, it doesn’t matter how good our format is if our
camera is not up to the mark and shoots bad images. As a feature filmmaker we should
be most concerned about image quality, particularly if we are planning on transferring to
film. Apart from choice of format, the camera that we choose will make a difference in the
image quality of our final footage.

In the past few years, a lot of improvements have taken place in DV cameras and the DV
filmmaker can now buy an affordable camera that rivals professional cameras of just a
few years ago. Before we start working with the actual camera it is essential to understand
the features and techniques of various cameras.

In this session we will understand various features and functions of the cameras and
guidelines for evaluating and selecting the right camera for our project. By the end of this
session we will know what all those buttons on our camera are and will have a good idea
of how to use them. After that, we’ll be ready to start shooting! Refer to Figure 3.1.

Figure 3.1: The standard controls found on any DV camera (Refer to the Colored Section for the colored image.)

3.2 Preparing Our Camera


When the camera is actually rolling we have to be able to move quickly to get the shots we want. The way
we can achieve that is by learning all the details of our camera and equipment. Let’s begin with the basics; we need
to know how to turn our camera on! Most cameras can run in different modes:
 Movie Mode - for shooting video
 VCR Mode - for watching the tape
 P.SCAN Mode - a special version of movie mode that shoots a different type of video
When we are ready for the shoot, we turn the camera on by selecting Movie Mode. When we need to review our
footage, dump footage in a computer or cue a tape, we put the camera into VCR mode.
Before we can turn on our camera, it has to have power. Batteries and plugs supply that power.

3.2.1 Power: Batteries and Plugs


When we are shooting in the studio, we usually power
the camera with an AC adapter that we can plug into
the wall, but when we are shooting outdoors we will use
rechargeable batteries. Batteries are simple to use; all
we have to do is charge them, use them until they run
down and recharge them. Running out of batteries in the
middle of the shoot is a real pain and can be avoided by
taking precautions. Refer to Figure 3.2. Here are some
tips to increase the life of our charged battery:
 Always make sure that the battery is completely
dead before we recharge it.
 When we first get any type of new rechargeable
battery, charge it completely.
 Don’t use the batteries if we don’t have to.
 Ejecting and loading a tape takes some extra power, so avoid it if not necessary.
 In cold weather our batteries might have a shorter life, so warm up the batteries by putting them
in our pocket or store them in a warm place.

Figure 3.2: Batteries used in DV camera

3.2.2 Videotape
We can load the videotape into the camera and shoot on it. Though the process is simple, we need to take care
of a few things, as listed below. Refer to Figure 3.3.
 Be careful when loading a DV tape and use a tape
transport mechanism for it.
 Never touch the inside of the cassette.

3.2.3 Controls
On every camera there are many buttons and controls. To shoot good footage we must know how these controls
work. Refer to Figure 3.4.

Figure 3.3: DV tapes and tape transport
 Zoom: With the zoom controls we can zoom in to make the image look larger and closer, whereas we use
the zoom out controls to make objects look farther and smaller. Some cameras have a digital zoom feature
that creates a fake zoom effect: it magnifies the image by large amounts but usually looks grainy and
noisy with bleeding colors, so it’s best to keep it off.
 Start and Stop: These controls start and stop the recording.


Figure 3.4: Zoom and Focus controls

3.2.4 Manual Controls


Nowadays most video cameras offer fully automatic operation, yet even the best camera is not perfect and we
will need to adjust some parameters manually. Fortunately, most cameras have special manual options
that give us control over our shooting parameters. Controls like the camera’s focus, aperture, shutter speed,
audio levels and white balance are essential for good shooting, and we can set each of them manually:
 Focus Controls: Adjusting focus manually becomes necessary when we want to create very unusual
framing. For instance, if we are shooting an outdoor scene of mountains and want to focus on a particular
waterfall at the top of the mountain, an automatic camera might focus on the surrounding trees instead.
For these kinds of shots we will need to adjust our focus manually. Refer to Figures 3.5a and 3.5b.

Figure 3.5a: Manual focus allowed us to focus on the hand in this image. A large aperture kept the depth-of-field shallow so the background is out of focus (Refer to the Colored Section for the colored image.)
Figure 3.5b: With just a slight shift in focus, the head has been made sharp and the hand in the foreground soft (Refer to the Colored Section for the colored image.)
 Aperture: It is also called Iris or Exposure, and it controls how bright our image is. Sometimes when we require
bright light in the scene, the automatic camera may tone down the brightness, thinking the scene is too
bright. Being able to control the aperture lets us control depth of field - the area from foreground to background
that’s sharp in the image. This lets us either throw the background out of focus or keep both the background

and the foreground sharp. Refer to Figures 3.6a and 3.6b.

Figure 3.6a: A small aperture gives us greater depth of field (Refer to the Colored Section for the colored image.)
Figure 3.6b: A large aperture lets us throw the background out of focus (Refer to the Colored Section for the colored image.)
 Shutter Speed: Shutter speed is another way the camera controls how bright the image is.
A faster speed lets in less light, while a slower speed lets in more. Most cameras automatically select a
shutter speed based on their aperture setting, a process called Shutter Priority, but with manual adjustments
we can often get a better effect. Refer to Figures 3.7a and 3.7b.

Figure 3.7a: Fast shutter speed can freeze a hummingbird in flight. However, it would have taken an even faster shutter speed to freeze the wings
Figure 3.7b: A slow shutter speed makes his moving arm paint a blurred image

 F-Stop: The f-stop ring controls a small, delicate, transparent part (the diaphragm) in the lens. This diaphragm is
used to regulate the amount of light reaching the film plane. In its most simplified sense, the f-stop is used
to obtain a usable exposure. If the f-stop is not set properly, the film will be over- or under-exposed. The f-stop
ring has a series of numbers. The thing we need to remember is that the smallest number represents
the widest opening, or maximum aperture, and the highest number represents the smallest opening, or
minimum aperture. So lower f-stops let in more light and higher f-stops let in less light. Hence, choosing the
right f-stop becomes essential when we want a picture with normal exposure. For example, if the light
is too bright the stop must be small so too much light does not reach the film, whereas in low light the
stop must be large. Refer to Figures 3.8a and 3.8b.


Figure 3.8a: F-stop aperture sizes

Figure 3.8b: The left image indicates the light exposure with an f5.6 setting, whereas the right
image indicates an f8 setting
F-stops and shutter speeds work together to control the amount of light that reaches our film. An f-stop
determines the size of the hole the light passes through, and our shutter speed, the amount of time that
light is allowed to reach the film. With the right f-stop and the right shutter speed our picture will be correctly
exposed; not too light and not too dark. The light our lens sees is the light that’s there. There’s only so
much of it and we can’t change that. The same thing goes for our film. It’s designed to give us a correctly
exposed picture if it’s exposed to a certain specific amount of light; no more, no less. We can’t change
that either. In other words, we have a constant at each
end, and that’s why the two adjustments in the middle
- f-stop and shutter speed - must remain in a constant
fixed relationship. If we cut down on the amount of light
by changing the f-stop to a smaller aperture we have
to compensate by increasing the amount of light by
changing to a slower shutter speed.
 Audio: A camera’s audio facilities are also worth being concerned about. The microphones included on the
camera often pick up camera motor noise, as well as the sound of our hands as we move the camera.
Manual audio gain controls let us adjust or attenuate
the audio signal coming into the camera, making it easy
to boost quiet voices or lower the level of a roaring car
engine. Refer to Figure 3.9.

Figure 3.9: SONY DSR-PD150 DVCAM Camcorder

 White Balance: The concept on which white balancing is based is very simple. As we all know, light
is made up of many different colors. When we see a rainbow, what we are seeing is normal sunlight
split into all of its separate colors by drops of rain. The concept behind white balancing is that if the camera
knows what white looks like under the current light, then it will know what every other color is supposed to
look like, since white light is composed of every other color. Like other automatic features, automatic
white balance can get confused in certain situations: the camera assumes that the brightest object in
the scene is white, when it may actually be another color like green or orange, which will leave the entire scene
imbalanced. The same problem also occurs when we have mixed lighting. For these reasons, manual
white balancing becomes important. Refer to Figures 3.10a, 3.10b, and 3.10c.

Figure 3.10a: Auto White Balance
Figure 3.10b: Manual White Balance, Sunny look
Figure 3.10c: Manual White Balance, Cloudy look
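The f-stop/shutter-speed trade-off described earlier can be checked numerically: exposure is proportional to shutter time divided by the square of the f-number, so opening up one full stop (for example f8 to f5.6) is balanced by halving the shutter time. A small sketch (the 5% tolerance is our choice, since marked f-stops like 5.6 are rounded values):

```python
def relative_exposure(f_number, shutter_seconds):
    """Relative exposure: proportional to time / f_number**2."""
    return shutter_seconds / (f_number ** 2)

# f5.6 at 1/60 s and f8 at 1/30 s admit almost the same light (one stop apart):
a = relative_exposure(5.6, 1 / 60)
b = relative_exposure(8.0, 1 / 30)
print(abs(a - b) / a < 0.05)  # True
```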

3.2.5 Widescreen
In the past, video cameras used vacuum tubes for capturing images. Now video cameras use special imaging chips
called CCDs, or charge-coupled devices. CCD-based cameras use either a single CCD to capture a full-color image,
or three chips to capture separate red, green and blue data, which is then assembled into a color image. Two factors
that contribute the most to our camera’s image quality are the camera’s lens and the number of chips the camera
uses to create an image.

Widescreen TV uses an aspect ratio of 16:9, compared with the common TV aspect ratio of 4:3. At a ratio of 4:3,
the width of the screen is 33% more than the height, whereas at a ratio of 16:9 it is about 78% more. The 16:9 aspect ratio
corresponds to the normal visual field of human beings. It therefore feels more natural to watch, and it is for this
reason that most motion pictures are made in this format. Refer to Figure 3.11.

Figure 3.11: Comparison between normal TV and widescreen
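These ratio comparisons reduce to simple arithmetic; a quick check (rounding to whole percentages):

```python
def width_excess_pct(w, h):
    """How much wider than tall a w:h frame is, as a percentage."""
    return (w / h - 1) * 100

print(round(width_excess_pct(4, 3)))   # 33 -> a 4:3 frame is 33% wider than tall
print(round(width_excess_pct(16, 9)))  # 78 -> a 16:9 frame is ~78% wider than tall
```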

Movies are typically shot using formats that are very wide, much wider than they are tall, whereas television is
almost square. When it comes to transferring a movie to make it fit onto a TV, a studio has three options:
 Letterbox: Putting black bars above and below the image, changing the shape of the visible picture into
something more rectangular.
 Pan & Scan: Crop the edges to make the movie fit our TV. Due to the cropping we often see scenes in
videotape movies where someone’s head is chopped off or when one person is talking to another but that
person is half visible on the screen. Refer to Figure 3.12.

Figure 3.12: Letterbox (left) and Pan & Scan (right) as ways to put a 16:9 format movie or program on a 4:3
broadcast
 Movie Compress: The 16:9 image is squeezed proportionately, horizontally only, so people appear thin
on a 4:3 TV set. Widescreen TV users can choose the ‘Widescreen’ mode, and the image is stretched
proportionately to fill the 16:9 screen. Users of 4:3 sets can choose ‘Movie Compress’: the 4:3 image is then
squeezed vertically to restore the original 16:9 proportion. On a 4:3 set, the viewer will then see a letterbox.

Most filmmakers find wide formats much more appealing than the squarish TV format. With this format
we can capture wide landscapes and create an epic, sprawling vista on the screen. It’s important to note that
the Widescreen mode of a camera does not actually shoot a wider image; it crops off the top and bottom to make
the image into a wider shape.

Note

If our editing software does not support the widescreen aspect ratio, avoid using widescreen mode.
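The crop implied by such a widescreen mode can be quantified with a little aspect-ratio arithmetic. The sketch below assumes a 4:3 source cropped to a 16:9 shape at full width; the 480-line count is a typical NTSC-style frame, used here only for illustration (the ratios are display aspect ratios, not pixel counts, since TV pixels are rectangular):

```python
from fractions import Fraction

def lines_kept(total_lines, src=Fraction(4, 3), dst=Fraction(16, 9)):
    """Scan lines kept when cropping a src-aspect picture to dst aspect
    at full width (ratios are display aspect ratios, not pixel counts)."""
    return int(total_lines * src / dst)

kept = lines_kept(480)
print(kept, 480 - kept)  # 360 120 -> a quarter of the picture is cropped away
```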

3.2.6 Lenses
Lens perspective refers to the way lenses represent space. Different kinds of lenses have different effects on the
way we perceive depth and dimensionality within an image. This aspect of lenses has a great impact on the process
of choosing a lens. Directors and cinematographers generally do not choose lenses for how close they bring the
viewer to the subject. That can be controlled by simple camera placement. Lenses are usually chosen for how they
represent space.

Just as a film camera uses a lens to focus light onto a piece of film, a digital video camera uses a lens to focus light
onto the imaging window of a CCD. The quality of the lens of our video camera can mean the difference
between sharp images with good color and soft images with muddy colors. Refer to Figure 3.13.

Figure 3.13: Interchangeable lens system

While evaluating any lens we must take into consideration the following points:
 The lens should not produce images that are noticeably brighter at the middle than at the edges.
 While zooming in and out with the lens, the image should not appear darker or look blown up.
 At the wide angles the images should not get distorted at the corner edges.
 The lens should focus equally on all wavelengths of light.

■ Wide-angle lenses
The defining characteristic of wide-angle lenses, also called short lenses, is that they elongate space, that is,
objects appear more distant from each other than they actually are. Wide-angle lenses also bend lines in the
composition outward. Extreme wide-angle lenses bend corners and give almost funhouse-mirror distortion to
objects or people being filmed. They are rarely used for portraiture because they balloon people’s faces and
make them look heavy or freakish. Refer to Figures 3.14a, 3.14b, and 3.14c.

Figure 3.14a: Wide angle
Figure 3.14b: Extra wide angle
Figure 3.14c: Ultra wide angle

■ Telephoto lenses
Telephoto lenses do the opposite of wide-angle lenses. Rather than elongating perspective, telephoto squashes
perspective, that is, makes things look closer together. This is a very common effect, and, once pointed out,
many examples come to mind. In films set in big cities, directors and cinematographers like to use telephoto
lenses to shoot crowd shots on streets. It makes people look jammed together and cramped, exaggerating the
effect of people packed together like rats. Refer to Figures 3.15a, 3.15b, and 3.15c.


Figure 3.15a: This picture was shot with a wide-angle lens; the distance between the front and the rear of the piano is elongated, giving the image an illusion of great depth (Refer to the Colored Section for the colored image.)
Figure 3.15b: The same piano as in the above figure has been shot with a telephoto lens. Compare how the distance between the front and the rear of the piano appears (Refer to the Colored Section for the colored image.)
Figure 3.15c: Telephoto lenses are widely used in sports coverage, to get a “closer” view of the action (Refer to the Colored Section for the colored image.)

■ Zoom Lenses
The term zoom as it is meant here has existed long enough that most people understand what it means.
The zoom lens includes all of the focal lengths discussed previously. The zoom effect is created by movable
elements in the lens that either bring the subject closer to or push it farther away from a stationary camera. This
allows shots, like some camera movements, to go from very random information to very specific information and
vice versa. Refer to Figures 3.16a and 3.16b.

Figure 3.16a: Image shot with a zoomed-in angle
Figure 3.16b: Image shot with a zoomed-out angle

3.3 Types of Cameras


In the previous section we had a look at the main features of the camera. Let us proceed further and look at
the most popular cameras used in the digital filmmaking industry. All of the cameras listed below use the MiniDV
format and are well suited to feature film or documentary production that will be delivered on video or transferred
to film.

3.3.1 Three-Chip Camera
 Sony VX-1000: This camera has excellent image quality, manual controls and a comfortable feel, but its
biggest limitation is the lack of an LCD viewfinder. It is the most popular camera of them all. Refer to Figure 3.17.

Figure 3.17: Sony VX-1000


 Canon XL-1: Its interchangeable lenses give it more shooting flexibility than any of its competitors. It also
has excellent image quality, a progressive scan mode and manual controls, but it does not have an LCD viewfinder,
and overall the camera is hard to hand-hold because of the front-heavy weight of the lens.
 Canon GL-1: This camera is a good low-price option compared to its high-end cousins. It offers excellent image
quality due to its fluorite zoom lens, an LCD viewfinder, full manual controls and a progressive scan mode, but
lacks manual gain controls.
 Sony TRV-900: Though its lens is not up to the quality of the GL-1’s, it still provides very good image quality.
Apart from that, it has good manual controls, including manual audio controls. Unfortunately, it lacks manual
sharpness control.
 JVC GY-DV 500: Currently the most full-featured, truly professional MiniDV camera available. Many high-end
features are included; the only drawback is that it is very expensive. Refer to Figure 3.18.

Figure 3.18: JVC GY-DV 500

3.3.2 Single Chip Camera


 Sony DCR-PC100: A tiny DV camera, very easy to handle and among the best single-chip cameras to
shoot with; it also delivers good image quality, manual controls and S-video in and out.
 Canon Elura: Better image quality compared to the PC100, optical image stabilization, S-video in and out and
a true progressive scan mode. As a disadvantage, it does not have a built-in headphone jack. Refer
to Figure 3.19.

Figure 3.19: Canon Elura 20MC DV Camcorder


 Sony DCR-TRV 10: A good all-around camera. It provides good image quality, but has a larger body and
lacks S-Video.
 Sony DCR-PC1: The tiniest DV camera of all. It has the same video quality as the PC100, also gives
good still image quality and S-video, and provides mic and headphone jacks onboard. One major drawback is
that when the camera is running off wall current, it produces an intolerable hum when using an external microphone.

3.4 Creative Cinematography


While making a film, today’s filmmaker needs to know about camera structure and operation, lenses, film
stocks, filters, lighting and light measuring, and accessory equipment. In addition, he needs up-to-date information on
sound recording, editing, video transfer, studio and location shooting, production logistics, and modern techniques
of picture manipulation with the optical printer. But knowing just the technical aspects of filmmaking is not enough. Film is
a medium of expression, and many people at a time can watch our film and feel like part of it. Creativity is the
soul of any successful project.

Film is a practical art, and in this art cinematography plays a very important role. Cinematography is nothing other than coordinating the
director’s desired presentation of the film story with the primary technical equipment of filmmaking: camera, lights
and film stock. Establishing a shot is a painstaking and time-consuming process. It requires a responsible approach
to electrical engineering and safety.

The cinematographer is the Director of Photography (DP) who works closely with the director, organizes the shots
and shot sequences, determining the type of lighting needed and the creation of a visual mood. The DP coordinates
closely with the gaffer (head electrician) and camera operator to meet the needs of the picture and he must take
into account the source and quality of the light, the color of the set and costume, the skin tone and makeup of the
performer. The DP chooses which lights and which placement of those lights will create the image desired: day,
night, shadow, visual depth and scene focus.

One of the most beautiful films ever made is “Days of Heaven” (1978). Photographed by the legendary
cinematographer Néstor Almendros, much of this movie was filmed during “magic hour”, a luminous time of day
when the sun has set but there’s still enough light to expose the film. When we view a work like “Days of Heaven”, as
shown in Figure 3.20, it is clear that filmmaking is dependent on one aspect of production more than any other, and
that is cinematography. Cinematography is about light, lenses and picture making. It is the look, “the aesthetic”, that
defines the visual quality of the picture.


Figure 3.20: A scene from the film “Days of Heaven”, which is famous for its cinematography

3.5 Characterization Through Camera


Characterization has to be decided along with the storyboards. Based on the characterization, the other
aspects of the scene and atmosphere can be designed. This can be done in two ways,
namely by the actor himself and secondly with the help of the camera.

While the actor thinks about his character, he should portray it in such a way that his personality becomes a
part of the character. There are many ways by which an actor can portray his character, but the main things that can
change his personality are as follows:
 Movement-this includes such things as walk, gestures and posture. Changes could incorporate pace, rate,
rhythm and style.
 Voice-this includes accent, diction, sound and vocabulary. Changes might include pitch, rate, volume and
words specific to the character type.
 Personality-changing the movement and voice will often change the personality of the actor to fit the
character. Or the characterization might take place in the opposite way-the actor could change personality
traits and the voice and movement changes would follow naturally.
Apart from this, good make up and appropriate costumes are always additional benefits.

Through techniques in films and a thorough understanding of the storyline, we can characterize scenes, segments,
and individual frames in any film. When we talk about technical aspects, the camera plays the most important role
in portraying a character. One important aspect of characterization is the interpretation of camera motion. Many times,
good placement and adjustment of camera positions can add a marvelous effect to a simple scene. Many
times in Hindi films we see camera motion used to show the confusion on a character’s face. Here we
notice the character may not have variations in expression, yet camera motion and lighting create
the overall impact. If the character is cunning, the camera mainly focuses on the facial expressions rather
than the overall get-up, to make the audience understand his cruel thoughts.

When the character is set in the director’s mind then it is a challenge to turn it into reality. It is a creative job, which
is executed with the help of camera techniques.


3.6 Summary
In this session, Guidelines for Selecting a DV Camera, you learned that:
 Most of the cameras can run in different modes: Movie Mode, for shooting video, VCR Mode for watching
the tape and P.SCAN Mode, a special version movie mode that shoots a different type of video.
 Movie Expand is a feature to fill a 16:9 screen with a 4:3 broadcast. The 4:3 picture is ‘blown up’, or enlarged
to fill up the black bars to the left and right. The image also expands upwards and downwards, so a part of
the image on the top and the bottom ‘falls off’. When a movie is broadcast in 4:3 and letterbox (with black
bars on top and bottom of the screen), the black bars on the 4:3 image will fall off, and no part of the picture
will be lost. There are several ways commonly applied to broadcast a 16:9 format movie or program on a
4:3 broadcast: Letterbox, Pan & Scan and Movie Compress.
 In the past few years, a lot of improvements have taken place in DV cameras and the DV filmmaker can
now buy an affordable camera that rivals professional cameras of just a few years ago.
 The cinematographer is the Director of Photography (DP) who works closely with the director, organizes the
shots and shot sequences, determining the type of lighting needed and the creation of a visual mood.
 Cinematography is the process of capturing a vision onto film. For the cinematographer this dynamic
process involves a combination of technical skill with creative judgment. The process is both a craft and an
art, involving composition of light, shadow, time and movement.
 When we talk about technical aspects, the camera plays the most important role in portraying a character.
One important aspect of characterization is the interpretation of camera motion. Many times, good placement
and adjustment of camera positions can add a marvelous effect to a simple scene.

3.7 Exercise
1. Most of the cameras can run in different modes: Movie Mode, for shooting video, VCR Mode for watching the
tape and P.SCAN Mode, a special version movie mode that shoots a different type of video.

a. True b. False

2. It is not necessary to make sure that the battery is completely dead before we recharge it.

a. True b. False

3. Never touch the inside of the cassette.

a. True b. False

4. A faster speed lets in more light, while a slower speed lets in less.

a. True b. False

5. The f-stop ring controls a small delicate transparent part (diaphragm) in the lens.

a. True b. False

6. Widescreen TV uses an aspect ratio of 16:9, compared with the common TV aspect ratio of 4:3.

a. True b. False

7. Telephoto lenses do the opposite of wide-angle lenses.

a. True b. False

8. The cinematographer is the assistant director of photography who works closely with the director, organizes the
shots and shot sequences, determining the type of lighting needed and the creation of a visual mood.

a. True b. False


4 Basics of Lighting and Art Directing

Learning Outcomes
In this session, you will learn to -

 Explain lighting for films


 Describe interior and exterior lighting
 Describe creative lighting for films

4.1 Introduction
In the previous session we learnt about camera concepts and other camera-related equipment. Although it is
important to choose a good camera for our shoot, the one thing that will have the greatest impact on the quality of
our image has nothing to do with our camera. How we light our scene has far more to do with the quality of our
image and its visual impact.

Traditionally, shooting on film has been technically more challenging than shooting video because film stocks need
to be exposed properly, and proper film exposure needs a lot of light. Lighting for film is an art form in itself; witness
the many fine (and not-so-fine) films produced during the past decades. In addition, film is a wonderful and
valuable medium through which to capture and then study lighting and lighting techniques.

Lighting for film is a bridge between the cameraman, his film and the processing lab. Film lighting techniques
depend heavily on knowing how a particular film stock will react to a particular type of light with respect to
intensity, contrast and color temperature. A multitude of image qualities are available by manipulating exposure,
color temperature and film processing.

Lighting is as important to a film as the other aspects of good cinematography: composition, camera movement,
the choice of colors, set design, and make-up.

4.2 Lighting for Films


Lighting is everything. It adds atmosphere, it shows style, and it sets the mood. It can highlight our scenes and hide
our mistakes. An overlit scene looks too bright and an underlit scene looks dull, despite good camera angles,
acting and make-up. Lighting will make or break our scene; it is as simple as that.

4.2.1 Types of Lights


Professional lights fall into two basic categories: tungsten balanced (indoor light) and daylight balanced (sunlight).
These two categories represent two very different areas of the color spectrum. A normal indoor light bulb tends to
look orange or yellow, whereas the light outside at mid-day tends to appear more white or blue. From the camera
settings we can choose which light type to use for a particular scene; this information tells the camera what color
cast to expect. Light and color are interrelated, so understanding light without color is incomplete. Apart from
these two main light types, there are other light types that are often used for films. These are depicted in
Table 4.1.


Types of Lights Images


Fluorescent Lights: These lights have color
temperatures ranging from about 2700 K to 6500 K and
are much brighter than normal lights. We need to be very
careful while using these lights, as they flicker a lot and
produce a greenish tint, which can make film or video
look worse. We can avoid this problem by fitting special
Kino Flo tubes into fluorescent fixtures. Refer to Figure
4.1.

Figure 4.1: Example of fluorescent lights


Sodium Vapor: These lights have a color temperature
of approximately 2100 K and are yellowish-orange in
color, using a very limited section of the visible color
spectrum. The result is an almost monochrome image.
If we want to color correct footage shot under these
lights, we have very little color information to work with.
Refer to Figure 4.2.

Figure 4.2: Example of sodium vapor


Neon Lights: These lights vary in temperature. Even
at normal exposure, neon lights always appear
overexposed because of their saturated colors. Refer to
Figure 4.3.

Figure 4.3: Example of neon light


(Refer to the Colored Section for the colored
image.)
HMI (metal halide) lamps: These lamps
have a color temperature of 5600 K. This temperature
is close to the color temperature of daylight, so an
HMI can be used to augment daylight without filters,
or serve as the sole lighting when using daylight film stock.
HMIs have a purple tinge when first switched on, but
turn white as the bulb heats up. These bulbs should never
be touched by hand under any circumstances, or they
will discolor, overheat and shatter. Refer to Figure 4.4.

Figure 4.4: Example of HMI


Table 4.1: Table displaying the types of lights

4.2.2 Color Temperature
Light is measured in terms of color temperature, which is calculated in degrees Kelvin (K). Indoor tungsten lights
have a color temperature of 3200 K, whereas daylight has an approximate color temperature of 5500 K. Daylight
balanced lights are much stronger than tungsten lights, and if we try to mix the two, the daylight will overpower
the tungsten. Wherever it is essential to mix both, we need to account for the color temperature differences by
balancing our light sources. There are many accessories available to balance the temperature of a light and make
it either tungsten balanced or daylight balanced.
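In practice, the bookkeeping behind such corrections is usually done in mireds (micro reciprocal degrees, 1,000,000 divided by the Kelvin temperature), because a correction gel shifts a source by a roughly constant number of mireds regardless of its starting temperature. As an illustrative aside (the function names below are our own, not part of any lighting standard):

```python
def mired(kelvin: float) -> float:
    """Convert a color temperature in Kelvin to mireds."""
    return 1_000_000 / kelvin

def gel_shift(source_k: float, target_k: float) -> float:
    """Mired shift a correction gel must provide to convert a light
    source of one color temperature to another.
    Positive = warming (orange gel), negative = cooling (blue gel)."""
    return mired(target_k) - mired(source_k)
```

For example, converting tungsten (3200 K) to daylight (5500 K) requires a shift of roughly -131 mireds, which is approximately what a full CTB (color temperature blue) gel provides.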

4.2.3 Quality of Light


In addition to having different color temperatures, lights have different qualities. They can be direct or hard, soft
or diffuse, or they can be focused like a spot light. Hard light casts a sharp, clearly defined shadow. When hard
light is used to illuminate a face, imperfections in the skin stand out. The result is less than flattering. But in other
applications, such as bringing out the texture in leather, or the engraving on a piece of jewelry, this can be an
advantage. The light from a clear, unfrosted light bulb, a focused spotlight, or the noonday sun in a clear sky, all
represent hard light sources. Soft light sources are used in production to create a broad, even area of light. Since
soft light tends to hide lines, wrinkles and blemishes, it is desirable in doing glamour work. In the following diagram
we can see a subject lit with hard key light and diffuse light. Refer to Figures 4.5a and 4.5b.

Figure 4.5a: Image shot with soft light
Figure 4.5b: Image shot with a hard light source

(Refer to the Colored Section for the colored images.)

4.2.4 Light Accessories


There are many light accessories that can be used to control the quality of professional lights. Refer to Table 4.2.

Light Accessories Description


Fresnel lens attachment: A special fresnel lens
attachment lets us adjust the angle of the light beam
from flood to spot light. For several decades the Fresnel
(pronounced fra-nell) light has been the primary source
of illumination in most film and TV studio productions.
Refer to Figure 4.6.

Figure 4.6: Example of Fresnel lens attachment


Light Accessories Description


Barn door attachment: Barn doors attach to the light
itself to help us control where the light falls. Round
scrims fit into a slot between the light and the barn doors
and allow us to decrease the strength of the light without
changing its quality. Refer to Figure 4.7.

Figure 4.7: Example of barn doors attached to lights
Lighting gels: When we add light to a scene, we usually
end up mixing light of different color temperatures. To
make all of the light the same color, we can fit gels over
the lights. Lighting gels are translucent sheets of colored
plastic that are placed in front of a light not only to
alter its color but also to decrease its brightness.
Refer to Figure 4.8.

Figure 4.8: Example of lighting gels

(Refer to the Colored Section for the colored
image.)
Bounce cards: A bounce card is often just a piece
of white foam core that is used to create soft, indirect
lighting. Refer to Figure 4.9.

Figure 4.9: Example of bounce cards


Light Accessories Description


Reflectors (Shiny Boards): A reflector is a piece of
silver, gold or white fabric stretched over a frame.
Reflectors are used to redirect light from a bright source,
such as the sun. Refer to Figure 4.10.

Figure 4.10: Example of reflectors


C-stands (Century stands): C-stands hold flags, nets
and other objects in front of the lights to manipulate and
shape the light that falls on the subject. Refer to Figure
4.11.

Figure 4.11: Example of C-stands


Light Meter: Before taking a shot, many things need
to be right to get the best result: location, costume,
makeup, set design, camera angle, lighting and so on.
Since lighting plays such an important part in the visual
side of a film, it needs to be exactly right for the
scene; nothing more or less. Light that seems perfect
to the naked eye may look bad in postproduction and
require a lot of touch-up. To avoid this problem, light
meters are used to check the light at indoor as well
as outdoor locations. Refer to Figure 4.12.

Figure 4.12: Minolta’s Auto Meter IV F is an example of a top-rated incident light meter
Table 4.2: Light accessories and their description

If we are unable to approach a sunlit subject because it is some distance away, across a river or road, for example,
simply take a reading of the same sunlight that is striking the subject from where we currently are. This is known as
a “substitute reading.”

There are two ways by which light is normally measured on the sets: Incident light and reflective light.
 Incident light: In measuring incident light, a light meter is held at the subject and pointed toward the
camera. From that particular position the meter determines how much light is falling on the subject.
 Reflective light: A reflective reading works the other way around: the meter is held at the camera to
determine how much light is being reflected from the subject back to the camera lens. Many cameras have
reflective light meters built into their viewing system. These internal light measurement systems are called
through-the-lens (TTL) metering systems. Except for TTL systems, light meters are handheld and can usually
measure either reflective or incident light. A meter typically comes with both a white bulb for reading incident
light and an interchangeable flat grid for reading reflective light. Refer to Figure 4.13.

Following is the description of the labels seen in the
image alongside.
 a. Exposure index setting
 b. Bulb or grid
 c. Activating button
 d. Needle
 e. Foot-candle scale
 f. Pointer for readings without the HIGH slide
 g. Pointer marked with an “H”, which stands for
the HIGH slide
 h. 1/sec scale
 i. The f-stop reading
 j. The cine scale

Figure 4.13: Example of reflective light
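The reading a meter reports is usually expressed as an exposure value (EV), which ties shutter speed and f-stop together through the standard exposure equation N²/t = 2^EV (with EV referenced to ISO 100). As an illustrative sketch, with a function name of our own invention, an f-stop could be derived from an incident reading like this:

```python
import math

def aperture_for(ev100: float, iso: int = 100, shutter_s: float = 1 / 50) -> float:
    """f-number for a given exposure value (measured at ISO 100),
    film/sensor speed and shutter time, from N^2 / t = 2^EV * (S / 100)."""
    ev = ev100 + math.log2(iso / 100)  # faster stock raises the usable EV
    return math.sqrt(shutter_s * 2 ** ev)
```

A bright sunny day meters around EV 15; at ISO 100 and 1/125 s this works out to roughly f/16, the familiar “sunny 16” rule of thumb.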

4.2.5 Exposure
When we shoot a film, the exposure isn’t uniformly distributed over all the areas and angles. Highlights (brighter
areas) in the scene reflect the most light, and the areas of the sensor onto which they are focused are exposed
a great deal. Darker areas, like shadows, reflect much less light, so the areas of the sensor onto which they are
focused receive much less exposure. The perfect exposure retains details in both the highlights and shadows. Refer
to Figure 4.14.


Figure 4.14: Example of the different effects of exposures


(Refer to the Colored Section for the colored image.)

As depicted in the series of images in Figure 4.14, the middle photo is correctly exposed. The left photograph was
overexposed and is too light. The right photo was underexposed and is too dark.

The art of lighting involves choosing an acceptable exposure. We can rely on the camera's auto-exposure control
only to some extent, as much of the time the auto-exposure mechanism will do exactly the opposite of what we
want. For example, we may decide to purposely overexpose the sky to help keep a good exposure on our actor's
face, but the auto-exposure may instead expose for the bright sky, causing our actor's face to fall into shadow.

While creating a film, try to achieve a good balance of exposure across all areas of the scene, because if the image
is underexposed the scene will look dark or muddy. Remember, there is always some area of overexposure in
properly exposed film, usually reflective highlights and bright white areas. As a rule, it is better to have more light
than not enough.
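Over- and underexposure are usually counted in stops, where each stop doubles or halves the light reaching the film. Because the light a lens passes scales with 1/N², the stop difference between two f-numbers can be sketched as follows (the helper name is ours, not a standard API):

```python
import math

def stops_between(n1: float, n2: float) -> float:
    """Exposure stops between two f-numbers; light varies as 1/N^2,
    so each stop multiplies the f-number by sqrt(2)."""
    return 2 * math.log2(n2 / n1)
```

Stopping down from f/2.8 to f/5.6, for instance, cuts the light by two stops, that is, to one quarter.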

4.3 Interior and Exterior Lighting


Both daylight and artificial sources are commonly used for film lighting. Lighting fixtures for the film industry are
similar to stage lighting fixtures, except that they are larger and of higher wattage. Although conventional
high-wattage fixtures are still used, many newer fixtures using H.I.D. (high-intensity discharge) sources are also
common.

4.3.1 Interior Lighting


Lighting interior locations is a big challenge. Ideally we would have plenty of available power, overhead lighting
grids and enough space to set up the lights; in reality it is not always possible to have all three. For example,
shooting in an office may give us a realistic set but leave us short of space or power.

Another problem commonly faced is the mixing of daylight and interior light. As the two have different color
temperatures, mixing them is a difficult task. In this situation we should choose the most important light source
and balance the other lights accordingly.

For example, if we are shooting in a house where light comes in through a door or window, that becomes our
main light source, and we have to adjust the other lights to its color temperature. Refer to Figure 4.15.


Figure 4.15: Examples of indoor lighting


(Refer to the Colored Section for the colored image.)

4.3.2 Exterior Lighting


Lighting for the outdoor stage includes both open-air and semi-enclosed facilities. Many outdoor facilities are
oriented so that the sun illuminates the scene from behind the audience. This usually promotes maximum visibility
and keeps the sun out of the audience's direct vision. Refer to Figure 4.16.

Figure 4.16: Examples of exterior lighting


(Refer to the Colored Section for the colored image.)

The one fundamental concept that a lighting designer working on an outdoor scene must learn is that it is very
difficult to compete with Mother Nature. Scene lighting during a bright sunny day is almost impossible and has no
impact. Scene lighting during a cloudy or overcast day may have some impact, but usually at best provides basic
illumination. During the day, the designer may need to provide hundreds of kilowatts of lighting to make even a
minor expression on an actor's face visible; the lighting may at best only fill in the shadows. If a cloud suddenly
passes over the sun, the scene lighting levels will seem to rise drastically. Once the sun has started to set,
however, a fixture of just 1 kilowatt can appear brighter to the audience than the hundreds of kilowatts previously
required to provide the same visual impression.

4.3.3 Light at Night
Lighting at night is no fun at all. However much light we pour onto a subject, it still looks dark and grainy; either
that, or our subject looks blasted out, white and washed out like a rabbit caught in a car's headlights.

The best bet is to shoot all our night material just before the light is about to go, when it looks like night but there
is still some light on the horizon (we had better be quick), or to shoot it day-for-night.

4.4 Creative Lighting for Films


Lighting is one of the most important elements to consider if we want to make a film that will have
people raving about its picture quality. Otherwise we may find people wondering why our video has overexposed
pieces, dark spots and fuzzy noise all over it. The most common solution for picture quality problems is to control
the light. Cameras love light. No matter what our film is about, creative lighting can add an extra layer that
enhances the mood and emotion of our story.

4.4.1 Three-Point Lighting


This is the way most professionals light video and films. One light comes in and covers the left-hand side of the
subject's face. The other light is softer and fills the shadows created by the first light. The first light is called the
Key Light and the shadow-filling light is called the Fill Light. A third light, called the backlight or kicker light, can
also be used to create a feeling of depth.
 Key light: It is used to illuminate the subject and is usually positioned at an angle, generally in front and
just off center of the subject (somewhere to the right or left of our camera). Usually it will be at or slightly
above our subject's eye level; if it is placed below eye level it tends to produce odd-looking shadows. It is a
strong, dominant light source and is often motivated by an existing light source in the scene. Refer to
Figures 4.17a and 4.17b.

Figure 4.17a: Image indicating the key light position
Figure 4.17b: Image indicating the effect of light on the subject
 Fill Light: It is used to fill in the strong shadows created by the key light. The fill light is usually placed at a 45
to 90 degree angle opposite the key light (left of the camera if the key is right, and vice versa). The fill light
may be placed high or low. Usually the fill light is dimmer and more diffuse than the key light. Subduing the
shadows gives a pleasing interplay of light and shadow on the subject. Refer to Figures 4.18a, 4.18b, 4.19a,
and 4.19b.


Figure 4.18a: Image indicating the fill position
Figure 4.18b: Image indicating the effect of light on the subject

Figure 4.19a: Image indicating the key light and fill light positions
Figure 4.19b: Image indicating the effect of light on the subject

 Backlight or kicker light: This is positioned behind the subject and is used to separate it from the
background. The backlight is usually placed high behind the subject, at an angle nearly opposite to the key
light. The aim here is to add a highlight along the edge of the subject’s silhouette, usually on the darker side
of their body (the side with the fill light). This separation lends a sense of depth to the image and helps make
our subject stand out better. Refer to Figures 4.20a and 4.20b.

Figure 4.20a: Image indicating the backlight position
Figure 4.20b: Image indicating the effect of light on the subject
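The balance between key and fill is often quantified as a lighting ratio: the lit side of the face receives key plus fill (fill light spills onto everything), while the shadow side receives fill alone. This is standard lighting practice rather than something defined in this book, and the sketch below uses a function name of our own:

```python
def lighting_ratio(key: float, fill: float) -> float:
    """Ratio of light on the lit side (key + fill) to light on the
    shadow side (fill only). key and fill are incident readings in
    any linear unit, e.g. foot-candles."""
    return (key + fill) / fill
```

A 300 foot-candle key with a 100 foot-candle fill gives a 4:1 ratio, a fairly dramatic look; flat, even lighting sits closer to 2:1.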

Lighting a person is much more complicated than lighting an object, as the human face has many angles and its
expressions keep changing. The lighting here needs to be done very carefully, as incorrect lighting will cause
strange shadows on the actor's face, and direct lighting can reveal small flaws in the face very clearly. To avoid
these problems, light creatively and aim for the best results. Refer to Figure 4.21.

Figure 4.21: Illustration depicting a complete lighting setup

4.5 Art Direction Basics


A good art director combines the symbolic and the practical. The work of an Art Director in the film and
television industry is to supervise and co-ordinate all visual aspects of a production. Most of an Art Director's
work takes place in pre-production.

Good lighting is one of the easiest ways to create a visually symbolic set. Colors have strong associations in
people's minds: black symbolizes death; red hints at blood, violence or danger as well as love and passion; blue
is peaceful and charming as well as an indication of sadness; and so on. It is the art director who must take care
of accurate symbolic display using suitable techniques.

In addition to externalizing the themes of the story, production design also aids in focusing the viewer's eyes, and
this job is the art director's responsibility. Apart from lighting, the art director also needs to take care of set
dressing, props, and location alterations.

4.5.1 Dressing a Set and Arranging Props


Whether we have found a location or built a set, the next step is to dress it according to the Production Designer's
vision of how it should look. A built set will probably need more dressing, as it starts out empty, whereas a found
location might need only limited dressing because the production designer may not want to lose its natural
character. The art director must agree on the costs involved with the Production Manager. The Art Director is
responsible for directing the realization of the Production Designer's ideas within the art department's budget.
He/she works closely with the Director, Construction Manager and craftspersons in the construction of the sets.
Refer to Figures 4.22a and 4.22b.


Figure 4.22a: Example 1 - Complete set
Figure 4.22b: Example 2 - Complete set

The Art Director acts as a go-between with the props department and set dresser in the completion of the sets.
A good prop (property) can really sell a weak location or make the shooting much easier. For example, if our
film involves weapons and fight scenes, we will need special props such as breakaway glass, tables and chairs,
and fake weapons. It is not always practical to buy such things, as they can be very expensive, so nowadays
rental props are also available.

4.5.2 Location Alterations


A location may need to be altered (e.g. painted, electric wiring moved or taken down) in order to suit the style of the
production or period the production is set in. The Art Director monitors all the functions of the art department during
production. It is a highly creative role. Lower budget productions would often have one person performing both the
roles of Art Director and Production Designer. Refer to Figure 4.23a and 4.23b.

Figure 4.23a: Example 1 - location sample
Figure 4.23b: Example 2 - location
When everything is properly set, the art director oversees the finished look of the sets with the Director prior to
shooting. While the Director and crew are filming on one set, the Art Director would be working ahead on the next
set or location required.

4.5.3 Tips and Tricks
Let us have a look at some tips and tricks that any art director should know beforehand.
 When determining how we want our lighting to look, we should first form a style of lighting that will
accommodate the scene or job that we are about to shoot. We need to establish our own look and style.
 When shooting outdoors, choose a background that is not brighter or lighter in color than our subject, if we
can help it. Shooting our subject against a bright background will be difficult to light since it can reflect light
as much as two or three stops brighter than what would be on our talent. Choosing a darker background
outdoors to shoot against will allow us to light our talent without the background becoming overexposed.
 Mirrors can look great, but create special problems for lighting and shooting. Work with director of
photography before putting a mirror in a set.
 Extremely saturated colors look good to the human eye, but not to the video camera. Take care to avoid
highly saturated colors and stripe patterns, as they look bad on video usually resulting in a distracting moiré
effect.
 Lighting a scene, whether it is a one person interview or a large interior, is determined by the script, the
environment, a desired mood, or feel, and what the program should depict to the viewer, not the format of
the camera.
 For outdoor lighting, try to place the subject toward the sun if possible and place ourselves in front of the
subject. It is best to film in the early morning or late afternoon to get a golden effect on our video.

Filming at high noon creates narrower, harder shadows on the subject's face. It is best to use some sort of
light diffusion if we are filming at high noon; for example, shoot under the shade of a tree when filming
portraits.
 For indoors: Try to use as much available light as possible. We may also need to enhance the available
light with the use of a camera light. This will ensure that we properly expose the subject.

Remember, these small problems become a big headache when we transfer our video to film.

4.6 Summary
In this session, Basics of Lighting and Art Directing, you learned that:
 Under normal conditions a human perceptual adjustment called approximate color constancy comes
into play and automatically makes adjustments for sources of light that we assume are white. Strangely,
when we look at video or film, approximate color constancy does not work in the same way. Unless color
corrections are made, we will notice significant (and annoying) color shifts between scenes when they are cut
together.
 From the lighting instruments themselves, we now turn to attachments that are used with these lights.
Adjustable black metal flaps called barn doors can be attached to some lights to mask off unwanted light
and to keep it from spilling into areas where it’s not needed.
 A Fresnel is mounted on a floor stand for film and on-location video work, but in the studio these lights are
typically hung from a grid in the ceiling.
 In addition to having different color temperatures, lights have different qualities. They can be direct or hard,
soft or diffuse, or they can be focused like a spot light.
 The most common solution for picture quality problems is to control the light. Cameras love light. No matter
what our film is about, creating lighting can add an extra layer to enhance the mood and emotion of our
story.
 The Art Director acts as a go-between with the props department and set dresser in the completion of the
sets. A good prop (property) can really sell a weak location or make the shooting much easier.


4.7 Exercise
1. Professional lights fall into _________basic categories.

a. One b. Two
c. Three d. Four

2. Fluorescent Lights have color temperatures that range from 2700-6500 K and are much brighter than normal
lights.

a. True b. False

3. HMI Lights have a color temperature of 8600°K.

a. True b. False

4. Light is measured in terms of color temperature, which is calculated in Degrees Kelvin.

a. True b. False

5. A special fresnel lens attachment lets us adjust the angle of the light beam from flood to spot light.

a. True b. False

6. Light Meters are translucent sheets of colored plastic that are placed in front of the light not only to alter the color
of the light but to decrease the brightness also.

a. True b. False

7. In measuring reflective light, a light meter is held at the subject and pointed toward the camera. From that
particular position the meter determines how much light is falling on the subject.

a. True b. False

8. Fill light is usually placed at a 45 to 90 degree angle opposite the Key light.

a. True b. False


5 Dealing with Audio

Learning Outcomes
In this session, you will learn to -

 List the types of microphones and headphones


 Explain the important terminology and aspects of audiography

5.1 Introduction
So far we have covered cameras and lights, and we now know the importance of these two aspects. The next
area of tremendous importance is sound, one of the most powerful tools in our creative palette. Remember,
cinema is nothing but making a series of images talk, communicate with the audience, and take them into a whole
new world. With the right music and sound effects, we can do everything from evoking locations to defining
moods and building tension. If we close our eyes while watching a movie and just listen to the background music,
we still get a fair idea of what is happening in the scene: whether it is a scary situation, a happy mood or a
moment of suspense. Without sound, the impact of a movie is very low.

While watching a movie, an audience is sometimes unaware of low picture quality; they may find even a
low-quality movie appealing. But if they cannot understand the audio, they will not be engaged in the story.

Even though most of today's filmmakers are aware of the importance of audio, some DV filmmakers are still
lacking in audio experience and technology. And when we consider that audio is just as important as the visuals,
this lack of knowledge can ruin a movie.

Though it is possible to improve the quality of our sound in post-production, this has many limitations. It is much
easier to get good editing results for images and animation than for audio. If we have picked up extraneous
sound, or recorded at low levels, correcting our audio will be extremely difficult. There are two ways to get good
audio: put the actor into a vacuum chamber so every other possible sound is gone, or put the microphone as
close to the actor as possible. Since getting a vacuum chamber big enough for an actor and the entire set is not
possible, we will focus on getting that microphone as close to the actor as we can.

One thing is important to understand: though it is difficult or impossible to correct a sound, or remove an
unwanted sound from a recording, it is easy to mix sound in. So once we have the primary sound of an actor
recorded, we can always add further sounds, such as background music or other object sounds, later in the
postproduction stage.

5.2 Types of Microphones and Headphones


The word “microphone” was first coined by Wheatstone around 1827 and was used to describe a purely acoustic
device, like a stethoscope, which he had developed to amplify weak sounds. The word is Greek in origin, with
“micro” meaning small and “phon” meaning sound. The first step in miking a set is to block the action. This involves
working out actor and camera movement with the director and cinematographer. This process will help us make
decisions about which microphone to use and where to position it.

Recording good audio requires a lot of preparation, and it all begins with selecting the right microphone. Let us
start with the microphone that is built into the camcorder. For the most part, this microphone should not be used
to record sound for a movie. Some documentaries must use it because of limitations of staff or time, but it should
never be used for a narrative movie. The reason is the main principle behind audio recording for motion pictures:
to properly record dialog we must get the cleanest possible recording we can. This means that background noise
is as low as possible and the dialog does not over-modulate.

To record good quality audio, we need to buy or rent one or more high-quality microphones, which we connect
to our camera or to a separate audio recorder such as a DAT or MiniDisc recorder. Different types of microphones
are designed for different recording situations, so we should choose our microphone according to the type of
shooting we are doing.

Each kind of microphone has characteristics that define what it will hear. These hearing characteristics are known
as directional characteristics. The directional coverage of the mic we choose will have a lot to do with both the
content and the quality of our recorded sound. Let us have a look at some of these directional qualities as we
proceed.

5.2.1 Omni Directional Mics


When we are recording in a venue that has a huge area, the audience is fairly quiet and we are fairly close to the
sound source, Omni-directional mics are capable of making excellent recordings and would be the mic of choice. In
addition, when we specifically need to use a very small microphone, omnis are the better choice.

Since omni directional mics pick up sound from all directions, it might seem a good idea to use them when recording
on large sets with many actors. In practice, however, these mics are impractical for many such situations: with their
wide coverage, omni mics can pick up far more sound than we require, including camera noise, operator noise, and
the sounds of passing cars, people and many other such things.

Miniature omni directional mics are called “binaural microphones”. They are used in pairs, placed on either side of
a human (or artificial) head, in or as near as possible to the ears. Omni directional mics pick up sound from all
directions fairly equally, so when they are used in this manner, they pick up sound very much like the human
ear does. Refer to Figure 5.1.

Note

These same microphones are also capable of making stereo recordings.

Figure 5.1: Examples of omni directional mics

5.2.2 Unidirectional Mics
Cardioid microphones are unidirectional microphones and pick up sound mostly in the direction that we point
them. They cannot be used to make binaural recordings, but can, of course, be used to make stereo recordings.
Because of this directionality, they have certain advantages over Omni-directional mics in some situations. When
we are recording in a venue that does not have great acoustics, the audience is noisy and/or we can’t get close to
the sound source, Cardioids are the better mics to use.

Since Cardioids are directional mics, they will greatly reduce excess reflected sound coming at the mics from all
over the venue. They do a good job of reducing unwanted audience noise from the sides and rear. While they
can be used up close with excellent results, they excel over Omni mics when recording from a distance. In fact,
there are different levels of directionality available, including Sub-Cardioid, (regular) Cardioid, Hyper-Cardioid and
Super-Cardioid (sometimes called shotgun mics). In general, the further we are from the sound source, the more
directional the mic should be. Refer to Figure 5.2.
Note

Cardioids are also the preferred mic to use on stage for sound reinforcement
applications.

Figure 5.2: Example of unidirectional mics

Good quality Cardioids are about two or more times the size of equivalent sounding omni directional mics. The
Cardioid mics that are commonly available in the same size package as the small omnis are lacking in deep bass
and high frequency response.

5.2.3 Types of Mics


Getting good sound requires much more than just attaching a mic to an actor and recording. Good sound
recording involves selecting a good mic, setting it up accurately and keeping out unwanted sound elements
to get the best results. Having seen the various characteristics of mics, let us now have a look at the various
types of mics that are used most commonly.

■ Handheld Mics
Omni directional handheld mics are good for vocals, music, or sound effects. They are somewhat sensitive to
handling noise, so they are best used on a stand. We need to be careful around wind or ambient rumble (such as
traffic). For the best results we should place them close to the subject. We should also remember that the
performer shouldn't speak directly into the top of the mic. It's best to tilt the mic at about a 30° angle. Refer to
Figure 5.3.

Figure 5.3: Example of a handheld mic

■ Lavaliers
Lavalier, or clip-on, mics are the small condenser mics that we see clipped to the front of newsreaders. Originally,
the term “lavalier” referred only to the “neck-worn” or “body-worn” class of small microphones. These days, the
working definition of lavalier has been extended to include virtually any miniature microphone small enough to be
worn on the body and/or hidden in the set. These are generally omni directional mics, and because they are kept
so close to the speaker's mouth they rarely pick up extraneous sound, making them suitable for recording
individual performances.

Modern lavaliers can be described as being either “Proximity” or “Transparent”. Refer to Table 5.1 to understand
the difference between the two mics in brief.

Proximity: A proximity type lavalier is defined as a microphone that works best when kept fairly close to the source
of the voice, emphasizes that voice, and suppresses background.

Transparent: Transparent lavaliers are defined as sounding more like omnidirectional recording studio mics. They
are very sensitive to sounds, and their volume vs. distance characteristic is far more gradual than that of proximity
lavaliers. Transparent mics can be deployed at greater distances and are far more forgiving of talent turning their
heads away from the mic. Transparent mics sound much more natural and less forced than proximity mics. The
drawback of transparent mics is that they are much more sensitive to background noise, and they also require
greater skill to hide under clothing. Refer to Figure 5.4.

Table 5.1: Table depicting the difference between the two mics - proximity and transparent

Figure 5.4: Lavaliere mic attached to the person

While placing these mics for a good recording certain precautions must be taken. Let us understand these simple
rules to follow:
 Center the lavalier (left to right). If the lavalier is positioned to one side, the amplified voice will drop if the
actor's head is turned away from the microphone. Naturally, the voice will get louder if the head is turned
toward the microphone. This results in very inconsistent levels.
 Position the lavalier so it is at least a full hand spread from the mouth. This minimizes differences in distance
from the mouth caused by head movement. Again, this will help keep levels consistent.
 Avoid having the lavalier cable hang straight down. This increases cable noise. Instead, put a gentle bend in
the cable near the microphone by bringing the cable back up through the microphone clip. Refer to Figure 5.5.
 Hide the miniature lavalier microphone in the actor's hair or beard. The sound level will stay constant since
the microphone moves with the head.

Figure 5.5: Image depicting the bend in the lavalier cable

■ Shotgun Mics
The long thin microphones that we see sticking out of video cameras are referred to as shotgun mics. These
mics provide the greatest flexibility for micing. Most shotgun mics record stereo audio usually by having two
parts inside, one each for the left and right channels. So these mics are used for high quality film and TV
production requiring mono and stereo in one mic. They are popular with current affairs crews who need a
shotgun for interviews and a stereo for atmospheres but only want to carry one microphone. Many times they
are also excellent for stereo wildlife recording.

There is a tendency among filmmakers to rely too heavily on their camera mounted shotgun mics rather than
separately mounted boom mics. Obviously, having a microphone on the camera is more convenient. But our
objective in the field is not convenience, but gaining the highest quality sound possible. The shotgun microphones
are like a telephoto camera lens. A long lens will isolate and magnify a distant subject, but at the same time it will
compress the perceived distance between subject and background. Everything appears to be closer together
than it physically is. Refer to Figures 5.6a and 5.6b.

Figure 5.6a: Standard Shotgun Mic


Figure 5.6b: Hi-End Shock Mount

Note

Most cameras and tape decks have one place to plug in a microphone. So if we are using
multiple microphones we’ll have to plug them into a mixer to mix their signals down to a
single stereo signal that can be plugged into the microphone input on our camera or deck.
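What a mixer does when it combines signals can be sketched in a few lines: each mic is given a gain and a pan position, and everything is summed into a left/right pair. The function below is an illustration only; the constant-power pan law and the track format are our own assumptions, not a description of any particular mixer.

```python
import math

def mix_to_stereo(tracks):
    """Mix mono tracks into one stereo pair.

    tracks: list of (samples, gain, pan) where pan runs -1 (left) .. +1 (right).
    Uses a constant-power pan law so centred sources don't jump in level.
    """
    n = max(len(t[0]) for t in tracks)
    left = [0.0] * n
    right = [0.0] * n
    for samples, gain, pan in tracks:
        angle = (pan + 1) * math.pi / 4          # 0 .. pi/2
        lg, rg = math.cos(angle), math.sin(angle)
        for i, s in enumerate(samples):
            left[i] += s * gain * lg
            right[i] += s * gain * rg
    return left, right

boom = ([0.5, 0.5], 1.0, 0.0)    # centred dialogue mic
plant = ([0.2, 0.2], 0.8, -1.0)  # plant mic panned hard left
L, R = mix_to_stereo([boom, plant])
```

The stereo pair returned here corresponds to the single stereo signal the note describes feeding into the camera or deck.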

5.2.4 Mic Techniques


One important factor to pay attention to while choosing any of the micing techniques is “perspective”. The distance

between microphone and subject should agree with the distance to the screen or camera. In a long shot, it is natural
for the voice to sound more distant and for there to be a greater presence of ambience. Close-up angles should
consist of more voice and less background.

There are four basic mic placements from which all mic setups are built: boom, plant, lavaliere and wireless. This
priority is sometimes referred to as the Hierarchy of Microphone Techniques. Let’s examine each approach:

■ Boom
Booming involves mounting the microphone on a boompole and suspending it in front
of the subject for optimal sound pickup. The process is demanding since it involves
coping with many variables at once, such as holding the mic as close as possible
to the subject, moving the mic from one subject to the next, and keeping the mic out
of the picture frame. The rule of thumb when booming is to get as close as possible
to the subject, which is just outside the camera frame line. The further the mic is
from the subject, the greater the background noise and echo, so every inch closer
improves sound quality. The mic is normally held several inches to a foot over the
actor's head. Refer to Figure 5.7.

Figure 5.7: Example of booming

Note

Booming is done by a boom operator.

■ Plant
A plant is any microphone fixed in place. It is used to cover a static subject when it is impractical to use a
boom. Plants can be conventional condenser mics or lavalieres. The type of microphone used depends on
the situation. Placement is important because plant mics are effective only if dialogue is directed within their
pickup pattern and range. Another important consideration is that the mic be properly hidden (planted). Mics
can be hidden just about anywhere: behind, on top and below props, furniture and walls. The possibilities are
limited only by our imagination. The newer lavs (lavaliers) are so small they can be used in plain sight and go
unrecognized as a mic.
 Lavaliere: We have already learnt about the lavaliere as a type of mic; it is also a mic-clipping technique
used to record an individual subject's voice.
 Wireless: Wireless (radio) mics send the audio signal over airwaves using a transmitter and receiver.
Their main drawback is that they are subject to RF interference, so a wired mic should be used whenever
possible. They are helpful when the boom operator cannot get close to the action and it is impractical to run
a lav cable. A wireless lav system is a common solution for complex shoots involving lots of mics. The
system consists of a normal lav microphone attached to a small transmitter worn by the actor; a receiver
picks up the audio and routes it on to our recording device.

5.2.5 Headphones
Headphones are the audio equivalent of a field monitor, as we need them to hear the audio as it’s being recorded.
Headphones serve a dual purpose as they block out ambient noise from the set and allow the sound recordist to
monitor the audio directly. With headphones a sound recordist can focus on the actors' sound, which is not
possible with normal human ears, as they absorb all sorts of noise from the surroundings. Refer to Figures 5.8a and
5.8b.


Figure 5.8a: Example of headphones with cord


Figure 5.8b: Example of cordless headphones

5.3 Important Terminology and Aspects of Audiography


Sure, silence reigned supreme in the Golden Oldies. Then came the Talkies. Ever since, we take sound for granted
at the movies. But if we want our videos to communicate our message or story effectively, we need to pay attention
to the sound. Sound is NOT to be taken for granted. Aside from dialogue, sound gives “presence” to the action on
screen. And the richer the sound, the more “real” the unfolding drama feels.

Audiography is a term that refers to the entire process and techniques that help in making audio for a film. The term
includes sound systems, synchronization techniques, the hard work of sound crewmembers, sound effects applied
during postproduction, sound dubbing and other equipment used to get the best quality of sound. Let's learn these
terms in detail.

5.3.1 Single and Double System Sound


Double system sound involves recording picture and sound separately with a camera and audio recorder, then
syncing them up in post-production. Single system sound, on the other hand, involves recording picture and sound
together on the same medium. No syncing is required.

Early video cameras left much to be desired in terms of sound, but many professionals used a single system
approach anyway, recording directly into the camera. The main reasons were ease of operation and portability of
equipment (at that time, video was primarily used for documentaries and news gathering).

5.3.2 Synchronization
As the sophistication of video and audio equipment increases, so – almost inevitably – does the delay experienced
by the signals passing through them. Unless great care is taken to match the delays in the audio and video paths,
any small differences can rapidly accumulate and lead to a distracting loss of synchronization between sound and
picture (most notably, loss of “lip-sync”). This is already becoming a widespread problem in television production
and it affects all types of programmes, including live and pre-recorded material, and films.

Currently the problem of maintaining correct audio synchronization is addressed using a tracking “A/V sync” audio
delay. This approach uses an audio delay, which automatically tracks the difference in the timing of the input, and
output video signals across an item of video equipment, i.e. the difference in timing of the video is measured and
applied to the audio.
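The tracking approach described above can be sketched as a simple sample buffer: audio entering the delay leaves it exactly as many samples later as the measured video delay. This is an illustration of the idea only, not a real broadcast implementation, and the class name is our own.

```python
from collections import deque

class TrackingAudioDelay:
    """Delay audio by a measured video latency to preserve lip-sync.

    delay_samples would come from measuring input-vs-output video timing
    across a device, converted into a count of audio samples.
    """
    def __init__(self, delay_samples):
        self.buf = deque([0.0] * delay_samples)

    def process(self, sample):
        self.buf.append(sample)
        return self.buf.popleft()

# Video path measured at 3 samples of delay: audio is held back to match.
d = TrackingAudioDelay(3)
out = [d.process(s) for s in [1.0, 2.0, 3.0, 4.0, 5.0]]
print(out)  # [0.0, 0.0, 0.0, 1.0, 2.0]
```

A real tracking delay would additionally re-measure the video timing continuously and resize the buffer as the delay changes.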

■ Lip-syncing sound for filmmaking
Lip sync refers to the entire filmmaking process of shooting, recording, transferring, dubbing, editing, and projecting
whereby the end result produces the appearance to the viewer that the on-camera speaker's lips are precisely
synchronized with the words he/she is speaking. Since, unlike videotape, the filmmaking process is split into
the two separate mediums of picture and sound, the requirement for maintaining lip sync throughout the various
production and post-production stages becomes very important.

Note

Slating is the process of providing positive identification marks for the start of a lip-sync
take on both the picture film and the sound track tape. These markings are very important in
the editing stage to greatly simplify the process of locating the exact sync position between
picture and sound.

5.3.3 Sync Recorders


■ Stereo cassette recorders
These have been used professionally for sync sound field recording since the early '70s with excellent results.
Normally, one of the stereo channels is reserved for recording the 60 Hz sync signal from either the camera
(if it is set up for cable sync) or from a crystal sync generator. This leaves the other stereo channel for the
microphone signal.
■ DAT recorder
This can be used as long as the field audiocassettes are resolved correctly. A DAT recorder is very similar to
a videocassette recorder in the way that it scans the tape during record and playback. These recorders are
commonly used by sound labs for resolving DAT cassettes to magnetic film or videocassette. If we will be
editing our film project on a non-linear system, we can also convert the data rate of the audio files after we
import them into the computer.
■ MiniDisc recorder
These can be used in some cases, but they are comparatively more expensive than DAT recorders and do not
accept external sync references. They cannot always be attached to some other external master source such
as a videocassette recorder.
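Resolving a field tape against the recorded sync signal amounts to correcting the playback speed until the pilot tone plays back at exactly its nominal frequency. A minimal sketch of that arithmetic, assuming the 60 Hz pilot described above:

```python
def resolve_speed_ratio(measured_pilot_hz, nominal_pilot_hz=60.0):
    """Playback-speed correction that brings the recorded pilot tone
    back to its nominal frequency.

    If the tape plays back fast, the pilot reads high and the ratio
    comes out below 1 (slow the playback down), and vice versa.
    """
    return nominal_pilot_hz / measured_pilot_hz

# Tape drifted 1% fast: the pilot reads 60.6 Hz, so resolve at ~0.99x speed.
print(round(resolve_speed_ratio(60.6), 4))  # 0.9901
```

This is only the core ratio; a real resolver applies the correction continuously as the measured pilot frequency wanders.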

5.3.4 Sound Crew


The sound crew has one of the most demanding jobs on any film set because sound recording needs full attention
all the time an actor is performing. The sound crew is usually asked to do its job without disrupting the set. As shots
are set up, the sound crew cannot expect the lighting crew to arrange the instruments in a way that facilitates sound
work. However, this does not mean that the sound crew should not be assertive when elements are conspiring to
make recording high-quality sound difficult. They can always give suggestions that will ultimately help in getting the
best result in both areas.

The sound crew is generally composed of two people:


 Sound Mixer: The production sound mixer (or recordist) records sound during filming. This person is also
responsible for mixing the various soundtracks into the film’s composite soundtrack, which is then put
onto the film with either a magnetic or optical stripe. The mixer has to take responsibility for mainly four
things: setting the microphone position, addressing any excess noise disturbing the required recording,
determining the best way of getting good quality voice in the given shooting space and monitoring audio

levels for optimal recording. Refer to Figures 5.9a and 5.9b.

Note

To balance a strong voice against a weak voice, use our mic angle and placement rather than
riding gain (volume) on our mixer/recorder. Let the strong voice strike the mic slightly off-
axis and/or from a little more distance than the softer speaking actor. This will balance the
relative volume of both people without having the background noise continually changing
during the shot.

 Boom Operator: The boom operator is a sound crewmember who handles the microphone boom, a long
pole that holds the microphone near the action but out of frame, allowing the microphone to follow the actors
as they move. The boom operator mainly needs to take care of two things: the boom shadow should not
fall on any element present in the scene, and the mic should not be visible in the main scene area.

Figure 5.9a: Pivoting a single mic back and forth between two characters is usually the best approach

Figure 5.9b: Subject movement requires the boom to lead the subject through a space
 If the production is large, then there is a third member in the sound crew, called the Cable Puller, who helps
with setup and keeps the microphone cords out of the way during the shot.

5.3.4 Automated Dialog Replacement
As we have seen above, the recording of dialogue usually occurs on the set during filming, and this is referred to
as “Production Dialogue”. Sometimes, while actors are on the set but cameras are not rolling, the company will
record additional lines of dialogue to be used later as “Wild Lines”. Examples of wild lines that would be recorded
on the set for future use include other halves of phone conversations, shouts or greetings from afar, background
ambience, alternate dialogue, and narration.

Sometimes, for any of a multitude of reasons, production dialogue is unusable and must be replaced during post-
production. Automated Dialogue Replacement (ADR) is a method of creating and replacing dialogue in post process
that used to be referred to as looping. In the old days, dialogue replacement was done by physically cutting out short
sections of the original dialogue (consisting of one or two lines) along with the appropriate picture. These sections
were formed into continuous loops. That’s why the process was called “looping”.

Better technology greatly simplified the process. In the ADR process, the physical loops have been done away
with. Instead, the entire reel of picture and the entire reel of original sound are threaded up in sync. An entire reel
of blank audio stock is set up on a recorder. A computer is fed the start and stop footage of each “loop” that needs
to be recorded. All three machines roll down, in sync, to the first “loop” and the process begins. The actor watches
the projected footage and listens to the cue track on headphones. A series of three audible beeps alerts the actor as
the system rolls forward towards the record start point. The take is recorded on the blank stock. At the completion
of each take, the computer rewinds all three machines back to the programmed start point and the process repeats
itself. When the loop has been successfully recorded, the entire system moves ahead to the next programmed set
of cues.
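The cue computer described above works from footage counts. Assuming standard 35 mm film (16 frames per foot, running at 24 frames per second), a rough sketch of converting cues and stepping through them might look like this; the class and names are hypothetical, not taken from any real ADR system.

```python
FRAMES_PER_FOOT = 16   # 35 mm film
FPS = 24               # sound speed

def footage_to_seconds(feet, frames=0):
    """Convert a 35 mm feet+frames cue into seconds of running time."""
    return (feet * FRAMES_PER_FOOT + frames) / FPS

class AdrCueList:
    """Hold (start, stop) loop cues in feet and step through them in order."""
    def __init__(self, cues_in_feet):
        self.cues = [(footage_to_seconds(a), footage_to_seconds(b))
                     for a, b in cues_in_feet]
        self.current = 0

    def next_loop(self):
        """Return the next cue, as the system rolls ahead to each loop."""
        cue = self.cues[self.current]
        self.current += 1
        return cue

cues = AdrCueList([(90, 96), (150, 153)])
print(cues.next_loop())  # (60.0, 64.0)
```

Each returned pair is where all three machines would be rolled, in sync, before recording the take.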

With the current ADR process, editing and replacing of sound has become much easier for the editor.

5.3.5 Sound Effects


Though sound effects are used very frequently, they are often virtually unnoticed. The postproduction team takes
care of the type of the sound effects, their frequency and employment. Whether we know it or not, most of the
sounds we hear in any given scene in any recent movie were added after the filming. The audio captured on-location
is usually only dialog. Other sounds, which create the atmosphere and environment of a scene, are separately
recorded or gathered. Take for example a scene in which two people are conversing in a park. Only the actors'
dialog is recorded on-location. Sound technicians will later add the sounds of birds chirping, wind rustling through
tree leaves, a dog barking in the distance, the two characters' footsteps as they walk, etc. There are libraries full of
different sound effects for the filmmakers to choose from. However, many filmmakers prefer original sound effects
for their film. Often a sound designer will go out and look for sounds to record.

5.3.6 Ambience Sound


Ever stopped to listen to the hum of an air conditioner, the patter of the rain, the sounds of a stream, the wind in the
trees, or just background voices in a classroom? All these types of sound are collectively termed “ambience.”

We might not have even noticed these ambient sounds, because our brain filters them out as insignificant to the
activity or situation we’re focusing on at a given moment. But in video, leave out the ambience, especially if we’re
cutting from different scenes in which the background sounds change, and the results will be rather choppy if not
annoying.

5.3.7 Dubbing
Once the dialog, sound effects, and other audio are recorded, it is time for audio dubbing. Each separately recorded
sound exists on its own sound track. Each of these sound tracks can be manipulated in different ways. They can be
faded in or out, have their pitch changed, or have many other effects applied.
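One of those manipulations, the fade, is just a ramp applied to the sample values at the head or tail of a track. A minimal sketch with linear ramps (real consoles often use smoother curves; the function name is our own):

```python
def fade(samples, fade_in=0, fade_out=0):
    """Return a copy of a mono track with linear fades applied.

    fade_in / fade_out are ramp lengths in samples.
    """
    out = list(samples)
    n = len(out)
    for i in range(min(fade_in, n)):
        out[i] *= i / fade_in              # ramp up from silence
    for i in range(min(fade_out, n)):
        out[n - 1 - i] *= i / fade_out     # ramp down to silence
    return out

track = [1.0, 1.0, 1.0, 1.0]
print(fade(track, fade_in=2, fade_out=2))  # [0.0, 0.5, 0.5, 0.0]
```

Crossfading two tracks during a dub is the same idea: fade one out while fading the other in over the same span.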

Usually the director and producer attend these dubbing sessions. Once the director and producer of the film
are satisfied with the final audio mix, it is time to produce the finished composite sound track, which is what the
moviegoers will hear. In the past, the composite sound track existed as only two tracks: left and right. Today, with
new technology, the composite sound track of a film can exist as many more tracks. This is called surround sound,
which gives a real 3D sense to the audio.

5.3.8 Audio Levels


Good quality sound both adds reality to the action depicted and amplifies a movie's impact, especially for the small
movies typically featured on the Web or on CD-ROM. But there's more to editing sound with video than merely
matching it to the pictures on a timeline. All the audio elements (narration, effects, ambience, music) have to blend
with each other and with the video. This means paying close attention to the levels at which the audio is recorded
throughout the editing process. The Audio Operator controls sound levels. Refer to Figure 5.10.
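Blending levels is usually done by ear on a mixing desk, but the underlying measurement is simple: compute the average (RMS) level of each track in decibels and scale tracks toward a common target. A small sketch of that arithmetic (the -20 dB target is an arbitrary example, not an industry figure):

```python
import math

def rms_db(samples):
    """Average (RMS) level of a buffer in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return float("-inf") if rms == 0 else 20 * math.log10(rms)

def match_gain(samples, target_db):
    """Scale a track so its RMS level hits target_db (simple levelling)."""
    change_db = target_db - rms_db(samples)
    gain = 10 ** (change_db / 20)
    return [s * gain for s in samples]

music = [0.05, -0.05, 0.05, -0.05]          # too quiet next to the dialogue
levelled = match_gain(music, target_db=-20.0)
print(round(rms_db(levelled), 2))  # -20.0
```

An operator riding a fader is effectively adjusting this gain by hand, moment to moment, rather than once per track.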

Figure 5.10: Audio operator controlling audio levels (the callouts in the figure point out a sound level that needs to
be adjusted, and the sound level adjusted according to the requirement)

5.3.9 Equalizer (EQ)


It is a device used to cut and boost individual frequencies of an audio signal using a number of filters. The name
“equalizer” comes from the original application of correcting distorted audio signals to sound closer to the original
source. It divides the sound input into smaller ranges of tones. This lets us adjust each range of sound to fit our
preferences rather than just the high or low frequencies, as typical treble and bass buttons do. Equalization is one
of the most powerful tools we can use to enhance audio quality. Through the process of equalization (EQ) we can
amplify or reduce selected frequencies or groups of frequencies that comprise a given sound file. Refer to Figure
5.11.

Figure 5.11: Equalizer (EQ)
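The idea of dividing the input into ranges of tones and scaling each range can be shown with a deliberately crude two-band example. The one-pole filter and its smoothing factor below are our own simplification; real equalizers use banks of far more precise filters.

```python
def two_band_eq(samples, bass_gain=1.0, treble_gain=1.0, alpha=0.1):
    """Crude two-band equalizer sketch.

    A one-pole low-pass (smoothing factor alpha) splits each sample into
    a slow-moving low band and the remainder; each band is scaled and
    then recombined.
    """
    out = []
    low = 0.0
    for s in samples:
        low += alpha * (s - low)      # slow-moving (bass) component
        high = s - low                # what's left: the faster content
        out.append(bass_gain * low + treble_gain * high)
    return out

sig = [1.0, 1.0, 1.0, 1.0]
flat = two_band_eq(sig)                    # unity gains: essentially unchanged
no_bass = two_band_eq(sig, bass_gain=0.0)  # low band removed
```

A graphic EQ simply extends this to many narrower bands, each with its own slider.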

5.3.10 Woofer
The bass and lower midrange sounds are reproduced by the woofer. To operate efficiently, a woofer’s cone should
be made of material that is stiff, yet lightweight. Cones made of polymers, polypropylene, light metals, or poly mixed
with other materials including carbon strands and metals, provide excellent sound. Refer to Figure 5.12.

Figure 5.12: Front and backside of the woofer

5.3.11 Audio Level Controls


There are many controls available to control the audio in any audio system. Each control button has a specific use
and while recording the audio or listening to it we can adjust the control for better audio quality. Let us have a look
at some of these controls. Refer to Figure 5.13.

Figure 5.13: The XPS 210 subwoofer showing the audio control knobs and buttons
 Bass: These control the low end of the audio frequency spectrum, from approximately 20 Hz up to 400
Hz or so. An instrument producing bass tones vibrates air more slowly than one producing Treble tones. In
audio, bass is subdivided into mid-bass and sub-bass regions.
 Mid-bass: The segment of the audio frequency spectrum covering sounds produced in the upper bass and
lower midrange region.
 Treble: These control the highest frequencies in the audio spectrum, above 1.3 kHz and usually with an
upper limit of 20 kHz.
 Mid-range: The segment of the audio frequency spectrum between the bass and treble frequencies. The
mid-range includes most voices and the fundamental tones produced by most musical instruments.
 Attenuation: These controls are useful when we want to reduce the higher and lower audio levels.
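The band boundaries listed above can be captured in a tiny helper that names the band a given frequency falls into (the cutoffs are approximate, as the text itself notes):

```python
def band(freq_hz):
    """Name the audio band a frequency falls into, per the ranges above."""
    if freq_hz < 400:
        return "bass"        # roughly 20 Hz up to about 400 Hz
    if freq_hz < 1300:
        return "mid-range"   # between the bass and treble frequencies
    return "treble"          # above 1.3 kHz, up to about 20 kHz

print(band(60), band(800), band(5000))  # bass mid-range treble
```

Mid-bass straddles the first boundary, which is why the text describes it as covering the upper bass and lower midrange rather than a band of its own.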


5.4 Summary
In this session, Dealing with Audio, you learned that:
 The key to optimal microphone performance is in the placement of the microphone in relation to the sound
source. If a microphone is too close, too far, or off axis, complications will result, including poor frequency
response, noise and distortion.
 There are four basic mic placements from which all mic setups are built: boom, plant, lavaliere and wireless.
This priority is sometimes referred to as the Hierarchy of Microphone Techniques.
 ADR is an acronym for automatic dialogue replacement. In this process the actors are called back during
the post-production process to re-record dialogue that wasn’t recorded properly during the shoot. The editor
supervises this process and matches the newly recorded lines to the actor’s mouth on film.
 Aim the mic from ABOVE the subject so that the mic points DOWNWARD. The line of sight reaches from
the front of the mic, to the mouth, and then towards the ground. Background noise and ambience will strike
the sides of the microphone (which is the maximum rejection angle) rather than striking the mic along its
most sensitive front axis. In the event that it is impossible to mike from above, the next best option is to mike
from below, so that the line of sight terminates with sky.
 Once the dialog, sound effects, and other audio are recorded, it is time for audio dubbing. Each separately
recorded sound exists on its own sound track. Each of these sound tracks can be manipulated in different
ways. They can be faded in or out, have their pitch changed, or have many other effects applied. The
technicians who control these effects are called mixers.


5.5 Exercise
1. The omni directional mics pick sounds from all directions; it might sound good to use it when recording on large
sets with many actors.

a. True b. False

2. Miniature Omni directional mics are called binaural microphones.

a. True b. False

3. In general, the closer we are to the sound source, the more directional the mic should be.

a. True b. False

4. Lavalier, or clip-on mics are small condenser mics that we see clipped to the front of newscasters.

a. True b. False

5. Modern lavaliers can be described as being either proximity or transparent.

a. True b. False

6. The long thin microphones that we see sticking out of video cameras are referred to as lavaliers.

a. True b. False

7. A plant is any microphone fixed in place.

a. True b. False

8. Corded mics send the audio signal over the airwaves using a transmitter and receiver.

a. True b. False

9. The production tune mixer records sound during filming.

a. True b. False

10. Sometimes, while actors are on the set, but without cameras rolling, the company will record additional lines of
dialogue to be used later as production dialogue.

a. True b. False


6 Editing

Learning Outcomes
In this session, you will learn to -

 Explain the basics of editing


 List the various editing equipment
 Create a rough cut
 Explain the phenomena of fine cutting and transitions
 Explain sync editing

6.1 Introduction
Whether we prefer the quick-cutting MTV look, a traditional film-cutting style or any other way of cutting, the goal of
editing is to successfully tell a story. In this regard editing can be a continuation of the writing process: when the
film is shot, the editor needs to rewrite the script using the footage that exists. As the shooting is already
done, the rewriting will be restricted to the footage available in hand.

Through editing, shots are combined in accordance with the script to create the finished movie. A shot must be as
short as possible while still achieving its purpose. If it is too long, the audience will be bored; if it is too short, they
will be frustrated. Once the audience grasps the meaning of a shot, it’s time to cut to the next shot in the scene.

Editing is the act of completing the pacing and narrative structure of a film and its soundtrack by cutting and splicing
the shots together to make a final, comprehensible story. The individual who edits the film is the editor. Very often,
the success or failure of a production may rely on the quality of the editor’s work. Sharp film editing can make a
mediocre production look good and a good production look that much better. Inversely, sloppy editing can unhinge
a solid script and even negate strong efforts by the director, the actors, and technical crews.

The film editor must know how to tell a story, be politically savvy when working with directors and studio executives,
and have a calm and confident demeanor. Millions of dollars of film and the responsibility of guiding the picture
through post-production and into theaters rest in the editor’s hands. Scenes may have been photographed poorly
and performances might have been less than inspired, but a skilled and creative editor can assemble the film so
that the audience will never see these imperfections.

6.2 Editing Basics


Because separate scenes that may not take place one after another often share elements, such as location or key
characters, it's helpful for the filmmaker to be able to shoot out of order. For instance, if the story calls for the location
of the film to start in Los Angeles, move to New York for the middle, move back to Los Angeles for the climax, and
then finally back to New York for the conclusion, it would obviously not be efficient to travel back and forth across
the country throughout the filming. Thanks to editing, the LA scenes can be shot at one time, and then the film crew
can fly to New York for the rest.

The first stage of editing begins after the editor is given the dailies (footage). He/she synchronizes the dailies with
the parallel sound track, which has been transferred to magnetic stock. Because the director has probably engaged in
coverage, the process of shooting every scene in as many different ways and from as many different angles as
possible, the editor has a lot of flexibility. Let us have a look at the factors that determine the proper length of a shot.

 Audience Expectation: Sometimes the editor cuts to a new shot simply because the audience does not
generally like long shots. But this does not apply to all shots all the time. The audience may need to
get closer, move further away, or see a different angle of the action. Audience expectation works on a subliminal
level; still, if the expectation is not met, they will feel it and react disapprovingly. For instance, if a character
is injured in a wide shot, the audience will want to see a close-up in order to clarify what happened. If the first
shot drags on too long, it will frustrate the audience and possibly impede their understanding of the action.
Dragging a shot out too long is a common error among beginner filmmakers.
 Comprehension: Shots require various viewing times for the audience to understand them. Simple
compositions, static subjects, and shots similar to their predecessor need minimal screen time for
comprehension. On the other hand, complicated compositions, moving subjects, and shots vastly different
from their predecessor need more screen time. Despite this, the speed with which an audience can absorb
the meaning and purpose of a shot should not be underestimated.
 Action Requirements: Some shots include an action that must be completed before cutting to the next
shot. If the action is too long to hold audience interest and curiosity, we should compress it using various
editing techniques.
 Editor Imposed: In most of the situations, the editor decides shot length based on audience needs.
Occasionally, the editor will impose a cut to create a response in the audience. This can be to: create
emphasis, maintain rhythm, surprise the audience, or make a symbolic point.

Sometimes it is better to edit the sound first and make the picture fit. This is usually true when we need to have
certain actions happen when certain words are heard, as in a how-to video, or in the case of a montage sequence
cut to music. If every action must be seen in its entirety, then it makes more sense to edit the picture first.

Audio editing also demands care in many areas. If we have a digitized file of a live performance
an hour or two in length, we should separate the file into shorter audio clips by song. Once we have five or ten separate
audio clips, we sort through each clip and perform some basic clean-up editing to remove unwanted artifacts, such as
coughs, sneezes, loud inhales on a vocal mic before a song starts, a clank against the mic stand, and so on. After
this basic clean-up we can add fade-ins and fade-outs for each clip. In general, there are no set rules about how
long fade-ins and fade-outs should last at the beginning and end of a tune. Try several options and use your ear to
determine the timing of the fade-ins and fade-outs that works best with each song.
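A fade-in or fade-out is simply a gain envelope applied to the audio samples. The sketch below, in plain Python on a list of sample values, shows the idea; the function name and parameters are our own illustration, and a real project would of course use an audio editor or library rather than raw lists.

```python
def apply_fades(samples, sample_rate, fade_in_s=2.0, fade_out_s=3.0):
    """Apply linear fade-in/fade-out envelopes to a mono clip.

    `samples` is a list of floats in [-1.0, 1.0]. The gain ramps from
    0 to 1 over the fade-in, and from 1 back to 0 over the fade-out.
    """
    out = list(samples)
    n_in = min(int(fade_in_s * sample_rate), len(out))
    n_out = min(int(fade_out_s * sample_rate), len(out))
    for i in range(n_in):                  # ramp gain 0 -> 1
        out[i] *= i / n_in
    for i in range(n_out):                 # ramp gain 1 -> 0
        out[len(out) - 1 - i] *= i / n_out
    return out
```

Trying different values for `fade_in_s` and `fade_out_s` (or replacing the linear ramp with a curve) is exactly the “use your ear” experimentation described above.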

Typical problems that emerge in the editing room are, for example: 1) lack of different kinds of continuity; 2) cases
where the emotional intention of a scene is not realized: we don’t laugh at what was intended to be funny, or we
laugh at a scene where we were supposed to cry; 3) the audience lacks information necessary to understand the
relations between the characters or the action; or 4) the narrative creates expectations that are not fulfilled by the
story as it evolves.

Such problems might not arise from the quality of the individual scenes, but from the fact that there are too many of
them or that, when assembled, they do not produce the necessary dramatic flow.

Though editing comes in the postproduction stage, it is always a good idea to select our editing package
before we start shooting. With this we can get a better idea of how to shoot each scene and will know the solutions for
potential problems beforehand. In general, editing falls into two categories, known as on-line
and off-line. Understanding the difference between these two terms is essential to knowing the best quality
that we will get.
 On-line Editing: When we capture high-quality video, edit it, add special effects, and finally output it
to a final videotape master, we are editing on-line. Outputting from our computer directly to film
using a special film recorder is also considered editing on-line. In both cases the editor is creating the
final master output, so quality output is a must. Yet it is not always the case that on-line editing is

performed on our own computer, mainly because of the cost involved. This process needs equipment like a
Digital Betacam camera to get the best-quality footage and a DigiBeta deck to connect to our
computer; purchasing or renting them is quite expensive.
 Off-line Editing: Off-line editing, on the other hand, occurs when we use low-quality or proxy footage
to create a rough draft of our project. The term does not refer to specific equipment; it is assumed, however,
that we are working with the highest-quality video equipment to which we have access.

If we’re determined to cut costs, we can edit off-line at a much lower cost per hour than on-line. It works like this: using a
computer-based editing system (such as Avid, Media 100, or Premiere), we can record all of our cuts and transitions
for both video and audio on disk as an EDL (edit decision list). Once we’ve made all of our
editing judgments, the list can be printed out or put onto a disk that the on-line system can use to quickly reference the
timecode locations for every cut.
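At its simplest, an EDL is a numbered list of events, each pairing source In/Out timecodes with the record (master) In/Out timecodes where the cut lands. The sketch below formats one event in a simplified, CMX 3600-like layout; the exact field widths and the reel name are illustrative assumptions, not the authoritative format.

```python
def frames_to_tc(frames, fps=25):
    """Convert a frame count to HH:MM:SS:FF timecode."""
    ff = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{ff:02d}"

def edl_event(num, reel, src_in, src_out, rec_in, fps=25):
    """Format one video cut ('V C') as a simplified EDL event line."""
    rec_out = rec_in + (src_out - src_in)   # record duration = source duration
    return (f"{num:03d}  {reel:<8} V     C        "
            f"{frames_to_tc(src_in, fps)} {frames_to_tc(src_out, fps)} "
            f"{frames_to_tc(rec_in, fps)} {frames_to_tc(rec_out, fps)}")
```

A list of such lines is all the on-line system needs to re-create our off-line decisions at full quality.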

6.3 Editing Equipment


The greatest compliment any editor can receive is that their editing is invisible. An editing assignment is considered
successful when it goes unnoticed on the screen. Ironically, an editor invests weeks or months of intensive work to
achieve the impression that nothing has been done at all. Editing is important because it allows the director to film
out of order, to make multiple takes of each shot, and it allows for aesthetic decisions to be made after filming is
complete.
 VTR (Video Tape Recorder)
Most editing systems have two video decks: a high-quality deck that can play the original tapes from our
camera, and a VHS deck to make viewing copies. Refer to Figure 6.1.

Figure 6.1: SONY HWDF500 HDCAM Video Tape Recorder


 Digital audio decks
DAT, MiniDisc: When audio is not recorded on the camcorder but captured separately, or when music or
audio is added from other sources such as DAT, MiniDisc, or CD, we will need suitable decks for playback.
Refer to Figure 6.2.

Figure 6.2: DAT (Digital Audio Tape)

 NTSC/PAL video monitors: A video monitor is necessary for us to know what our final output will look like.
This is the one component that we’ll have to have all the way through the post-production process.
 Speakers: All editing systems need a pair of external speakers so we can properly hear sound in our
project. Refer to Figure 6.3.

Figure 6.3: Various types of speakers


 Waveform monitors and vectorscopes: These are used to set proper video levels when capturing video.
Refer to Figures 6.4a and 6.4b.

Figure 6.4a: Three waveform monitors: the Tektronix WFM-601, WFM-601i, and WFM-601M
Figure 6.4b: 1720 Series vectorscope/waveform monitors
 Audio mixers: Mixing boards provide on-the-fly fine-tuning of equalization and gain to control the audio
quality of our production and make it easy to manage multiple audio sources. Refer to Figure 6.5.

Figure 6.5: An audio mixer making audio adjustments


6.4 Creating a Rough Cut


When shooting is finished, a “rough cut” is made, combining what seem to be the best shots in such a way as to
constitute a continuous progression. This is accomplished by first breaking the film into each separate shot so that
all of the film can be placed in the order that the narrative requires. After this is accomplished, exact places to cut
into and out of shots can be found. In this stage, control of the story and visual information is important.

There are many ways to build the first cut of a scene using a non-linear editing system. The simplest method is called
drag-and-drop editing. With this method we use the mouse to drag shots from a bin into
the timeline window, where the shots can then be arranged by dragging them into the order we want.

If our software offers storyboard editing, switch to thumbnail view in the bin, visually arrange the shots in the order
we think will work, then select them and drag and drop them into the timeline.

Three-point editing is a method of setting In and Out points to precisely control where and how frames are inserted into a timeline.
In a three-point edit, we set any three such markers, and the software (Premiere, for example) determines the fourth to match the specified
duration. This method results in a more precise edit than drag-and-drop editing.
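The arithmetic behind a three-point edit is simple: the source duration must equal the record duration, so any three points fix the fourth. A minimal sketch (frame counts instead of timecodes; the names are ours, not any particular editor’s API):

```python
def fourth_point(src_in=None, src_out=None, rec_in=None, rec_out=None):
    """Derive the missing edit point so that the source duration
    equals the record (timeline) duration, as a three-point edit does.
    Exactly one of the four arguments must be left as None."""
    if [src_in, src_out, rec_in, rec_out].count(None) != 1:
        raise ValueError("exactly three points must be set")
    if src_out is None:
        src_out = src_in + (rec_out - rec_in)
    elif src_in is None:
        src_in = src_out - (rec_out - rec_in)
    elif rec_out is None:
        rec_out = rec_in + (src_out - src_in)
    else:
        rec_in = rec_out - (src_out - src_in)
    return src_in, src_out, rec_in, rec_out
```

For example, marking source In/Out at frames 100 and 150 and a record In at frame 200 yields a record Out of frame 250.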

If the scene is based on dialogue, a good way to build the first cut is to create a radio cut. The whole idea behind this
method is to make the scene sound good first, without worrying about how it looks.

6.4.1 Refining Rough Cut


It is very important to have seamless edits in our film; each edit should be as invisible as
possible. The key to creating a seamless edit is to keep the overall story flow interesting: at no point should the audience
get a chance to wonder what exactly is happening in the story. An edit that deliberately shows
one shot having no continuity with the next is called a jump cut. This kind of technique needs a lot
of attention while editing and is mainly used to show a dream sequence occurring in a storyline. Apart from these,
the following editing techniques are also used very often. Refer to Figures 6.6a and 6.6b.
 Cutaways and reaction shots: A cutaway is a cut to a shot that is related to the main action or separate from it. In an
interview, cutaways include reaction shots of the interviewer and shots of people, places, or things referred
to in the interview. A reaction shot usually follows a shot of someone looking at an area off screen, showing
a subject’s response to some action: a person saying something and others reacting to it, a crowd cheering at a
cricket game, a person ducking when a gunshot is fired, and so on.
 Overlapping Edits: These are used to refine a dialogue scene and are also called split edits or L-cuts. If we
cut from one actor to the other at the start of each line of dialogue, the scene can start looking like a
ping-pong match. In an overlapping edit, the audio in-point is different from the corresponding video in-point;
unlike a straight cut, audio and video are not cut together, and either the audio or the video precedes the cut.
Overlapping edits help break up this rhythm by extending or shortening the
picture while leaving the dialogue the same, allowing us to see the reaction of one character as the other
talks, and so on.
 Matching Action: Cutting together different shots of an action on a common gesture or movement in order
to make the action appear continuous on the screen; also called a matched cut. Cutting from a wide shot of
a man reaching for his gun to a close-up of the gun, from a hand turning a doorknob to the door opening,
or from a shot of a man leading a woman dancing the tango to the reverse shot of her as she is dipped are all
examples of edits that need matching action.


Figure 6.6a: The train passes across the screen from left to right. This shot establishes the on-screen direction (Refer to the Colored Section for the colored image.)
Figure 6.6b: Now the train is going in the opposite direction. This will not match the scene action (Refer to the Colored Section for the colored image.)
 Matching Scene Position: When a shot is well composed we can easily predict where the viewer’s eye
will be directed. For instance, in an action shot the viewer will follow the line of action; if the shot is big and
dull the viewer will try to follow the biggest or the most colorful object in the scene. Once we know where
the viewer will be looking at the end of that shot, it is very easy for us to match the scene position in
the following shot. This technique is especially helpful when we are trying to match action. Refer to Figure
6.7.

Figure 6.7: The two sides of the Liberty. Some might say this is crossing the line. But such a cut is not
confusing; in fact, it makes a strong, informative beginning to a sequence
(Refer to the Colored Section for the colored image.)
 Matching Emotion and Tone: While shooting, continuity of the actors’ emotions and tones is very essential.
Apart from this, the camera angle, the movement of an actor, and the lighting also need to be similar when moving into
the next continuity scene. If our scene is about a heated argument between two people, a scene that moves
from long shot to close-up will look appealing, rather than a shot cut in between in such a way that
the focus on the emotions is lost.

6.5 Fine Cutting


The next phase is the “fine cut”, which, after all the frame-by-frame manipulations are accomplished, represents the
final arrangement of precisely trimmed shots and sequences. The fine cut is used as a guide for cutting the original
negative and as detailed instructions for transition techniques such as fades, wipes, and dissolves. The
sound track is mixed, and foleys (sound effects created to match certain sequences) are added. Finally, an
“answer print”, which combines image and sound with the placement of an optical track along the edge of the print,
is created.

6.6 Transitions
Editing deals with all aspects of filmic rhythm - from the transition of one image to another, or
the detailed musical rhythm in a small sequence of edits, to the most general balancing of pace and rhythm in the
overall narrative structure. When we switch from one clip to another, we are making a transition (also known as a
“cut”) between perspectives. This switch is a fundamental part of editing, and getting smooth transitions can take a
lot of time and patience. A smooth transition is one where the viewer doesn’t even realize we have switched their
perspective. The golden rule for transitions: don’t cut just for the sake of cutting. Make sure that the perspective we
cut to is the best way to show the moment we want to show.

Transitions are effects that are used to establish a link between two shots. The most commonly used is the cross-dissolve,
and various types of wipe transitions are also used quite often. In the early days of filmmaking only
fade-in and fade-out transitions were used. Today’s filmmaker relies more on the techniques given below:
 Hard Cuts: This expression refers to an edit between two very different shots, without a dissolve or other
effect to soften the transition. Hard cuts are smoothed out by matching action, screen position,
and other cues.
 Dissolve, Fades and Wipes: Instead of just a straight cut between clips, we may decide that we want to use
a transition effect such as cross-fading, wipes or dissolves to elaborate the switch between perspectives.
Transition effects can also have secondary effects to the theme and flavor of our content; for example, a
“fade-to-black” transition can suggest the passage of time. Movies in the Star Wars saga used many “wipe”
transitions to indicate changes in place and character. The cross-fade transition can allow for very smooth
and subtle changes. Refer to Figures 6.8 and 6.9.

Figure 6.8: One image getting faded into another image creating a fade effect
(Refer to the Colored Section for the colored image.)

Using a dissolve to transition between scenes can add a feeling of smoothness and serve to slow down the pacing
of our story. Dissolves usually indicate a period between two scenes where the impact of the first scene needs to be
carried forward for a while in the next scene. They can also indicate the start of a dream sequence or flashback.
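Under the hood, a cross-dissolve is just a weighted average of the outgoing and incoming frames, with the weight sliding from one clip to the other over the transition’s duration. A minimal sketch on flat lists of 0-255 pixel values (real frames would be image buffers in an editing or compositing package):

```python
def cross_dissolve(frame_a, frame_b, t):
    """Blend two frames at progress t in [0, 1].

    At t=0 the result is frame_a, at t=1 it is frame_b; stepping t
    across the transition's length produces the dissolve."""
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]
```

A fade to black is the same operation with `frame_b` set to all zeros.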

Figure 6.9: Dissolve effect (left) and blinds effect (right)

A wipe is visible on screen as a bar traveling across the frame, pushing one shot off and pulling the next shot into
place. It is rarely used in contemporary film, but was common in films from the 1930s and 1940s. Fades and wipes are not
used much today, as they are considered old-fashioned effects, but at times we still see them used for comic
effect. A fade is a visual transition between shots or scenes that appears on screen as a brief interval with no picture:
the editor fades one shot to black and then fades in the next. It is often used to indicate a change in time and place.

With current software packages, transitions are easy to use: just select the transition we want (we can preview it
in the small screen), set its speed, and drag the bar labeled with the transition’s name (Cross
Dissolve, for instance) between the two clips that we want to transition.

6.7 Sync Editing


As we have seen in the previous session, good audio is extremely important for our film. It is acceptable if our images are a little
lower in quality, but bad audio will quickly result in a frustrated, bored audience that most likely won’t be able
to follow our story. Good sound synchronization not only strengthens the effect of a video edit but
also covers up bad recording artifacts. A few simple sound effects can ease the transition between pieces of audio
shot at different times or in different locations.

We’re all familiar with the fact that music is often the most obvious sound effect, and with the experience of music
providing important pieces of information. Sometimes, it is the musical score that carries all of the dramatic pacing
in a scene. Try watching the last few minutes of Jurassic Park with the sound turned down; we will realize that the
movie doesn’t really have a strong ending on its own. Instead, we’re simply led to the ending by the musical score.

We have seen how sound effects are used with images to create the spectacle of reel-life drama. But now
imagine a film that has great sound and dialogue that do not match the lip movements of the actors.
All we will get is tremendous irritation while watching the movie.

Good sound editing is often used to “dress” a set to make it more believable. Good dialogue synchronization plays
a key role in scenes where there is not much background sound, but where a conversation between just two
subjects is going on. At such points the audience is concentrating on the dialogue more than the visuals. We’ll
need to do a fair amount of editing to get our dialog organized so that it can be easily adjusted and corrected, and
to prepare it for the final mix.

Checkerboarding (or splitting tracks) is the process of arranging our dialog tracks so that each voice can be easily
adjusted and corrected. We use this method to separate different speakers onto different tracks so that we can
manipulate and correct their dialog with as few separate actions as possible. It’s called checkerboarding because,
as we begin to separate different speakers, our audio tracks take on a “checkerboard” appearance, as can
be seen in the following image.

It is not necessary to separate out every single voice, or even every occurrence of a particular speaker.
Though we might be trying to split out a particular actor, splitting tracks during short lines or overlapping dialog
may not be worth the trouble. In addition to splitting up our dialog, we’ll also need to move all of the sound effects
recorded during production onto their own track.

Non-linear editing makes it very simple to select the relevant portions of the sound track and copy
and paste them to a new track. Remember to copy, not cut, or we’ll throw the remaining audio out of sync. Here we need
to copy the stereo information - that is, the sound of both channels - for each actor.
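The bookkeeping behind checkerboarding can be pictured as grouping clips by speaker, one track per voice. A minimal sketch (the clip tuples and track names are our own illustration, not any editor’s API):

```python
def checkerboard(clips):
    """Assign each dialog clip to a track dedicated to its speaker.

    `clips` is a list of (speaker, start, end) tuples in timeline
    order; the result maps each speaker to their own 'track', so each
    voice can be adjusted with as few separate actions as possible."""
    tracks = {}
    for speaker, start, end in clips:
        tracks.setdefault(speaker, []).append((start, end))
    return tracks
```

Overlapping or very short lines can simply be left on a shared track, as noted above.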

Many times, automatic dialog replacement (ADR) is used to replace badly recorded sound. The ultimate aim is
that the audio should be synchronized with the video as accurately as possible. It is always advisable to take maximum
precautions while filming a shot and recording dialogue, rather than relying on sound editing and synchronization
techniques.


6.8 Summary
In this session, Editing, you learned that:
 When we watch a film, most of us have great difficulty in consciously perceiving the editing. Of course we
know that every time there is a shift from one image to another, it is an edit, and we know that editing in
general has to do with the establishing of rhythm in film. But we are often not sure of the concrete function
of editing, and likewise of the contribution that the editing process makes to the final film.
 In video editing, a transition is used between two different segments of video. The most common transitions
are cuts, dissolves (also known as fades), and wipes. Other types of transition effects include page turns,
spins, etc.
 Sometimes simple cuts from one clip to the next work well, but at other times we might want to use fancier
transitions from scene to scene. For example, we might want to use a dissolve, or a wipe or a fade.
 After the video clips are in the right order and edited, the program is polished. The video clips can be
cropped, resized and color corrected. Titles are inserted or superimposed. Transitions (over 60 types) can
be used between clips. The camera audio, voice-over, and music are equalized and mixed. Special effects
and visual filters can be applied, from a simple sepia tone to the bizarre effects used in a music video.
 Good sound editing is often used to “dress” a set to make it more believable. Good dialogue synchronization
plays a key role in scenes where there is not much background sound, but where a conversation
between just two subjects is going on.

6.9 Exercise
1. Outputting from our computer directly to film using a special film recorder is considered editing off-line.

a. True b. False

2. Off-line editing occurs when we are using low quality or proxy footage to create a rough draft of our project.

a. True b. False

3. What does VTR stand for?

a. Video tape recording b. Video tape recorder


c. Video dialog replacement d. Video refining

4. The simplest method of first cut is called drag-and-drop editing.

a. True b. False

5. The three-point edit results in more precise edit than drag and drop editing.

a. True b. False

6. The fine cut is used as a guide for cutting the original negative and as detailed instructions involving transition
techniques such as fades, wipes, and dissolves.

a. True b. False

7. Splitting tracks is the process of arranging our dialog tracks so that one voice can be easily adjusted and
corrected.

a. True b. False

7 Color Corrections and Monitor Calibration
Learning Outcomes
In this session, you will learn to -

 Explain the factors affecting color


 Explain the importance of monitor calibration

7.1 Introduction
Specialized high-end facilities offer a great service and often use experienced colorists, who know very well what
film should look like, to color correct each shot. They are also able to process the footage in real time once all the
programming is done. However, there are a few disadvantages to using such facilities including a certain lack of
control from the part of the producer (unless he/she is able to be physically there during the process) and the time
it takes to get the master to and from the facility. Fortunately, these days we can get the same results by using
inexpensive personal computers and workstations plus readily available software.

The preference for a film look over a video look seems to be a cultural one. In many countries, including Japan, a
very well produced and crisp-looking video is considered better than film for projects that will ultimately be shown
on television or video. The reason is simple: modern television cameras are capable of producing great-looking
images, far cleaner on television than telecined film. Video doesn’t have the graininess of film, and it plays
back at 60 fields per second instead of 24 frames per second, resulting in smoother motion. In other parts of the
world, including America, people tend to prefer the look of film.

We have grown up so used to the dreamlike look of film, with its organic grain and slower-than-life speed of 24 frames per
second, that almost anything produced in the format tends to assume larger-than-life proportions and grab
our attention more effectively. This is one of the many reasons why a number of television series and commercials
are shot on film instead of video. These days, however, with the reliability of digital video formats and the advent
of HDTV, an increasing number of television series that appear to have been shot on film are actually
produced electronically and processed to look like they were shot on celluloid. This decreases production costs
considerably while still delivering the film look that audiences like so much.

In the next few sections of this session we will discuss several techniques to make our video footage look like it was
shot on film. They can be very useful if we are shooting a feature on digital video and want to give it the same look as
a film shot on celluloid. It wasn’t too long ago that only high-end digital post-production facilities had the capability
to make video look like film. With the facilities available to us today, the quality of digital film should be
such that even a professional cinematographer would think it had been shot on celluloid.

7.2 Factors Affecting Color


A film with good color balance can improve our storytelling, deliver critical emotional cues, and add impact to our
videos. Whatever kind of film we create, we need it to meet the required industry standards, whether we need to
match video footage to film footage or simply want to give our video projects a much higher perceived production
value.

7.2.1 White Balance
In the preceding sessions we understood the importance of white balance and how it can be adjusted
manually on a DV camera. Film, on the other hand, is balanced for a specific color temperature and will record
any variances. In both cases the color temperature should be adjusted as accurately as possible. Any adjustments
in color temperature, if not taken care of at the time of shooting through the use of color-correction filters and gels,
must be made in the postproduction stage. It’s a little hard to predict the effect that different color temperatures
have on film, because our own vision works pretty much like a video camera with the automatic white-balance
circuitry always on. Our brain is constantly adjusting the color temperature of light sources for us so that they are
always perceived as white. In fact, this is why we can tolerate the awful green light that certain fluorescent bulbs
emit.

Experienced cinematographers have learned what different light sources look like on film, and they can think in
terms of different colors. We too can master this process if we dedicate a fair amount of time to this aspect
of filmmaking. The change in color temperature can vary a lot depending on various factors.

With a very simple exercise we can understand how film sees different color temperatures. Get close to a window on
a bright, sunny day, and turn on the light bulbs inside the room. If we focus our attention outside for a few minutes
and then look at the lights inside, they will look very orange. After a few seconds, our eyes will adjust to the color
temperature of these lights and we’ll start to perceive them as white. Now look outside, and the sunlight will appear
very blue. As we can see, our brain will always try to give us the impression that almost any light source is white;
all it takes is some time for adjustment. Film, by contrast, cannot adjust the color temperature on its
own.

At its best, a DV camera can shoot beautiful images with nicely saturated, accurate colors. But at times, operator
errors or bad weather conditions mean we need to correct the color of our footage. Sometimes we might
change the footage for artistic reasons: maybe we want to emphasize an emotion in a scene using specific
colors, or just want to make a particular shot more colorful to give it the look of a fantasy.

Most editing software provides powerful tools that can correct and change the colors in our video, whether it
has not been shot correctly or simply for artistic requirements. Any compositing program, such as Softimage Digital
Studio, Discreet Effect, Eyeon Digital Fusion, Nothing Real Shake, or Adobe After Effects, can be used for color
correction. The main advantage of using such applications is the increased complexity of control we are given
as users. Because the traditional compositing process for film involves a lot of color matching between layers and
elements, the color-control tools must be more complex in order to ensure perfect results. Another excellent reason
to use compositing programs for color correction is the availability of great plug-ins for this purpose. Learning the
tool options for good color correction is essential and invaluable. Usually color corrections are done during the
editing process. Let us have a look at the factors that can affect the color balance of the footage.

7.2.2 Matching Different Camera Footage


Though tape formats are standardized, cameras can consist of different components and customizations.
Due to this, cameras from different vendors can produce images with different colors and qualities. At times we have
to shoot footage for the same film using multiple cameras, as required by one scene or several scenes.
As a result, we could easily end up with different shots in the same scene that do not have the same color quality
or sharpness. Another problem: after the principal shoot is over, we might shoot a few extra shots that
are required in the film but were not captured during the main shoot - for example, scenery, or different
angles of a particular set. In this case, though we are using the same camera, the color may vary if we
are shooting during a different time of the year and the light is slightly different. Even this causes a difference
between two shots.

Most of the time, matching footage from one camera to another can be difficult, as odd
combinations of factors make footage from one camera look different from another’s. We need to take the
following points into consideration while making color adjustments:
80 Aptech Limited
Color Corrections and Monitor Calibration
 All cameras do not have the same settings for color levels and sharpness details. These differences can
cause a tonal difference in shots and cause major problems while matching the shots. We can reduce this
problem by using unsharp mask filters to sharpen the details from softer cameras.
 While adjusting the flesh tones, we need to be extra careful. The shadows of the actors while blue screening
appear to be bright whites on the walls, couch and other props. In the realistic shots this looks fake and
needs color corrections for it. When we work on the color corrections in these areas make sure that only
shadows are corrected and the rest of the area is balanced.
 Identify which footage looks the best of all among the many shots. According to that adjust the tonal ranges
of remaining shots to get the best visual output. If the main footage also has some problem then first correct
it using a filter and then use additional filters to remove problems in the remaining footage. We should
remember that these techniques could vary according to the footage we have in our hand and depending
on the contrast between the color tones.
 At times we just have to adjust a part of an image in one color and another part of the same image needs
to be corrected in a different manner. For example, in a scene the background needs to be corrected using
one filter where as the foreground with another. Most of the editing tools allow us to select a particular area
of an image and correct it.
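As an aside, the unsharp-mask idea mentioned in the first point above can be sketched in a few lines of code. This is an illustrative, simplified one-dimensional version written for this book, not taken from any particular editing package; real filters work in two dimensions and expose blur radius and threshold controls as well.

```python
def unsharp_mask(pixels, amount=0.5):
    """Sharpen a 1-D row of grayscale values (0-255).

    A blurred copy (simple 3-tap average) is subtracted from the
    original; the difference (the "detail") is scaled by `amount`
    and added back, which steepens edges.
    """
    n = len(pixels)
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
        for i in range(n)
    ]
    return [
        max(0, min(255, round(p + amount * (p - b))))
        for p, b in zip(pixels, blurred)
    ]

# A soft edge becomes steeper (note the under/overshoot at 93 and 147):
print(unsharp_mask([100, 100, 120, 140, 140], amount=1.0))
# → [100, 93, 120, 147, 140]
```

Raising `amount` exaggerates the edge contrast further, which is exactly how footage from a softer camera can be nudged toward the look of a sharper one.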

7.2.3 Luminance Clamping


When footage is transferred to the system for editing, we will often see that there is a vast difference in color. To understand this problem and find a solution for it, we need to know what luminance is all about. A video signal carries a brightness value, referred to as that video signal's luminance value. The video signal also contains color information, and with these two values (what color and what brightness the image has) our video monitor can recreate the image. This clearly indicates that if the luminance and color values change, the images will differ from the original. The luminance difference occurs because our computer uses a slightly different measure of luminance than video signals do.

Clamping is the process that establishes a fixed level for the picture level at the beginning of each scanning line.
Most of the DV cameras have a luminance setting which works automatically. We can control luminance manually
as well. For manual luma clamping we have to use a CODEC that allows us to deactivate luminance clamp.
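The luminance mismatch described above usually comes down to ranges: digital video commonly reserves level 16 for black and 235 for white (the Rec. 601 "studio swing"), while computers use the full 0-255 range. A minimal sketch of the conversion, assuming 8-bit luma values:

```python
VIDEO_BLACK, VIDEO_WHITE = 16, 235  # Rec. 601 nominal luma range

def video_to_full_range(y):
    """Map studio-swing luma (16 = black, 235 = white) to the
    0-255 full range a computer display expects."""
    scaled = (y - VIDEO_BLACK) * 255 / (VIDEO_WHITE - VIDEO_BLACK)
    return max(0, min(255, round(scaled)))

def full_to_video_range(y):
    """Inverse mapping: full-range 0-255 back to 16-235."""
    return VIDEO_BLACK + round(y * (VIDEO_WHITE - VIDEO_BLACK) / 255)

print(video_to_full_range(16), video_to_full_range(235))  # → 0 255
print(full_to_video_range(0), full_to_video_range(255))   # → 16 235
```

If one side of the pipeline applies this mapping and the other does not, blacks look milky or shadows get crushed, which is precisely the shift we notice after transferring footage.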

Now that we have been introduced to the concept of color correction in digital filmmaking, feel free to experiment as much as possible. Do color correction on a scene-by-scene basis, making sure that the overall tonality contributes to that particular scene's mood. Color correction may be used very subtly, just to avoid the typical video "perfect white", or more aggressively to convey strong emotions. We may also go to extremes and use color correction as a special effect, or even to set a style for our whole movie.

7.3 Importance of Monitor Calibration


Images we view on video should normally be sharp, display good color contrast, and show a full range of colors, without blown-out highlights or muddy shadow areas. If images don't sparkle, our monitor may need tuning. This section will help us understand the importance of setting our brightness and contrast controls to get the most out of the images displayed in a digital film.

When previewing video output, it’s important that the video monitor be calibrated. Proper monitor calibration ensures
that what we see today will match what we see tomorrow, and that the judgments and decisions we make based
on what we see remain valid. Not everyone sees colors and brightness the same way. If monitor settings were left up to personal preference, everyone's monitor would be set differently, making it impossible for people to agree on how the video footage actually looked, and therefore what changes might be needed. By choosing a calibration
standard we don’t necessarily make the picture look its best, that’s a subjective judgment in any case, but we
attempt to make it look standard. If everyone viewing a project adheres to the same standard, then there will be
much less disagreement over what, if any, changes are needed.

Even calibrated video monitors will differ in appearance due to a number of factors. Color temperature and other internal components used in the various monitor standards, like NTSC and PAL, can cause differences in picture quality. Monitoring on a television set is a big step up from trying to do it on a computer monitor. We get to see color differences and interlace problems, which are the two main items that cause trouble with computer-generated footage. Let's say we need an image for a professional DV film; on an uncorrected PC monitor it may appear that many of the image details are hidden in shadows. We may be tempted to correct that directly in the image with various adjustment functions. However, we should first correct the monitor settings and look at it again. Perhaps suddenly the shadows are not so dark and the details are quite visible. Television is a little different from computer imagery. Those who have converted their text and graphics to video have quickly discovered that the colors are different, thin lines tend to vibrate, and the picture is much fuzzier than the one on their computer screen.

There are really two kinds of TV monitors, those that make the picture look good (designed for the viewing audience),
and those that tell the truth (helpful to those analyzing their pictures to make them the best possible). We will focus
on the latter, the kind of monitor videographers use to measure the quality of their video signal, both the visible
and the invisible parts. There are distinct advantages to using a real video monitor, despite the additional cost. And
by real video monitor, we mean a monitor purpose-built for monitoring video signals, not for watching TV. Refer to
Figure 7.1.

Figure 7.1: NTSC/PAL color bars


(Refer to the Colored Section for the colored image.)

Because NTSC video devices are prone to drift out of adjustment, it is necessary to calibrate the signal and the TV
monitor frequently. Before we start editing, it is important to adjust our NTSC monitor to make sure that it displays
colors as accurately as possible. The easiest way to do this is with color bars, a regular pattern of colors and gray
tones that can be used to adjust our monitor’s brightness and contrast. To calibrate our video monitor, display a set
of SMPTE color bars. Many professional monitors and cameras can generate color bars. Be sure that the bars we
use are correctly calibrated. The simplest calibration methods involve adjustments to the Gamma, Contrast and
Brightness settings of our monitor.
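To illustrate why color bars are useful for calibration, the sketch below builds the seven primary 75% bars (white through blue) and checks that their Rec. 601 luminance steps down monotonically from left to right; it is this predictable staircase that lets us judge brightness and contrast against a known reference. The exact levels and geometry of a full SMPTE pattern are more involved than this simplified version.

```python
# The seven main bars, left to right. "75% bars" means the active
# RGB components sit at 75% amplitude (191 out of 255).
LEVEL = 191
BARS = [
    ("white",   (LEVEL, LEVEL, LEVEL)),
    ("yellow",  (LEVEL, LEVEL, 0)),
    ("cyan",    (0, LEVEL, LEVEL)),
    ("green",   (0, LEVEL, 0)),
    ("magenta", (LEVEL, 0, LEVEL)),
    ("red",     (LEVEL, 0, 0)),
    ("blue",    (0, 0, LEVEL)),
]

def luma(rgb):
    """Rec. 601 luma weighting used by SD video."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

# The bars step down in luminance from left to right, which is what
# makes them a predictable reference for brightness/contrast checks.
lumas = [luma(color) for _, color in BARS]
assert lumas == sorted(lumas, reverse=True)
print([round(v) for v in lumas])  # → [191, 169, 134, 112, 79, 57, 22]
```

On a correctly adjusted monitor, each bar should therefore be clearly distinguishable from its neighbors, with no two adjacent bars merging.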


7.4 Summary
In this session, Color Corrections and Monitor Calibration, you learned that:
● With the reliability of digital video formats and the advent of HDTV, an increasing number of television series that appear to have been shot on film are actually produced electronically and processed to look like they were shot on celluloid. This decreases production costs considerably while still delivering the film look that audiences like so much.
● A film with good color balance can improve our storytelling, deliver critical emotional cues, and add impact to our videos. While creating any kind of film we need it to meet the required industry standards, whether we need to match video footage to film footage or all we want is to give our video projects a much higher perceived production value.
● Most editing software provides us with powerful tools that can correct and change the colors in our video if it has not been shot correctly, or for plain artistic requirements.
● Most of the time, matching footage from one camera to another can be difficult, as it involves adjusting the odd combination of factors that makes footage from one camera look different from another's.
● Clamping is the process that establishes a fixed level for the picture level at the beginning of each scanning line. Most DV cameras have a luminance setting, which works automatically. We can control luminance manually as well.
● By choosing a calibration standard we don't necessarily make the picture look its best (that's a subjective judgment in any case), but we attempt to make it look standard.

7.5 Exercise
1. Most of the editing software provides us with powerful tools that can correct and change the colors in our video if it has not been shot correctly, or for plain artistic requirements.

a. True b. False

2. All cameras have the same settings for color levels and sharpness details.

a. True b. False

3. The shadows of the actors while blue screening appear to be bright whites on the walls, couch and other
props.

a. True b. False

4. ____________ is the process that establishes a fixed level for the picture level at the beginning of each scanning line.

a. Footage b. Clamping
c. Color correction d. Monitor calibration

5. Color temperature and other internal components used in various monitor standards like NTSC and PAL can cause a difference in picture quality.

a. True b. False


8 Compositing and Rotoscoping

Learning Outcomes
In this session, you will learn to -

● Create titles for films

● Explain compositing techniques such as keys and mattes

● Describe rotoscoping and its uses

● Identify the types of visual effects used in films

8.1 Introduction
The creation of visually fantastic and seamlessly integrated effects in today’s films requires a level of technical and
visual sophistication in training that few multimedia-training centers can offer.

Creating realistic digital effects is a difficult task even for professionals, but with the innovative features and
accessibility of powerful graphics software (like Maya, 3ds max, After Effects, etc.) creating fascinating, commercial-
quality effects is now within the reach of many artists and animators.

The latest special effects projects include: interactive cinema, audience participation, and exclusively computerized
movies. In the interactive cinema, the audience would be moved around in seats while watching 3-dimensional
films. Audience participation in movies is also on the horizon of special effects. There will be more exclusively
computerized movies, like Ice Age and The Matrix.

Soon, we will see that even Indian films will be visually as rich as western films. Filmmakers will combine the latest
computer technology, digital trickery, motion camera photography, and computer-generated movements to recreate
the stars of the past. Who knows when Madhubala, Meena Kumari, Kishore Kumar, and other old movie stars will
reappear?

8.2 Titling
Every production may not need fancy special effects such as 3D-rendered dinosaurs or complicated composites and morphs, but it will definitely need a title sequence at the beginning and an end credit roll at the end. For a project like a documentary, we will use titles to identify interviewees and locations. Though most editing packages provide a facility to create titles, it may not be possible to create cool animated sequences, or even a simple list of rolling credits, with them.

8.2.1 Creating Titles for Films


When we have to make titles for films, high resolution is one technical aspect that we have to take into consideration. The internal titling tool of an NLE system will create a very low-resolution title, so to avoid this problem we need to create titles in software like Photoshop and After Effects and have them transferred onto the film.
● Safe titles: In the previous sessions we discussed that, to compensate for possible differences between television sets, a TV monitor crops off some of the extra area at the top, the bottom, and the sides. As it is not possible to accurately determine how much a particular TV or monitor will overscan, some lowest-common-denominator averages have been determined. If we stay inside these boundaries, our video will not get cropped beyond the edge of the monitor.

 The Action Safe and Title Safe are the two terms that we will hear commonly while making titles for any
film. Let us understand what they are all about and see what they are used for. Refer to Figure 8.1.

Figure 8.1: Regions defined for Title Safe and Action Safe Area

● Action Safe: This area is the larger of the two regions, amounting to about 90% of the total picture area. It is symmetrically located inside the picture border. Our home sets are overscanned: we don't see the entire picture, the edges being lost beyond the border of the screen. The safe action area is designated as the area of the picture that is "safe" for action that the viewer needs to see. Refer to Figure 8.2a.

● Title Safe: The area for safe title is inside the safe action area and amounts to about 80% of the total
picture area. Titles and text are usually kept within the safe title area to make sure they can be seen in
their entirety. Refer to Figure 8.2b.

Figure 8.2a: Animation within Action safe area Figure 8.2b: Title within Title safe area
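Interpreting the 90% and 80% figures above as per-dimension fractions (one common convention; exact safe-area definitions vary by standard and broadcaster), the two centered regions can be computed like this:

```python
def safe_area(width, height, fraction):
    """Return (x, y, w, h) of a centered region covering `fraction`
    of each picture dimension: 0.9 for action safe, 0.8 for title safe."""
    w = round(width * fraction)
    h = round(height * fraction)
    return (width - w) // 2, (height - h) // 2, w, h

# For a 720x576 PAL frame:
print(safe_area(720, 576, 0.9))  # action safe → (36, 29, 648, 518)
print(safe_area(720, 576, 0.8))  # title safe  → (72, 57, 576, 461)
```

Any title laid out inside the smaller rectangle is guaranteed, under this convention, to survive the overscan of a typical home set.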

Apart from these two areas, we need to keep in mind the Safe Color concept, which is equally important while making titles. NTSC and PAL video have a much smaller gamut than our computer monitor. This means that colors that appear fine on the computer monitor may not look good on an NTSC monitor, appearing flat instead. Most titling tools in editing programs let us choose colors that are not NTSC-safe. We can study the color difference on NTSC monitors or vectorscopes and make the corrections that look best, or use tools like After Effects and Photoshop, which provide broadcast color filters that will convert our graphics to NTSC-safe colors.
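One crude way to approximate what such a broadcast-safe filter does is to pull each channel back from the extremes. Real broadcast-color filters analyze composite signal amplitude and saturation rather than clamping RGB directly, so treat this only as an illustration of the idea:

```python
def broadcast_safe(rgb, lo=16, hi=235):
    """Confine each 8-bit RGB channel to the 16-235 range, taming the
    fully saturated extremes that tend to bleed or buzz on an
    NTSC/PAL monitor."""
    return tuple(max(lo, min(hi, c)) for c in rgb)

print(broadcast_safe((255, 0, 0)))  # pure red → (235, 16, 16)
```

A title colored with the clamped values will look slightly duller on the computer monitor but far better behaved on a television set.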

While choosing colors for the title, we also have to pay attention to the background scenes that are going to play

behind the titles. For example, a white title might look good against a dark background but may not look appealing on a light background. If a title is not readable, the audience will not understand whom the credit goes to for a particular section of work done on the film.

8.2.2 Terminology of Titling


While we learn the concepts of titling, we also need to know the terms that are commonly used in the area of
titling.
 Title card: A non-moving title. Refer to Figure 8.3.

Figure 8.3: Example of a static title


 Head credits: Typical movie titles which show the head credits roughly in the following order: Studio,
production, movie title, lead actors, casting, music supervisor, costumes and make-up, designer, director
of photography, executive producer, writer, producer, director. Normally these titles fade in and fade out on
the screen.
 Tail credits (or End credits): Here we see a long list of credits rolling from bottom to top indicating the
inputs from senior to junior actors. Refer to Figure 8.4.

Figure 8.4: Rolling title credits


● Title crawl: A line of titles that moves, mostly horizontally, across the bottom of the screen. Refer to Figure 8.5.

Figure 8.5: Crawling title


 Supered: Titles that are superimposed over other video. Refer to Figure 8.6.

Figure 8.6: Superimposed titles


(Refer to the Colored Section for the colored image.)

While placing titles over video, we need to give some thought to their placement. Though titles are not an original part of the footage, they should not look completely separate from the video. The viewer should not feel that the titles are divorced from the screenplay. Whether the titles are moving or static, they have to be readable; usually this means at least four seconds on screen for a fairly short title, excluding the fade in or out. Titles can serve as another punch in our storytelling process, so give them some thought.

8.3 Compositing
Compositing is a technique by which one shot is super-imposed on another, resulting in a composite shot. In theory
compositing sounds like a very basic effect but in practical use, it is one of the most powerful tools at our disposal.
Many spectacular shots can be achieved using Compositing. In fact it is one of the most important and widely
used effects techniques. No Special FX movie can be made without using Compositing. In the movies (and TV),
“compositing” is done by filming the actors and actresses against a green or blue screen, removing the color from

the filmed scene, and then inserting a new background. Refer to Figure 8.7.

Figure 8.7: Images from Jurassic Park, a popular example of the compositing techniques applied by
Hollywood
(Refer to the Colored Section for the colored image.)

Let us have a look at the countless possibilities this technique offers.

● We can shoot our actors in the studio and composite them onto any outdoor location shot we choose.
● Sometimes the film budget does not allow the creative director to build huge sets with lots of props, but at the same time the story requires that kind of setting for the shot. In that case, an elaborate set can be built on a miniature scale. A shot of this set then serves as the background image onto which the actors can be composited. Refer to Figure 8.8.

Figure 8.8: Example of composited shots


(Refer to the Colored Section for the colored image.)
● If we replace the model aeroplane in Shots A or C with an actor suspended from the ceiling, we in effect have Superman!
● A saleable actor playing a double role in many films is very common; Bollywood storylines, especially, are created keeping this aspect in mind. So obviously the question that comes to mind is, "how do they do it?" How do we see a single actor and his twin standing in front of each other, talking to each other? Well, the technique is very simple.
● When an actor has to perform a double act with himself, we can composite a shot of the actor playing the first character onto a shot of the same actor playing the second character! In fact, the number of characters doesn't have to stop at just two. Once we composite footage of character 2 onto character 1, we have a filmstrip showing two characters on the same film, as if two different people were performing simultaneously. Now we can shoot a third character with the same actor and composite it onto the original shot if required. Thus, we get three characters on the same film, and likewise we can go on and on.
● Or, for instance, suppose we have only one good group of models as an audience. We can keep compositing this group of actors onto an earlier shot of itself, and thus create an auditorium packed with an audience without paying extra money to junior artists. This way a lot of money and time can be saved.

3D compositing can be extremely challenging but, at the same time, very rewarding. When we import our footage files into any editing program, the clips need to be arranged in the proper order. After that, we have to choose the method we are going to use for compositing. Compositing methods fall into two categories: keys and mattes.

8.3.1 Keys
A common example of keying is our everyday weather forecast on TV. The weather map is a separate computer
generated shot onto which the announcer is super-imposed, making it look as if he/she is standing in front of a giant
TV screen flashing different weather images. In reality the weatherman is standing in front of a blue or green screen
that is electronically keyed out and replaced with the image of the map.

The word “key” was derived from a digital mixer’s capability for cutting an electronic keyhole in a video picture. Into
this keyhole is placed another video signal or a matte color, which results in one image being placed over, or within,
another. Key effects go by a variety of names including Luminance Key, Chroma Key (Color key), External Key and
Downstream Key. All of the key effects work on the principle of taking an image shape presented by one video source and using it as the cut-out for placing another video source (or matte color) inside of it or around it. And
typically this cutout is defined by its brightness level in contrast to whatever else is in the scene. Let us understand
the functions of these keys in detail.
 Chroma Key: Chroma Key looks for a specific color or hue to be used as the cutout reference. Such is
the case with the TV weatherman standing in front of a blue or green wall as those specific shades of blue
or green are replaced by a computer generated weather map. Because we have to shoot in front of a
specially colored screen, and have to have very precise lighting, chroma key effects are not ideal for every
compositing task. Refer to Figure 8.9.

Figure 8.9: Background replaced by using Chroma Key


(Refer to the Colored Section for the colored image.)
 Luminance Key: Luminance Key detects the dark portions of a video picture and electronically replaces
them with another image, generally from another video source being fed into the digital mixer. Rather than
keying out pixels of a certain color, a luma key keys out pixels of a certain brightness. This keying method can be used in situations where we can't put a blue screen behind our foreground element. For instance, if footage has an overly bright sky behind an object, we can remove the sky using a luma key. Also, this key can be used to touch up parts of a chroma key that have not keyed out properly. Refer to Figure 8.10.
● External Key: External Key is a feature on a digital mixer that permits the user to "cut" his or her keyhole with an image shape fed into a discrete video input of the mixer. Any other video footage can be placed in the cutout shape.
 Downstream Key: It is also known as Superimpose and does just that. Like External Key, it also requires
an image shape (“Key Source”) to cut the keyhole and it requires some form of fill signal (“Key Fill”) to be
placed inside of it. The Downstream Key’s effect is most often used to superimpose titles or graphics over
two other video sources that are transitioning between each other.


Figure 8.10: Result of Luminance key


(Refer to the Colored Section for the colored image.)
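The principle behind chroma and luma keys can be sketched per pixel. This is a deliberately hard-edged toy version written for this book: the key color, tolerance, and threshold values are arbitrary choices, and production keyers generate soft, partial transparency rather than an on/off matte.

```python
def chroma_key(pixel, key=(0, 255, 0), tolerance=80):
    """Return 0 (transparent) if the pixel lies within `tolerance`
    of the key color, else 255 (opaque)."""
    dist = sum((a - b) ** 2 for a, b in zip(pixel, key)) ** 0.5
    return 0 if dist <= tolerance else 255

def luma_key(pixel, threshold=200):
    """Key out bright pixels (e.g. a blown-out sky): transparent
    where Rec. 601 luma exceeds the threshold."""
    r, g, b = pixel
    luma = 0.299 * r + 0.587 * g + 0.114 * b
    return 0 if luma > threshold else 255

print(chroma_key((10, 240, 20)))   # near the green screen → 0 (keyed out)
print(chroma_key((200, 50, 60)))   # skin-like tone        → 255 (kept)
print(luma_key((250, 250, 250)))   # bright sky            → 0 (keyed out)
```

Running either function over every pixel of a frame yields the matte that decides where the replacement background shows through.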

8.3.2 Mattes
As it's hard to get a good composite using a key, and because it's not always feasible to hang a colored screen behind something, we can also achieve composites using a special type of mask called a matte. Just like a stencil, a matte can be used to cut out an area of footage so that the underlying footage is visible through it. For instance, suppose we want to show a giant butterfly, larger than a building, flying behind it. While shooting, we can't put such a big blue screen behind the building, so in this situation we can use the matte technique to remove the background and reveal the butterfly, which is placed on the underlying layer.

In the analog film world, mattes are drawn by hand for each frame by matte cutters. The process is very time consuming, and the results are not always up to the mark. But the digital world has made these kinds of effects very simple.
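Digitally, a matte is just a per-pixel weight deciding how much foreground or background shows through. A minimal sketch (assuming 8-bit values; real compositors also handle premultiplied alpha and sub-pixel edges):

```python
def composite(fg, bg, matte):
    """Blend a foreground pixel over a background pixel through a
    matte value: 0 shows the background, 255 the foreground, and
    in-between values give a soft, blended edge."""
    a = matte / 255
    return tuple(round(a * f + (1 - a) * b) for f, b in zip(fg, bg))

butterfly = (200, 120, 40)  # foreground pixel
building  = (80, 80, 90)    # background pixel
print(composite(butterfly, building, 255))  # → (200, 120, 40), foreground
print(composite(butterfly, building, 0))    # → (80, 80, 90), background
print(composite(butterfly, building, 128))  # soft edge: a blend of the two
```

The intermediate matte values are what distinguish a clean digital composite from the hard cut of a simple key.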

8.4 Rotoscoping
Drawing around something in the frame so that an effect can be applied to that part of the film is a tough job if the object is moving around in the scene. If an animated creature has to go behind something in a live-action piece of film, that object can be drawn around so that a matte can be created, ensuring the creature will not show over the top of that object. If the camera is moving, then each frame of film has to be rotoscoped. If the camera is still, then the same matte can probably be used for all frames in that shot.

Rotoscoping was first used by the Fleischers for making cartoons. A rotoscope was a kind of projector used to
create frame-by-frame alignment between filmed live-action footage and hand-drawn animation. Mounted at the top
of an animation stand, a rotoscope projected filmed images down through the actual lens of the animation camera
and onto the page where animators draw and compose images.

The Rotoscope consists of an animation camera and a light source (usually using a prism behind the movement
and the lamp house attached to the camera’s open door) that projects a print through the camera’s lens and the
projected image is then traced to create a matte. The lamp house is then removed and the raw stock placed in the
camera and the drawings are filmed through the same lens that projected the image. The resulting image will then
fit the original image if the two strips of film are run bi-packed in the same projector movement (using an optical
printer). In digital film effects work, rotoscoping refers to any drawn matte, as both images can be seen composited
while the matte is being drawn, so good results can be achieved. Refer to Figure 8.11.


Figure 8.11: Greenscreen extraction, wire removals, and lots of rotoscoping make this a densely
complicated shot
(Refer to the Colored Section for the colored image.)

The digital filmmaker has many applications for rotoscoping:


● At the preliminary level, painting directly onto the frames of our video can be used to achieve certain types of effects.
● More practically, this technique is used to remove wires from models or actors, to touch up the seams in a composite, to add optical effects like lighting, clouds, and smoke, and to do color corrections.
● In addition, rotoscoping tools let us create complex alpha channels that can be used for fine compositing work.

A rotoscoping texture (sometimes called a sequence map) is the use of video within an animation, something like an animation within an animation. For example, in a cartoon animation, a television set could show a program containing another animation. Or, in the background of a foreground animation, we could include some clouds that slowly change while the foreground animation plays. The frame rate of the main animation and the "animation within the animation" must be the same.

In computer graphics, to rotoscope is to create an animated matte indicating the shape of an object or actor at each
frame of a sequence, as would be used to composite a CGI element into the background of a live-action shot.

8.5 Visual Effects


The history of special visual effects begins even before the invention of the camera itself. During the 1700s, magicians utilized many techniques to perform optical illusions and amaze their audiences. These techniques formed the foundations of special effects. The greatest changes in the evolution of special effects happened in the 20th century, with computers. Computers helped revolutionize the world of special effects in movies. With computers, we are now able to create wonderfully realistic and imaginative worlds that we would never have thought possible earlier. With the help of computers we can create people, buildings, animals, monsters, aliens, and many other creations, and they can come to life at the touch of a button. Special effects have been taken to the extreme with the use of computers.

While creating special effects for films, the special effects crew comes into the picture. They are involved in every part of the production, from assembling real elements to decorating the truth with realistic-looking fakes. The post-production editing and effects market promises to be a bonanza for the multimedia software industry. Post-production is now poised on the brink of a transformation that will result in a massive digital convergence. The fact is that in India currently, more than 90% of this market is still occupied by "old tools": analog, hardware-based, difficult, and expensive. These tools make it difficult to create mind-blowing digital effects on a low budget. Also, another reason for not using digital effects in Bollywood is that most current directors are unaware of the power of these effects and the various tools available in the market. But this scenario is changing rapidly, and over the next

five years this entire market will convert to “new tools” -- digital, software-based, friendly, inexpensive. As digital is
becoming the standard, postproduction is becoming faster and easier, providing the director with greater control and
less signal degradation. Refer to Figure 8.12.

Figure 8.12: Visual effect

8.5.1 Challenges Faced by Effect Animator


Creating realistic visual effects is a difficult task even for professional artists. But with the innovative features and accessibility of powerful graphics software like Maya, Max, Combustion, After Effects, and other digital effects packages, creating fascinating, commercial-quality effects is now within the reach of many artists and animators. For an effects animator or technical director, one of the most crucial skills is problem-solving ability. In a typical production environment, everything is in a state of flux, and unexpected changes often occur. Therefore, it is of paramount importance that our problem-solving skills enable us to meet new requirements, modifications, and challenges. In addition to creating visual effects animation, other skills, such as modeling, rendering, compositing, and lighting, are equally important in the film environment.

8.5.2 Types of Effects


Effects can be created using various techniques. Let us have a look at the categories of effects used in films.
● Special FX: The process of creating special visual images either in-camera or through other processes. It also refers to effects created using physical pyrotechnics and explosive devices.
● Optical Effects: Here, images that cannot normally be photographed by conventional means under ordinary conditions are produced by interlocking the camera and the projector. In the olden days, special visual effects were created using the optical effects process. Optical effects fall under visual special effects.
● Mechanical Effects: These effects are part of special effects. They are created with mechanical devices during the initial filming of the scene; artificial fog and rain are examples.
● Digital FX: This refers to effects created purely by means of a computer. The word "digital" in the phrase "digital visual effects" means that computer hardware and software are primarily used to create the effects. Computer-generated effects make imaginary characters like Godzilla possible, and they also create almost every effect that used to be done using models. The advantages of CG effects are their realism, flexibility, and relatively low cost (compared to the alternatives).
● Visual FX: This refers to effects created using either digital or special FX, or a combination of both. Visual effects can be created using various kinds of techniques; in modern times, almost all visual effects are created digitally. Let us have a look at some of the effects created in the past and then see modern-day technology.

8.5.3 Understanding Visual Optical Effects


Many effects were used in the earlier days of filmmaking too, but the process of creating them was very tedious, and a lot of manual work was required. Effects like the dual-film traveling matte system and blue screening could be achieved with the optical effects technique. The method of combining animation footage with live-action footage, a matte painting, or footage of a miniature model, where camera and projector interlocking is essential, is also considered an optical effect. In these methods, optical printers were used for the processing and compositing of footage. As technology has improved, the method of creating these kinds of effects has also changed. The latest electronic and computer technologies are now improving the methods of creating effects. We must remember that though the techniques change over time, the fundamentals remain the same. And only the future can tell to what extent the technology of optical effects, if not its language, will be affected.

In the earliest stage, optical printers were custom built by the major studios and film laboratories, and were usually designed to suit particular requirements. The first standardized optical printing equipment, made in 1943, made it possible to create innumerable effects and cater to the entire motion picture industry. Over the years, more sophisticated equipment, new duplicating films, special-purpose lenses and improved film processing techniques, as well as skilled technicians, have increased the use of the optical printer to a point where its great creative and economic value is common knowledge in the motion picture industry.

■ Techniques to Create Special Effects


Let us have a look at some of the older techniques used to create special effects. We will notice that even today the basic concepts remain the same; only the medium has changed to digital. Earlier, cinematographers faced many problems while capturing their visualization on film, as resources were limited and a lot of manual hard work was required. Yet all the techniques that are used today with a computer were achieved in a number of different ways. These can be described as:
 In-the-camera effects, in which all the components of the final scene were photographed on the original camera negative.
 Laboratory processes, in which the original negative was duplicated through single or multiple generations before the final effect was produced.
 Combination techniques, in which some of the image components were photographed directly onto the final composite film, while others were produced through duplication.

From these three approaches, the various techniques of creating special effects can be categorized as follows:
 In-The-Camera Technique: These techniques were less expensive than laboratory processes. The equipment was not expensive and the shot could be made by the regular production crew. However, this technique also had disadvantages: it was quite time-consuming and could therefore upset the production schedule. The in-the-camera technique consisted of three main categories, namely Basic Effects, Image Replacement and Miniatures. Let us understand each of them. Refer to Table 8.1 to view these categories.


Basic Effects: These effects include the following:
 Changes in object speed, position or direction
 Distorting the image
 Optical transitions
 Superimposing images
 Day-for-night photography

Image Replacement: These techniques were useful to save construction costs and to solve problems faced during outdoor shoots, for example when constructing a set would be too expensive for the budget of the film, when the sky was dull and needed to be brightened, or when unwanted elements had to be removed from the landscape.
 Split-screen photography: In this process, with the camera kept still, half the shot was exposed with half the camera lens blocked by a black plate; the plate was then shifted to the other half as per accurate markings, and the remaining half of the shot was exposed while blocking the first half.
 In-the-camera matte shots: Here, the part of the image being recorded by the camera was hidden during the first exposure by a matte (an opaque card or plate inserted into the external matte box) to prevent the recording of certain portions of a set or scene. During a subsequent exposure, a counter-matte whose outline conformed exactly to that of the matte was similarly inserted, and a new scene was exposed and fitted into place with the rest of the already recorded image.
 Glass shots: In this process, a large sheet of glass was placed in front of the camera, with appropriate representational images painted on portions of the glass. The actual scene and the painted portions were then shot together, blending with each other through the camera. As a result, the painted portions of the glass looked like part of the original scene. Refer to Figures 8.13a and 8.13b.
 Mirror shots: These shots were used for a variety of superimposition purposes. Ghostly, see-through figures could be added to live action scenes, and transparent pictorial details or titles could be added over the background image. For example, if a ghost shot was required, a partially transparent mirror was placed in front of the camera at around 45° to the optical axis of the lens. The actor stood to one side of the camera; the actor's actions were shot and then superimposed over the main image of the set. Refer to Figure 8.14.

Miniatures: Many a time it was not economically possible to shoot scenes such as a ship sinking, or a real building, street or city being destroyed by fire, earthquake or other disasters. These situations and many others demanded miniatures: symbolic models which were built, operated and photographed so as to appear genuine in character and full-scale in size. Miniatures could be classified as static or mobile. Usually, buildings, landscapes and set pieces viewed in long shot fell under the first category, whereas human figures, explosions, railway trains and ships fell under the second. Some complex miniature sets included both static and mobile components. Refer to Figure 8.15.

Table 8.1: Categories of the in-the-camera technique


Figure 8.13a: Most commonly used glass shot technique (a glass painting of arches is combined with the live scene to produce the final output)

Figure 8.13b: A complicated glass shot where two glass paintings are combined with an actually built set; a trunk is used to hide the frame junction


Figure 8.14: Mirror shot. A sound-stage set-up prepared for a superimposition mirror shot: a sheet of glass placed at 45 degrees reflects the superimposed image into the final result

Figure 8.15: The miniature model is pulled with a wire by a crew member, while propulsion jets are fixed in the tail of the rocket. Vapour from the launch pad is obtained with steam injected through an aperture in the set
 Laboratory processes: This process includes the following techniques:
● Bi-pack printing: In contrast to the “in-the-camera” matte shot technique, for which all of the image
manipulations and replacements were performed upon the original negative, bi-pack printing allowed
such work to be done from master positives, which were struck from the negative. Refer to Figure
8.16.

● Optical printing: Virtually every film made in 35mm made use of an optical printer before it was complete, if only for its share of optical transitions. With optical printers it was possible to manipulate space, time and image. Around 80% of the optical printer's mechanical life was devoted to the preparation of optical transitions such as fades, dissolves, wipes and push-offs. These printers were also used to speed up, slow down, reverse or stop the action. Image modifications and distortions, image size changes, split screens and distortion of colored images were all possible through optical printers. Refer to Figures 8.17a and 8.17b.


Figure 8.16: First printing for a bi-pack matte shot combining two live action scenes. The original background of trees is blocked with a bi-pack matte; a second bi-pack matte is used to capture the background details for the final composited image

Figure 8.17a: The research products set-optical printer
Figure 8.17b: On this optical printer the camera can be tilted through several degrees on its optical axis
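Most of those optical transitions survive directly in digital form: a dissolve is just a frame-by-frame blend of two shots, and a fade is a dissolve to black. A minimal sketch, modelling frames as flat lists of brightness values (an assumption for illustration, not how any particular editing package stores frames):

```python
def dissolve(frame_a, frame_b, t):
    """Linear crossfade: t = 0.0 shows frame_a, t = 1.0 shows frame_b."""
    return [round((1 - t) * a + t * b) for a, b in zip(frame_a, frame_b)]

def fade_out(frame, t):
    """A fade to black is simply a dissolve towards an all-black frame."""
    return dissolve(frame, [0] * len(frame), t)

outgoing = [200, 180, 160, 140]   # pixel brightness values of shot A
incoming = [10, 50, 30, 90]       # pixel brightness values of shot B
print(dissolve(outgoing, incoming, 0.5))   # halfway through the transition
```

Stepping `t` from 0 to 1 over, say, 24 frames yields a one-second dissolve at film speed.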

● Traveling mattes: This is also a matting technique, used to place the moving figure of an actor into a background image and produce a convincing composite. Unlike a static matte, this matte changes its position, size and configuration from frame to frame. Its silhouette conforms exactly to the shape and movements of the actor, allowing him to move anywhere within the picture. In earlier times this technique was done with the help of optical printers, whereas today it is done on a computer using software. Refer to Figures 8.18a and 8.18b.

Figure 8.18a: Shrinking man effect created by the diminishing figure of the performer composited with the full-scale scene using a traveling matte
Figure 8.18b: Travelling mattes were used to combine separate shots of (a) a coffee cup, (b) black liquid poured into a clear glass cylinder and (c) steam and highlight effects

● Aerial-image printing: An aerial image is a real image which is cast into space, rather than onto a screen, ground glass, or piece of film. Say, for example, we set up an 8" x 10" view camera and focus a scene upon its ground glass: we have caused a real image to be produced at the plane of the viewing glass, where, suitably diffused, it becomes clearly visible. If, with the camera in the same position, we remove the ground glass entirely, the real image becomes an aerial image. The focused image still exists in the same position, but since it exists only in space, it cannot be seen with the eye. The important thing to understand is that this invisible aerial image can be re-focused by a second optical system so as to produce a real image at another position. At the same time, any artwork placed into the plane of the aerial image will be picked up by the supplementary optics with equal clarity and definition. Using this technique, it was possible to execute many kinds of composite shots in which painted or photographic artwork was combined with live action footage. Refer to Figure 8.19.

Figure 8.19: A diagrammatic representation of the Oxberry aerial-image system


 Combination techniques
In combination techniques, background projection was the most popular. Like the traveling matte process, background projection provided a method by which the figure of an actor could be combined with a background scene photographed elsewhere. Until 1980 this technique was popular in Hindi films and was used frequently.
● Background Projection: The major advantages of this technique were:
- The director and his crew could see the finished composite at the time it was photographed.
- The actor could see the background scene and was able to react appropriately to it.
- The director could truck, dolly, pan and tilt his camera while the actor performed in front of the projected background.

Background projection can be classified into two types, namely rear projection and front projection. Rear projection was an expensive technique and was best suited for well-financed productions. Refer to Figure 8.20.


Figure 8.20: Examples of rear projection shots, with the background projected by a projector behind the actors performing the scene


Front projection cost much less and produced almost the same effects and advantages as the rear-projection technique. Refer to Figure 8.21.

Figure 8.21: Example of front projection

8.5.4 Methods of Creating Digital Visual Effects


A typical movie might have 1,000 to 1,500 shots. A shot might be one or two seconds long, or 30 to 60 seconds long. A visual effects team is responsible for all of the effects shots in a single film. A given scene in a movie might be filmed with a number of different cameras so that there are wide angles, close-ups, changes of perspective and so on. In the final movie, these different viewpoints are mixed together to create the scene; therefore, a single scene might contain dozens of individual shots. By merging all the shots together in the correct order, we create the complete movie. Hollywood movies are typically shot on 35mm film at 24 frames per second. The first step of the visual effects process is deciding which of the shots need visual effects applied to them. Once the shots are decided, the method of creating the visual effects plays an important role. There are many methods available to create these kinds of effects; let us look at them briefly.
 Scanning and printing: By using equipment like Cineon film scanner, it is possible to scan a film at
extremely high resolution (up to 12,750,000 dots per frame), store it, manipulate it digitally and then write it
back out to film at the same resolution.
 3D Character modeling and animation: The artists create and realistically animate characters and then
integrate them into scenes; everything from Godzilla to space ships can be added to a film. Along with this
even water, background scenery, clouds, flags, buildings, vehicles, explosions and so on can be created in
3D and added to the scene. These CG elements are totally realistic and are seamlessly integrated either
into CG backdrops or filmed scenes.
 3D camera tracking: In order to lay 3D characters into a filmed scene, there must be a model of how the camera moved and zoomed when the scene was shot. This model can be created by adding encoders to the camera, or it can be created after the fact. In either case, a 3D model of the scene is created, and how the camera moves within it is tested to get a clear idea of what is to be shot and what will look best in the actual film. The tracking department uses markers added to the scene to create a 3D model of the scene and then a 3D camera. The goal is for the 3D camera to exactly mimic the motion of the real camera, so that the 3D elements added to the scene look right and move correctly as the camera moves in the actual scene.
 3D setup: Setup is the process of adding a “skeleton” of bones and joints to a 3D model so that the different shapes in the model move correctly with respect to one another. In some cases the bones and joints are created by hand; in other cases they come from motion capture data.
 Motion capture: Motion capture uses 3D technology to capture the motion of an actor. The intended motion could be a normal walk cycle, facial animation, or any action that can be performed by an actor. The actor employed is usually a human, though any conceivable object could be used as an actor. To gather motion capture data, an actor is fitted with a suit that has reflective markers or lights at every joint. The actor moves on a special stage while 3D cameras watch from a number of different angles. Computer software is then able to track all of the markers and, with the help of a technician, bind them together into a stick figure that accurately duplicates the motion of the actor.
 Rotoscoping: Rotoscoping is the process of outlining and “lifting” elements of a filmed scene off the frame
so that other elements can be added to the frame either in front of or behind the rotoscoped elements.
 Painting: In this process imaginary scenery is created using CG. It also involves what was once called
“airbrushing”, the process of adding or removing things from a scene. With painting the team can:

● Create matte paintings for backdrops

● Paint out wires, harnesses, brackets and other safety equipment

● Paint over the “holes” sometimes created by rotoscoping

● Touch up things like grass that has lawn mower tracks in it


 2D compositing: Compositing is the act of adding all of the different elements to a final scene.
Combined, these techniques allow a visual effects team to create nearly anything that the director can imagine. Refer to Figure 8.22. While creating the wide variety of digital visual effects in movies, the team consists of artists, technicians, producers and managers who work together to create scenes that are realistic, stunning and totally convincing to the audience. It is next to impossible for a single person to create all the visual effects in a film, so this department also works as a group.
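At the pixel level, that act of layering elements comes down to the standard “over” operation, which weights a foreground element against the background by a matte (alpha) value. A minimal sketch for a single RGB pixel (the function name and data layout are illustrative assumptions):

```python
def over(fg, alpha, bg):
    """Composite one RGB foreground pixel over a background pixel.
    alpha is the matte value: 1.0 = fully opaque foreground."""
    return tuple(round(f * alpha + b * (1 - alpha)) for f, b in zip(fg, bg))

# A half-transparent red element over a blue backdrop:
print(over((255, 0, 0), 0.5, (0, 0, 255)))
```

Applying `over` to every pixel of every frame, layer by layer, is essentially what a 2D compositing package automates.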

Most of the effects are created using digital tools, which are getting more and more popular worldwide for the following reasons:
 Faster: Computers provide random, or non-linear, access to information. Applied to video, computers allow users to jump from frame #6 to frame #88 directly, without passing through the intervening frames en route. This reduced access time means increased creative time.
 They are very simple to use: Even a person who is not very confident in working with these tools can create good effects suitable for any broadcast format. As most of the tools are user friendly, even a beginner can try out new things on footage and explore his creativity to the maximum.
 We get more control over the creative process: With a general-purpose computer platform, video editing and special effects are open to a world of post-production options. These software-based applications include animation, paint, special effects, titling and musical accompaniment.
 Less expensive: In the professional video editing market, software for editing and effects on the Macintosh, PC and Silicon Graphics platforms has begun to multiply rapidly. Software companies such as Adobe, Alias, Avid, SOFTIMAGE, Wavefront and many others are drawing post-production professionals with low-price products that don't demand compromises. Apart from these, to enhance still images, software like Adobe Photoshop, Fractal Design Painter and KPT filters are equally popular and go hand in hand with editing software.
 Small work set up and more money: Independent small post-production houses can also apply editing and effects to projects that require broadcast quality, such as network television shows or feature films. This is attracting more and more capital investment to these kinds of businesses.


Figure 8.22: Visual effects applied on an image

8.5.5 Types of Digital Effects


In the preceding part of this session we got an idea of optical effects, which were popular in the early years of cinema. In this section we will learn about the types of digital effects that have been very popular over the past few years. These days, many 2D and 3D digital effects are created using particles, dynamics and compositing. Software such as 3ds Max, Maya, Combustion, Smoke, Flame, After Effects, Digital Fusion and Inferno provide tools with which we can either drag and drop effects or apply them to objects through tool settings. The most common and popular effects, like fire, fluid, dust, clouds and blasts, are created using these digital techniques. Refer to Figure 8.23.

■ Particle Systems
A particle system is a collection of independent points which are animated using a set of rules, with the intention of modeling some effect.

These points are assigned attributes and animated over time. Common attributes of a particle are:
 Position
 Velocity
 Energy
 Color (possibly including transparency)
 Lifetime (how long before it self-destructs)

As the animation progresses, these attributes change. For example, at time = 0 a particle may have a very high velocity and a lot of energy. As time progresses, this velocity and energy may increase or decrease according to the settings we make, and its path will change. Explosions are one of the easiest and most impressive things to create using particle systems.
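Those rules can be made concrete in a few lines. The following toy sketch tracks only position, velocity and lifetime, with gravity as the single rule; the class and attribute names are assumptions for illustration, not any package's API:

```python
import random

class Particle:
    def __init__(self, pos, vel, lifetime):
        self.pos = list(pos)          # (x, y) position
        self.vel = list(vel)          # (x, y) velocity
        self.lifetime = lifetime      # seconds before self-destruction

GRAVITY = (0.0, -9.8)

def step(particles, dt):
    """Advance every particle by dt seconds and cull the dead ones."""
    for p in particles:
        p.vel[0] += GRAVITY[0] * dt
        p.vel[1] += GRAVITY[1] * dt
        p.pos[0] += p.vel[0] * dt
        p.pos[1] += p.vel[1] * dt
        p.lifetime -= dt
    return [p for p in particles if p.lifetime > 0]

# Emit a burst of particles with random upward velocities, then
# simulate ten frames at film speed (24 fps).
burst = [Particle((0, 0), (random.uniform(-2, 2), random.uniform(4, 8)),
                  random.uniform(0.5, 2.0)) for _ in range(100)]
for _ in range(10):
    burst = step(burst, 1 / 24)
print(len(burst), "particles still alive")
```

Varying the emission rules and the forces applied in `step` is what turns the same machinery into smoke, sparks or debris.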


Figure 8.23: Fluid and Blast effects created with particle systems

■ Dynamics
This method creates simulations of interactions between multiple objects using mathematical calculations. It is used keeping two aspects in mind: the physics approach and the entertainment approach. The physics approach corresponds to simulation methods based on the various laws of physics, such as the law of gravity. The simulation method is often mathematically complex to implement, and it almost always takes a relatively long time to play in real time. Most of the time, simulation methods are required for science and accurate visualization, while entertainment methods are used to convey an idea, motion or even emotion. The entertainment approach is more inclined towards traditional animation. Depending on the circumstances of a scene, the suitable method is applied; for example, using exaggeration, we achieve the main objective of entertainment, which is to entertain. Refer to Figures 8.24a and 8.24b.

Figure 8.24a: Dynamics applied to a ball
Figure 8.24b: The pins fall realistically, driven by mathematical calculations, when the ball hits them

Let us understand the physics and entertainment approaches with an example. If we want to create an animation where a model is bouncing on the ground, we will use physics dynamics to create accurate bouncing movements. This will help us get a perfect bounce of the character according to the surface it is bouncing on. But to make it look more entertaining, we might want to add traditional animation principles such as squash & stretch and overlap. Though squashing and stretching might look exaggerated, they add a lot of entertainment for the viewer. The entertainment approach is easier to implement and does not require a great deal of mathematics; however, it does not accurately depict what we see in real life.

Note

Motion capture animation is useful when both the entertainment and simulation approaches
fall short. For example, in creating facial animation, there are hundreds of detailed muscles
to control in order to convey any facial expression.

■ Digital Compositing
Using different types of digital compositing software, we generate mattes for superimposition and track and stabilize shots. When the effects are created and finalized, they are superimposed on the footage to get the final result. Using the three methods mentioned above, we can create the following effects:
 Water effect
Among all digital effects, creating a water effect is one of the most challenging tasks an artist will face. Creating realistic-looking digital water has always been difficult for both the artist and the technical director. The scene in which this effect is required needs to be analyzed for what the final output should convey. At times, the image must convey the mild-mannered peace that water brings; at other times, it must convey the strength and awesomeness of water. Beginners, or the non-technical audience of a film, might think that creating these kinds of effects must be some simple trick. In fact, creating realistic-looking water is not really a trick: despite all the development achieved so far towards an ideal mathematical model of digital water, there is no easy formula for good water. Every new scene will require a new form of water and probably a new technique for creating it. Although digital effects software provides general equations, functions and tools to create wavelike water, real success in directing the water flow with accurate water characteristics requires good visualization power and a lot of patience and skill. Refer to Figure 8.25.

Figure 8.25: Water effect


While creating water, the artist should consider the three general states of water, namely solid (ice), liquid, and gaseous (in the form of steam).

Though water does not have a fixed shape, it has a definite volume, and while creating any kind of water we must remember this key point. Whether we create rainwater, a waterfall or an ocean with lots of ripples, we must not forget that the overall volume of the water must remain the same.
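The simplest of those general wavelike equations is a sum of sine waves evaluated across the surface; the (amplitude, frequency, speed) triples below are arbitrary illustration values, and real water shaders layer many more terms on top:

```python
import math

def water_height(x, t, waves=((0.3, 1.5, 1.0), (0.1, 4.0, 2.5))):
    """Toy water surface: height at position x and time t, built by
    summing sine waves given as (amplitude, frequency, speed) triples."""
    return sum(a * math.sin(f * x + s * t) for a, f, s in waves)

# Sample one animation frame of the surface at eight points:
print([round(water_height(x * 0.5, t=0.0), 3) for x in range(8)])
```

Advancing `t` each frame makes the waves travel; adding more triples with smaller amplitudes and higher frequencies adds finer ripple detail.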

 Fluid effects
Creating fluid effects is as tough as creating water. Here the artist needs to calculate the way fluids will interact among themselves or with other objects. For fluid effects like fountains, splashing water, pouring liquid or erupting volcanoes, most software provides particle system tools, which require greater processor power and a lot of trial and error. Refer to Figure 8.26.

Figure 8.26: Fluid effects


 Explosion effects
Explosions are air effects which cause a lot of combustion in the surrounding area. When two or more objects such as planets, airplanes or bombs forcefully collide with each other, they result in huge explosions. The force of the air, the generated fire, and the type of particles and smoke indicate the force of the collision. Showing this effect digitally needs a lot of hard work and concentration. Refer to Figure 8.27.

Figure 8.27: Explosion effects
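A common digital starting point for such a burst is a radial particle emitter whose initial speed encodes the force of the collision. A hypothetical sketch (the function name, data layout and spread factors are illustrative assumptions):

```python
import math
import random

def emit_explosion(center, count, force):
    """Spawn debris particles radiating from a collision point;
    the force of the impact sets their initial speed."""
    particles = []
    for _ in range(count):
        angle = random.uniform(0.0, 2.0 * math.pi)     # random direction
        speed = force * random.uniform(0.5, 1.0)       # spread in energy
        particles.append({
            "pos": list(center),
            "vel": [speed * math.cos(angle), speed * math.sin(angle)],
        })
    return particles

debris = emit_explosion(center=(0.0, 0.0), count=200, force=12.0)
print(len(debris), "fragments emitted")
```

Once emitted, the fragments would be advanced with the same kind of per-frame update used for any particle system, with fire, smoke and glow rendered along their paths.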


 Dust and cloud effects
These two digital effects are relatively easy to create compared to the others. They are subtle air effects, and the results depend on turbulence and lighting parameters. When creating outdoor animation it is almost inevitable that the camera will show part of the sky, and the clouds indicate the depth of the sky. A well-presented cloud gives a better idea of the scene and helps establish the mood of the picture. Dust and clouds change and take many different forms based on the daylight, wind force, season and so on. Refer to Figure 8.28.

Figure 8.28: Cloud effect


Though the particles in these effects are small and subtle, the lighting elements have a great effect on the realistic look. These two elements affect the mood of the scene and hence need to be used very cautiously.
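The turbulence such effects depend on is commonly generated as a fractal sum of smooth noise octaves. A purely illustrative sketch (the hash constants are arbitrary and this is not any particular package's noise function):

```python
import math

def _hash(ix, iy):
    """Deterministic pseudo-random value in [0, 1] for a lattice point."""
    n = ix * 374761393 + iy * 668265263
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFFFF) / 0xFFFFFFFF

def value_noise(x, y):
    """Smoothly interpolated random values on an integer lattice."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    fx, fy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    top = _hash(ix, iy) * (1 - fx) + _hash(ix + 1, iy) * fx
    bot = _hash(ix, iy + 1) * (1 - fx) + _hash(ix + 1, iy + 1) * fx
    return top * (1 - fy) + bot * fy

def cloud_density(x, y, octaves=4):
    """Turbulence: sum noise at doubling frequencies, halving amplitudes."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp, freq = amp / 2, freq * 2
    return total
```

Mapping `cloud_density` over the sky region and shading it with the scene's lighting gives the soft, billowy variation the text describes; animating an offset in `x` simulates wind drift.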
 Fire effect
 Fire effect
Fire can be explained from a chemical perspective as a phenomenon of combustion that manifests itself in light, flame, and heat. It has no particular form, yet can take the shape of any combustible medium. We cannot grab it, but we can hold it. We cannot touch it, but we can feel it. Fire can also mean liveliness, passion, excitement, and enthusiasm. As varied as the explanations given to the meaning of fire, the computer graphics (CG) form of fire that most animators and technical directors are required to create is equally varied. Burning effects and rapid movement are the two categories of fire effects.

Burning effects: These kinds of effects give off spontaneous and smooth burning flames. Real-life examples would be wood burning in a fireplace, candle flames and so on. If we observe these natural fires, we will realize that there are sound effects associated with each kind of fire. While creating these effects on the computer, artists have to take into consideration the other aspects related to a burning fire: the sound of the fire, the effect of the surroundings, the density of the flames, the particles generated by the fire and so on. These mock fire effects are very useful when it is difficult to shoot fire in real life due to high cost or an inappropriate season. Refer to Figure 8.29.

Figure 8.29: Burning fire effect


Rapid movement: As the name suggests, these effects have spontaneity in them and present a stronger threat to the viewer. They stand in direct contrast to the first type mentioned above, which is also spontaneous yet smooth. This kind of effect does not have any limitations and can grow rapidly in any direction, according to the material it is burning. Refer to Figure 8.30.

Figure 8.30: Rapid movement Fire Effect


We should remember that in many types of software most of the above effects can be achieved using particle systems, with dynamic forces acting as the built-in forces of the particle system to create the desired effects.
Apart from those mentioned above, there are many other digital effects that are commonly used, such as creating terrain (ground), grass and so on. These are very useful for creating a good balance between actual footage and digitally created footage; they also help create variety in films and are useful for cost reduction. Once the footage is ready with digital effects, it is composited with the scene, and from that the visual effects are created. To enhance the impact of these effects, sound is added in the background.
Although we are able to generate very believable digital effects relatively quickly and easily, it would be a mistake for an animator to rely solely on the advantages of technology to create effects. Just as having the world's most advanced machine gun won't make us the best soldier, relying on advanced technology, be it software or hardware, without knowing the fundamentals is as good as building a castle on sand. Our castle may stand today, but it probably won't tomorrow.
The success of visual effects depends widely on the way they blend with the original footage, the way special effects and digital effects are combined wherever necessary, and how they complement the scene. Here, compositing plays an important role.

8.6 Summary
In this session, Compositing and Rotoscoping, you learned that:
 Digital compositing is the digitally manipulated integration of at least two source images to produce a new
image. This is a new, rapidly growing, and increasingly important art form. It is the process of assembling
multiple images to make a final image, typically for print, motion pictures or screen display.
 Prior to computers, an animation stand called a Rotoscope was used to project a sequence of action frames
against a surface so that a set of animation frames could be traced or created. The same work can now be
done with digital images and special computer software.
 The word “key” was derived from a digital mixer's capability for cutting an electronic keyhole in a video picture. Into this keyhole is placed another video signal or a matte color, which results in one image being placed over, or within, another.
 Luminance Key detects the dark portions of a video picture and electronically replaces that with another
image, generally from another video source being fed into the digital mixer.
 Most of the effects are created using digital tools, which are getting more and more popular worldwide for the following reasons:
● Faster

● They are very simple to use

● We get more control over the creative process

● Less expensive

● Small work set up and more money


 While creating special effects for films, the special effects crew comes into the picture. They are involved in every part of a production, from assembling real elements to decorating the truth with realistic-looking fakes.
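The luminance key summarized above can be sketched at the pixel level; the threshold and the Rec. 601 luma weights below are illustrative choices, not values from the text:

```python
def luminance(rgb):
    """Approximate perceived brightness using Rec. 601 luma weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def luma_key(fg_frame, bg_frame, threshold=40):
    """Wherever the foreground pixel is darker than the threshold,
    show the background pixel instead (the electronic 'keyhole')."""
    return [bg if luminance(fg) < threshold else fg
            for fg, bg in zip(fg_frame, bg_frame)]

title = [(255, 255, 255), (0, 0, 0), (250, 250, 250), (5, 5, 5)]
video = [(10, 80, 30)] * 4
print(luma_key(title, video))   # dark pixels replaced, bright ones kept
```

A chroma key works the same way, except the per-pixel test measures distance from a reference hue (typically green or blue) instead of darkness.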

8.7 Exercise
1. The __________ and _________ are the two terms that we will hear commonly while making titles for any film.

a. Safe titles, action safe b. Action safe, title safe


c. Rotoscoping, superimposing d. Title card, head credits

2. Action Safe area amounts to about 60% of the total picture area.

a. True b. False

3. Title Safe area for safe title is inside the safe action area and amounts to about 80% of the total picture area.

a. True b. False

4. A line of titles that move mostly horizontal in direction across the screen at the bottom is called_________.

a. Supered b. Head credits


c. Tail credits d. Title crawl

5. Rotoscoping is a technique by which one shot is superimposed on another, resulting in a composite shot.

a. True b. False

6. __________looks for a specific color of hue to be used as the cutout reference.

a. Chroma key b. Luminance


c. External d. Downstream

7. ___________Key is also known as Superimpose and does just that.

a. Action safe b. Title safe


c. Downstream d. Chroma

8. Rotoscoping technique is used to remove wires from models or actors, touch up the seams in a composite,
add optical effects like lighting, clouds, and smoke, and do color corrections.

a. True b. False

9. In the _____________ process, a large sheet of glass was placed in front of the camera with paintings of
appropriate images upon portions of the glass.

a. Mattes b. Glass shots


c. Special Fx d. Digital Fx




9 Types of Outputs

Learning Outcomes
In this session, you will learn to -

 Explain videotape masters


 Explain the process of completing the audio mix
 Explain the process of creating a video CD
 Explain DVD authoring

9.1 Introduction
In developing an idea for film, three primary questions need to be addressed: What are we going to film? How are
we going to film it? How are we going to structure it? Beginning with script development through the final wrap
shot, all aspects of production for film and television are studied, with special consideration of the in-school and
thesis projects. Independent film and television projects are used as the basis for a coordinated production outline,
script breakdown, shooting schedule, and budget program. Locations, lighting, travel, logistics, camera angles,
wardrobe, casting, set dressing, action props, and other elements affecting the budget and planning process are
also examined.

In the previous session we learned about titling, compositing, rotoscoping, and visual effects. We now know that
once the movie footage is ready, a great deal of work remains after shooting the scenes. Once our movie is edited,
polished, and ready with all the effects, titles, and credits, we are ready for our big opening. Though not all films
are made for theatrical release on film print, there are still plenty of ways to deliver our movies in suitable formats.
The final step of making the movie is to show it. The key factor to remember here is to show it to an audience we
believe matches the film. If we make a documentary film about wildlife in our area, most teenagers will not like it;
on the other hand, the local chapter of environmentalists or nature enthusiasts would probably enjoy watching it.
By avoiding groups that would turn the film down simply for its genre, we are well on our way to making films that
an audience can enjoy.

Outputting the movie in the right way is a major factor once the movie is done. Filmmaking is incomplete without
an understanding of how digital films are output. In this session we are going to cover audio mixing, creating a
video CD, DVD authoring, and transferring the movie onto videotape. Let us begin with videotape masters.

9.2 Videotape Masters


When it comes to outputting final masters, people generally react in one of two ways: “no big deal” (just insert the
tape in the deck and hit record) or “terror” (they fear the budget will far exceed what they had planned, because
words like “online” and “film recording” scare them). Both reactions stem from a lack of knowledge. Creating a
high-quality final output (a master) is a process involving careful attention to technical aspects, research, and
every minute detail. If we take care of these things, we ought to get the best results we are looking for.



The number of masters we create will depend on the type of final output. If the project is a feature film, we will
create more than one master copy for the purpose of distribution. We will need VHS viewing copies for film
festivals, streaming media output if we are publicizing the promos and trailers on the Web, and a videotape
online session to get a good videotape master. Let us understand this with an example. Say we want to release a
film in theaters in various countries, then broadcast it on an independent channel and also on European TV. We
need to decide whether to broadcast from our videotape online master or create a new master from the film print
by having it telecined (the process by which film is transferred onto videotape) back to videotape. We also need
to adjust the audio to the different broadcast specifications of the independent film channel and the European
broadcasters.

Many a time it is difficult to decide whether to create a film print directly or a videotape master. The most cautious
choice is to start small but keep our options open: create a videotape master and an audio mix as viewing copies to
pass around. When we are making a videotape master, we need to consider the format we shot on, the format we
wish to master to, and whether we are eventually going to make a film print.

If the footage is in digital format, and we are thorough with our NLE, we can save a lot of money by creating the
master ourselves. The final quality of the master depends greatly on how the original footage was captured. If the
original footage is of very low quality, then even the best NLE professional will be unable to produce a good output.

9.3 The Final Audio Mix


When we complete the film, the final cut may contain many tracks of sound if it is a complicated film. While
outputting a film we need to mix the multiple tracks down to a more manageable number, usually two or four. If
the videotape format is high-end, it usually has four audio tracks, whereas a low-end format and consumer videotape
have only two. In the audio mixing process the audio level is set correctly for each piece of sound, and the pieces are
then combined into a final mix. Let us look at the standard types of mixes that are used most frequently.
 Mono: A mono mix combines all our sounds into a single track. This type of sound mix is sufficient for VHS
viewing copies only. If we are targeting anything else as the final output, we should have two tracks.
 Stereo Left and Right: Stereo mix is the standard for broadcast and films. In this process the multiple
tracks of our project are mixed down to two channels, one left (Track 1) and the other right (Track 2). Some
stereo mixes feature a particular sound or beat that moves from the left to the right channel creating a
different sound effect within the stereo mix.
 Dialogue, Music, and Effects (DM & E): This is a four-channel audio mix method. The sync dialogue
is mixed down to one channel, the stereo music is placed on the second and third channels, and the
fourth channel is used for sound effects.
 Surround Sound: Surround sound may seem out of the league of the independent filmmaker, but with the
introduction of the “home theater” and HDTV, surround sound may soon become a standard. Dolby Digital,
DTS (Digital Theater Systems), and SDDS (Sony Dynamic Digital Sound) are the presently available digital
surround sound formats.

Figure 9.1: Dolby digital



● Dolby Digital and DTS formats use 5.1 channels: Left, Center, and Right speakers in the front of the
theatre, left and right surround speakers in the rear, and an LFE (Low Frequency Effects) subwoofer
channel. Refer to Figures 9.1 and 9.2.

Figure 9.2: S4 MidiLand 8200 v2.0 A 200 watts Dolby Digital/DTS Home Theater System

Note
The LFE channel carries only about a tenth of the frequency range of a normal channel, which is
why it counts as the “.1” in 5.1.

● SDDS uses 7.1 channels, adding center-left and center-right speakers in the front.
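Mixing several tracks down to fewer channels, as described above, is at heart a sample-by-sample weighted sum. The sketch below is a minimal illustration, assuming tracks are equal-length lists of float samples in [-1.0, 1.0] and per-track gains standing in for the mixer's level settings.

```python
# Illustrative mix-down: sum gain-scaled tracks sample by sample.
# Real mixers use limiters rather than a hard clamp, but the clamp
# shows why levels must be balanced to avoid clipping.

def mix_down(tracks, gains):
    """Combine several mono tracks into one, clamping to avoid clipping."""
    mixed = []
    for samples in zip(*tracks):
        s = sum(g * x for g, x in zip(gains, samples))
        mixed.append(max(-1.0, min(1.0, s)))  # keep the sum in legal range
    return mixed
```

A stereo mix would simply run this twice, once per output channel, with different gain sets for the left and right buses.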

9.3.1 Tips for Professional Audio Mix


Most digital audio in NLEs is recorded at 44.1 or 48 kHz, which is good enough to use tracks directly from our NLE
as a source for our mix. To do this we need a split-track output from our NLE. We can split the output into any
number of tracks, but more than eight is usually considered impractical. Most video decks can record only two
channels, but with a Digital Betacam or a high-end Betacam SP deck we can record up to four. If we need to output
more channels, we will have to use additional videotape stock with matching timecode. One popular solution for
multiple-track recording is Tascam’s DA-88 recorder, which records eight channels of digital audio. Refer to
Figures 9.3a and 9.3b.

Figure 9.3a: HPTV’s 32-channel audio mixer
Figure 9.3b: The Tascam DA-88 is an eight-track 16-bit digital recorder utilizing the Hi-8 recording medium



9.3.2 Process of Audio Mixing
A basic professional mix starts with a sound editing session, also called sweetening. If we want to add special
effects, it is better to give the sound effects editor the idea in advance, so that he or she can have some options
loaded up and ready to work with at the beginning of our sweetening session. For this the sound editor might use
higher-end tools like Pro Tools or Fairlight.

After the sound is tweaked, it is time to mix. Our tracks will be sent through a large mixing board and out to high-
quality sound recording formats like 24-track tape or DAT. The mixer will set the audio levels to suit our video
master player, and the resulting audio will be recorded. Once the entire project is checked and approved, the tracks
from the 24-track will be recorded back onto our videotape master, a process known as layback. If we want more
than one type of mix, such as both a stereo and a DM & E mix, we need to make two videotape masters to lay back
onto.

No matter how many audio tracks we have, we will want to start by mixing them down to eight tracks. A standard
eight-track configuration includes two tracks of checkerboarded sync production sound, a track of voice-over if
the project has voice-over, a track of ambience, two tracks of sound effects, a track of stereo left music, and a track
of stereo right music.

To create a DM & E mix, mix the sync production sound and voice-over down to one track, and the effects and
ambience tracks down to another track. Leave the music as it is on the last two tracks. If we are creating a stereo
mix, we mix the dialogue, effects, and stereo music left to channel one, and the dialogue, effects, and stereo music
right to channel two. In any kind of audio mix we need to balance the audio levels correctly, or else the dialogue
and effects might overpower the music.
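The DM & E mix-down just described amounts to a routing map from the eight editorial tracks to the four output channels. The sketch below is purely illustrative; the track names are hypothetical stand-ins for the standard eight-track configuration.

```python
# The eight-track layout from the text, with illustrative names.
EIGHT_TRACK_LAYOUT = [
    "dialogue_A", "dialogue_B",   # checkerboarded sync production sound
    "voice_over", "ambience",
    "effects_1", "effects_2",
    "music_left", "music_right",
]

# DM & E routing: dialogue on channel 1, effects on channel 2,
# stereo music left/right untouched on channels 3 and 4.
DME_ROUTING = {
    1: ["dialogue_A", "dialogue_B", "voice_over"],
    2: ["ambience", "effects_1", "effects_2"],
    3: ["music_left"],
    4: ["music_right"],
}

def check_routing(layout, routing):
    """Sanity check: every source track is routed exactly once."""
    routed = [t for tracks in routing.values() for t in tracks]
    return sorted(routed) == sorted(layout)
```

A stereo routing would instead send dialogue, effects, and music-left to channel one, and dialogue, effects, and music-right to channel two.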

9.4 Creating a Video CD (VCD)


VCD stands for ‘Video Compact Disc’, and it is basically a CD that contains moving pictures and sound. If we
are familiar with regular audio/music CDs, then we know what a VCD looks like. In Europe and Asia the VCD
format became more popular than VHS tapes. VCD was the ancestor of DVD. The normal VCD format uses
MPEG-1 compression to store up to 74/80 minutes of full-screen, full-motion NTSC or PAL video on a normal
650/700 MB CD or CD-R. VCDs can be played on VCD players and on most DVD players; they offer the
convenience of CD-ROM delivery with the extra advantages of full-screen video and more playback options. It
is also possible to put menus and chapters, similar to DVDs, on a VCD, as well as simple photo album/slide
shows with background audio. The quality of a very good VCD is about the same as a VHS tape-based movie,
though VCD is usually a bit blurry. Refer to Figures 9.4a and 9.4b.

Figure 9.4a: VCD-M99 single disc VCD player
Figure 9.4b: Panasonic portable VCD player



The original format of VCD is VCD 1.1, but currently there are many additional VCD formats like VCD 2.0,
VCD-ROM, and VCD-Inter, all of which fit on normal CDs or CD-Rs.

9.4.1 Process of Creating a VCD


Creating a VCD is a simple process. After we finish our final edit and are fully satisfied with the outcome, we
compress our video with an MPEG-1 compressor. Once the video is compressed, we can write it to a CD-R
using a standard CD recorder and software that can write in the VCD format. Once the CD is burnt, our VCD is
ready for viewing.

9.5 DVD Authoring


As the popularity of DVD increases day by day, and the installed base of players grows along with it, outputting
to DVD is becoming a viable, cost-effective way to deliver our work to customers. Though we have all heard
the word ‘DVD’ very often, many of us may not know what it is all about. So let us proceed further in this
section and understand what DVD is.

DVD is short for Digital Versatile Disc (or Digital Video Disc). It is a family of optical disc formats used both for
pre-recorded content, especially movies, and as recordable media for consumer devices and computers, as well
as a family of data format standards for video, audio, and data storage (that is, DVD-Video and DVD-Audio) for
consumer electronics products and computers.

DVD discs are the same diameter as CDs (120 mm, or 12 cm), and most formats hold 4.7 GB of data on a side.
A smaller mini-DVD disc is also used, especially in camcorders. With its massive storage capacity and versatile,
programmable interface structure, DVD provides a strong, high-quality delivery medium with a built-in, simple
authoring environment. In addition to the large data storage capacity, the DVD-Video system also benefits from
interactive controls such as menus, buttons, chapters, and random access. Creating a DVD is not merely
compressing data into a high-quality output; it takes a lot of hard work and efficiency to get the best
results.
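The same kind of back-of-envelope arithmetic shows how much video the 4.7 GB layer mentioned above can hold. The 5 Mbit/s average combined bitrate below is an assumption chosen for illustration; real DVDs use variable bit rates.

```python
# Rough estimate of DVD-Video runtime per 4.7 GB side/layer.
DVD_LAYER_BYTES = 4.7e9        # one side, decimal gigabytes
AVG_BITRATE_BPS = 5_000_000    # assumed average MPEG-2 program rate

runtime_minutes = DVD_LAYER_BYTES * 8 / AVG_BITRATE_BPS / 60
print(round(runtime_minutes))  # about two hours, a typical feature film
```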
 DVD Compression: Compression is the process of converting data into a more compact form for storage or
transmission. MPEG (mostly MPEG-2) is the compression system of choice for DVD-Video and Video CD.
MPEG (Moving Picture Experts Group) is the ISO committee responsible for defining the various MPEG
video specifications. Compressing video into the MPEG-2 format can take a long time, so we will have to
schedule time for this process; depending on the speed of our computer and software, software MPEG-2
compression can take many times the running time of the video. Hardware MPEG-2 encoders work much
faster, but they turn out to be quite expensive compared to software compression.

● MPEG-1 originally defined in 1992 was aimed at full screen video stored on a CD-ROM. It has since
been incorporated into the Video CD specification and is used on CD-ROMs.

● MPEG-2 came later and was intended for digital television applications and is used for DVD-Video. It
supports interlaced video and variable bit rate (VBR) encoding.

● MPEG-3 was intended for HDTV but this was later incorporated into MPEG-2.

● MPEG-4 is intended for video conferencing, Internet distribution and similar applications using low
bandwidths.
 Authoring in the video world refers to a process where already-encoded video files are transferred into a
specific format that describes how the data should be kept on storage media, such as CD or DVD. The most
common use of the term is in DVD authoring, using separate DVD authoring software.
 DVD Authoring: The process of creating a DVD production. This involves creating the overall navigational
structure, preparing the multimedia assets (video, audio, images), designing the graphical look, laying out
the assets into tracks, streams, and chapters, designing interactive menus, linking the elements into the
navigational structure, and building the final production to write to DVD, CD, hard disk, or tape. Refer to
Figure 9.5.

Figure 9.5: Intel Pentium 4 Processor commonly used for complete DVD Authoring
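The MPEG-2 compression discussed above is unavoidable because uncompressed standard-definition video vastly exceeds DVD bitrates. A quick calculation, assuming an NTSC frame size, 30 frames per second, 4:2:0 chroma sampling (12 bits per pixel), and the same assumed 5 Mbit/s average program rate:

```python
# Why DVD needs MPEG-2: uncompressed SD video vs. a DVD program rate.
width, height, fps = 720, 480, 30          # NTSC frame size and rate
bits_per_pixel = 12                        # 4:2:0 chroma subsampling
uncompressed_bps = width * height * fps * bits_per_pixel
dvd_bps = 5_000_000                        # assumed average DVD rate

print(uncompressed_bps // 1_000_000)       # prints 124 (Mbit/s uncompressed)
print(round(uncompressed_bps / dvd_bps))   # prints 25 (roughly 25:1 compression)
```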

In short, DVD authoring has five basic steps:


 Capturing video
 Editing it
 Exporting it
 Burning it

 Watching our finished product
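The authoring steps listed above revolve around a navigational structure of titles, chapters, and menus. Purely as an illustration, that structure can be modeled as plain data before any authoring tool builds it to disc; every file name and timecode below is hypothetical.

```python
# Illustrative model of a DVD project's navigational structure.
# All asset names and chapter timecodes are hypothetical examples.
dvd_project = {
    "menus": [
        {"name": "main", "buttons": ["Play Movie", "Scene Selection"]},
    ],
    "titles": [
        {
            "name": "feature",
            "asset": "feature.m2v",       # pre-encoded MPEG-2 video
            "audio": "feature_mix.ac3",   # the final audio mix
            "chapters": ["00:00:00", "00:15:00", "00:30:00"],
        }
    ],
}

def chapter_count(project):
    """Total chapter points across all titles in the project."""
    return sum(len(t["chapters"]) for t in project["titles"])
```

An authoring tool takes exactly this kind of description, links the menus and buttons to the titles and chapters, and builds the final production to disc.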


Transferring our home movies from videotape to DVD is the most important thing we can do to ensure that those
memories survive down the road. Videotape degrades in quality faster than we may think, even when it is just
sitting around the house. Over time, the magnetic particles (iron oxide) on the videotape that make up the picture
and sound begin to lose their bond to the plastic tape. We may also permanently damage the tape during simple
playback. The video and audio signals from our videotape are converted to digital media (1s and 0s) and then
physically ‘burned’ onto the DVD disc by laser, thereby preserving our video for decades to come.

If we want to see our family videos in twenty years, we had better transfer our footage to DVD as soon as possible.
Refer to Figure 9.6.

Figure 9.6: (from left to right) Standard DVD player and Sony DVPNS400 DVD/CD/VCD player with DOLBY
Digital Decoder & DTS Pass Through

Conclusion: The major technical progress of the 1990s has been the beginning of the Digital Age. All across the



world, especially in America, people are going digital: CDs have completely replaced vinyl and tapes, DVDs are
becoming increasingly popular, and camcorders and cameras are becoming sharper and sharper. Hollywood is not
to be left behind; in fact, it is far ahead.

Numerous breakthroughs in computer effects editing make it not only possible to alter the look of a film in a
computer, but also extremely cost effective, as more productions use the computer to remove mistakes made
during filming or to expand the magnificence of a scene. Perhaps the most important step comes from the pioneer
of the digital world, George Lucas: releasing Star Wars: Episode I in a handful of theaters using completely digital
projectors (no film reels needed), preparing to shoot the next two episodes with completely digital cameras, and
encouraging release in completely digital theaters. It is now clear to Hollywood and the rest of the world that
digital is the next evolution in film.

9.6 Summary
In this session, Types of Outputs, you learned that:
 Many a time it is difficult to decide whether to create a film print directly or a videotape master. The most
cautious choice is to start small but keep our options open: create a videotape master and an audio mix as
viewing copies to pass around.
 The number of masters we will create will depend on our type of final output. If the project is a feature film,
we will create more than one master copy for the purpose of distribution. We will need VHS viewing copies
for film festivals. We will need streaming media output if we are publicizing the promos and trailers on the
Web and videotape online to get a good videotape master.
 If the videotape format is high-end, it usually has four audio tracks, whereas a low-end format and consumer
videotape have only two.
 Dolby Digital, DTS (Digital Theater Systems), and SDDS (Sony Dynamic Digital Sound) are the currently
available digital surround sound formats.
 The video decks can record only two channels but with Digital Betacam or a high-end Betacam SP deck,
we can record up to four channels.
 A standard eight-track configuration includes two tracks of checkerboarded sync production sound, a track of
voice-over if the project has voice-over, a track of ambience, two tracks of sound effects, a track of stereo left
music, and a track of stereo right music.
 DVD provides a strong, high-quality delivery medium with built-in simple authoring environment. In addition
to large data storage capacity, the DVD video system also benefits by giving interactive controls such as
menus, buttons, chapters and random access.

9.7 Exercise
1. The number of masters we will create will depend on our type of final output.

a. True b. False

2. Mono mix is mixing all our sounds into a single track.

a. True b. False

3. DM&E is a four channel audio mix method.

a. True b. False

4. Most of the digital audio in the NLEs is recorded with a quality of 16 or 28 kHz, which is good for using tracks
directly from our NLE as a source for our mix.

a. True b. False



5. DVD is the short form for __________.

a. Digital versatile disk b. Data versatile disk


c. Data video disk d. None of the above
6. DVD Authoring is the process of creating a DVD production.

a. True b. False




Glossary

A
Anamorphic Widescreen Video

Vertical resolution improves when the anamorphic method of widescreen presentation is employed. This is where
the horizontal dimension is squeezed so that more of the vertical space can be used. The squeeze ratio is 1.78
to 1.33, since that is what has been used in Europe and Japan for a number of years now. Anamorphic video is
horizontally squeezing a widescreen image into a 1.33:1 image. The advantage of the anamorphic format is 33
percent more vertical detail in widescreen images over the letterboxed image. The DVD format is an anamorphic
widescreen capable format. A 1.78:1 anamorphic film original image on DVD has a potential of 640 vertical and 480
horizontal lines. In addition there is far better color resolution capability on this format than on a LaserDisc.

Animation

A film in which inanimate objects or individual drawings are photographed frame by frame in order to create an
illusion of movement on the screen when the film is projected at the standard speed of 24 frames per second (fps).
By manipulating the objects or drawings minutely for each frame the filmmaker can make objects or characters in
the film appear to move as if they are “animated.” Also called animated film.

Aspect Ratio

The ratio of the visible-picture width to the height. Standard television and computers have an aspect ratio of
4:3(1.33). HDTV has aspect ratios of either 4:3 or 16:9(1.78). Additional aspect ratios like 1.85:1 or 2.35:1 are used
in cinema.

Available lighting

The illumination that actually exists on location during filming, either natural (sunlight) or artificial (lamps, fires,
etc.).

B
Baby spot

A spotlight with 500 to 1000 Watts of illuminating power.

Back Porch

The area of a composite video signal defined as the time between the end of the color burst and the start of active
video. Also loosely used to mean the total time from the rising edge of sync to the start of active video.

Balanced Audio

A method that uses three conductors for one audio signal. They are plus (+), minus (-) and ground. The ground
conductor is strictly for shielding, and does not carry any signal. Also called “differential audio” and “balanced line”.

Black Level

More commonly referred to as “brightness,” the black level is the level of light produced on a video screen. The level
of a picture signal corresponding to the maximum limit of black peaks at which level a video screen emits no light at
all (screen black). The bottom portion of the video waveform which contains the sync, blanking and control signals.



The black level is set by the (incorrectly labeled) Brightness Control.

Blanking Interval

There are horizontal and vertical blanking intervals. Horizontal blanking interval is the time period allocated for
retrace of the signal from the right edge of the display back to the left edge to start another scan line. Vertical
blanking interval is the time period allocated for retrace of the signal from the bottom back to the top to start another
field or frame. Synchronizing signals occupy a portion of the blanking interval.

Blanking Level

Used to describe a voltage level (blanking level). The blanking level is the nominal voltage of a video waveform
during the horizontal and vertical periods, excluding the more negative voltage sync tips.

Blimp

A soundproof camera housing which prevents the noise of the camera’s motor from being recorded on the sound
track.

Breezeway

The area of a composite video signal defined as the time between the rising edge of the sync pulse and the start of
the color burst.

C
Cels

Transparent plastic sheets on which animators draw or letter images to be photographed frame by frame for an
animated film or to be superimposed over live action. Animation done from such drawings is called cel animation.

Chroma

The color portion of a video signal. This term is sometimes incorrectly referred to as “chrominance,” which is the
actual displayed color information.

Cinemascope

The trade name for wide-screen films photographed and projected with anamorphic lenses on the camera and the
projector.

Cinerama

A wide-screen motion picture process that employs three cameras and three projectors, a wide curved screen, and
stereophonic sound. Three separate images are projected simultaneously onto the curved screen, widening the
picture into the viewer’s peripheral vision.

Cinema verite

In French, literally, “cinema truth.” A style of documentary filmmaking in which the filmmaker interferes as little
as possible with events being filmed. Cinema verite, also called direct cinema, is characterized by direct and
spontaneous use of the camera (usually hand-held), long takes, naturalistic sound recording, and in-camera
editing.

Clamp

A circuit that forces a specific portion (either the back porch or the sync tip) of the video signal to a specific DC
voltage, to restore the DC level. Also called “DC restore.” A black level clamp to ground circuit forces the back-porch
voltage to be equal to zero volts. A peak clamp forces the sync-tip voltage to be equal to a specified voltage.



Clipping

If a signal passing through an amplifier or other electronic circuit exceeds that circuit’s voltage or current limits, it will
appear on an oscilloscope as if the tops of the waveforms had been clipped off by a pair of scissors, hence the term.
Clipped signals are generally distorted in a number of ways, both audible and measurable.

Color Burst

The color burst, also commonly called the “color subcarrier,” is 8 to 10 cycles of the color reference frequency. It is
positioned between the rising edge of sync and the start of active video for a composite video signal.

Color Saturation

The amplitude of the color modulation on a standard video signal. The larger the amplitude of this modulation, the
more saturated (more intense) the color.

Component Video

A three-wire video interface that carries the video information in its basic RGB components or luma (brightness) and
two-color-difference signals.

Composite Video

A video signal that combines the luma (brightness), chroma (color), burst (color reference), and sync (horizontal and
vertical synchronizing signals) into a single waveform carried on a single wire pair.

Crane shot

A shot taken from a studio crane, a large mechanical arm that can move the camera and its operator smoothly and
noiselessly in any direction.

Crossover

An electronic circuit device, used most commonly in loudspeaker systems, that divides an input sound spectrum
into higher and lower bands about a specified frequency. Crossovers can either be passive or active. They are also
found in many home theatre and cinema sound processors and are used to direct low frequency sounds to the
subwoofer.

D
Deep-focus photography

A cinematographic technique which keeps objects in a shot clearly focused from close-up range to infinity. Also
called pan-focus photography.

Depth of field

The distance in front of the camera lens within which objects appear in sharp focus.

Digital Light Processor

The Digital Light Processor is a micro mirror technology from Texas Instruments. It is also known as DMD or
Digital Micromirror Device. A DLP chip is made up of an array of mirrors. One of the configurations currently being
marketed has 848 elements per row and 600 rows. That’s 508,800 mirrors per chip. DLP is efficient in light output
both in active area of each element and reflectivity. The DLP imager requires a progressively scanned source to
drive it. That means that all DLP projectors capable of accepting an NTSC input have line doublers as part of their
processing electronics.



Differential Gain

Important measurement parameter for composite video signals. Not applicable in Y/C or component signals.
Differential gain is the amount of change in the color saturation (amplitude of the color modulation) for a change in
low-frequency luma (brightness) amplitude. Closely approximated by measuring the change in the amplitude of a
sine wave for a change in its DC level.

Differential Phase

Important measurement parameter for composite video signals. Not applicable in Y/C or component signals.
Differential phase is the change in hue (phase of the color modulation) for a change in low-frequency luma
(brightness) amplitude. Closely approximated by measuring the change in the phase of a sine wave for a change
in its DC level.

Documentary

A non-fiction film, usually photographed on location, using actual people rather than actors and actual events rather
than scripted stories.

Double exposure

The superimposition of two (or more) images on a single film strip. Also called multiple exposure.

Dubbing

Adding sound to a film after shots have been photographed and edited. Also, to insert dialogue, sometimes foreign,
into a film after it has been shot.

Dynamic Range

All audio systems are limited by inherent noise at low levels and by overload distortion at high levels. The usable
region between these two extremes is the dynamic range of the system. Otherwise defined as the range between
the loudest and softest sounds a sound format or system can reproduce with noise or distortion. Expressed in dB.

E
Emulsion

The chemical coating on film stock, which contains light-sensitive particles of metallic silver.

Epic

A film genre characterized by sweeping historical themes, heroic action, spectacular settings, period costumes, and
a large cast of characters.

Establishing shot

A camera shot, usually at long range, which identifies, or “establishes,” the location of a scene.

Ethnographic film

An anthropological film that records and perhaps comments on an ethnic group and its culture.

Expose

An investigative documentary that reveals, often in shocking ways, discreditable information or events.




F
Fast film

Film stock that is highly sensitive to light, usually with an exposure index of 100 or higher. Also called fast-speed
film. See slow film.

Fast motion

Shots photographed slower than the standard speed of 24 frames per second (fps) so that the action on the screen
appears faster than normal when projected at standard speed. See slow motion.

Feature film

A full-length motion picture produced for commercial distribution.

Feminist criticism

Film analysis and criticism from a feminist perspective, concerned primarily with the social and political implications
of how women are depicted in films.

Fiction film

Any film that employs invented plot or characters; usually called narrative film.

Film criticism

The analysis and evaluation of films, often according to specific aesthetic or philosophical theories.

Film noir

In French, literally, “black film.” A type of film, mainly produced in Hollywood during the 1940s and 1950s, which
depicts “dark” themes, like crime and corruption in urban settings, in a visual style that features night scenes and
low-key lighting.

Film stock

Unexposed motion picture film with variable characteristics, such as gauge (16mm, 35mm, 70mm), color (black-
and-white, color), speed (fast film, slow film), grain (high-grain, or low-grain).

Filmography

A bibliographic listing of a film artist’s body of work.

Fisheye lens

An extreme wide-angle lens that distorts the image so that straight lines appear rounded at the edges of the
frame.

Flash-forward

A shot or sequence which depicts action that will occur after the film’s present time or that will be seen later in the
film.

Flashback

A shot or sequence which depicts action that occurred before the film’s present time.

Focal length

The distance from the center of the lens to the point on the film plane where light rays meet in sharp focus. A wide-
angle lens has a short focal length; a telephoto lens has a long focal length.

Aptech Limited 121


Concepts of Digital Filmmaking & Visual Fx
Focus

The sharpness or definition of a film image.

Following shot

A shot in which the camera pans or travels to keep a moving figure or object within the frame.

Footage

Exposed film stock.

Formula

A familiar plot or pattern of dramatic action which is often repeated or imitated in films, for example, in genres like
gangster films and westerns.

Freeze-frame shot

A shot in which one frame is printed repeatedly in order to look like a still photograph when projected. Also called a
freeze shot.

Frequency

The speed of vibration of a sound wave, measured in cycles per second or hertz. Frequency determines pitch; the
faster the frequency, the higher the pitch. The human ear can hear frequencies in the range of 20 to 20,000 Hz.
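Frequency and period are reciprocals, which can be sketched in a couple of lines (the helper names are ours):

```python
def period_seconds(frequency_hz):
    """Duration of one cycle of a wave at the given frequency."""
    return 1.0 / frequency_hz

def is_audible(frequency_hz):
    """True if the frequency falls in the nominal 20 Hz - 20 kHz hearing range."""
    return 20.0 <= frequency_hz <= 20_000.0

# Concert pitch A4 (440 Hz) completes one cycle in roughly 2.27 milliseconds.
print(period_seconds(440), is_audible(440))
```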

Front Porch

The area of a composite video waveform between the end of the active video and the leading edge of sync.

G
Gauge

The width of film stock in millimeters.

Grain

Minute crystals of light-sensitive silver halide within the emulsion on the film stock. Graininess is the speckle-like
appearance in a film image caused by coarse clumps of individual silver grains.

H
Hand-held camera

A shot where the camera operator, rather than a tripod or a mechanical vehicle, supports and moves the camera
during filming.

Hard lighting

Illumination that creates stark contrast between light and shadow. See high-contrast lighting.

High-contrast lighting

A style of film lighting that creates a stark contrast between bright light and heavy shadows.

Horizontal Line Frequency

The inverse of the time (or period) for one horizontal scan line.

Hum

The coupling of an unwanted frequency into other electrical signals. In audio, a “hum” can be heard; in video, it can
appear as waves in the picture. Often it is an audible disturbance caused by the power supply.

I
In-camera editing

Editing done within the camera itself by selectively starting and stopping the camera for each shot.

Insert

A shot of a detail edited into the main action of a scene. Also called an insert shot. See cutaway.

Intellectual montage

Editing intended to convey an abstract or intellectual concept by juxtaposing concrete images which suggest it.

Interlaced Scan

The process whereby each frame of a picture is created by first scanning half of the lines and then scanning the second set of lines, which are interleaved between the first to complete the picture. Each half is referred to as a field. Two fields make a frame.
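The field/frame relationship described above can be sketched by treating a frame as a list of scan lines (the function names are ours; an even line count is assumed):

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced fields."""
    # Field 1 takes lines 0, 2, 4, ...; field 2 takes lines 1, 3, 5, ...
    return frame[0::2], frame[1::2]

def weave(field1, field2):
    """Interleave two fields back into a full frame (assumes equal-length fields)."""
    frame = []
    for a, b in zip(field1, field2):
        frame.extend([a, b])
    return frame

frame = ["line0", "line1", "line2", "line3"]
field1, field2 = split_fields(frame)
print(field1)                          # ['line0', 'line2']
print(field2)                          # ['line1', 'line3']
print(weave(field1, field2) == frame)  # True: two fields make a frame
```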

Invisible editing

Editing made unobtrusive by carefully cutting on action or matching action between shots. Also called invisible
cutting.

IRE

An arbitrary unit of measurement equal to 1/100 of the excursion from blanking to reference white level. In NTSC
systems, 100 IRE equals 714mV and 1-volt p-p equals 140 IRE.
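The NTSC relationship above (100 IRE = 714 mV) makes IRE-to-millivolt conversion a one-liner; a minimal sketch:

```python
MV_PER_IRE = 714 / 100  # from the NTSC relationship: 100 IRE = 714 mV

def ire_to_mv(ire):
    """Convert an IRE level to millivolts in an NTSC analog system."""
    return ire * MV_PER_IRE

print(round(ire_to_mv(100)))  # 714 mV (reference white)
print(round(ire_to_mv(140)))  # 1000 mV, i.e. the full 1-volt p-p excursion
```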

Iris

A circular masking device, so called because it resembles the iris of the human eye, which may be opened up or
shut down during a shot.

L
Limbo lighting

A style of film lighting, which eliminates background light and isolates the subject against a completely dark (or
neutral) field.

Live action

Film action with living people and real things, rather than action created by animation.

Long take

A take (shot) of lengthy duration.

Loop

Film footage spliced tail to head in order to run continuously. Looping is sometimes used when actors dub lip-sync
sound to scenes which are already photographed. Also called film loop.

LaserDisc (LD)

Optical laser-read home video format, in use since 1978. The LD is the video source of choice for many home
theatre systems. The 425 horizontal line resolution (horizontal luminance or black and white detail) capability in
NTSC is far superior to standard VHS at 260 lines, but not to DVD’s 480 line potential. LaserDiscs feature two PCM
digital audio channels, and stereo analog FM audio.

Luma

The monochrome or black-and-white portion of a video signal. It is sometimes incorrectly called "luminance," which properly refers to the actual displayed brightness.
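For illustration, luma can be computed from gamma-corrected RGB as a weighted sum. The weights below are the Rec. 601 coefficients, which this glossary does not itself specify, so treat them as an assumption:

```python
def luma_601(r, g, b):
    """Approximate luma (Y') from R'G'B' using Rec. 601 weights (an assumption here)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Green contributes most to perceived brightness, blue the least.
print(round(luma_601(255, 255, 255)))  # 255 -> white
print(round(luma_601(0, 255, 0)))      # 150 -> pure green
print(round(luma_601(0, 0, 255)))      # 29  -> pure blue
```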

M
Magnetic sound track

A sound track that is recorded on an iron oxide stripe at the edge of the film opposite the sprocket holes.

Master shot

A single shot, usually a long shot or a full shot, which provides an overview of the action in an entire scene.

Matching action

Cutting together different shots of an action on a common gesture or movement in order to make the action appear
continuous on the screen. Also called a matched cut. See also invisible editing.

Melodrama

A play or film based on a romantic plot and developed sensationally, with little regard for convincing motivation and
with strong appeal to the emotions of the audience.

Metaphor

A comparison between two otherwise unlike entities, usually achieved in films by montage.

Method acting

A naturalistic style of acting taught by the Russian actor-director Konstantin Stanislavsky, in which the actor identifies closely with the character to be portrayed. Also called the Stanislavsky Method.

Mickey Mousing

Creating music that mimics or reproduces a film’s visual action, as, for example, in many Walt Disney cartoons.

Monochrome

The luma (brightness) portion of a video signal without the color information. Monochrome, commonly known as
black-and-white, predates current color television.

Monologue

A character speaking alone on screen or, without appearing to speak, articulating her or his thoughts in voice-over
as an interior monologue.

Montage

To assemble film images by editing shots together, often rapidly to condense passing time or events. In Europe
montage means “editing.” See also American montage, Russian montage, dynamic montage, narrative montage.

Motif

A recurring subject, theme, or image in a film.


Multiple-image shot

A shot that includes two or more separately photographed images within the frame.

Multiscreen projection

Projecting motion picture images simultaneously on more than one screen.

Musical

A film genre that incorporates song and dance routines into the film story. Also called musical film.

N
Narration

Information or commentary spoken directly to the audience rather than indirectly through dialogue, often by an
anonymous off-screen voice. See voice-over.

Neo-realism

An Italian film movement after World War II characterized by starkly realistic, humanistic stories and documentary-
like camera style. Neo-realistic films were generally shot on location, using available lighting and non-professional
actors. Also called Italian Neo-realism.

Newsreel

A type of short film that presents a compilation of timely news stories.

Non-fiction film

Any film that does not employ invented plot or characters. See documentary.

Non-synchronous sound

Sound whose source is not apparent in a film scene or which is detached from its source in the scene; commonly
called off-screen sound. See synchronous sound.

O
Out-take

Any footage deleted from a film during editing; more specifically, a shot or scene that is removed from a film before
the final cut.

Overcrank

To run film stock through the camera faster than the standard speed of 24 frames per second (fps), producing slow
motion on the screen when the film is projected at standard speed. See undercrank.

Overhead shot

A camera shot from directly above the action. See bird’s-eye view.

Overlapping editing

Cutting that repeats part or all of an action.


P
PAL

Phase Alternating Line. PAL is used to refer to systems and signals that are compatible with this specific modulation technique. Similar to NTSC, but uses subcarrier phase alternation to reduce the sensitivity to phase errors that would be displayed as color errors. Commonly used with 625-line, 50Hz scanning systems with a subcarrier frequency of 4.43362MHz.

Pan-And-Scan

The technique used for transferring widescreen anamorphic films to standard full screen 1.33:1 aspect ratio video,
so as to avoid black bars at the top and bottom of a full screen 1.33:1 display screen. Since the format cannot show
the entire picture width, the transfer must continually move or pan from one part of the picture to another in order to
show all the on screen action. Frequently misused to describe full frame, non-widescreen anamorphic, spherical flat
photography without top and bottom black bars to properly frame the intended aspect ratio composition.

Pixel

Picture element. A pixel is the smallest piece of display detail that has a unique brightness and color. In a digital image,
a pixel is an individual point in the image, represented by a certain number of bits to indicate the brightness.

Pixilation

A type of film animation in which real objects or people are photographed frame by frame in order to make them
appear to move abruptly or magically when the film is projected. See also stop motion and trick film.

Point-of-view shot (POV)

A shot taken from the vantage point of a character in a film. Also called a first-person shot or subjective camera.

Postsynchronized sound

Sound added to images after they have been photographed and assembled; commonly called dubbing.

Process shot

A shot in which “live” foreground action is photographed against a background image projected on a translucent
screen.

Progressive Scan

The process whereby a picture is created by scanning all of the lines of a frame in one pass.

Property

Any movable item used on a theater or film set. Usually called a prop.

Puppet film

An animated film in which inanimate objects or figures are manipulated and photographed frame by frame in order
to make them appear to move when the film is projected.

Pure film

A type of experimental film that explores the purely visual possibilities of cinema rather than narrative possibilities.
Also called pure cinema.


R
Rack focus

To change the focus of a lens during a shot in order to call attention to specific images. Also called selective focus
or shift focus.

Raster

The collection of horizontal scan lines that makes up a picture on a display. A reference to it normally assumes that
the sync elements of the signal are included.

Reflected Sound

Sound arriving at the listening location after bouncing off one or more of the surrounding surfaces. Because sound waves lose energy according to the distance traveled and the number of reflections encountered, reflected sound waves are always of less intensity than similar waves arriving directly from the source. The sum total of all reflected waves determines the room's reverberation time and acoustical character.

Reaction shot

A shot that shows a character’s reaction to what has occurred in the previous shot.

Realism

A style of filmmaking, which endeavors to depict physical reality much as it appears in the everyday world. Typical
realistic techniques include the prominent use of long shots, eye-level camera angles, lengthy takes, naturalistic
lighting and sound effects, and unobtrusive editing.

Reverse angle (R/A)


A shot where the camera is placed opposite its position in the previous shot, “reversing” its view of the scene.

Reverse motion

Action that moves backward on the screen, achieved by reversing film footage during editing or by reverse printing
in an optical printer. Also called reverse action.

RGB

Stands for red, green, and blue. It is a component interface typically used in computer graphics systems.

Rough cut

An early version of a film in which shots and sequences are roughly assembled but not yet finely edited together
for the final cut.

Running time

The duration of a finished film.

Russian montage

A style of editing, typical of prominent Soviet filmmakers in the 1920s, which employs dynamic cutting techniques
to evoke strong emotional, and even physical, reactions to film images.

S
Sampling Rate

The rate at which a signal is sampled, given in Hz. A sampling rate of 44.1 kHz, the standard for PCM audio, means
that 44,100 samples are taken of the sound per second.
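At 44.1 kHz, the number of samples in a clip is simply rate times duration; a quick sketch (the names are ours):

```python
SAMPLE_RATE_HZ = 44_100  # the PCM standard mentioned above

def sample_count(duration_s, rate_hz=SAMPLE_RATE_HZ):
    """Number of audio samples per channel in a clip of the given duration."""
    return int(duration_s * rate_hz)

print(sample_count(1))   # 44100 samples in one second
print(sample_count(60))  # 2646000 samples in one minute of mono audio
```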
Scene

A unit of film composed of one shot or several interrelated shots unified by a single location, incident, or set of
characters.

Science-fiction film

A film genre characterized by plot and action involving scientific fantasy. Also called sci-fi film.

Screen time

The time covered by a film’s story, as opposed to its running time.

Screwball comedy

A type of Hollywood comic film characterized by zany characters, incongruous situations, and fast-breaking
events.

Scrim

A translucent sheet of material used to soften or diffuse light on a shooting set.

Script

A set of written specifications for a motion picture production, usually delineating the film’s settings, action, dialogue,
camera coverage, lighting and sound effects, and music.

Selective sound

A sound track that selectively includes or deletes specific sounds.

Semiology

A theory of film criticism which views cinema as a language or linguistic system that conveys meaning via signs or
symbolic codes. Also called semiotics.

Senior spot

A spotlight with 5000 Watts of illuminating power; also called a fiver.

Sequence

A unit of film composed of interrelated shots or scenes, usually leading up to a dramatic climax.

Setup

A reference black level 7.5% (7.5IRE) above blanking level in NTSC analog systems. It is not used in PAL or digital
or HDTV systems. In these systems, reference black is the same level as blanking.

Setting

A location for a film or a film scene.

Shock cut

A jarring transition between two actions occurring at different times or places. Also called a smash cut.

Shooting ratio

The amount of film footage shot compared to the length of the film’s final cut.
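The shooting ratio is simple division; a hypothetical example (figures invented for illustration):

```python
def shooting_ratio(footage_shot_min, final_cut_min):
    """Minutes of footage shot per finished minute of the final cut."""
    return footage_shot_min / final_cut_min

# Hypothetical: 20 hours of footage cut down to a 100-minute feature.
print(shooting_ratio(20 * 60, 100))  # 12.0 -> a 12:1 shooting ratio
```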

Shooting script

The script that the director and the actors follow during filming.

Shot

A single, continuous run of the camera. The images recorded on a strip of exposed film from the time the camera
starts until the time it stops.

Single-frame cinematography

Shooting film one frame at a time to speed up normal motion, to make lifeless objects appear to move, or to do
time-lapse photography. Also called single-framing.

Slapstick comedy

Broad comedy characterized by violent physical action.

Slow film

Film stock that is relatively insensitive to light and that produces finer-grained images than fast film. Also called
slow-speed film.

Slow motion

Shots photographed faster than the standard speed of 24 frames per second (fps) so that the action on the screen
appears to move slower than normal when projected at standard speed. See fast motion.

Sound track

The optical or magnetic strip at the edge of the film which carries the sound. Also, any length of film carrying only
sound.

Star system

A system developed in the early days of Hollywood to market movies based on the appeal of popular actors and
actresses, “movie stars,” who were under contract with commercial motion picture studios to play leading roles in
their productions.

Stereophonic sound

Sound recorded on separate tracks with two or more microphones and played back on two or more loud speakers
to reproduce and separate sounds more realistically.

Still

A photograph taken of a film scene for promotional purposes, not to be confused with a frame enlargement
reproduced from actual film footage.

Stop-motion photography

Filming real objects or live action by starting and stopping the camera, rather than by running the camera continuously,
in order to create pixilation, trick-film effects, or time-lapse photography. Also called stop-action photography.

Sub-text

Implicit meaning in a play or film which lies beneath the language of the text.

Subtitle

A written caption superimposed over action, usually at the bottom of the frame, to identify a scene or to translate
dialogue from a foreign language.

Superimposition

To expose more than one image on film at the same time.

Surrealism

An avant-garde movement in the arts during the 1920s which endeavored to re-create unconscious experience with
shocking, dreamlike images. Surrealistic films rejected traditional notions of causality and emphasized incongruous,
irrational action instead.

S-Video

Commonly incorrectly used interchangeably with Y/C. Technically, a magnetic-tape modulation format.

Swish pan

Panning the camera so rapidly across a scene that the image blurs on the screen. Also called flash pan, whip pan,
zip pan.

Symbol

An object or image that has significance within a dramatic context beyond its literal meaning.

Synchronization

A precise match between film image and sound. Also called sync.

Synchronous sound

Sound whose source is apparent in a film scene and which matches the action.

Synecdoche

Use of a part to represent the whole.

Sync Signals/Pulses

Sync signals, also known as sync pulses, are negative-going timing pulses in video signals that are used by video-
processing or display devices to synchronize the horizontal and vertical portions of the display.

T
Take

The shot resulting from one continuous run of the camera. A filmmaker generally films several “takes” of the same
scene and then selects the best one.

Timbre

The subjective tonal quality of a sound. The timbre of any musical or non-musical sound is determined largely
by the harmonic structure of the sound wave. Rich sounding musical tones tend to have a great number of inner
harmonics which contribute to their lush timbre, while thin sounding musical tones tend to be lacking in the presence
of harmonics.

Timbre Matching

A type of equalization applied to the surround channels in home THX processors based on a finding that loudspeakers
used for the surround are different in design (dipole vs. direct radiator) in THX-certified systems and tend to be
closer than those for the front, and therefore sound brighter. This equalization compensates for this perceived
effect.

Time-lapse photography (cinematography)

A type of cinematography in which the camera intermittently photographs the same object or scene over an extended time period in order to speed up on the screen a lengthy process or action, such as the growth of a flower from a seed.

Treatment

A written description of a film story which may later be developed into a script. Also called film treatment.

Trick film (shot)

A film or a shot created by special camera or optical techniques, such as stop-motion photography or double
exposure.

Two-shot

A medium shot featuring two actors.

Typecasting

Selecting an actor or actress for a film role because of his or her physical type, manner, or personality, or according
to a public image created by previous roles he or she has performed.

U
Undercrank

To run film stock through the camera slower than the standard speed of 24 frames per second (fps), producing fast
motion on the screen when the film is projected at standard speed. See overcrank.

Underground film

An independent film which emphasizes the filmmaker’s self-expression rather than commercial success. Underground
films frequently challenge or experiment with traditional cinematic form and technique, hence they are also called
avant-garde or experimental films.

V
Vertical Field Frequency

The inverse of the time (or period) to produce one field of video (half of a frame). In NTSC it is 59.94Hz.

Vertical Frame Rate

The inverse of the time (or period) to produce one frame of video. Also called “refresh rate” or “vertical refresh
rate.”
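Since two interlaced fields make one frame, the NTSC frame rate follows directly from the 59.94 Hz field rate; a minimal sketch:

```python
FIELD_RATE_HZ = 59.94              # NTSC vertical field frequency
FRAME_RATE_HZ = FIELD_RATE_HZ / 2  # two interlaced fields per frame

field_period_ms = 1000 / FIELD_RATE_HZ
frame_period_ms = 1000 / FRAME_RATE_HZ

print(FRAME_RATE_HZ)              # 29.97 frames per second
print(round(frame_period_ms, 2))  # 33.37 ms per frame
```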

Video Bandwidth, Minimum

The minimum analog bandwidth required to reproduce the smallest amount of detail contained in the video signal.

Voice-over

An off-screen narrator’s voice accompanying images on the screen. Any off-screen voice.

W
Watt

A unit of electrical power used to indicate the rate of energy produced or consumed by an electrical device. One watt is one joule of energy per second.

Answer Key

Exercise 1.1
1. b  2. a  3. a  4. b  5. b  6. a  7. a

Exercise 2.1
1. a  2. a  3. b  4. a  5. a  6. b  7. a

Exercise 3.7
1. a  2. b  3. c  4. b  5. a  6. b  7. a  8. b  9. b

Exercise 4.7
1. b  2. a  3. b  4. a  5. a  6. a  7. b  8. a

Exercise 5.5
1. a  2. a  3. b  4. a  5. a  6. b  7. a  8. b  9. b  10. b

Exercise 6.9
1. b  2. a  3. b  4. a  5. a  6. a

Exercise 7.5
1. a  2. b  3. a  4. b  5. a

Exercise 8.7
1. b  2. a  3. a  4. d  5. b  6. a  7. c  8. a

Exercise 9.7
1. a  2. a  3. a  4. b  5. a  6. a

Bibliography

Film Production Technique - Bruce Mamer

Making Digital Videos - Ben Long

The Digital Filmmaking Handbook - Ben Long and Sonja Schenk

The Filmmaker's Handbook - Steven Ascher and Edward Pincus

History of Film - David Parkinson

Film and Video Budgets - Michael Wiese and Deke Simon

Special Effects - AC Manual
Colored Section

Figure 1.1: Long shot

Figure 1.2: Extreme long shot. The human form is small, perhaps barely visible. The point of view is extremely distant, as in aerial shots or other distant views

Figure 1.3a: Medium long shot (MLS). Most, if not all, of the actor's body is included, but less of the surrounding space is visible than in the LS

Figure 1.3b: Medium shot (MS). The actor is framed from the thigh or waist up

Figure 1.4: Medium close-up (MCU). The lower chest of the actor is still visible

Figure 1.5: Close-up (CU). The actor is framed from his or her chest to just above his or her head


Figure 1.6: Extreme close-up (XCU). Any framing closer than a close-up is considered an XCU

Figure 1.7: Low angle in which the camera is lower than the filmed object

Figure 1.8: Low angle in which the camera is lower than the filmed object

Figure 1.9: Eye-level shot

Figure 1.10: Bird’s-eye view


Figure 1.11: Oblique shot or Dutch Angle

Figure 1.12a: Panning the camera

Figure 1.12b: Tilting the camera


Figure 1.13: In deep focus shots, all planes of the image are in focus. In one shot from a commercial for GEICO insurance, deep focus enables the viewer to see a pregnant woman in a wheelchair in the background while a young man speaks on the phone in the foreground

Figure 1.14: The shallow focus

Figure 1.18: A storyboard created on computer


Figure 1.19: A storyboard created for a video

Figure 2.1a: Original image

Figure 2.1b: Video out 1 (odd field)


Figure 2.1c: Video out 2 (even field)

Figure 2.1d: Video out 1+2 final image

Figure 2.2a: Full resolution progressive video frame

Figure 2.2b: Half resolution interlaced video field


Figure 3.1: The standard controls found on any DV camera

Figure 3.5a: Manual focus allowed us to focus on the hand in this image. A large aperture kept the depth of field shallow, so the background is out of focus

Figure 3.5b: With just a slight shift in focus, the head has been made sharp and the hand in the foreground soft


Figure 3.6a: A small aperture gives us greater depth of field

Figure 3.6b: A large aperture lets us throw the background out of focus

Figure 3.15a: This picture was shot with a wide-angle lens; the distance between the front and the rear of the piano is elongated, giving the image an illusion of great depth


Figure 3.15b: The same piano as in the figure above has been shot with a telephoto lens. Compare how the distance between the front and the rear of the piano appears

Figure 3.15c: Telephoto lenses are widely used in sports coverage, to get a “closer” view of the action


Figure 4.3: Example of neon light

Figure 4.5a: Image shot with soft light

Figure 4.5b: Image shot with a hard light source


Figure 4.8: Example of lighting gels

Figure 4.14: Example of the different effects of exposures

Figure 4.15: Examples of indoor lighting


Figure 4.16: Examples of exterior lighting

Figure 6.6a: The train passes across the screen from left to right. This shot establishes the on-screen
direction

Figure 6.6b: Now the train is going in the opposite direction. This will not match the scene action


Figure 6.7: The two sides of the Liberty. Some might say this is crossing the line. But such a cut is not
confusing; in fact, it makes a strong, informative beginning to a sequence

Figure 6.8: One image fading into another, creating a fade effect

Figure 7.1: NTSC/PAL color bars


Figure 8.6: Superimposed titles

Figure 8.7: Images from Jurassic Park, a popular example of the compositing techniques applied by
Hollywood


Figure 8.8: Example of composited shots

Figure 8.9: Background replaced by using Chroma keys

Figure 8.10: Result of Luminance key


Figure 8.11: Greenscreen extraction, wire removals, and lots of rotoscoping make this a densely complicated shot
