Concepts of Digital Filmmaking and Visual FX - INTL
No part of this book may be reproduced or copied in any form or by any means – graphic, electronic, or mechanical, including photocopying, recording, taping, or storing in an information retrieval system – or sent or transferred without the prior written permission of the copyright owner, Aptech Limited.
APTECH LIMITED
Edition 1 – 2014
Preface
Visual arts are now a part of everything from TV and film to live performance, multimedia, distance learning, training, and corporate videos. Today's high-end, nonlinear digital production requires experience in cinematography, writing, directing, producing, sound design, animation, special effects, and many other associated skills. Smaller productions are often staffed by only a few people, so visual producers are finding themselves responsible for more and more technical and artistic aspects of production. To enter this field you should love to entertain, tell a story, organize things, and work with high-end technology. Our successful graduates are extremely creative, dedicated, and organized, and possess excellent communication skills. Be prepared for long hours. If you are willing to learn all the filmmaking concepts in detail, turn to page one and work through each session.
The ARENA Design team has designed this course keeping in mind that motivation coupled with relevant training
and methodology can bring out the best. The team will be glad to receive your feedback, suggestions, and
recommendations for improvement of the book.
Table of Contents
Introduction........................................................................................................................................................ 1
Introduction to Digital Filmmaking....................................................................................................................... 1
Process of Filmmaking........................................................................................................................................ 2
Film Language..................................................................................................................................................... 3
Script/Screenplay/Shooting Script....................................................................................................................... 9
Previsualization................................................................................................................................................. 11
Project Planning................................................................................................................................................ 13
Summary........................................................................................................................................................... 15
Exercise............................................................................................................................................................. 15
Editing.............................................................................................................................................................. 69
Introduction........................................................................................................................................................ 69
Editing Basics.................................................................................................................................................... 69
Editing Equipment.............................................................................................................. 71
Creating a Rough Cut........................................................................................................................................ 73
Fine Cutting....................................................................................................................................................... 74
Transitions......................................................................................................................................................... 75
Sync Editing...................................................................................................................................................... 76
Summary........................................................................................................................................................... 77
Exercise............................................................................................................................................................. 77
Glossary......................................................................................................................................................... 117
Iconography
Hands-on Project
Note
Exercise Answers
1 Introduction
Learning Outcomes
In this session, you will learn to -
The key scientific principle on which films are based is the illusion of motion, commonly known as persistence of vision. This happens because of the ability of the retina to retain an image of an object for 1/20 to 1/5 of a second after the image is removed from the field of vision. The brain has a perception threshold below which successive images appear continuous, and the interval between frames at film's speed of 24 frames per second is below that threshold. Hence, in films we see perceptual illusions generated by a machine.
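The arithmetic behind this illusion is easy to check. The sketch below (an illustration only) compares the interval between frames at 24 fps with the 1/20 to 1/5 of a second retention window quoted above:

```python
# Frame interval at cinema speed vs. the retina's retention window.
FPS = 24
frame_interval = 1 / FPS       # seconds each frame stays on screen

retention_min = 1 / 20         # shortest retention quoted above (0.05 s)
retention_max = 1 / 5          # longest retention quoted above (0.20 s)

# The gap between frames is shorter than even the shortest retention
# period, so each frame persists on the retina until the next arrives,
# and the eye perceives continuous motion.
print(f"frame interval: {frame_interval:.4f} s")                    # 0.0417 s
print(f"below retention window: {frame_interval < retention_min}")  # True
```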
Filmmaking is a vast topic and covers a lot of ground. As viewers, we all know many of the basics related to films, such as acting, make-up, and direction. Apart from these, there are innumerable other things that are important while creating any film. In this book we are going to cover many concepts related to digital filmmaking, and after reading it you may be surprised to discover how much you already knew about filmmaking.
The phrase digital video or digital film is very simple to understand, but there are many aspects to it. For example, a QuickTime movie downloaded from the Web, an animation generated by a computer programme, video transferred from a home video camera onto our computer, or 35 mm motion picture film transferred into a high-end graphics workstation with the help of special scanners all result in digital video.
Some people use the term digital video to refer to very specific pieces of equipment, such as a DV camera, while others use it as a broader term that includes any type of digitized video or film. In the broadest definition, digital filmmaking is a process in which our source video is digitized at some point so that it can be manipulated and edited on a computer, and the final output, after these changes, is also produced through a computer.
The process of digital filmmaking is similar to analog filmmaking, but the main difference between a digital and an analog camera is that a digital camera digitizes video while we shoot and stores it on tape in a digital format, while an analog camera stores video and audio on tape as analog waves. One question that comes to mind here is: what does the word "digitizing" mean? Let us proceed further and understand this term in depth.
■ What is Digitizing?
We have learnt above that a digital video camera digitizes video on-board and simply stores the resulting numbers on a tape. As a result, if we transfer video from a DV camera into our computer, we don't technically digitize the video, because the camera has already done that for us; we are just copying the data from the tape into our computer.
If we are using an analog video camera, however, the digitizing process happens inside our computer rather than inside the camera. For this process we need special hardware called a video capture board, which can change the analog video signal from our camera into digital information and store it on our computer's hard drive.
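As a concrete illustration of what a capture board does, the toy function below (a sketch, not any real capture board's API) samples a continuous signal at regular intervals and quantizes each sample to 8 bits, which is the essence of digitizing:

```python
import math

def digitize(signal, sample_rate, duration, bits=8):
    """Sample a continuous signal() at sample_rate Hz and quantize
    each sample to the given bit depth (a toy model of capture)."""
    levels = 2 ** bits
    samples = []
    n = int(sample_rate * duration)
    for i in range(n):
        t = i / sample_rate
        v = signal(t)                          # analog value in [-1, 1]
        q = round((v + 1) / 2 * (levels - 1))  # map to integers 0..255
        samples.append(q)
    return samples

# A 1 kHz sine wave stands in for the "analog signal",
# digitized for 1 ms at 48,000 samples per second.
wave = lambda t: math.sin(2 * math.pi * 1000 * t)
data = digitize(wave, sample_rate=48_000, duration=0.001)
print(len(data), min(data), max(data))   # 48 samples, all within 0..255
```

Once the signal exists as a list of numbers like this, it can be copied, edited, and output without any generation loss.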
There are many advantages to using digital video over traditional video editing and production equipment, among which the main ones are quality, function, and price.
While creating any film we have to take into consideration the following aspects at various stages. Each
department of film needs equal attention and good planning:
Script
Camera
Direction
Art Direction
Audiography
Acting
Costume and Make up
Editing (Special Effects and Compositing if required)
We will learn about each of these stages as we proceed further. When the film is finished, prints are made, which are copies of the film that can be shown in screenings. Finally, the movie is released or distributed. Movies
are increasingly distributed to a global marketplace and issues of multiple languages, technologies and venues
must be dealt with. Many decisions made during the movie’s production affect what kind of distribution is
possible, and the filmmaker must try to anticipate distribution goals from the very start of the project.
duration of production or only on the days it is needed.
Postproduction: The postproduction period (often just called post) begins once the principal shooting
is completed. Editing is done to condense what are often hours of film or video footage into a watchable
movie.
Before getting into the actual preproduction work, we must know the terms related to films and understand them briefly so that it will be easier to follow the contents of this book. These key terms need to be known by anyone who wants to work in film media, as every film house uses them very frequently.
the subject. Presenting a lone figure in a vast landscape will make the figure appear to be overwhelmed by
the surroundings. Refer to Figure 1.2.
Figure 1.2: Extreme long shot. The human form is small, perhaps barely visible. The point of view is extremely distant, as in aerial shots or other distant views
(Refer to the Colored Section for the colored image.)
Medium Shots: The medium shot (MS) represents how we interact with people in life. In this technique we see a person from the waist up, which gives more detail than a full-body view. The medium shot literally puts the viewer on equal footing with the subject being filmed. Refer to Figures 1.3a and 1.3b.
Figure 1.3a: Medium long shot (MLS). Most, if not all, of the actor's body is included, but less of the surrounding space is visible than in the LS
(Refer to the Colored Section for the colored image.)
Figure 1.3b: Medium shot (MS). The actor is framed from the thigh or waist up
(Refer to the Colored Section for the colored image.)
Close-up: The close-up (CU) is essentially a head shot, usually from the top shirt button up. Anything closer than that is an extreme close-up (ECU). The medium close-up (MCU), which is from midchest up, is also frequently employed. The CU is the shot that provides the greatest psychological identification with a character, as well as amplifying details of actions. It is the point at which the viewer is forced to confront the subject and form some kind of internal psychological response. Close-ups are also used to show details. The interrelationship of these shots can be particularly successful in creating suspense. Refer to Figures 1.4, 1.5, and 1.6.
Figure 1.4: Medium close-up (MCU). The lower chest of the actor is still visible
(Refer to the Colored Section for the colored image.)
Figure 1.5: Close-up (CU). The actor is framed from his or her chest to just above his or her head
(Refer to the Colored Section for the colored image.)
Figure 1.6: Extreme close-up (XCU). Any framing closer than a close-up is considered an XCU
(Refer to the Colored Section for the colored image.)
Angles: Although the term angle is often used on the set to designate a simple camera position (setup), it also has a more specific meaning in terms of camera resources: the height and orientation, or level, of the camera in relationship to the subject.
Low-angle shot: A low-angle shot is one in which the camera is below the subject, angled upward. It has a tendency to make characters or environments look threatening, powerful, or intimidating. A classic use of this angle presents the viewer with low-angle shots of skyscrapers looming over the awed onlooker. The low-angle shot can also give a distorted perspective, showing a world out of balance. This can produce a sense of both disorientation and foreboding. Refer to Figure 1.7.
Figure 1.7: Low angle in which the camera is lower than the filmed object
(Refer to the Colored Section for the colored image.)
High-angle shot: The high-angle shot is the opposite of low-angle, and produces the opposite effects as
well. The camera is placed above the subject, pointing down. It tends to diminish a subject, making it look
intimidated or threatened. This is the conventional way of making characters look insignificant. Refer to
Figure 1.8.
Figure 1.8: High angle in which the camera is higher than the filmed object
(Refer to the Colored Section for the colored image.)
Eye-level shots: These shots are taken with the camera on or near the eye level of the character or
subject being filmed. Eye-level shots tend to be neutral. Much like the medium shot, an eye-level shot puts
the viewer on equal footing with the subject being filmed. It has none of the diminishing or exaggerating
qualities of the high- and low-angle shots. Refer to Figure 1.9.
Figure 1.12a: Panning the camera
(Refer to the Colored Section for the colored image.)
Figure 1.12b: Tilting the camera
(Refer to the Colored Section for the colored image.)
Focus Effects: Occasionally, beginners expect that, as a rule, everything should be in focus. This is, indeed, the approach of many films. However, focus can be used to create many effects, from those so subtle that they are rarely noticed by the viewer to others so prominent that they demand to be interpreted on a thematic level. As should be expected, there are a number of approaches to using focus as an aesthetic expression.
Deep Focus: The approach that keeps all elements in the frame sharp is called deep focus. Refer to Figure
1.13.
Figure 1.13: In deep focus shots, all planes of the image are in focus. In one shot from a commercial for GEICO insurance, for example, deep focus enables the viewer to see a pregnant woman in a wheelchair in the background while a young man speaks on the phone in the foreground
(Refer to the Colored Section for the colored image.)
Shallow Focus: Shallow focus is an approach in which only a narrow plane of the image is kept sharp while the other planes fall out of focus. This can create a purposefully less realistic image, one that manipulates viewer attention and distinguishes different planes of action, both literally and figuratively. Refer to Figure 1.14.
Shooting out a sequence requires careful attention to detail in order to ensure that all the elements of a scene remain consistent from shot to shot. While filming a script, a script supervisor needs to take care of various types of continuity, as follows:
Action
Props, Costume and Make up
Historical
Lighting
Sound
Performance
Spatial
Screenplay format makes it easy to quickly judge the budget and scheduling concerns in a script. Whatever kind of project we are working on, writing a screenplay will make our production job much easier.
The benefit of screenplay format is that it makes it easier to determine the length and pacing of our script. If we are following standard screenplay margins and layouts, our script will run to roughly one minute per page, though in practice this can vary from 3 to 5 minutes per page. In the traditional screenplay format the script is divided into scenes defined by slug lines. A slug line tells whether the subsequent scene is interior or exterior, the location of the scene, and whether the scene takes place during the day or night. Let us have a look at an example of this kind of screenplay.
SCREEN BLACK

BHARAT (V.O.) [a voice-over indication of a character]

Niti has coffee cups in her hand for Bharat and herself. They struggle
intensely. They are both around 30-35; Niti is good-looking, smart, with a
pleasant personality; and Bharat is a tall guy with a serious look on his face.
They are both sipping coffee now and enjoying the rains. Suddenly they hear a
gunshot from a very close distance.

NITI [dialogue]

Bharat falls down on the floor and his face is twisted with pain. Niti sees a
woman standing with a gun in her hand. The gun is still pointed at Bharat and
there is a lot of anger on her face.

Ohh Sunita, why did you do it? What made you do such a crazy thing? It's all my
fault. I should have told Niti the truth. It's too late now. The pain is terrible, but
the most terrible thing is the way you have treated me. You did not even give
me a chance to explain.
Though this example is of only one scene, a full script runs to many pages and scenes, developed in parts and written in simple language that tells the actors what to say and what movements to make.
1.5 Previsualization
1.5.1 Storyboarding
Our main tool for preparing for our shoot is the storyboard. Storyboarding is the first step towards serious visualization. Storyboards are comic-book-like representations of the images in our production. This step of previsualization forces us to answer questions that we may not have dealt with during scriptwriting. From selecting locations to editing the project, storyboarding gives us a clear picture of how our project is going to shape up. At this stage we will make practical decisions about how we will shoot our script.
Concepts of Digital Filmmaking & Visual Fx
What kind of equipment will we need for a chosen location?
Plan for light and sound
Is it possible to create a realistic looking set in the studio itself?
It is very important to talk to our principal crew members when exploring locations. Refer to Figures 1.16 and 1.17. They are the people who can tell us whether shooting in the chosen location will be affordable and possible, and whether the dress designs and other clothing accessories will be suitable for that particular location. All these issues should be sorted out before we actually start the shoot.
1.6.1 Scheduling
Making a movie takes a lot of time and, unfortunately for the actors, most of that time is spent standing around while the crew sets up the cameras and lights. To avoid this kind of wasted time, a good schedule can help us get everyone to the right place at the right time, and ensure we know what we are doing when we get there.
We can schedule our project in whichever way we want, but it is important that our schedule is simple for all the cast and crew to understand. Let us look at one simple method:
Create a grid and arrange our scenes: Create our schedule on a big grid, with characters and crew listed in the rows and scenes listed along the columns. Next, group all our scenes together according to their locations, with each scene that needs to be shot at a given location planned in its own column. The advantage of this schedule is that when we get to a particular location we can shoot all the scenes that take place there, so that we don't have to travel back and forth from one location to another. Refer to Figure 1.20.
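The grouping step above can be sketched in a few lines of code; the scene numbers, locations, and character names here are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical scene list: (scene number, location, characters needed).
scenes = [
    (1, "Coffee Shop", ["Niti", "Bharat"]),
    (2, "Street",      ["Bharat"]),
    (3, "Coffee Shop", ["Niti", "Sunita"]),
    (4, "Street",      ["Niti", "Bharat", "Sunita"]),
]

# Group scenes by location so each location is visited only once.
by_location = defaultdict(list)
for number, location, cast in scenes:
    by_location[location].append(number)

for location, scene_numbers in by_location.items():
    print(f"{location}: scenes {scene_numbers}")
# Coffee Shop: scenes [1, 3]
# Street: scenes [2, 4]
```

A production manager would do the same on paper or in a spreadsheet; the point is simply that sorting by location, not by script order, minimizes travel.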
1.6.2 Budgeting
The budget is an accounting of what every aspect of the movie costs. Budgeting is a complicated and time-consuming process that goes hand in hand with scheduling, as it is difficult to predict how long our shoot is going to take. Before production begins, the estimated budget plays a key role in getting the project financed and under way. In the movie-making process the budget makes it clear to producers exactly where the money is going to be spent. For feature films, some production managers and others specialize in reading scripts and drawing up budget estimates based on the use of locations, size of cast, the entire unit's accommodation and food expenses, special effects, and the like. Preparing and managing the budget is generally one of the producer's key jobs.
There are many ways to organize a budget. Different types of productions (features, documentaries, corporate
projects, multimedia) call for different budget formats. There are several computer packages (such as Movie Magic
Budgeting) that help us lay out a budget and track expenditure. Refer to Figure 1.22.
It often helps to divide the budget chronologically, separating the cost of preproduction (research, casting, scouting,
planning), production (film or tape costs, equipment rental, travel and food for crew) and postproduction (editing and
the various finishing costs like music, mixing, online costs or negative matching, titles and prints and dubs). Every
producer also has to set a budget for publicity of the film once it is complete and ready for release.
1.7 Summary
In this session, Introduction, you learned that:
A movie begins with the kernel of an idea: an image, or some small piece of a story or character. From there, the original concept is built into a working story treatment. The treatment is developed into a screenplay, and the screenplay is made into a movie.
There are a number of steps any film project takes before it shows up at your local theatre. They are:
● Development
● Pre-production
● Production
● Post-production
● Pre-Production
Once the development phase is complete, the project moves into pre-production. This is the preparatory or
primary planning stage.
It is during pre-production that costumes and sets are designed, the remaining crew hired and locations
scouted and chosen. Shooting schedules are also developed and casting continues. Absolutely everything
that can be done prior to principal photography is considered.
Production is the “active” process of making the film. The script is put to the camera, in studio and on location,
with actors, full costume, makeup, lights and sound. This is often referred to as “principal photography”.
This stage is expensive, time-consuming and requires extensive financial and logistical planning.
Post is the compilation phase. The director and/or producer will work with the picture and sound editors
putting together the hundreds of shots and sounds taken during principal photography. This is when special
effects are added and shots are adjusted technically, aesthetically and for greater narrative impact. Dialogue
is fine-tuned. Soundtrack and audio effects are matched to the visual content. Slowly, the film goes from a
rough cut to the finished, polished, final version that audiences will see in theaters.
Once the picture is completed and approved, it is marketed and distributed.
1.8 Exercise
1. The production period essentially begins with the camera roll.
a. True b. False
a. True b. False
3. Proxemics refers to the distance between the subject and the camera.
a. True b. False
4. The MCU is the shot that provides the greatest psychological identification with a character as well as amplifies details of actions.
a. True b. False
a. True b. False
6. A production designer helps to define the look of the movie and to explore how all the elements at our disposal can be used to strengthen our visual image and our story.
a. True b. False
2 Understanding Video Technology
Learning Outcomes
In this session, you will learn to -
2.1 Introduction
Film has proven its power to engage us for over 100 years; radio for over 70, television for 50; and computer media, the new kid on the block, is proliferating faster than any of its predecessors. Media production demands writing and rewriting, research, group effort, and clarity of thought. Creating a film offers people a means to speak to whatever audience they think is important, and to earn good money.
Digital video (DV) offers a number of advantages over analog. Digital media itself has particular quality characteristics. Generally, digital media present a crisper, cleaner product that can be viewed an unlimited number of times. The largest advantage, however, is that it is a lossless medium: DV can be copied an unlimited number of times and still retain its quality. This is not true of analog video, which suffers a "generation" effect each time it is transferred.
Digital video also offers an extended shelf life. So long as the magnetic videotape does not significantly degrade, the video will look exactly the same years from now as it does today. Because of these advantages, users can import
movies into their computer, work on them and export them back to a DV camcorder for storage. Down the road,
they can capture them back to the computer without losing any quality. So in addition to offering good quality, DV
camcorders also act as a good video backup/storage device.
The videotapes that are used for TV are interlaced. In this method our TV first displays one set of alternating scan lines (one field) from top to bottom and then goes back and fills in the remaining lines (the other field). The interlacing technique reduces image flicker on the TV screen. The rate at which complete frames are drawn on the TV set is called its frame rate. Different formats have different frame rates, depending on the format supported by each country. Refer to Figures 2.1a, 2.1b, 2.1c, and 2.1d.
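The split can be sketched as follows: treat a frame as a list of scan lines and divide it into two alternating fields. (Which field is drawn first actually varies between standards; this toy example only shows the split itself.)

```python
def split_fields(frame):
    """Split a frame (a list of scan lines) into its two interlaced
    fields: lines 0, 2, 4, ... and lines 1, 3, 5, ..."""
    field_a = frame[0::2]   # one field: every second line from the top
    field_b = frame[1::2]   # the other field: the remaining lines
    return field_a, field_b

# A toy 6-line "frame"; a real NTSC frame has 525 scan lines.
frame = [f"line {n}" for n in range(6)]
a, b = split_fields(frame)
print(a)  # ['line 0', 'line 2', 'line 4']
print(b)  # ['line 1', 'line 3', 'line 5']
```

Progressive scan, described next, simply draws the whole list in order instead of splitting it.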
Figure 2.1b: Video out 1 (odd field)
(Refer to the Colored Section for the colored image.)
Figure 2.1c: Video out 2 (even field)
(Refer to the Colored Section for the colored image.)
Our computer monitor and many new digital television formats use progressive scan. This method draws each line in order from top to bottom, which results in a cleaner-looking image and usually a higher resolution. Refer to Figures 2.2a and 2.2b.
Figure 2.2a: Full resolution progressive video frame
(Refer to the Colored Section for the colored image.)
Figure 2.2b: Half resolution interlaced video field
(Refer to the Colored Section for the colored image.)
There are many different kinds of video signals, which can be divided into either television or computer types. The
format of television signals varies from country to country. In the United States and Japan, the NTSC format is used.
NTSC stands for National Television Systems Committee, which is the name of the organization that developed
the standard. In Europe, the PAL format is common. PAL (phase alternating line), developed after NTSC, is an
improvement over NTSC. SECAM is used in France, Russia, and Asia, and stands for séquentiel couleur à mémoire (sequential color with memory). It should be noted that there are a total of about 15 different sub-formats contained within
these three general formats. Each of the formats is generally not compatible with the others. Although they all
utilize the same basic scanning system and represent color with a type of phase modulation, they differ in specific
scanning frequencies, number of scan lines, and color modulation techniques, among others.
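The headline differences between the three families can be summarized as follows; the figures below are the commonly quoted scan-line counts and frame rates, not an exhaustive specification:

```python
# Commonly quoted parameters of the three analog TV standards.
standards = {
    "NTSC":  {"scan_lines": 525, "frame_rate": 29.97},  # USA, Japan
    "PAL":   {"scan_lines": 625, "frame_rate": 25.0},   # much of Europe
    "SECAM": {"scan_lines": 625, "frame_rate": 25.0},   # France, Russia
}

for name, spec in standards.items():
    print(f"{name}: {spec['scan_lines']} lines at {spec['frame_rate']} fps")
```

Even where line counts match (PAL and SECAM), the color modulation differs, which is why the formats remain mutually incompatible.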
Digital TV, or DTV, covers many different formats, specifications, and names that fall under the "digital television" category. DTV includes standards for broadcasting like ATV (Advanced Television), HDTV (High Definition Television), and SDTV (Standard Definition Television). Refer to Table 2.1. It is possible to convert from one standard to another,
either through expensive tape-to-tape conversions, or through special software.
2.2.3 Video Timecode
Timecode is important because it identifies individual frames of our video and is used throughout any video editing
system. It’s rather like the page numbers of a book. Timecode systems assign a number to each frame of video
analogously to the way that film is manufactured with edge numbers to allow each frame to be uniquely identified.
Time data is coded in binary coded decimal (BCD) digits in the form HH:MM:SS:FF (Hours: Minutes: Seconds:
Frames), in the range 00:00:00:00 to 23:59:59:29 for 30 Hz frame rate systems. There are timecode variants for
systems having 24, 25, 29.97, and 30 frames per second.
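For a fixed integer frame rate, the HH:MM:SS:FF form maps directly to a flat frame count and back. The sketch below assumes a 30 fps system and ignores the 29.97 fps drop-frame variant, which has its own numbering rules:

```python
def tc_to_frames(tc, fps=30):
    """Convert an 'HH:MM:SS:FF' timecode to an absolute frame number."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=30):
    """Convert an absolute frame number back to 'HH:MM:SS:FF'."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(tc_to_frames("00:00:01:00"))   # 30 -- one second of 30 fps video
print(frames_to_tc(2_591_999))       # 23:59:59:29 -- the last valid code
```

This is exactly the "page number" role described above: every frame gets one unique, sortable address.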
Timecode is fundamental to videotape editing. An edit is denoted by its In Point (the timecode of the first frame to be
recorded) and its Out Point (the timecode of the first frame beyond the recording). An edited tape can be described
by the list of edits used to produce it. Each entry in such an edit decision list (EDL) contains the In and Out points of
the edited tape, and the in and out points of the source tape, along with tape reel number and/or other source and
transition identification.
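Because the Out point is the first frame beyond the recording, an edit's length is simply Out minus In, with no off-by-one correction. In the sketch below, the event numbers, reel names, and timecodes are invented for illustration:

```python
def tc_to_frames(tc, fps=30):
    """Convert an 'HH:MM:SS:FF' timecode to an absolute frame number."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# A hypothetical three-event edit decision list:
# (event number, source reel, source In point, source Out point)
edl = [
    (1, "REEL01", "01:00:10:00", "01:00:14:15"),
    (2, "REEL01", "01:02:00:00", "01:02:03:00"),
    (3, "REEL02", "00:10:00:00", "00:10:07:10"),
]

for event, reel, tc_in, tc_out in edl:
    # Out is the first frame *not* recorded, so no +1 is needed.
    length = tc_to_frames(tc_out) - tc_to_frames(tc_in)
    print(f"event {event} ({reel}): {length} frames")
# event 1 (REEL01): 135 frames
# event 2 (REEL01): 90 frames
# event 3 (REEL02): 220 frames
```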
An edited tape is invariably recorded with continuous "nonbroken" timecode. Nearly all editing equipment treats the boundary between 23:59:59:29 and 00:00:00:00 as a timecode discontinuity; consequently, it is conventional to start the main program segment on a tape with the code 01:00:00:00.
It is a frequent requirement to lock a television station’s master timecode generator to its master clock system,
providing accurate time of day information in the timecode signal for video recorders, station automation systems
and other broadcast equipment. Master clock systems provide a central, highly accurate source of time and
timecode for a broad range of time, date and timecode-referenced devices, from
clocks to computers. A choice of off-air and satellite synchronization references
and various device drivers ensures that the master clock can operate reliably
anywhere in the world, providing a very comprehensive and cost-effective
solution.
There are several standards of timecode, and a deck or camera can only read timecode that it has been
designed to understand.
SMPTE: This timecode is the professional industry standard set by the Society of Motion Picture and Television Engineers (SMPTE). All professional equipment can read this timecode.
DV timecode (DVTC): This timecode was developed for the DV format and is becoming increasingly popular.
RCTC (Rewritable Consumer Time Code): a format Sony developed for use with consumer Hi8 equipment. Unlike DVTC, however, it is difficult to find any support for RCTC outside our own editing system.
Timecode can be stored in a number of physical locations on videotape: on the address track, on an audio track, or as window burn (or "vis" code), which is a visible counter superimposed over our video. Most digital video formats use address track timecode, but in special situations we might need to resort to window burn or audio track timecode. Refer to Figure 2.3.
So far we have learnt about video formats; now it is essential to understand a format's specifications to get the best possible quality throughout our project. While discussing video formats we will come across many terms like compression, resolution, data rate, and so on. Let us understand these terms in detail.
Video: DV, DVCAM, or DVCPro are all great choices for video release, as their quality is good enough to create a master. Refer to Figure 2.4.
Film Projection: When we want the best quality in film projection, it is advisable to use DVCAM or DigiBeta. Refer to Figures 2.5, 2.6, and 2.7.
Compression is a key component in facilitating the widespread use of digital video, which is currently prevented by the mismatch between the huge storage and transmission bandwidth requirements of video and the limited capacity of existing computer systems and communications networks.
This compression can affect the quality of our image to a great extent, hence we need to be very careful while
choosing the compression format. Compression ratios can range anywhere from about 1.6:1 to 10:1 or more. Most
compression processes work by reducing unnecessary, redundant color information in each frame. Because our camera can
capture more colors than our eyes can perceive, compression software can afford to throw out the colors that our
eyes are less sensitive to, resulting in less color data and hence smaller files.
The human eye is more sensitive to differences in brightness (the light and dark values of each color) than to
differences in color. When a digital camera samples an image, the degree to which it samples each primary color
is called the Color Sampling Ratio.
DV formats use 4:1:1 color sampling, an amount of color reduction that is considered visible to the viewer.
In this ratio, the first number stands for the luma signal and the other two stand for the color-difference
(chroma) components.
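The storage saving from color sampling can be estimated with a short sketch. The helper below is illustrative (not from the coursebook): it interprets a J:a:b ratio as one luma sample per pixel plus two shared color-difference channels, and reports raw bytes per pixel at 8 bits per sample.

```python
# Illustrative sketch: raw data cost of a J:a:b chroma-sampling ratio.
# For a J-pixel-wide, 2-row region: 'a' chroma samples on the first row,
# 'b' on the second, for each of the two color-difference channels.
def bytes_per_pixel(j, a, b, bits=8):
    luma = 1.0               # one luma sample per pixel
    chroma = (a + b) / j     # both chroma channels combined, per pixel
    return (luma + chroma) * bits / 8

print(bytes_per_pixel(4, 4, 4))  # 4:4:4 -> 3.0 bytes/pixel (no reduction)
print(bytes_per_pixel(4, 2, 2))  # 4:2:2 -> 2.0 bytes/pixel
print(bytes_per_pixel(4, 1, 1))  # 4:1:1 (DV) -> 1.5 bytes/pixel
```

So before any further compression, 4:1:1 sampling alone stores only half the data of a full 4:4:4 image.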
There are many different kinds of digital video. Each type is encoded in a format designed to compress the video
into a usable form. Some popular compression formats include DV, MPEG-1, MPEG-2, MPEG-4, QuickTime
Video, Sorenson Video, Cinepak, M-JPEG and AVI. Each format has its specialized use, and some are better than
others.
2.3.2 Resolution
Video resolution is measured by the number of vertical lines that fit across the image horizontally. This is also called
Horizontal Line Resolution.
The more scan lines we have, the better the vertical resolution possible. Since each video standard has a fixed
number of scan lines, Vertical Line Resolution is the same (525 lines for NTSC of which 485 are visible, 625 lines
for PAL of which 575 are visible) for all TV sets of the same standard except the poorest quality, defective, or very
small sets.
Advertised resolution is the horizontal resolution, and it varies with the quality of the TV set. Thin upright
lines placed side by side, whose cross section is a row of dots, are used to measure horizontal resolution. To
resolve horizontal detail, the electron beam changes to make dots (and dashes) of different colors along each
scan line. The finer the detail we need to reproduce, the more changes the electron beam must be able to make
in a short span. Refer to Figure 2.8.
Pixel shape
With rectangular (non-square) pixels, which are usual in TV, the aspect ratio has to be maintained when
measuring objects, because the dimensions of the stored frame aren't equal to the true dimensions; resolutions
along the x- and y-axes aren't the same. Square pixels are pixels of the same x and y dimensions.
Each monitor breaks images into tiny pixels that display the image. Computer monitors’ pixels are perfectly
square while television pixels are rectangular. Titles and images created on a computer can appear
“stretched” when displayed on a television if this difference is not taken into account.
Use of square pixels solves such problems: picture elements are equally arrayed in both directions and allow
easy addressing, so the aspect ratio of the image does not require adjustment. This matters in image
processing tasks requiring accurate image measurement. Refer to Figures 2.9 and 2.10.
Many video formats use square pixels. This means that a screen with a resolution of 640 x 480 pixels will have an
aspect ratio of 4:3.
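The relationship between pixel counts, pixel shape, and the on-screen aspect ratio can be sketched in a few lines. The helper below is illustrative; the 0.9091 pixel aspect ratio (PAR) used for NTSC DV's 720x480 frame is a commonly quoted figure, assumed here for the example.

```python
# Illustrative sketch: display aspect ratio from pixel counts and pixel shape.
def display_aspect(width_px, height_px, par=1.0):
    """Width:height ratio as seen on screen; par is the pixel aspect ratio."""
    return width_px / height_px * par

print(display_aspect(640, 480))                    # 1.333..., i.e. 4:3 (square pixels)
print(round(display_aspect(720, 480, 0.9091), 3))  # ~1.364, still close to 4:3
```

This is why 720x480 DV footage and 640x480 computer graphics can both fill the same 4:3 screen: the narrower DV pixels compensate for the extra horizontal samples.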
After understanding the videotape formats and features we need to understand the various platforms, video
interfaces and computer systems that can be used to execute the project that we are going to start. It is essential to
use the right kind of computer to work on, as we will be using it for editing, special effects and postproduction.
Operating systems have their pros and cons, so we have to decide what kind of OS is most compatible
and suitable for the software packages that we are going to use. Let us take a quick look at the most widely used
operating systems and their features from the digital video and film point of view.
Concepts of Digital Filmmaking & Visual Fx
Operating System: Macintosh OS
Description: Widely used in postproduction houses. Configuring various cameras, interfaces and storage
options is much easier. Built-in FireWire DV support. Not all high-end packages are compatible.
Apart from these, BeOS, Unix and Linux are also commonly used.
● Works as an interface between high-end digital video formats and non-linear editing systems
■ Analog Digitizers
Not all cameras store data in digital format. Some older formats like Hi8 and Betacam SP do not have a
digital interface, so we can't plug those cameras into a FireWire port on our computer. Even some DV format
hardware lacks digital I/O. In such cases we need an analog digitizer, which takes the analog signal from our
camcorder or deck and digitizes it using the computer. Targa 2000, Canopus Rex, Media 100 and Avid products
are among the most popular analog digitizers. Refer to Figure 2.11.
Figure 2.11: Analog digitizing system where special hardware installed in the system takes care of
compressing and decompressing video in real time
2.5 Summary
In this session, Understanding Video Technology, you learned that:
Digital media itself has particular quality characteristics. Generally, digital media present a crisper, cleaner
product that can be viewed an unlimited number of times. The largest advantage, however, is that it’s a
lossless medium.
The interlacing technique avoids image flicker on the TV screen.
There are many different kinds of video signals, which can be divided into either television or computer
types. The format of television signals varies from country to country.
Timecode systems assign a number to each frame of video analogously to the way that film is manufactured
with edge numbers to allow each frame to be uniquely identified. Time data is coded in binary coded decimal
(BCD) digits in the form HH:MM:SS:FF (Hours: Minutes: Seconds: Frames), in the range 00:00:00:00 to
23:59:59:29 for 30 Hz frame rate systems.
Most tape formats are not longer than 90 minutes, so if the project runs longer than that we have to use
two tapes for our master.
Compression is a key component in facilitating the widespread use of digital video, which would otherwise
be prevented by the mismatch between the huge storage and transmission bandwidth requirements of video
and the limited capacity of existing computer systems and communications networks.
Each monitor breaks images into tiny pixels that display the image. Computer monitors’ pixels are perfectly
square while television pixels are rectangular. Titles and images created on a computer can appear
“stretched” when displayed on a television if this difference is not taken into account.
The operating systems have their pros and cons, so we have to decide what kind of OS is the most
compatible and suitable for the software packages that we are going to use.
2.6 Exercise
1. The videotapes that are used for TV are interlaced.
a. True b. False
2. Our computer monitor and many new digital television formats use progressive scan.
a. True b. False
3. DV formats use 4:2:1 color sampling, which is an amount of color reduction that is considered visible to the viewer.
a. True b. False
4. DTV includes standards for broadcasting like ATV (Advanced Television), HDTV (High Definition Television) and SDTV (Standard Definition Television).
a. True b. False
5. When a digital camera is sampling an image, the degree to which it samples each primary color is called the Color Sampling Ratio.
a. True b. False
6. This timecode is the professional industry standard set up by the Society of Motion Picture and Television
Entertainers (SMPTE).
a. True b. False
7. The speed at which the scan lines run on the TV set is called its frame rate.
a. True b. False
Guidelines for Selecting a DV Camera
Learning Outcomes
In this session, you will learn to -
3.1 Introduction
In the previous session we discussed video formats and studied the differences
between analog video and digital video. Though new digital video formats deliver better
quality than most of the old analog formats, it doesn’t matter how good our format is if our
camera is not up to the mark and shoots bad images. As a feature filmmaker we should
be most concerned about image quality, particularly if we are planning on transferring to
film. Apart from choice of format, the camera that we choose will make a difference in the
image quality of our final footage.
In the past few years, a lot of improvements have taken place in DV cameras and the DV
filmmaker can now buy an affordable camera that rivals professional cameras of just a
few years ago. Before we start working with the actual camera it is essential to understand
the features and techniques of various cameras.
In this session we will understand various features and functions of the cameras and
guidelines for evaluating and selecting the right camera for our project. By the end of this
session we will know what all those buttons on our camera are and will have a good idea of how to use them.
After that, we'll be ready to start shooting! Refer to Figure 3.1.
Figure 3.1: The standard controls found on any DV camera (Refer to the Colored Section for the colored image.)
3.2.2 Videotape
We can load the videotape into the camera and shoot on it. Though the process is simple, we need to take
care of a few things, as listed below. Refer to Figure 3.3.
Be careful when loading a DV tape and use a tape transport mechanism for it.
Never touch the inside of the cassette.
3.2.3 Controls
On every camera there are many buttons and controls. To shoot good footage we must know how these
controls work. Refer to Figure 3.4.
Figure 3.3: DV tapes and tape transport
Zoom: With the zoom controls we can zoom in to make the image look larger and closer, and zoom out to
make the object look farther away and smaller. Some cameras have a digital zoom feature, which creates a
fake zoom effect: it magnifies the image by large amounts but usually looks grainy and noisy with bleeding
colors, so it's best to keep it off.
Start and Stop: These controls start and stop the recording.
Figure 3.5a: Manual focus allowed us to focus on the hand in this image. A large aperture kept the
depth of field shallow so the background is out of focus (Refer to the Colored Section for the colored image.)
Figure 3.5b: With just a slight shift in focus, the head has been made sharp and the hand in the
foreground soft (Refer to the Colored Section for the colored image.)
Aperture: Also called Iris or Exposure, it controls how bright our image is. Sometimes, when we want
a bright scene, the automatic camera may tone down the brightness, thinking the scene is too bright.
Being able to control the aperture lets us control depth of field, the area from foreground to background
that's sharp in the image. This lets us either throw the background out of focus or keep both the background
and the foreground sharp. Refer to Figures 3.6a and 3.6b.
Figure 3.6a: A small aperture gives us greater depth of field (Refer to the Colored Section for the colored image.)
Figure 3.6b: A large aperture lets us throw the background out of focus (Refer to the Colored Section for the colored image.)
Shutter Speed: Shutter speed is another way the camera controls how bright the image is.
A faster speed lets in less light, while a slower speed lets in more. Most cameras automatically select a
shutter speed based on the aperture setting (a process called Aperture Priority), but with manual adjustment
we can often get a better effect. Refer to Figures 3.7a and 3.7b.
F-Stop: The f-stop ring controls a small, delicate part (the diaphragm) in the lens. This diaphragm is
used to regulate the amount of light reaching the film plane. In its most simplified sense, the f-stop is used
to obtain a usable exposure. If the f-stop is not set properly, the film will be over- or underexposed. The f-stop
ring has a series of numbers. The thing to remember is that the smallest number represents the widest
opening, or maximum aperture, and the highest number represents the smallest opening, or minimum
aperture. So lower f-stops let in more light and higher f-stops let in less. Hence, choosing the accurate f-stop
becomes essential when we want a picture with normal exposure. For example, if the light is too bright, the
stop must be small so too much light does not reach the film, whereas in low light the stop must be large.
Refer to Figures 3.8a and 3.8b.
Figures 3.8a and 3.8b: The left image shows the exposure with an f5.6 setting, whereas the right image
shows the exposure with an f8 setting
F-stops and shutter speeds work together to control the amount of light that reaches our film. An f-stop
determines the size of the hole the light passes through, and our shutter speed, the amount of time that
light is allowed to reach the film. With the right f-stop and the right shutter speed our picture will be correctly
exposed; not too light and not too dark. The light our lens sees is the light that’s there. There’s only so
much of it and we can’t change that. The same thing goes for our film. It’s designed to give us a correctly
exposed picture if it’s exposed to a certain specific amount of light; no more, no less. We can’t change
that either. In other words, we have a constant at each
end, and that’s why the two adjustments in the middle
- f-stop and shutter speed - must remain in a constant
fixed relationship. If we cut down on the amount of light
by changing the f-stop to a smaller aperture we have
to compensate by increasing the amount of light by
changing to a slower shutter speed.
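The fixed relationship between f-stop and shutter speed can be checked numerically. The sketch below is illustrative, not from the text: it treats exposure as proportional to shutter time divided by the square of the f-number, since the f-number describes the aperture diameter relative to focal length.

```python
# Illustrative sketch: f-stop / shutter-speed reciprocity.
def exposure(f_number, shutter_s):
    """Relative exposure: shutter time over the f-number squared."""
    return shutter_s / (f_number ** 2)

# Closing down one stop (f5.6 -> f8) halves the light, so doubling the
# shutter time (1/60 s -> 1/30 s) restores the same exposure.
a = exposure(5.6, 1 / 60)
b = exposure(8.0, 1 / 30)
print(abs(a - b) / a < 0.03)  # True: equal within rounding of the f-numbers
```

The small residual difference exists only because marked f-numbers like 5.6 and 8 are rounded values of powers of the square root of two.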
Audio: A camera's audio facilities are also worth considering. The microphones included on the
camera often pick up camera motor noise, as well as
the sound of our hands while we move the camera.
Manual audio gain controls let us adjust or attenuate
the audio signal coming into the camera, making it easy
to boost quiet voices or lower the level of a roaring car
engine. Refer to Figure 3.9.
White Balance: The concept on which white balancing is based is very simple. Light is made up of many
different colors; when we see a rainbow, we are seeing normal sunlight split into all of its separate colors by
drops of rain. The idea behind white balancing is that if the camera knows what white looks like under the
current light, then it knows what every other color is supposed to look like, since white light is composed of
every other color. Like all automatic features, automatic white balance can get confused in certain situations:
the camera assumes that the brightest object in the scene is white, when it may actually be another color,
such as green or orange, which will throw off the balance of the entire scene. The same problem also occurs
when we have mixed lighting. For these reasons, manual white balancing becomes important. Refer to
Figures 3.10a, 3.10b, and 3.10c.
Figure 3.10a: Auto White Balance
Figure 3.10b: Manual White Balance, Sunny look
Figure 3.10c: Manual White Balance, Cloudy look
3.2.5 Widescreen
In the past, video cameras used vacuum tubes for capturing images. Now video cameras use special imaging chips
called CCDs, or charge-coupled devices. CCD-based cameras use either a single CCD to capture a full-color image,
or three chips to capture separate red, green and blue data, which is then assembled into a color image. The two
factors that contribute the most to a camera's image quality are the camera's lens and the number of chips the
camera uses to create an image.
Widescreen TV uses an aspect ratio of 16:9, compared with the common TV aspect ratio of 4:3. At a ratio of 4:3,
the width of the screen is 33% more than the height, whereas at a ratio of 16:9 it is about 78% more. The 16:9
aspect ratio corresponds more closely to the normal visual field of human beings. It therefore feels more natural
to watch, and it is for this reason that most motion pictures are made in wide formats. Refer to Figure 3.11.
Movies are typically shot using formats that are very wide, much wider than they are tall, whereas television is
almost square. When it comes to transferring a movie to fit a TV, a studio has three options:
Letterbox: Putting black bars above and below the picture to change the shape of our TV screen into
something more rectangular.
Pan & Scan: Crop the edges to make the movie fit our TV. Due to the cropping we often see scenes in
videotape movies where someone's head is chopped off, or where one person is talking to another but that
person is only half visible on the screen. Refer to Figure 3.12.
Figure 3.12: Letterbox (left) and Pan & Scan (right) as ways to put a 16:9 format movie or program on a 4:3
broadcast
Movie Compress: The 16:9 image is squeezed proportionately, horizontally only, so people appear thin
on a 4:3 TV set. Widescreen TV users can choose the 'Widescreen' mode, and the image is stretched
proportionately to fill the 16:9 screen. Users of 4:3 sets can choose 'Movie Compress': the 4:3 image is then
squeezed vertically to restore the original 16:9 proportion, and the viewer sees letterbox on a 4:3 set.
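The geometry behind these transfer options is simple to work out. The sketch below is purely illustrative; the screen and frame sizes are assumptions chosen for round numbers.

```python
# Illustrative sketch: fitting a 16:9 image onto a 4:3 screen.
def letterbox_bar_height(screen_w, screen_h, src_aspect=16 / 9):
    """Height of each black bar when the wide image is letterboxed."""
    picture_h = screen_w / src_aspect  # height the wide image occupies
    return (screen_h - picture_h) / 2  # split between top and bottom

def pan_and_scan_crop(src_w, src_h, dst_aspect=4 / 3):
    """Total horizontal pixels discarded when cropping to the 4:3 frame."""
    kept_w = src_h * dst_aspect
    return src_w - kept_w

print(round(letterbox_bar_height(640, 480)))  # 60 px of black, top and bottom
print(round(pan_and_scan_crop(1920, 1080)))   # 480 px of the wide image lost
```

The numbers make the trade-off concrete: letterboxing sacrifices a quarter of the screen to black bars, while pan & scan throws away a quarter of the original picture.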
Most filmmakers find wide formats much more appealing than the squarer TV format. With a wide format we
can capture wide landscapes and create an epic, sprawling vista on the screen. It's important to note that the
widescreen mode of a camera does not actually shoot a wider image; it crops off the top and bottom to make
the image a wider shape.
Note
If we are using editing software that does not support the widescreen aspect, avoid using widescreen mode.
3.2.6 Lenses
Lens perspective refers to the way lenses represent space. Different kinds of lenses have different effects on the
way we perceive depth and dimensionality within an image. This aspect of lenses has a great impact on the process
of choosing a lens. Directors and cinematographers generally do not choose lenses for how close they bring the
viewer to the subject. That can be controlled by simple camera placement. Lenses are usually chosen for how they
represent space.
Just as a film camera uses a lens to focus light onto a piece of film, a digital video camera uses a lens to focus
light onto the imaging window of a CCD. The quality of the lens on our video camera can mean the difference
between sharp images with good color and soft images with muddy colors. Refer to Figure 3.13.
While evaluating any lens we must take the following points into consideration:
The lens should produce images that are equally bright at the middle and at the edges.
While zooming in and out, the image should not appear darker or look blown out.
At wide angles, the image should not get distorted at the corners.
The lens should focus equally on all wavelengths of light.
■ Wide-angle lenses
The defining characteristic of wide-angle lenses, also called short lenses, is that they elongate space, that is,
objects appear more distant from each other than they actually are. Wide-angle lenses also bend lines in the
composition outward. Extreme wide-angle lenses bend corners and give almost funhouse-mirror distortion to
objects or people being filmed. They are rarely used for portraiture because they balloon people’s faces and
make them look heavy or freakish. Refer to Figures 3.14a, 3.14b, and 3.14c.
■ Telephoto lenses
Telephoto lenses do the opposite of wide-angle lenses. Rather than elongating perspective, telephoto squashes
perspective, that is, makes things look closer together. This is a very common effect, and, once pointed out,
many examples come to mind. In films set in big cities, directors and cinematographers like to use telephoto
lenses to shoot crowd shots on streets. It makes people look jammed together and cramped. It exaggerates the
effect of people crowded like rats packed together. Refer to Figures 3.15a, 3.15b, and 3.15c.
Figure 3.15a: This picture was shot with a wide-angle lens; the distance between the front and the rear of the
piano is elongated, giving the image an illusion of great depth (Refer to the Colored Section for the colored image.)
Figure 3.15b: The same piano as in the above figure has been shot with a telephoto lens. Compare how the
distance between the front and the rear of the piano appears (Refer to the Colored Section for the colored image.)
Figure 3.15c: Telephoto lenses are widely used in sports coverage, to get a "closer" view of the action
(Refer to the Colored Section for the colored image.)
■ Zoom Lenses
The term zoom, as it is meant here, has existed long enough that most people understand what it means.
The zoom lens includes all of the focal lengths discussed previously. The zoom effect is created by movable
elements in the lens that either bring the subject closer to or push it farther away from a stationary camera. This
allows shots, like some camera movements, to go from very general information to very specific information and
vice versa. Refer to Figures 3.16a and 3.16b.
Figure 3.16a: Image shot with a zoomed-in angle
Figure 3.16b: Image shot with a zoomed-out angle
3.3.1 Three-Chip Camera
Sony VX-1000: This camera has excellent image quality, manual controls, and a comfortable feel, but its
biggest limitation is the lack of an LCD viewfinder. It is the most popular camera of them all. Refer to Figure 3.17.
Film is a practical art, and cinematography plays a very important role in it. Cinematography is nothing but the
coordination of the director's desired presentation of the film story with the primary technical equipment of
filmmaking: camera, lights and film stock. Establishing a shot is a painstaking and time-consuming process,
and it requires a responsible approach to electrical engineering and safety.
The cinematographer, or Director of Photography (DP), works closely with the director, organizes the shots
and shot sequences, and determines the type of lighting needed and the creation of a visual mood. The DP
coordinates closely with the gaffer (head electrician) and camera operator to meet the needs of the picture,
and must take into account the source and quality of the light, the color of the set and costume, and the skin
tone and makeup of the performer. The DP chooses which lights, and which placement of those lights, will
create the desired image: day, night, shadow, visual depth and scene focus.
One of the most beautiful films ever made is "Days of Heaven" (1978). Photographed by the legendary
cinematographer Néstor Almendros, much of this movie was filmed during "magic hour," a luminous time of day
when the sun has set but there is still enough light to expose the film. When we view a work like "Days of Heaven",
as shown in Figure 3.20, it is clear that filmmaking is dependent on one aspect of production more than any other,
and that is cinematography. Cinematography is about light, lens and picture making. It is the look, "the aesthetic,"
that defines the visual quality of the picture.
Figure 3.20: A scene from the film "Days of Heaven", which is famous for its cinematography
While the actor thinks about his character, he should portray it in such a way that his personality becomes a
part of the character. There are many ways an actor can portray his character, but the main things that can
change his personality are as follows:
Movement: this includes such things as walk, gestures and posture. Changes could incorporate pace, rate,
rhythm and style.
Voice: this includes accent, diction, sound and vocabulary. Changes might include pitch, rate, volume and
words specific to the character type.
Personality: changing the movement and voice will often change the personality of the actor to fit the
character. Or the characterization might take place in the opposite way: the actor could change personality
traits, and the voice and movement changes would follow naturally.
Apart from this, good makeup and appropriate costumes are always additional benefits.
By understanding film techniques and the storyline thoroughly, we can characterize scenes, segments,
and individual frames in any film. When we talk about technical aspects, the camera plays the most important
role in portraying a character. One important aspect of characterization is the interpretation of camera motion.
Many times, good placement and adjustment of camera positions can add a marvelous effect to a simple scene.
In Hindi films we often see camera motion used to show the confusion on a character's face: the character may
not vary his expressions, yet the camera motion and lighting create the overall impact. If the character is
cunning, then the camera focuses mainly on the facial expressions, rather than the overall getup, to make the
audience understand his cruel thoughts.
Once the character is set in the director's mind, it is a challenge to turn it into reality. It is a creative job,
executed with the help of camera techniques.
3.6 Summary
In this session, Guidelines for Selecting a DV Camera, you learned that:
Most cameras can run in different modes: Movie Mode, for shooting video; VCR Mode, for watching
the tape; and P.SCAN Mode, a special version of movie mode that shoots a different type of video.
Movie Expand is a feature to fill a 16:9 screen with a 4:3 broadcast. The 4:3 picture is ‘blown up’, or enlarged
to fill up the black bars to the left and right. The image also expands upwards and downwards, so a part of
the image on the top and the bottom ‘falls off’. When a movie is broadcast in 4:3 and letterbox (with black
bars on top and bottom of the screen), the black bars on the 4:3 image will fall off, and no part of the picture
will be lost. There are several ways commonly applied to broadcast a 16:9 format movie or program on a
4:3 broadcast: Letterbox, Pan & Scan and Movie Compress.
In the past few years, a lot of improvements have taken place in DV cameras and the DV filmmaker can
now buy an affordable camera that rivals professional cameras of just a few years ago.
The cinematographer is the Director of Photography (DP) who works closely with the director, organizes the
shots and shot sequences, determining the type of lighting needed and the creation of a visual mood.
Cinematography is the process of capturing a vision onto film. For the cinematographer this dynamic
process involves a combination of technical skill with creative judgment. The process is both a craft and an
art, involving composition of light, shadow, time and movement.
When we talk about technical aspects, the camera plays the most important role in portraying a character.
One important aspect of characterization is the interpretation of camera motion. Many times, good placement
and adjustment of camera positions can add a marvelous effect to a simple scene.
3.7 Exercise
1. Most of the cameras can run in different modes: Movie Mode, for shooting video, VCR Mode for watching the tape and P.SCAN Mode, a special version of movie mode that shoots a different type of video.
a. True b. False
2. It is not necessary to make sure that the battery is completely dead before we recharge it.
a. True b. False
a. True b. False
4. A faster speed lets in more light, while a slower speed lets in less.
a. True b. False
5. The f-stop ring controls a small delicate transparent part (diaphragm) in the lens.
a. True b. False
6. Widescreen TV uses an aspect ratio of 16:9, compared with the common TV aspect ratio of 4:3.
a. True b. False
a. True b. False
8. The cinematographer is the assistant director of photography who works closely with the director, organizes the shots and shot sequences, determining the type of lighting needed and the creation of a visual mood.
a. True b. False
Basics of Lighting and Art Directing
Learning Outcomes
In this session, you will learn to -
4.1 Introduction
In the previous session we learnt about camera concepts and related equipment. Although it's important to
choose a good camera for our shoot, the one thing that will have the greatest impact on the quality of our
image has nothing to do with our camera: lighting our scene has far more to do with the quality of our image
and its visual impact.
Traditionally, shooting on film has been technically more challenging than shooting video because film stocks
need to be exposed properly, and proper film exposure needs a lot of light. Lighting for film is an art form in
itself; witness the many fine (and not so fine) films produced during the past decades. In addition, film is a
wonderful and valuable medium to capture and then study lighting and lighting techniques.
Lighting for film is a bridge between the cameraman, his film and the processing lab. Film lighting techniques
are heavily dependent on knowledge of how a particular film stock will react to a particular type of light with
respect to intensity, contrast and color temperature. A multitude of image qualities are available by manipulating
exposure, color temperature and film processing.
Lighting is greatly important for films, along with the other aspects of good cinematography: composition,
camera movement, the choice of colors, set design, and makeup.
4.2.2 Color Temperature
Light is measured in terms of color temperature, which is calculated in degrees Kelvin (K). Indoor tungsten lights
have a color temperature of 3200 K, whereas daylight has an approximate color temperature of 5500 K. Daylight-
balanced lights are much stronger than tungsten lights, and if we try to mix them together the daylight will
overpower the tungsten. Wherever it is essential to mix both kinds of light, we need to account for the
color temperature differences by balancing our light sources. There are many accessories available to balance the
temperature of light and make it either tungsten balanced or daylight balanced.
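Such balancing accessories are commonly rated in mireds (micro-reciprocal degrees), the reciprocal scale on which an equal shift has a similar visual effect at any color temperature. The sketch below is illustrative, using the standard nominal values of 3200 K for tungsten and 5500 K for daylight:

```python
# Illustrative sketch: color temperature in mireds (micro-reciprocal degrees).
def mired(kelvin):
    """Mired value of a color temperature."""
    return 1_000_000 / kelvin

def mired_shift(from_k, to_k):
    """Shift a gel or filter must provide to convert one source to another."""
    return mired(to_k) - mired(from_k)

print(round(mired(3200), 1))           # 312.5 mireds (tungsten)
print(round(mired(5500), 1))           # 181.8 mireds (daylight)
print(round(mired_shift(5500, 3200)))  # +131: warming shift, daylight -> tungsten
```

Conversion gels are specified by this shift, which is why one gel can serve sources of somewhat different starting temperatures.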
Figure 4.5a: Image shot with soft light (Refer to the Colored Section for the colored image.)
Figure 4.5b: Image shot with a hard light source (Refer to the Colored Section for the colored image.)
If we are unable to approach a sunlit subject because it is some distance away, across a river or road, for example,
simply take a reading of the same sunlight that is striking the subject from where we currently are. This is known as
a “substitute reading.”
There are two ways light is normally measured on the set: incident light and reflected light.
Incident light: In measuring incident light, a light meter is held at the subject and pointed toward the
camera. From that particular position the meter determines how much light is falling on the subject.
Reflective light: In measuring reflective light, a light meter measures light travelling in the opposite direction to
incident light. Here the meter is held at the camera to determine how much light is being reflected from the
subject back to the camera lens. Many cameras have reflective light meters built into their viewing systems.
These internal light measurement systems are called through-the-lens (TTL) metering systems. Apart from
TTL systems, light meters are handheld and can usually measure either reflective or incident light. Meters
typically come with both a white bulb for reading incident light and an interchangeable flat grid for reading
reflective light. Refer to Figure 4.13.
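The practical difference between the two readings comes down to calibration: a reflective meter assumes the scene averages out to middle gray (about 18% reflectance), so unusually bright or dark subjects bias it. The helper below is a hypothetical illustration of that bias, not a course tool:

```python
import math

def meter_bias_stops(subject_reflectance, calibration=0.18):
    """How far, in stops, a reflective reading of a uniform subject
    deviates from the middle-gray assumption the meter is built around.
    Positive means the subject reads brighter than middle gray."""
    return math.log2(subject_reflectance / calibration)

# A white wall (~90% reflectance) reads about 2.3 stops over middle gray,
# so a meter trusting that reading would underexpose the shot:
print(round(meter_bias_stops(0.90), 1))  # 2.3
```

An incident reading sidesteps the problem entirely, because it measures the light falling on the subject rather than what the subject reflects.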
4.2.5 Exposure
When we shoot a film, the exposure isn’t uniformly distributed over all the areas and angles. Highlights (brighter
areas) in the scene reflect the most light, and the areas of the sensor onto which they are focused are exposed
a great deal. Darker areas, like shadows, reflect much less light, so the areas of the sensor onto which they are
focused receive much less exposure. The perfect exposure retains details in both the highlights and shadows. Refer
to Figure 4.14.
As depicted in the series of images in Figure 4.14, the middle photo is correctly exposed. The left photograph was
overexposed and is too light; the right photo was underexposed and is too dark.
The art of lighting involves choosing an acceptable exposure. We can control exposure using the camera's auto-exposure
control only to some extent, because much of the time the auto-exposure mechanism might do exactly
the opposite of what we want. For example, we may decide to deliberately overexpose the sky to help keep a
good exposure on our actor's face, but the auto-exposure might instead expose for the bright sky, causing our
actor's face to fall into shadow.
While shooting, try to strike a good balance of exposure across all areas of the scene, because an underexposed
image will look dark or muddy. Remember, there is always some area of overexposure in a properly
exposed frame, usually reflective highlights and bright white areas. As a rule, it is better to have more light than not
enough light.
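One way to reason about "more light versus not enough" is the standard exposure value (EV) formula, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds; each whole EV step halves or doubles the light. A small illustrative sketch, not drawn from the course material:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Exposure value: EV = log2(N^2 / t). Each +1 EV means
    half as much light reaches the sensor."""
    return math.log2(f_number ** 2 / shutter_seconds)

ev_f8 = exposure_value(8, 1 / 125)     # ~12.97
ev_f56 = exposure_value(5.6, 1 / 125)  # opening up one stop
print(round(ev_f8 - ev_f56, 2))  # 1.03
```

The difference is not exactly 1.0 because marked f-numbers such as 5.6 are rounded from the true √2 series.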
Another problem that is commonly faced is the mixing of daylight and interior light. As they both have different color
temperatures mixing them together is a difficult task. In this situation we should choose the most important light
source and balance the other lights accordingly.
For example, if we have to shoot in a house where the main light comes from a door or window, then that will be our
main light source and we must adjust the other lights to its color temperature. Refer to Figure 4.15.
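In post, this kind of color-temperature mismatch is what white-balance correction addresses. As a rough illustration of the idea only (this is not a tool used in the course), the classic "gray-world" method scales each color channel so the frame averages to neutral:

```python
def gray_world_balance(pixels):
    """Crude gray-world white balance on a list of (R, G, B) tuples:
    scale each channel so the frame's average color becomes neutral."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(avg) / 3
    gains = [gray / a for a in avg]
    return [tuple(p[c] * gains[c] for c in range(3)) for p in pixels]

# A warm, tungsten-tinted frame (red-heavy) pulled back toward neutral:
balanced = gray_world_balance([(200, 150, 100), (100, 75, 50)])
print(balanced)  # [(150.0, 150.0, 150.0), (75.0, 75.0, 75.0)]
```

Real colorists use far more selective tools, but the principle of measuring a cast and compensating per channel is the same.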
The one fundamental concept that the lighting designer working on an outdoor scene must learn is this: it is very
difficult to compete with Mother Nature. Lighting a scene during a bright sunny day is almost impossible and has little
impact. Lighting during a cloudy or overcast day may have some impact, but usually at best provides basic
illumination. During the day, the designer may need to provide hundreds of kilowatts of lighting to reveal even a minor
expression on an actor's face, and even then the lighting may only fill in the shadows. If a cloud suddenly passes
over the sun, the scene lighting levels will seem to rise drastically. Once the sun has started to set, however, a fixture
of just 1 kilowatt can appear brighter to the audience than the hundreds of kilowatts previously required to provide the
same visual impression.
4.3.3 Light at Night
Lighting at night is no fun at all. However much light we pour onto a subject, it still looks dark and grainy;
either that, or our subject looks blasted out: white and washed out, like a rabbit caught in a car's headlights.
The best bet is to shoot all our night material just before the light is about to go, when it looks like night but there is still some
light on the horizon (we had better be quick), or to shoot it "day for night."
Figure 4.17a: Image indicating the key light position
Figure 4.17b: Image indicating the effect of light on the subject
Fill Light: It is used to fill in the strong shadows created by the key light. The fill light is usually placed at a 45-
to 90-degree angle opposite the key light (left of the camera if the key is right, and vice versa). The fill light
may be placed high or low. Usually the fill light is dimmer and more diffuse than the key light. Subduing the
shadows gives a pleasing play of light and shadow on the subject. Refer to Figures 4.18a, 4.18b, 4.19a,
and 4.19b.
Figure 4.18a: Image indicating the fill light position
Figure 4.18b: Image indicating the effect of light on the subject
Figure 4.19a: Image indicating the key light and fill light positions
Figure 4.19b: Image indicating the effect of light on the subject
Backlight or kicker light: This is positioned behind the subject and is used to separate it from the
background. The backlight is usually placed high behind the subject, at an angle nearly opposite to the key
light. The aim here is to add a highlight along the edge of the subject’s silhouette, usually on the darker side
of their body (the side with the fill light). This separation lends a sense of depth to the image and helps make
our subject stand out better. Refer to Figures 4.20a and 4.20b.
Figure 4.20a: Image indicating the backlight position
Figure 4.20b: Image indicating the effect of light on the subject
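The balance between key and fill is often quantified as a lighting ratio, comparing the key-plus-fill side of the face with the fill-only side. The sketch below is illustrative; the light levels (in foot-candles) are invented for the example:

```python
import math

def lighting_ratio(key_fc, fill_fc):
    """Classic lighting ratio: light on the key side (key + fill)
    versus light on the fill-only side."""
    return (key_fc + fill_fc) / fill_fc

ratio = lighting_ratio(key_fc=300, fill_fc=100)
stops = math.log2(ratio)  # difference between the two sides, in stops
print(f"{ratio:.0f}:1 ratio, {stops:.0f} stops")  # 4:1 ratio, 2 stops
```

Ratios around 2:1 read as soft and flattering; 4:1 and beyond reads as increasingly dramatic.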
Lighting a person is much more complicated than lighting an object, as the human face has many angles and its
expressions constantly change. The lighting here needs to be done very carefully: incorrect lighting will cast
strange shadows on the actor's face, and direct lighting may reveal small flaws in the actor's skin very clearly. To
avoid these problems, light the scene creatively and experiment for the best results. Refer to Figure 4.21.
Good lighting is the easiest way to create a visually symbolic set. Colors have strong associations
in people's minds: black symbolizes death; red hints at blood, violence, or danger as well as love and
passion; blue is peaceful and calming as well as an indication of sadness; and so on. It is the art director who needs
to take care of accurate symbolic displays using suitable techniques.
In addition to externalizing the themes of the story, production design also aids in focusing the viewer’s eyes and
this job is the art director’s responsibility. Apart from lighting, the art director also needs to take care of set dressing
and props, and location alterations.
Figure 4.22a: Example 1 - Complete set
Figure 4.22b: Example 2 - Complete set
The Art Director acts as a go-between with the props department and set dresser in the completion of the sets.
A good prop (property) can really sell a weak location or make the shooting much easier. For example, if our
film involves weapons and fight scenes, we will need special props such as breakaway glass, tables, and chairs, and fake
weapons. It is not always practical to buy such things, as it can be very expensive, so nowadays
rental props are also available.
4.5.3 Tips and Tricks
Let us have a look at some tips and tricks that any art director should know beforehand.
When determining how we want our lighting to look, we should first form a style of lighting that will
accommodate the scene or job that we are about to shoot. We need to establish our own look and style.
When shooting outdoors, choose a background that is not brighter or lighter in color than our subject, if we
can help it. A bright background is difficult to light against, since it can reflect light
as much as two or three stops brighter than what falls on our talent. Choosing a darker background
outdoors to shoot against allows us to light our talent without the background becoming overexposed.
Mirrors can look great, but they create special problems for lighting and shooting. Work with the director of
photography before putting a mirror in a set.
Extremely saturated colors look good to the human eye, but not to the video camera. Take care to avoid
highly saturated colors and stripe patterns, as they look bad on video usually resulting in a distracting moiré
effect.
Lighting a scene, whether it is a one person interview or a large interior, is determined by the script, the
environment, a desired mood, or feel, and what the program should depict to the viewer, not the format of
the camera.
For outdoor lighting, try to position the subject toward the sun if possible and place ourselves in front of the
subject. It is best to film in the early morning or late afternoon to get a golden effect in our video.
Filming at high noon creates narrow, hard shadows on the subject's face. It is best to use some sort of
light diffusion, if possible, when filming at high noon; for example, shoot under the shade of a tree when filming
portraits.
For indoors: Try to use as much available light as possible. We may also need to enhance the available
light with a camera light. This will ensure that we properly expose the subject.
Remember, these small problems become a big headache when we transfer our video to film.
4.6 Summary
In this session, Basics of Lighting and Art Directing, you learned that:
Under normal conditions a human perceptual adjustment called approximate color constancy comes
into play and automatically compensates for sources of light that we assume are white. Strangely,
when we look at video or film, approximate color constancy doesn't work in the same way. Unless color
corrections are made, we'll notice significant (and annoying) color shifts between scenes when they are cut
together.
From the lighting instruments themselves, we now turn to attachments that are used with these lights.
Adjustable black metal flaps called barn doors can be attached to some lights to mask off unwanted light
and to keep it from spilling into areas where it’s not needed.
A Fresnel is mounted on a floor stand for film and on-location video work, but in the studio these lights are
typically hung from a grid in the ceiling.
In addition to having different color temperatures, lights have different qualities. They can be direct or hard,
soft or diffuse, or they can be focused like a spot light.
The most common solution for picture-quality problems is to control the light. Cameras love light. No matter
what our film is about, creative lighting can add an extra layer to enhance the mood and emotion of our
story.
The Art Director acts as a go-between with the props department and set dresser in the completion of the
sets. A good prop (property) can really sell a weak location or make the shooting much easier.
4.7 Exercise
1. Professional lights fall into _________basic categories.
a. One b. Two
c. Three d. Four
2. Fluorescent lights have a color temperature that ranges from 2700-6500 K and are much brighter than normal
lights.
a. True b. False
a. True b. False
a. True b. False
5. A special Fresnel lens attachment lets us adjust the angle of the light beam from flood to spot light.
a. True b. False
6. Light meters are translucent sheets of colored plastic that are placed in front of the light not only to alter the color
of the light but also to decrease the brightness.
a. True b. False
7. In measuring reflective light, a light meter is held at the subject and pointed toward the camera. From that
particular position the meter determines how much light is falling on the subject.
a. True b. False
8. Fill light is usually placed at a 45 to 90 degree angle opposite the Key light.
a. True b. False
Dealing with Audio
Learning Outcomes
In this session, you will learn to -
5.1 Introduction
So far we have covered cameras and lights in the previous sessions, and we know the importance of these two aspects.
The next area of tremendous importance is sound. It is one of the most
powerful tools in our creative palette. Remember, cinema is nothing else but making a series of images talk,
communicate with the audience, and take them into a whole new world. With the right music and sound effects,
we can do everything from evoking locations to defining moods and building tension. If we close our eyes while
watching a movie and just listen to the background score, we get a fair idea of what is happening in the scene: whether
it is a scary situation, a happy mood, or a moment of suspense. Without sound,
the impact of a movie is greatly diminished.
While watching a movie, an audience is sometimes unaware of low picture quality; they may find even
a low-quality image appealing. But if they can't understand the audio, they won't be engaged in the story.
Even though most of today's filmmakers are aware of the importance of audio, some DV filmmakers still lack
audio experience and technology, and since audio is just as important as the visuals, this lack
of knowledge can ruin a movie.
Though it is possible to repair the sound in post-production, doing so has many limitations; it is much easier
to get good editing results for images and animation than for audio. If we have recorded extraneous sound or low
levels, correcting our audio will be extremely difficult. There are two ways to get good audio results: put the actor in
a vacuum chamber so every other possible sound is gone, or put the microphone as close to the actor as possible.
Since getting a vacuum chamber big enough for an actor and the entire set is not possible, we'll focus on getting
the microphone as close to the actor as we can.
One thing is important to understand: though it is difficult or impossible to correct a sound, or remove an
unwanted sound from a recording, it is easy to mix sound in. So if we have the primary sound of an actor recorded,
we can always add sounds such as background music or object sounds later, in the postproduction
stage.
Recording good audio requires a lot of preparation and begins with selecting the right microphone; rather, it all
begins with the microphone. We will start with the microphone that is built into the camcorder. For the most part,
this microphone should not be used to record sound for a movie. Some documentaries must use
it due to limitations of staff or time, but it should never be used for a narrative movie. The reason the on-board
microphone should never be used is the main principle behind audio recording for motion pictures:
to record dialog properly we must get the cleanest possible recording that we can. This
means that background noise is as low as possible and the dialog does not over-modulate.
To record good-quality audio we need to buy or rent one or more high-quality microphones, which we will connect
to our camera or to a separate audio recorder such as a DAT or MiniDisc recorder. Different types of microphones are
designed for different recording situations, so we should choose our microphone according to the type of shooting
we are doing.
Each kind of microphone has characteristics that define what it will hear. These hearing
characteristics are also known as directional characteristics. The directional coverage of the mic we choose
will have a lot to do with both the content and the quality of our recorded sound. Let us look at some of these
directional qualities as we proceed further.
Because omnidirectional mics pick up sound from all directions, it might seem sensible to use them when recording on large
sets with many actors. In practice, though, they are unsuitable for many such situations: with their wide
coverage, omni mics can pick up far more sound than we want, including camera noise, operator noise, and
the sounds of passing cars or people.
Miniature omnidirectional mics are called "binaural microphones." They are used in pairs, placed on either side of
a human (or artificial) head, in or as near as possible to the ears. Omnidirectional mics pick up sound
from all directions fairly equally, so when they are used in this manner, they pick up sound very much like the human
ear does. Refer to Figure 5.1.
5.2.2 Unidirectional Mics
Cardioid microphones are unidirectional microphones: they pick up sound mostly in the direction that we point
them. They cannot be used to make binaural recordings, but can, of course, be used to make stereo recordings.
Because of this directionality, they have certain advantages over omnidirectional mics in some situations. When
we are recording in a venue that does not have great acoustics, the audience is noisy, and/or we can't get close to
the sound source, cardioids are the better mics to use.
Since Cardioids are directional mics, they will greatly reduce excess reflected sound coming at the mics from all
over the venue. They do a good job of reducing unwanted audience noise from the sides and rear. While they
can be used up close with excellent results, they excel over Omni mics when recording from a distance. In fact,
there are different levels of directionality available, including Sub-Cardioid, (regular) Cardioid, Hyper-Cardioid and
Super-Cardioid (sometimes called shotgun mics). In general, the further we are from the sound source, the more
directional the mic should be. Refer to Figure 5.2.
Note
Cardioids are also the preferred mic to use on stage for sound reinforcement
applications.
Good-quality cardioids are about two or more times the size of equivalent-sounding omnidirectional mics. The
cardioid mics commonly available in the same size package as the small omnis lack deep bass
and high-frequency response.
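All of these first-order patterns can be described by a single formula, sensitivity(θ) = A + (1 − A)·cos θ, where A is the "omni fraction" (1.0 for an omni, 0.5 for a cardioid, roughly 0.25 for a hypercardioid). The sketch below is an illustrative model, not part of the course material:

```python
import math

def sensitivity(theta_deg, omni_part):
    """First-order polar pattern: omni_part + (1 - omni_part) * cos(theta).
    omni_part = 1.0 gives an omni, 0.5 a cardioid, ~0.25 a hypercardioid."""
    theta = math.radians(theta_deg)
    return omni_part + (1 - omni_part) * math.cos(theta)

# A cardioid nulls sound arriving from directly behind (180 degrees)...
print(round(sensitivity(180, 0.5), 3))  # 0.0
# ...while an omni picks it up at full level:
print(round(sensitivity(180, 1.0), 3))  # 1.0
```

This is why a cardioid "greatly reduces" audience noise from the rear: its pickup falls smoothly to a null behind the capsule.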
■ Handheld Mics
Omnidirectional handheld mics are good for vocals, music, or sound effects. They are somewhat sensitive
to handling noise and are best used on a stand. We need to be careful around wind or ambient rumble (such as
traffic). For the best results, place them close to the subject. We should
also remember that the performer shouldn't speak directly into the top of the mic; it's best to tilt the mic at about a 30-degree angle. Refer to Figure 5.3.
Figure 5.3: Example of a handheld mic
■ Lavaliers
Lavalier, or clip-on, mics are the small condenser mics that we see clipped to the front of newsreaders. Originally, the term "lavalier" referred only to the "neck-worn" or "body-worn" class of small microphones. These days, the working definition of lavalier has been extended to include virtually any miniature microphone small enough to be worn on the body and/or hidden in the set. These are generally omnidirectional mics, and because they are kept so close to the speaker's mouth they rarely pick up extraneous sound, making them suitable for recording individual performances.
Modern lavaliers can be described as being either “Proximity” or “Transparent”. Refer to Table 5.1 to understand
the difference between the two mics in brief.
Proximity: A proximity-type lavalier is defined as a microphone that works best when kept fairly close to the source of the voice; it emphasizes that voice and suppresses background.
Transparent: These are defined as sounding more like omnidirectional recording studio mics. They are very sensitive to sounds, and their volume-versus-distance characteristic is far more gradual than that of proximity lavaliers. Transparent mics can be deployed at greater distances and are far more forgiving of talent turning their heads away from the mic. They sound much more natural and less forced than proximity mics. Their drawback is that they are much more sensitive to background noise, and they also require greater skill to hide under clothing. Refer to Figure 5.4.
Table 5.1: Table depicting the difference between the two mics - proximity and transparent
While placing these mics for a good recording certain precautions must be taken. Let us understand these simple
rules to follow:
Center the lavalier (left to right). If the lavalier is positioned to one side, the amplified voice will drop if the actor's head is turned away from the microphone. Naturally, the voice will get louder if the head is turned toward the microphone. This results in very inconsistent levels.
Position the lavalier so it is at least a full hand spread from the mouth. This minimizes differences in distance from the mouth caused by head movement. Again, this will help keep levels consistent.
Avoid having the lavalier cable hang straight down. This increases cable noise. Instead, put a gentle bend in the cable near the microphone by bringing the cable back up through the microphone clip. Refer to Figure 5.5.
Figure 5.5: Image depicting the bend in the lavalier cable
Hide the miniature lavalier microphone in the actor's hair or beard. The sound level will stay constant since the microphone moves with the head.
■ Shotgun Mics
The long, thin microphones that we see sticking out of video cameras are referred to as shotgun mics. These
mics provide the greatest flexibility for miking. Some shotgun mics record stereo audio, usually by having two
elements inside, one each for the left and right channels, so these mics are used for high-quality film and TV
production requiring mono and stereo in one mic. They are popular with current-affairs crews who need a
shotgun for interviews and a stereo mic for atmospheres but only want to carry one microphone. They are
often also excellent for stereo wildlife recording.
There is a tendency among filmmakers to rely too heavily on their camera mounted shotgun mics rather than
separately mounted boom mics. Obviously, having a microphone on the camera is more convenient. But our
objective in the field is not convenience, but gaining the highest quality sound possible. The shotgun microphones
are like a telephoto camera lens. A long lens will isolate and magnify a distant subject, but at the same time it will
compress the perceived distance between subject and background. Everything appears to be closer together
than it physically is. Refer to Figures 5.6a and 5.6b.
Note
Most cameras and tape decks have one place to plug in a microphone. So if we are using
multiple microphones we’ll have to plug them into a mixer to mix their signals down to a
single stereo signal that can be plugged into the microphone input on our camera or deck.
The distance between microphone and subject should agree with the apparent distance to the subject on screen. In a long shot, it is natural
for the voice to sound more distant and for there to be a greater presence of ambience. Close-up angles should
consist of more voice and less background.
There are four basic mic placements from which all mic setups are built: boom, plant, lavaliere and wireless. This
priority is sometimes referred to as the Hierarchy of Microphone Techniques. Let’s examine each approach:
■ Boom
Booming involves mounting the microphone on a boompole and suspending it in front of the subject for optimal sound pickup. The process is demanding, since it involves coping with many variables at once, such as holding the mic as close as possible to the subject, moving the mic from one subject to the next, and keeping the mic out of the picture frame. The rule of thumb when booming is to get as close as possible to the subject, which means just outside the camera frame line. The further the mic is from the subject, the greater the background noise and echo, so every inch closer improves sound quality. The mic is normally held several inches to a foot over the actor's head. Refer to Figure 5.7.
Figure 5.7: Example of booming
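The "every inch closer" rule follows from the inverse-square law: for a point source in a free field (an idealization that real rooms only approximate), direct sound level changes by 20·log10(old/new) decibels when the mic moves. A quick illustrative sketch:

```python
import math

def level_change_db(old_distance, new_distance):
    """Approximate change in direct sound level when a mic moves
    (point source, free field). Positive means louder."""
    return 20 * math.log10(old_distance / new_distance)

# Halving the boom's distance to the actor gains about 6 dB of direct
# sound, while the room echo and background stay roughly the same,
# so the voice-to-noise ratio improves by about the same amount:
print(round(level_change_db(2.0, 1.0), 1))  # 6.0
```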
■ Plant
A plant is any microphone fixed in place. It is used to cover a static subject when it is impractical to use a
boom. Plants can be conventional condenser mics or lavalieres. The type of microphone used depends on
the situation. Placement is important because plant mics are effective only if dialogue is directed within their
pickup pattern and range. Another important consideration is that the mic be properly hidden (planted). Mics
can be hidden just about anywhere: behind, on top and below props, furniture and walls. The possibilities are
limited only by our imagination. The newer lavs (lavaliers) are so small they can be used in plain sight and go
unrecognized as mics.
Lavaliere: We have already learned about the lavaliere as a type of mic; it is also a mic-clipping technique
used to record an individual subject's voice.
Wireless: Wireless (radio) mics send the audio signal over the airwaves using a transmitter and receiver.
Their main drawback is that they are subject to RF interference, so a wired mic should be used whenever
possible. They are helpful when the boom operator cannot get close to the action and it is impractical to run
a lav cable. A wireless lav system is a normal solution for complex shoots involving lots of mics. It
consists of a normal lav microphone attached to a small transmitter worn by the actor, while a receiver
picks up the audio and routes it on to our recording device.
5.2.5 Headphones
Headphones are the audio equivalent of a field monitor: we need them to hear the audio as it is being recorded.
They serve a dual purpose, blocking out ambient noise from the set and allowing the sound recordist to
monitor the audio directly. With headphones, a sound recordist can focus on the actors' sound, which is not
possible with the unaided ear, as it takes in all sorts of noise from the surroundings. Refer to Figures 5.8a and
5.8b.
Audiography refers to the entire set of processes and techniques involved in creating audio for a film. The term
covers sound systems, synchronization techniques, the work of the sound crewmembers, sound effects applied
during postproduction, sound dubbing, and the other equipment used to get the best quality of sound. Let's learn these
terms in detail.
Early video cameras left much to be desired in terms of sound, but many professionals used a single system
approach anyway, recording directly into the camera. The main reasons were ease of operation and portability of
equipment (at that time, video was primarily used for documentaries and news gathering).
5.3.2 Synchronization
As the sophistication of video and audio equipment increases, so – almost inevitably – does the delay experienced
by the signals passing through them. Unless great care is taken to match the delays in the audio and video paths,
any small differences can rapidly accumulate and lead to a distracting loss of synchronization between sound and
picture (most notably, loss of “lip-sync”). This is already becoming a widespread problem in television production
and it affects all types of programmes, including live and pre-recorded material, and films.
Currently the problem of maintaining correct audio synchronization is addressed using a tracking "A/V sync" audio
delay. This approach uses an audio delay that automatically tracks the difference in timing between the input and
output video signals across an item of video equipment; that is, the difference in the timing of the video is measured and
applied as a delay to the audio.
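The tracking delay can be pictured as a buffer that holds audio back by exactly the measured video latency. The toy Python model below is purely illustrative (the class name and units are invented; real devices do this in hardware on audio samples):

```python
from collections import deque

class TrackingAudioDelay:
    """Delay audio by a measured video-path latency so lip-sync holds."""

    def __init__(self, sample_rate=48000):
        self.sample_rate = sample_rate
        self.buffer = deque()
        self.delay_samples = 0

    def set_video_latency(self, seconds):
        # Measured timing difference between video input and output.
        self.delay_samples = int(seconds * self.sample_rate)

    def process(self, sample):
        # Buffer the new sample; emit silence until the buffer
        # spans the required delay, then release samples in order.
        self.buffer.append(sample)
        if len(self.buffer) > self.delay_samples:
            return self.buffer.popleft()
        return 0.0

delay = TrackingAudioDelay()
delay.set_video_latency(2 / 48000)  # pretend video is 2 samples late
out = [delay.process(s) for s in [1.0, 2.0, 3.0, 4.0]]
print(out)  # [0.0, 0.0, 1.0, 2.0]
```

If the measured latency changes, a real unit has to ramp the delay smoothly; jumping it abruptly, as this sketch would, produces an audible glitch.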
■ Lip-syncing sound for filmmaking
Lip-sync refers to the entire filmmaking process of shooting, recording, transferring, dubbing, editing, and projecting
whereby the end result gives the viewer the appearance that the on-camera speaker's lips are precisely
synchronized with the words he or she is speaking. Since, unlike videotape, the filmmaking process is split into
the two separate mediums of picture and sound, maintaining lip sync throughout the various
production and post-production stages becomes very important.
Note
Slating is the process of providing positive identification marks for the start of a lip-sync
take on both the picture film and the sound track tape. These markings are very important in
the editing stage to greatly simplify the process of locating the exact sync position between
picture and sound.
levels for optimal recording. Refer to Figures 5.9a and 5.9b.
Note
To balance a strong voice against a weak voice, use our mic angle and placement rather than
riding gain (volume) on our mixer/recorder. Let the strong voice strike the mic slightly off-
axis and/or from a little more distance than the softer speaking actor. This will balance the
relative volume of both people without having the background noise continually changing
during the shot.
Boom Operator: The boom operator is the sound crewmember who handles the microphone boom, a long
pole that holds the microphone near the action but out of frame, allowing the microphone to follow the actors
as they move. The boom operator mainly needs to take care of two things: the boom shadow should not
fall on any element present in the scene, and the mic should not become visible in the frame.
Figure 5.9a: Pivoting a single mic back and forth between two characters is usually the best approach
Figure 5.9b: Subject movement requires the boom to lead the subject through a space
If the production is large, there is a third member of the sound crew, called the cable puller, who helps with
setup and keeps the microphone cords out of the way during the shot.
5.3.4 Automated Dialog Replacement
As we have seen above, the recording of dialogue usually occurs on the set during filming, and this is referred to
as "Production Dialogue". Sometimes, while actors are on the set but the cameras are not rolling, the company will
record additional lines of dialogue to be used later as "Wild Lines". Examples of wild lines that would be recorded
on the set for future use include the other halves of phone conversations, shouts or greetings from afar, background
ambience, alternate dialogue, and narration.
Sometimes, for any of a multitude of reasons, production dialogue is unusable and must be replaced during post-
production. Automated Dialogue Replacement (ADR) is a method of creating and replacing dialogue in post process
that used to be referred to as looping. In the old days, dialogue replacement was done by physically cutting out short
sections of the original dialogue (consisting of one or two lines) along with the appropriate picture. These sections
were formed into continuous loops. That’s why the process was called “looping”.
Better technology greatly simplified the process. In the ADR process, the physical loops have been done away
with. Instead, the entire reel of picture and the entire reel of original sound are threaded up in sync. An entire reel
of blank audio stock is set up on a recorder. A computer is fed the start and stop footage of each “loop” that needs
to be recorded. All three machines roll down, in sync, to the first “loop” and the process begins. The actor watches
the projected footage and listens to the cue track on headphones. A series of three audible beeps alerts the talent as
the system rolls forward towards the record start point. The take is recorded on the blank stock. At the completion
of each take, the computer rewinds all three machines back to the programmed start point and the process repeats
itself. When the loop has been successfully recorded, the entire system moves ahead to the next programmed set
of cues.
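The cue sequence described above can be sketched in code. This is a minimal illustration, not how an actual ADR computer works; the frame rate and one-beep-per-second spacing are assumed values:

```python
# Sketch of ADR cueing: given a loop's record start point (in frames),
# compute when the three warning beeps should sound.
# Assumes 24 fps and one beep per second - illustrative values only.

FPS = 24
BEEP_SPACING_FRAMES = FPS  # one beep per second

def beep_frames(record_start_frame, num_beeps=3):
    """Return the frames at which each countdown beep fires,
    ending one beep-interval before the record start point."""
    return [record_start_frame - BEEP_SPACING_FRAMES * (num_beeps - i)
            for i in range(num_beeps)]

# A loop programmed to start recording at frame 480 (20 seconds in)
print(beep_frames(480))  # beeps at frames 408, 432, 456
```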
With the current ADR process, editing and replacing of sound has become much easier for the editor.
In everyday life we might not even notice ambient sounds, because our brain filters them out as insignificant to the activity or situation we’re focusing on at a given moment. But in video, if we leave out the ambience, especially when cutting between scenes whose background sounds differ, the results will be choppy, if not annoying.
5.3.7 Dubbing
Once the dialog, sound effects, and other audio are recorded, it is time for audio dubbing. Each separately recorded
sound exists on its own sound track. Each of these sound tracks can be manipulated in different ways. They can be
faded in or out, have their pitch changed, or have many other effects applied.
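A fade, for example, is just a gain envelope applied to the track’s samples. A minimal sketch, with the track represented as a bare list of amplitudes:

```python
# Minimal sketch of a fade-in applied to one sound track,
# represented here as a list of sample amplitudes.

def fade_in(samples, fade_len):
    """Linearly ramp gain from 0 to 1 over the first fade_len samples."""
    out = []
    for i, s in enumerate(samples):
        gain = min(i / fade_len, 1.0) if fade_len > 0 else 1.0
        out.append(s * gain)
    return out

track = [1.0] * 8
print(fade_in(track, 4))  # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
```

A fade-out is the same ramp reversed, and pitch or other effects are likewise per-track transformations applied before the final mix.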
Dealing with Audio
Usually the director and producer attend these dubbing sessions. Once the director and producer of the film
are satisfied with the final audio mix, it is time to produce the finished composite sound track, which is what the
moviegoers will hear. In the past, the composite sound track existed as only two tracks, left and right, known as stereo. Today, with new technology, the composite sound track of a film can exist as many more tracks, creating surround sound, which gives a real 3D sense to the audio.
5.3.10 Woofer
The bass and lower midrange sounds are reproduced by the woofer. To operate efficiently, a woofer’s cone should
be made of material that is stiff, yet lightweight. Cones made of polymers, polypropylene, light metals, or poly mixed
with other materials including carbon strands and metals, provide excellent sound. Refer to Figure 5.13.
Figure 5.13: The XPS 210 subwoofer showing the audio control knobs and buttons
Bass: These control the low end of the audio frequency spectrum, from approximately 20 Hz up to 400
Hz or so. An instrument producing bass tones vibrates air more slowly than one producing treble tones. In
audio, bass is subdivided into mid-bass and sub-bass regions.
Mid-bass: The segment of the audio frequency spectrum covering sounds produced in the upper bass and
lower midrange region.
Treble: These control the highest frequencies in the audio spectrum, above 1.3 kHz and usually with an
upper limit of 20 kHz.
Mid-range: The segment of the audio frequency spectrum between the bass and treble frequencies. The
mid-range includes most voices and the fundamental tones produced by most musical instruments.
Attenuation: These controls are useful when we want to reduce the higher and lower audio levels.
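These band boundaries can be captured in a small helper; the cutoff values are the approximate figures quoted above, used here purely for illustration:

```python
# Rough classification of a frequency into the audio bands described
# above. Boundary values are the approximate figures from the text.

def audio_band(freq_hz):
    if freq_hz < 20:
        return "sub-audible"
    if freq_hz <= 400:
        return "bass"
    if freq_hz < 1300:
        return "mid-range"
    if freq_hz <= 20000:
        return "treble"
    return "ultrasonic"

print(audio_band(60))    # bass
print(audio_band(440))   # mid-range
print(audio_band(5000))  # treble
```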
5.4 Summary
In this session, Dealing with Audio, you learned that:
The key to optimal microphone performance is in the placement of the microphone in relation to the sound
source. If a microphone is too close, too far, or off axis, complications will result, including poor frequency
response, noise and distortion.
There are three basic mic placements from which all mic setups are built: boom, plant and lavaliere. This
priority is sometimes referred to as the Hierarchy of Microphone Techniques.
ADR is an acronym for Automated Dialogue Replacement. In this process the actors are called back during post-production to re-record dialogue that wasn’t recorded properly during the shoot. The editor supervises this process and matches the newly recorded lines to the actor’s mouth on film.
Aim the mic from ABOVE the subject so that the mic points DOWNWARD. The line of sight reaches from
the front of the mic, to the mouth, and then towards the ground. Background noise and ambience will strike
the sides of the microphone (which is the maximum rejection angle) rather than striking the mic along its
most sensitive front axis. In the event that it is impossible to mike from above, the next best option is to mike
from below, so that the line of sight terminates with sky.
Once the dialog, sound effects, and other audio are recorded, it is time for audio dubbing. Each separately
recorded sound exists on its own sound track. Each of these sound tracks can be manipulated in different
ways. They can be faded in or out, have their pitch changed, or have many other effects applied. The
technicians who control these effects are called mixers.
5.5 Exercise
1. The omnidirectional mics pick up sounds from all directions; it might seem good to use one when recording on large sets with many actors.
a. True b. False
a. True b. False
3. In general, the closer we are to the sound source, the more directional the mic should be.
a. True b. False
4. Lavalier, or clip-on mics are small condenser mics that we see clipped to the front of newscasters.
a. True b. False
a. True b. False
6. The long thin microphones that we see sticking out of video cameras are referred to as lavaliers.
a. True b. False
a. True b. False
8. Corded mics send the audio signal over the airwaves using a transmitter and receiver.
a. True b. False
a. True b. False
10. Sometimes, while actors are on the set but without cameras rolling, the company will record additional lines of dialogue to be used later as production dialogue.
a. True b. False
Editing
6 Editing
Learning Outcomes
In this session, you will learn to -
6.1 Introduction
Whether we prefer the quick-cutting MTV look, a traditional film-cutting style, or any other way of cutting, the goal of editing is to successfully tell a story. In this regard editing can be a continuation of the writing process: once the film is shot, the editor in effect rewrites the script using the footage that exists. As the shooting is already done, this rewriting is restricted to the footage in hand.
Through editing, shots are combined in accordance with the script to create the finished movie. A shot must be as
short as possible while still achieving its purpose. If it is too long, the audience will be bored; if it is too short, they
will be frustrated. Once the audience grasps the meaning of a shot, it’s time to cut to the next shot in the scene.
Editing is the act of completing the pacing and narrative structure of a film and its soundtrack by cutting and splicing
the shots together to make a final, comprehensible story. The individual who edits the film is the editor. Very often,
the success or failure of a production may rely on the quality of the editor’s work. Sharp film editing can make a
mediocre production look good and a good production look that much better. Inversely, sloppy editing can unhinge
a solid script and even negate strong efforts by the director, the actors, and technical crews.
The film editor must know how to tell a story, be politically savvy when working with directors and studio executives,
and have a calm and confident demeanor. Millions of dollars of film and the responsibility of guiding the picture
through post-production and into theaters rest in the editor’s hands. Scenes may have been photographed poorly
and performances might have been less than inspired, but a skilled and creative editor can assemble the film so
that the audience will never see these imperfections.
The first stage of editing begins after the editor is given the dailies (footage). He/She synchronizes the dailies with the parallel sound track, which has been transferred to magnetic stock. Because the director has probably engaged in coverage, the process of shooting each scene in as many different ways and from as many different angles as possible, the editor has a lot of flexibility. Let us have a look at the factors that determine the proper length of a shot.
Audience Expectation: Sometimes the editor cuts to a new shot simply because the audience does not
generally like long shots. But this can’t be applicable to all shots all the time. The audience may need to
get closer, further away or angled differently to see the action. Audience expectation works on a subliminal
level. Still, if the expectation is not met they will feel it and react disapprovingly. For instance, if a character
is injured in wide shot, the audience will want to see a close-up in order to clarify what happened. If the first
shot drags on too long, it will frustrate the audience and, possibly, impede their understanding of the action.
Dragging a shot too long is a common error with beginner filmmakers.
Comprehension: Shots require various viewing times for the audience to understand them. Simple
compositions, static subjects and shots similar to their predecessor need minimal screen time for
comprehension. On the other hand, complicated compositions, moving subjects and shots vastly different
than their predecessor need more screen time. Despite this, the speed with which an audience can absorb
the meaning and purpose of a shot should not be underestimated.
Action Requirements: Some shots include an action that must be completed before cutting to the next
shot. If the action is too long to hold audience interest and curiosity, we should compress it using various
editing techniques.
Editor Imposed: In most of the situations, the editor decides shot length based on audience needs.
Occasionally, the editor will impose a cut to create a response in the audience. This can be to: create
emphasis, maintain rhythm, surprise the audience, or make a symbolic point.
Sometimes it is better to edit the sound first and make the picture fit. This is usually true when we need to have
certain actions happen when certain words are heard, as in a how-to video, or in the case of a montage sequence
cut to music. If every action must be seen in its entirety, then it makes more sense to edit the picture first.
While editing audio, we also need to be careful about many things. If we have a digitized file of a live performance an hour or two in length, we should separate the file into shorter audio clips by song. Once we have five or ten separate
audio clips, sort through each clip and perform some basic clean-up editing to remove unwanted artifacts, such as
coughs, sneezes, loud inhales on a vocal mic before a song starts, a clank against the mic stand, and so on. After
this basic clean up we can add fade-ins and fade-outs for each clip. In general, there are no set rules about how
long fade-ins and fade-outs should last at the beginning and end of a tune. Try several options and use your ear to
determine the timing of the fade-ins and fade-outs that works best with each song.
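The first step, splitting the long file into per-song clips, can be sketched as below; the sample rate and song start times are hypothetical values for illustration:

```python
# Sketch of splitting a long recording into per-song clips, given the
# song start times in seconds. Sample rate and times are hypothetical.

SAMPLE_RATE = 48000

def split_by_songs(samples, song_starts_sec, total_sec):
    """Return one clip (list slice) per song."""
    bounds = [int(t * SAMPLE_RATE) for t in song_starts_sec]
    bounds.append(int(total_sec * SAMPLE_RATE))
    return [samples[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]

# A 10-second stand-in "recording" with songs starting at 0s, 4s and 7s
recording = list(range(10 * SAMPLE_RATE))
clips = split_by_songs(recording, [0, 4, 7], 10)
print([len(c) / SAMPLE_RATE for c in clips])  # [4.0, 3.0, 3.0]
```

Each resulting clip can then be cleaned up and given its own fade-in and fade-out.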
Typical problems that emerge in the editing room are, for example: 1) lack of different kinds of continuity; 2) cases
where the emotional intention of a scene is not realized: we don’t laugh at what was intended to be funny, or we
laugh at a scene where we were supposed to cry; 3) the audience lacks information necessary to understand the
relations between the characters or the action; or 4) the narrative creates expectations that are not fulfilled by the
story as it evolves.
Such problems might not arise from the quality of the individual scenes, but from the fact that there are too many of
them or that, when assembled, they do not produce the necessary dramatic flow.
Though editing comes in the postproduction stage, it is always a good idea to select our editing package before we start shooting. This gives us a better idea of how to shoot each scene, and we will know the solutions to likely problems beforehand. In general, editing falls into two categories, known as on-line and off-line. Understanding the difference between these two terms is essential to knowing the best quality we can expect.
On-line Editing: When we capture high-quality video, edit it, add special effects, and finally output it to a videotape final master, we are editing on-line. Outputting from our computer directly to film using a special film recorder is also considered editing on-line. In both cases the editor is creating the final master output, so quality output is a must. Yet on-line editing is not always performed on our own computer, mainly because of the cost involved. For this process we need equipment like Digital Betacam to get the best-quality footage and a DigiBeta deck to connect to our computer; purchasing them or paying rent for them is quite expensive.
Off-line Editing: Off-line editing on the other hand occurs when we are using low quality or proxy footage
to create a rough draft of our project. This term does not refer to specific equipment. It is assumed however
that we are probably working with the highest-quality video equipment to which we have access.
If we are determined to cut costs, we can edit off-line at a much lower cost per hour than on-line. It works like this: using a computer-based editing system (such as Avid, Media 100 or Premiere), we can record all of our cuts and transitions for both video and audio on disk as an EDL (edit decision list). Once we’ve made all of our editing judgments, the list can be printed out or put onto a disk that the on-line system can use to quickly reference the time-code locations for every cut.
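Conceptually, an EDL is just a list of events with timecoded in and out points. The sketch below is a simplified stand-in with hypothetical field names, not the real CMX-style interchange format:

```python
# A minimal sketch of an edit decision list (EDL): each cut is recorded
# as timecoded in/out points the on-line system can reference.
# The field layout here is simplified, not the CMX3600 format.

def tc(frames, fps=25):
    """Convert a frame count to an HH:MM:SS:FF timecode string."""
    ff = frames % fps
    ss = (frames // fps) % 60
    mm = (frames // (fps * 60)) % 60
    hh = frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

edl = [
    {"event": 1, "reel": "TAPE01", "src_in": 100, "src_out": 250},
    {"event": 2, "reel": "TAPE02", "src_in": 500, "src_out": 620},
]
for e in edl:
    print(e["event"], e["reel"], tc(e["src_in"]), tc(e["src_out"]))
```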
NTSC/PAL video monitors: A video monitor is necessary for us to know what our final output will look like.
This is the one component that we’ll have to have all the way through the post-production process.
Speakers: All editing systems need a pair of external speakers so we can properly hear sound in our
project. Refer to Figure 6.3.
Figure 6.4a: The three waveform monitors shown are the Tektronix WFM-601, the WFM-601i and the WFM-601M
Figure 6.4b: 1720 Series Vectorscopes/Waveform Monitors
Audio mixers: Mixing boards provide on-the-fly fine-tuning of equalization and gain to control the audio
quality of our production and make it easy to manage multiple audio sources. Refer to Figure 6.5.
There are many ways to build the first cut of a scene using a non-linear editing system. The simplest method is called drag-and-drop editing. With this method we use the mouse to drag shots from a bin into the timeline window, where the shots can then be arranged by dragging them into the order we want.
If our software offers storyboard editing, switch to thumbnail view in our bin, visually arrange the shots in the order we think will work, then select them and drag and drop them into the timeline.
Three-point editing is a method of setting In and Out points to precisely control where and how frames are inserted into a timeline. In a three-point edit, we set any three such markers, and Premiere determines the fourth to match the specified duration. This method results in a more precise edit than drag-and-drop editing.
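The arithmetic behind a three-point edit is simple: the source and record ranges share one duration, so the missing point follows from the other three. A minimal sketch (frame values are hypothetical):

```python
# Three-point edit sketch: given any three of (source in, source out,
# record in, record out), the fourth follows from the shared duration.

def fourth_point(src_in=None, src_out=None, rec_in=None, rec_out=None):
    """Fill in the single missing point (all values in frames)."""
    if src_out is None:
        return src_in + (rec_out - rec_in)
    if src_in is None:
        return src_out - (rec_out - rec_in)
    if rec_out is None:
        return rec_in + (src_out - src_in)
    return rec_out - (src_out - src_in)

# Source in/out and record in are set; the record out follows:
print(fourth_point(src_in=100, src_out=148, rec_in=1000))  # 1048
```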
If the scene is based on dialogue a good way to build the first cut is to create a radio cut. The whole idea behind this
method is to make the scene sound good first without worrying how it looks.
Figure 6.7: The two sides of the Liberty. Some might say this is crossing the line. But such a cut is not
confusing; in fact, it makes a strong, informative beginning to a sequence
(Refer to the Colored Section for the colored image.)
Matching Emotion and Tone: While shooting, continuity of the actors’ emotions and tones is essential. Apart from this, the camera angle, the movement of the actor, and the lighting also need to be similar when moving into the next continuity scene. If our scene is a heated argument between two people, a scene that moves from long shot to close-up will look appealing, rather than a shot cut in between in such a way that the focus on the emotions is lost.
6.6 Transitions
The phenomenon of editing deals with all aspects of filmic rhythm - from the transition of one image to another or
the detailed musical rhythm in a small sequence of edits, to the most general balancing of pace and rhythm in the
overall narrative structure. When we switch from one clip to another, we are making a transition (also known as a
“cut”) between perspectives. This switch is a fundamental part of editing, and getting smooth transitions can take a
lot of time and patience. A smooth transition is one where the viewer doesn’t even realize we have switched their
perspective. The golden rule for transitions: don’t cut just for the sake of cutting. Make sure that the perspective we cut to is the best way to show the moment we want to show.
Transitions are effects that are used to establish a link between two shots. The most commonly used is cross-dissolve
and apart from that various types of wipe transitions are also used quite often. In the early days of filmmaking only
fade in and fade out transitions were used. Today’s filmmaker relies more on the techniques given below:
Hard Cuts: This expression refers to an edit between two very different shots, without a dissolve or other effect to soften the transition. Hard cuts rely on matching action, screen position, and other cues to smooth out the change.
Dissolve, Fades and Wipes: Instead of a straight cut between clips, we may decide to use a transition effect such as a cross-fade, wipe or dissolve to soften the switch between perspectives. Transition effects can also contribute to the theme and flavor of our content; for example, a
“fade-to-black” transition can suggest the passage of time. Movies in the Star Wars saga used many “wipe”
transitions to indicate changes in place and character. The cross-fade transition can allow for very smooth
and subtle changes. Refer to Figures 6.8 and 6.9.
Figure 6.8: One image getting faded into another image creating a fade effect
(Refer to the Colored Section for the colored image.)
Using a dissolve to transition between scenes can add a feeling of smoothness and serve to slow down the pacing
of our story. Dissolves usually indicate a period between two scenes where the impact of the first scene needs to be
carried forward for a while in the next scene. They can also indicate the start of a dream sequence or flashback.
Dissolve effect
Blinds effect
Figure 6.9: Fade effect on left hand side and right side image indicating dissolve and blinds effect
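A cross-dissolve, for instance, is just a weighted per-pixel mix of the outgoing and incoming frames, with the weight ramping from 0 to 1 over the transition. A minimal sketch using single pixel values:

```python
# A cross-dissolve is a per-pixel blend between the outgoing and
# incoming shots, with the mix ratio ramping over the transition.

def cross_dissolve(pixel_a, pixel_b, t):
    """Blend two pixel values; t runs from 0 (all A) to 1 (all B)."""
    return pixel_a * (1.0 - t) + pixel_b * t

# Midway through the dissolve, the frame is an even mix of both shots:
print(cross_dissolve(200, 40, 0.5))  # 120.0
```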
Wipe is visible on screen as a bar traveling across the frame, pushing one shot off and pulling the next shot into place. It is rarely used in contemporary film, but was common in films from the 1930s and 1940s. Fades and wipes are not used much in films today, as they are considered old-fashioned effects, but at times we still see them used to create comic effect. Fade is a visual transition between shots or scenes that appears on screen as a brief interval with no picture: the editor fades one shot to black and then fades in the next. It is often used to indicate a change in time and place.
With current software packages, transitions are easy to apply: just select the transition we want (we can preview it in the small screen), set the speed we want, and drag the bar labeled with our transition (Cross Dissolve, for instance) between the two clips we want to join.
We’re all familiar with music as the most obvious sound element, and with the experience of music providing important pieces of information. Sometimes it is the musical score that carries all of the dramatic pacing in a scene. Try watching the last few minutes of Jurassic Park with the sound turned down; we will realize the movie doesn’t really have a strong ending on its own. Instead, we are led to the ending by the musical score.
We have seen how sound effects are used with images to create the spectacle of reel-life drama. But now imagine a film with great sound and dialogue that do not match the lip movements of the actors; watching it would be tremendously irritating.
Good sound editing is often used to “dress” a set to make it more believable. Good dialogue synchronization plays a key role in scenes where there is not much background sound, but where a conversation between just two subjects is going on. At this point the audience is concentrating on the dialogue more than the visuals. We’ll need to do a fair amount of editing to get our dialog organized so that it can be easily adjusted and corrected, and to prepare it for the final mix.
Checkerboarding (or splitting tracks) is the process of arranging our dialog tracks so that one voice can be easily
adjusted and corrected. We use this method to separate different speakers onto different tracks so that we can
manipulate and correct their dialog with as few separate actions as possible. It’s called checkerboarding because,
as we begin to separate different speakers, our audio tracks will begin to have a “checkerboard” appearance as can
be seen in following image.
We will not necessarily separate out every single voice, or even every occurrence of a particular speaker.
Though we might be trying to split out a particular actor, splitting tracks during short lines or overlapping dialog
may not be worth the trouble. In addition to splitting up our dialog, we’ll also need to move all of the sound effects
recorded during our production onto their own track.
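The checkerboarding described above amounts to grouping dialog clips by speaker onto separate tracks. A minimal sketch, with hypothetical speaker names and clip times in frames:

```python
# Checkerboarding sketch: dialog clips are moved onto separate tracks
# by speaker, so each voice can be adjusted with one set of controls.

def checkerboard(clips):
    """Group dialog clips (speaker, start, end) into one track per speaker."""
    tracks = {}
    for speaker, start, end in clips:
        tracks.setdefault(speaker, []).append((start, end))
    return tracks

scene = [("ANNA", 0, 40), ("BEN", 40, 65), ("ANNA", 65, 90), ("BEN", 90, 120)]
print(checkerboard(scene))
# {'ANNA': [(0, 40), (65, 90)], 'BEN': [(40, 65), (90, 120)]}
```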
The non-linear method of editing makes it very simple to select the relevant portions of the sound track and copy and paste them to a new track. Remember to copy rather than cut, or we’ll throw the remaining audio out of sync. Here we need to copy the stereo information, that is, both channels of sound, for each actor.
Many times, Automated Dialogue Replacement (ADR) is used to replace badly recorded sound. The ultimate aim is for the audio to be synchronized with the video as accurately as possible. It is always advisable to take maximum precautions while filming a shot and recording dialogue, rather than relying on sound-editing and synchronization techniques.
6.8 Summary
In this session, Editing, you learned that:
When we watch a film, most of us have great difficulty in consciously perceiving the editing. Of course we
know that every time there is a shift from one image to another, it is an edit, and we know that editing in
general has to do with the establishing of rhythm in film. But we are often not sure of the concrete function
of editing, and likewise of the contribution that the editing process makes to the final film.
In video editing, a transition is used between two different segments of video. The most common transitions are cuts, dissolves (also known as fades) and wipes. Other types of transition effects include page turns, spins, and so on.
Sometimes simple cuts from one clip to the next work well, but at other times we might want to use fancier
transitions from scene to scene. For example, we might want to use a dissolve, or a wipe or a fade.
After the video clips are in the right order and edited, the program is polished. The video clips can be
cropped, resized and color corrected. Titles are inserted or superimposed. Transitions (over 60 types) can
be used between clips. The camera audio, voice-over, and music are equalized and mixed. Special effects
and visual filters can be applied, from a simple sepia tone to the bizarre effects used in a music video.
Good sound editing is often used to “dress” a set to make it more believable. Good dialogue synchronization plays a key role in scenes where there is not much background sound, but where a conversation between just two subjects is going on.
6.9 Exercise
1. Outputting from our computer directly to film using a special film recorder is considered editing off-line.
a. True b. False
2. Off-line editing occurs when we are using low quality or proxy footage to create a rough draft of our project.
a. True b. False
a. True b. False
5. The three-point edit results in more precise edit than drag and drop editing.
a. True b. False
6. The fine cut is used as a guide for cutting the original negative and as detailed instructions involving transition
techniques such as fades, wipes, and dissolves.
a. True b. False
7. Splitting tracks is the process of arranging our dialog tracks so that one voice can be easily adjusted and
corrected.
a. True b. False
7 Color Corrections and Monitor Calibration
7.1 Introduction
Specialized high-end facilities offer a great service and often use experienced colorists, who know very well what
film should look like, to color correct each shot. They are also able to process the footage in real time once all the
programming is done. However, there are a few disadvantages to using such facilities, including a certain lack of control on the part of the producer (unless he/she is able to be physically present during the process) and the time it takes to get the master to and from the facility. Fortunately, these days we can get the same results by using
inexpensive personal computers and workstations plus readily available software.
The preference for a film look over a video look seems to be a cultural one. In many countries, including Japan, a very well-produced, crisp-looking video is considered better than film for projects that will ultimately be shown on television or video. The reason is simple: modern television cameras are capable of producing great-looking images, far cleaner on television than telecined film. Video doesn’t have the graininess of film, and it plays back at 60 fields per second instead of 24 frames per second, resulting in smoother motion. In other parts of the world, including America, people tend to prefer the look of film.
We have grown up so used to the dreamlike look of film, with its organic grain and slower-than-life 24-frames-per-second speed, that almost anything produced in the format tends to assume larger-than-life proportions and grabs our attention more effectively. This is one of the many reasons why a number of television series and commercials
are shot on film instead of video. These days, however, with the reliability of digital video formats and the advent
of HDTV, there is an increasing number of television series that appear to have been shot on film that are actually
produced electronically and are processed to look like they were shot on celluloid. This decreases production costs
considerably while still delivering the film look that audiences like so much.
In the next few sections of this session we will discuss several techniques to make our video footage look like it was shot on film. They can be very useful if we are shooting a feature on digital video and want to give it the same look as a film shot on celluloid. It wasn’t too long ago that only high-end digital post-production facilities had the capability to make video look like film. With the facilities available to us today, the quality of the digital film can be such that even a professional cinematographer would think it had been shot on celluloid.
7.2.1 White Balance
In the preceding sessions we have understood the importance of white balance and how it can be adjusted
manually on the DV camera. Film, on the other hand, is balanced for a specific color temperature and will record
any variances. In both cases the color temperature should be adjusted as accurately as possible. Any adjustments
in color temperature, if not taken care of at the time of shooting through the use of color correction filters and gels,
must be made in the postproduction stage. It’s a little hard to imagine the effect that different color temperatures have on film, because our own vision works much like a video camera with the automatic white-balance circuitry always on. Our brain constantly adjusts the color temperature of light sources for us so that they are always perceived as white. In fact, this is why we can tolerate the awful green light that certain fluorescent bulbs emit.
Experienced cinematographers have learned what different light sources look like on film and they can think in
terms of different colors. We too can master this process if we dedicate a fair amount of time to this aspect of filmmaking. The changes in color temperature can vary a lot depending on various factors.
With a very simple exercise we can understand how film sees different color temperatures. Get close to a window on
a bright, sunny day, and turn on the light bulbs inside the room. If we focus our attention outside for a few minutes
and then look at the lights inside, they will look very orange. After a few seconds, our eyes will adjust to the color
temperature of these lights and we’ll start to perceive them as white. Now look outside and the sunlight will appear
very blue. As we can see, our brain will always try to give us the impression that almost any light source is white; all it takes is some time for adjustment. Film, unlike our eyes, cannot make this adjustment on its own, which is why it records color casts faithfully.
At its best, a DV camera can shoot beautiful images with nicely saturated, accurate colors. But at times, because of operator error or bad weather conditions, we need to correct the color of our footage. Sometimes we might make changes to the footage for artistic reasons: maybe we want specific colors to set the dominant emotion in a scene, or we just want to make a particular shot more colorful to give it the look of a fantasy.
Most editing software provides us with powerful tools that can correct and change the colors in our video, whether it has not been shot correctly or simply for artistic requirements. Any compositing program such as Softimage Digital Studio, Discreet effect, Eyeon Digital Fusion, Nothing Real Shake or Adobe After Effects can be used for color correction. The main advantage of using such applications is the increased depth of control that we are given as a user. Because the traditional compositing process for film involves a lot of color matching between layers and elements, the color-control tools must be more complex in order to ensure perfect results. Another excellent reason to use compositing programs for color correction is the availability of great plug-ins for this purpose. Learning the tool options for good color correction is essential and invaluable. Usually color corrections are done during the editing process. Let us have a look at the factors that can affect the color balance of the footage.
Most of the time, matching footage from one camera to another can be difficult, as it is an odd combination of factors that makes footage from one camera look different from another’s. We need to take the following points into consideration while making color adjustments:
Not all cameras have the same settings for color levels and sharpness detail. These differences can cause a tonal difference between shots and major problems while matching them. We can reduce this problem by using unsharp mask filters to sharpen the details from softer cameras.
While adjusting flesh tones, we need to be extra careful. The shadows of the actors shot against a blue screen can appear as bright whites on the walls, couch and other props; in realistic shots this looks fake and needs color correction. When we work on color corrections in these areas, make sure that only the shadows are corrected and the rest of the area stays balanced.
Identify which footage looks best among the many shots, and adjust the tonal ranges of the remaining
shots to match it for the best visual output. If the main footage also has a problem, first correct
it using a filter and then use additional filters to remove problems in the remaining footage. We should
remember that these techniques can vary according to the footage in hand and the contrast between
the color tones.
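As a rough sketch of what such a tonal-matching step does under the hood, the hypothetical helper below shifts and scales a shot's pixel values so that its overall brightness and contrast match a chosen reference shot. Real tools work per channel and per tonal range; this only matches the global mean and spread:

```python
import numpy as np

def match_tones(shot, reference):
    """Match a shot's overall brightness and contrast to a reference shot.

    A crude stand-in for an editor's tonal-matching filter: shift and
    scale the pixel values so their mean and spread match the reference.
    """
    shot = shot.astype(float)
    ref = reference.astype(float)
    # Scale the shot's contrast to the reference's contrast...
    scale = ref.std() / max(shot.std(), 1e-6)
    # ...then re-centre it on the reference's average brightness.
    matched = (shot - shot.mean()) * scale + ref.mean()
    return np.clip(matched, 0, 255)
```

Darker, flatter footage from a second camera comes out with the same average level and contrast as the hero shot, which is the starting point before finer per-channel corrections.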
At times, one part of an image needs to be adjusted in one way while another part of the same image needs
to be corrected in a different manner. For example, in a scene the background may need to be corrected using
one filter whereas the foreground needs another. Most editing tools allow us to select a particular area
of an image and correct it separately.
Clamping is the process that establishes a fixed level for the picture level at the beginning of each scanning line.
Most DV cameras have a luminance setting that works automatically. We can control luminance manually
as well; for manual luma clamping we have to use a codec that allows us to deactivate the luminance clamp.
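In code, manual luma clamping is nothing more than limiting the luminance values to the legal video range. A minimal sketch, assuming 8-bit luma and the standard 16-235 studio video levels (the Rec. 601 convention):

```python
import numpy as np

def clamp_luma(luma, black=16, white=235):
    """Manually clamp 8-bit luma to the broadcast-legal range.

    Codecs that let us deactivate the automatic luminance clamp leave
    values in the full 0-255 range; this reproduces the clamp by hand.
    16 is legal black and 235 is legal white in 8-bit studio video.
    """
    return np.clip(luma, black, white)
```

Values below legal black are lifted to 16 and super-whites are pulled down to 235, while everything already in range passes through untouched.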
Now that we have been introduced to the concept of color correction related to digital filmmaking, feel free to
experiment as much as possible. Do color correction on a scene-by-scene basis, making sure that the overall
tonality contributes to that particular scene's mood. Color correction may be used very subtly, just to avoid the typical
video "perfect white", or more aggressively to convey strong emotions. We may also go to extremes and use color
correction as a special effect, or even to set a style for our whole movie.
When previewing video output, it is important that the video monitor be calibrated. Proper monitor calibration ensures
that what we see today will match what we see tomorrow, and that the judgments and decisions we make based
on what we see remain valid. Not everyone sees colors and brightness the same way. If monitor settings were left up
to personal preference, everyone's monitor would be set differently, making it impossible for people to agree on
how the video footage actually looked and therefore what changes might be needed. By choosing a calibration
standard we don't necessarily make the picture look its best (that is a subjective judgment in any case), but we
attempt to make it look standard. If everyone viewing a project adheres to the same standard, there will be
much less disagreement over what changes, if any, are needed.
Concepts of Digital Filmmaking & Visual Fx
Even calibrated video monitors will differ in appearance due to a number of factors. Color temperature and other
internal components used in the various monitor standards, like NTSC and PAL, can cause differences in picture
quality. Monitoring on a television set is a big step up from trying to do it on a computer monitor: we get to see color
differences and interlace problems, which are the two main items that cause trouble with computer-generated
footage. Say we need an image for a professional DV film; on an uncorrected PC monitor it may appear that
many of the image details are hidden in shadows. We may be tempted to correct that directly in the image with various
adjustment functions. However, we should first correct the monitor settings and look at the image again; suddenly
the shadows may not be so dark and the details may be quite visible. Television is a little different from computer imagery.
Those who have converted their text and graphics to video have quickly discovered that the colors are different, thin
lines tend to vibrate, and the picture is much fuzzier than the one on their computer screen.
There are really two kinds of TV monitors, those that make the picture look good (designed for the viewing audience),
and those that tell the truth (helpful to those analyzing their pictures to make them the best possible). We will focus
on the latter, the kind of monitor videographers use to measure the quality of their video signal, both the visible
and the invisible parts. There are distinct advantages to using a real video monitor, despite the additional cost. And
by real video monitor, we mean a monitor purpose-built for monitoring video signals, not for watching TV. Refer to
Figure 7.1.
Because NTSC video devices are prone to drift out of adjustment, it is necessary to calibrate the signal and the TV
monitor frequently. Before we start editing, it is important to adjust our NTSC monitor to make sure that it displays
colors as accurately as possible. The easiest way to do this is with color bars, a regular pattern of colors and gray
tones that can be used to adjust our monitor’s brightness and contrast. To calibrate our video monitor, display a set
of SMPTE color bars. Many professional monitors and cameras can generate color bars. Be sure that the bars we
use are correctly calibrated. The simplest calibration methods involve adjustments to the Gamma, Contrast and
Brightness settings of our monitor.
7.4 Summary
In this session, Color Corrections and Monitor Calibration, you learned that:
With the reliability of digital video formats and the advent of HDTV, there is an increasing number of television
series that appear to have been shot on film that are actually produced electronically and are processed to
look like they were shot on celluloid. This decreases production costs considerably while still delivering the
film look that audiences like so much.
A film with good color balance can improve our storytelling, deliver critical emotional cues, and add impact
to our videos. Whatever kind of film we create, it needs to meet the required industry standards, whether
we need to match video footage to film footage or simply want to give our video projects a much higher
perceived production value.
Most editing software provides us with powerful tools that can correct and change the colors in our
video if it has not been shot correctly, or for purely artistic requirements.
Most of the time, matching footage from one camera to another can be difficult, as it is an odd combination
of factors that makes footage from one camera look different from another.
Clamping is the process that establishes a fixed level for the picture level at the beginning of each scanning
line. Most of the DV cameras have a luminance setting, which works automatically. We can control luminance
manually as well.
By choosing a calibration standard we don’t necessarily make the picture look its best, that’s a subjective
judgment in any case, but we attempt to make it look standard.
7.5 Exercise
1. Most of the editing software provides us with powerful tools that can correct and change the colors in our video
if it has not been shot correctly, or for purely artistic requirements.
a. True b. False
2. All cameras have the same settings for color levels and sharpness details.
a. True b. False
3. The shadows of the actors while blue screening appear to be bright whites on the walls, couch and other
props.
a. True b. False
4. ____________ is the process that establishes a fixed level for the picture level at the beginning of each scanning
line.
a. Footage b. Clamping
c. Color correction d. Monitor calibration
5. Color temperature and other internal components used in various monitor standards like NTSC and PAL can
cause a difference in picture quality.
a. True b. False
Compositing and Rotoscoping
Learning Outcomes
In this session, you will learn to -
8.1 Introduction
The creation of visually fantastic and seamlessly integrated effects in today’s films requires a level of technical and
visual sophistication in training that few multimedia-training centers can offer.
Creating realistic digital effects is a difficult task even for professionals, but with the innovative features and
accessibility of powerful graphics software (like Maya, 3ds max, After Effects, etc.) creating fascinating, commercial-
quality effects is now within the reach of many artists and animators.
The latest special effects projects include interactive cinema, audience participation, and exclusively computerized
movies. In interactive cinema, the audience is moved around in their seats while watching 3-dimensional
films. Audience participation in movies is also on the horizon of special effects. There will be more exclusively
computerized movies, like Ice Age and The Matrix.
Soon, we will see that even Indian films will be visually as rich as Western films. Filmmakers will combine the latest
computer technology, digital trickery, motion camera photography, and computer-generated movements to recreate
the stars of the past. Who knows when Madhubala, Meena Kumari, Kishore Kumar, and other old movie stars will
reappear?
8.2 Titling
Not every production needs fancy special effects such as 3D-rendered dinosaurs or complicated composites
and morphs, but every production definitely needs a title sequence at the beginning and an end credit roll. For a project
like a documentary, we will use titles to identify interviewees and locations. Though most editing packages
provide the facility to create titles, it may not be possible to create cool animated sequences or even a simple list of
rolling credits with them.
Action Safe and Title Safe are two terms that we will commonly hear while making titles for any
film. Let us understand what they are all about and what they are used for. Refer to Figure 8.1.
Figure 8.1: Regions defined for Title Safe and Action Safe Area
● Action Safe: This area is the larger of the two regions, amounting to about 90% of the total picture area,
and is symmetrically located inside the picture border. Our home sets are overscanned: we don't see the
entire picture, the edges being lost beyond the border of the screen. The safe action area is designated as
the part of the picture that is "safe" for action that the viewer needs to see. Refer to Figure 8.2a.
● Title Safe: The area for safe title is inside the safe action area and amounts to about 80% of the total
picture area. Titles and text are usually kept within the safe title area to make sure they can be seen in
their entirety. Refer to Figure 8.2b.
Figure 8.2a: Animation within Action Safe area
Figure 8.2b: Title within Title Safe area
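Since both safe areas are simply centred rectangles at a fixed fraction of the frame, they are easy to compute. A small sketch (the helper function is illustrative; the 90% and 80% fractions follow the conventions described above):

```python
def safe_area(width, height, fraction):
    """Return (left, top, right, bottom) of a centred safe region.

    Action safe is conventionally about 90% of the frame and title safe
    about 80%, both symmetric about the picture centre.
    """
    # Split the excluded margin equally between the two sides.
    margin_x = width * (1 - fraction) / 2
    margin_y = height * (1 - fraction) / 2
    return (round(margin_x), round(margin_y),
            round(width - margin_x), round(height - margin_y))
```

For a 720 x 480 NTSC DV frame, `safe_area(720, 480, 0.9)` gives the action-safe rectangle and `safe_area(720, 480, 0.8)` the title-safe rectangle inside it.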
Apart from these two areas, we need to keep in mind the Safe Color concept, which is equally important while
making titles. NTSC and PAL video have a much smaller gamut than our computer monitor. This means colors
that appear fine on the computer monitor may not look good on an NTSC monitor and can appear flat.
Most titling tools in editing programs let us choose colors that are not NTSC-safe. We can study the color differences
on NTSC monitors or vectorscopes and make the corrections that look best, or use tools like After Effects
and Photoshop, which provide Broadcast Colors filters that will convert our graphics to NTSC-safe colors.
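As a rough illustration of what a broadcast-safe filter does, the sketch below pulls over-hot pixels toward their own luma until the peak channel is legal. This is a simplified stand-in of our own, not the actual algorithm used by any particular Broadcast Colors filter:

```python
import numpy as np

def broadcast_safe(rgb, limit=235):
    """Pull over-bright, over-saturated RGB pixels toward legal levels.

    Any pixel whose peak channel exceeds the limit is desaturated toward
    its own luma, which preserves hue better than clipping each channel
    independently. Legal pixels pass through unchanged.
    """
    rgb = rgb.astype(float)
    # Rec. 601 luma weights for standard-definition video.
    luma = rgb @ np.array([0.299, 0.587, 0.114])
    peak = rgb.max(axis=-1)
    # Blend factor: 0 where legal, growing as the peak exceeds the limit.
    over = np.clip((peak - limit) / np.maximum(peak - luma, 1e-6), 0, 1)
    return rgb + over[..., None] * (luma[..., None] - rgb)
```

A fully saturated computer-monitor red comes out with its hottest channel at exactly the legal limit, while a mid-gray pixel is left alone.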
While choosing colors for the title, we also have to pay attention to the background scenes that are going to play
behind the titles. For example, a white title might look good against a dark background but may not look appealing
on a light background. If a title is not readable, the audience will not understand who deserves the credit for a
particular section of work done on the film.
While placing titles over video, we need to give some thought to their placement. Though titles are not an
original part of the footage, they should not look completely separate from the video; the viewer should not
perceive the titles as detached from the screenplay. Whether the titles are moving or static, they have to be readable.
Usually this means at least four seconds on screen for a fairly short title, excluding the fade in or out. Titles can serve as another
punch in our storytelling process, so give them some thought.
8.3 Compositing
Compositing is a technique by which one shot is superimposed on another, resulting in a composite shot. In theory
compositing sounds like a very basic effect, but in practice it is one of the most powerful tools at our disposal.
Many spectacular shots can be achieved using compositing; in fact it is one of the most important and widely
used effects techniques. No special-FX movie can be made without it. In movies (and TV),
compositing is done by filming the actors and actresses against a green or blue screen, removing the color from
the filmed scene, and then inserting a new background. Refer to Figure 8.7.
Figure 8.7: Images from Jurassic Park, a popular example of the compositing techniques applied by
Hollywood
(Refer to the Colored Section for the colored image.)
3D compositing can be extremely challenging but at the same time very rewarding. When we import our footage
files into any editing program, the clips need to be arranged in the proper order. After that, we have to choose the
method that we are going to use for compositing. Compositing methods fall into two categories: keys and mattes.
8.3.1 Keys
A common example of keying is our everyday weather forecast on TV. The weather map is a separate computer
generated shot onto which the announcer is super-imposed, making it look as if he/she is standing in front of a giant
TV screen flashing different weather images. In reality the weatherman is standing in front of a blue or green screen
that is electronically keyed out and replaced with the image of the map.
The word “key” was derived from a digital mixer’s capability for cutting an electronic keyhole in a video picture. Into
this keyhole is placed another video signal or a matte color, which results in one image being placed over, or within,
another. Key effects go by a variety of names, including Luminance Key, Chroma Key (Color Key), External Key, and
Downstream Key. All of the key effects work on the principle of taking an image shape presented by one video
source and using it as the cut-out for placing another video source (or matte color) inside of it or around it.
Typically this cutout is defined by its brightness level in contrast to whatever else is in the scene. Let us understand
the functions of these keys in detail.
Chroma Key: A chroma key looks for a specific color or hue to be used as the cutout reference. Such is
the case with the TV weatherman standing in front of a blue or green wall, where those specific shades of blue
or green are replaced by a computer-generated weather map. Because we have to shoot in front of a
specially colored screen and need very precise lighting, chroma key effects are not ideal for every
compositing task. Refer to Figure 8.9.
8.3.2 Mattes
As it is hard to get a good composite using a key, and because it is not always practical to hang a colored screen behind
something, we can also achieve composites using a special type of mask called a matte. Just like a stencil, a matte
can be used to cut out an area of footage so that the underlying footage becomes visible through it. For instance, suppose
we want to show a giant butterfly flying behind a building, covering the entire building. While shooting we cannot
put such a big blue screen behind the building, so in this situation we can use the matte technique to remove
the background and reveal the butterfly, which is placed on the underlying layer.
In the analog film world, mattes are drawn by hand for each frame by matte cutters. The process is very time
consuming and the results are not always up to the mark. But the digital world has made these kinds of effects very
simple.
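A matte composite is just a per-pixel blend controlled by the stencil image. A minimal sketch of our own, assuming the matte is a grayscale image normalized to the 0-1 range:

```python
import numpy as np

def matte_composite(foreground, background, matte):
    """Combine two layers through a hand-drawn matte.

    `matte` is a greyscale image in 0-1: 1 reveals the foreground layer,
    0 reveals the background, and values in between blend the two,
    exactly like a stencil with soft edges.
    """
    m = matte[..., None].astype(float)   # broadcast over the RGB channels
    return (foreground * m + background * (1 - m)).astype(foreground.dtype)
```

For the butterfly example, the building layer would be the foreground, the butterfly plate the background, and the hand-drawn building silhouette the matte.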
8.4 Rotoscoping
Drawing around something in the frame so that an effect can be applied to that part of the film is a tough job if the
object is moving around in the scene. If an animated creature has to go behind something in the live action piece of
film, that object can be drawn around so a matte can be created, so that the creature will not show over the top of
that object. If the camera were moving, then each frame of film would have to be rotoscoped. If the camera is still,
then the same matte can probably be used for all frames in that shot.
Rotoscoping was first used by the Fleischers for making cartoons. A rotoscope was a kind of projector used to
create frame-by-frame alignment between filmed live-action footage and hand-drawn animation. Mounted at the top
of an animation stand, a rotoscope projected filmed images down through the actual lens of the animation camera
and onto the page where animators draw and compose images.
The Rotoscope consists of an animation camera and a light source (usually using a prism behind the movement
and the lamp house attached to the camera’s open door) that projects a print through the camera’s lens and the
projected image is then traced to create a matte. The lamp house is then removed and the raw stock placed in the
camera and the drawings are filmed through the same lens that projected the image. The resulting image will then
fit the original image if the two strips of film are run bi-packed in the same projector movement (using an optical
printer). In digital film effects work, rotoscoping refers to any drawn matte, as both images can be seen composited
while the matte is being drawn, so good results can be achieved. Refer to Figure 8.11.
Figure 8.11: Greenscreen extraction, wire removals, and lots of rotoscoping make this a densely
complicated shot
(Refer to the Colored Section for the colored image.)
A rotoscoping texture (sometimes called a sequence map) is the use of video within an animation, something like
an animation within an animation. For example, in a cartoon animation, the television set could show a program
containing another animation. Or in a background to an animation in the foreground, we could include some clouds
that slowly changed during the foreground animation. The frame rate for both the main animation and the “animation
within the animation” must be the same.
In computer graphics, to rotoscope is to create an animated matte indicating the shape of an object or actor at each
frame of a sequence, as would be used to composite a CGI element into the background of a live-action shot.
While creating special effects for films, the special effects crew comes into the picture. They are involved in every
part of the production, from assembling real elements to decorating the truth with realistic-looking fakes. The
post-production editing and effects market promises to be a bonanza for the multimedia software industry. Post-
production is now poised on the brink of a transformation that will result in a massive digital convergence. The fact is
that in India currently, more than 90% of this market is still occupied by "old tools": analog, hardware-based, difficult,
and expensive. These tools make it difficult to create mind-blowing digital effects on a low budget. Another
reason for not using digital effects in Bollywood is that most current directors are unaware of the power of
these effects and the various tools available in the market. But this scenario is changing rapidly, and over the next
five years this entire market will convert to "new tools": digital, software-based, friendly, and inexpensive. As digital
becomes the standard, post-production is becoming faster and easier, providing the director with greater control and
less signal degradation. Refer to Figure 8.12.
and relatively low cost (compared to the alternatives).
Visual FX refers to effects created using either digital or special FX, or a combination of both. Visual
effects can be created using various kinds of techniques. These days almost all visual effects
are created digitally. Let us have a look at some of the effects created in the past and then see modern-day
technology.
In the earliest days, optical printers were custom-built by the major studios and film laboratories, and were
usually designed for particular requirements. The first modern standardized optical printing
equipment, made in 1943, made it possible to create innumerable effects and cater to the entire motion picture industry.
Improvements over the years in equipment sophistication, new duplicating films, special-purpose lenses, and
film processing techniques, as well as skilled technicians, have increased the use of the optical printer to
a point where its great creative and economic value is common knowledge in the motion picture industry.
From the above, the various techniques for creating special effects can be categorized as
follows:
In-The-Camera Technique: These techniques were less expensive than the laboratory processes. The
equipment was not very expensive, and the shots could be made by the regular production crew. But this technique
also had disadvantages: it was quite time-consuming and hence could upset the production schedule.
The in-the-camera technique consisted of three main categories, namely Basic Effects, Image Replacement,
and Miniatures. Let us understand each one of them. Refer to Table 8.1 to view the various categories.
Categories and descriptions (Table 8.1):
Basic Effects – These effects include optical transitions.
Figure 8.13a: Glass painting of arches and the final output
Figure 8.13b: Complicated glass shot where two glass frames are used to create effect
Figure 8.15: The miniature model is pulled with a wire by a crewmember, while propulsion jets are fixed in
the tail of the rocket. Vapour from the launch pad is obtained with steam injected through an aperture in
the set
Laboratory processes: This process includes the following techniques:
● Bi-pack printing: In contrast to the “in-the-camera” matte shot technique, for which all of the image
manipulations and replacements were performed upon the original negative, bi-pack printing allowed
such work to be done from master positives, which were struck from the negative. Refer to Figure
8.16.
● Optical printing: Virtually every film made in 35mm used an optical printer before
it was complete, if only for its share of optical transitions. With optical printers it was possible to
manipulate space, time, and image. Around 80% of the optical printer's mechanical life was devoted
to the preparation of optical transitions like fades, dissolves, wipes, and push-offs. These
printers were also used to speed up, slow down, reverse, or stop the action. Image modifications and
distortions, image size changes, split screens, and distortion of colored images were all possible with
optical printers. Refer to Figures 8.17a and 8.17b.
Figure 8.16: First printing for Bi-pack matte shot combining two live action scenes
Figure 8.17a: The research products set optical printer
Figure 8.17b: On this optical printer the camera can be tilted through several degrees on its optical axis
● Traveling mattes: This technique is also a form of matting, used to place the moving figure of the actor
into the background image and produce a convincing composite. Unlike a static matte, this matte
changes its position, size, and configuration from frame to frame. Its silhouette conforms exactly to the
shape and movements of the actor, allowing him to move anywhere within the picture. In earlier
times this technique was done with the help of optical printers, whereas today it is done on a computer
using software. Refer to Figures 8.18a and 8.18b.
● Aerial-image printing: An aerial image is a real image that is cast into space rather than onto a
screen, ground glass, or piece of film. Say, for example, we set up an 8” x 10” view camera and
focus a scene upon its ground glass; we have caused a real image to be produced at the plane of
the viewing glass, where, suitably diffused, it becomes clearly visible. If, with the camera in the same
position, we remove the ground glass entirely, the real image becomes an aerial image. The focused
image still exists in the same position, but since it exists only in space, it cannot be seen with the eye.
The important thing to understand here is that this invisible aerial image can be re-focused by a second
optical system so as to produce a real image at another position. At the same time, any artwork
placed into the plane of the aerial image will be picked up by the supplementary optics with equal clarity
and definition. Using this technique, it was possible to execute many kinds of composite shots in which
painted or photographic artwork was combined with live-action footage. Refer to Figure 8.19.
Background projection can be classified into two types, namely rear projection and front projection. Rear projection
was an expensive technique and was best suited for well-financed productions. Refer to Figure 8.20.
Most of the effects are created using digital tools, which are getting more and more popular worldwide for the
following reasons:
Faster: Computers provide random or non-linear access to information. Applied to video, computers allow
users to jump from frame #6 to frame #88 directly, without passing intervening frames en route. This reduced
access time means increased creative time.
They are very simple to use: Even a person who is not confident working with these tools can create
good effects suitable for any broadcast format. As most of the tools are user-friendly, even a beginner can
try out new things on footage and explore his creativity to the maximum level.
We get more control over the creative process: With a general-purpose computer platform, video editing
and special effects are open to a world of post-production options. These software-based applications
include animation, paint, special effects, titling, and musical accompaniment.
Less expensive: In the professional video editing market, software for editing and effects on the Macintosh,
PC, and Silicon Graphics platforms has begun to multiply rapidly. Software companies such as Adobe,
Alias, Avid, SOFTIMAGE, Wavefront and many others are drawing post-production professionals with low-
price products that don’t demand compromises. Apart from these, to enhance still images, software like
Adobe Photoshop, Fractal Design Painter, and KPT filters are equally popular and go hand in hand with
editing software.
Small work setup and more money: Independent small post-production houses can also apply editing
and effects to projects that require broadcast quality, such as network television shows or feature films. This
is attracting more and more capital investment into these kinds of businesses.
■ Particle Systems
A particle system is a collection of independent points, which are animated using a set of rules, with the intention
of modeling some effect.
These points are assigned attributes, and animated over time. Common characteristics for a particle are:
Position
Velocity
Energy
Color – could also add transparency
Lifetime – how long before it self-destructs
As the animation progresses, these characteristics change. For example, at time = 0, a particle may have a
very high velocity, and a lot of energy. As time progresses, this velocity and energy may increase or decrease
as per the changes we make in settings, and its path will change. Explosions are one of the easier and most
impressive things to create using particle systems.
Figure 8.23: Fluid and Blast effects created with particle systems
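The rules described above can be sketched as a tiny particle system. The class layout and the explosion emitter below are illustrative, not taken from any particular package:

```python
import random

class Particle:
    """One point in a particle system: position, velocity, and a lifetime."""
    def __init__(self, x, y, vx, vy, lifetime):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.lifetime = lifetime

def step(particles, dt=1.0, gravity=-9.8):
    """Advance every particle one frame; drop the ones that have expired."""
    survivors = []
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt
        p.vy += gravity * dt        # gravity bends each trajectory
        p.lifetime -= dt            # count down toward self-destruction
        if p.lifetime > 0:
            survivors.append(p)
    return survivors

def explosion(n=100, speed=20.0, lifetime=3.0):
    """Spawn n particles at the origin with random outward velocities."""
    return [Particle(0, 0,
                     random.uniform(-speed, speed),
                     random.uniform(-speed, speed),
                     random.uniform(1.0, lifetime))
            for _ in range(n)]
```

Calling `step` once per frame makes the burst expand, fall under gravity, and thin out as lifetimes expire, which is exactly the explosion behaviour described above; color and energy attributes could be added the same way.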
■ Dynamics
This method creates simulations between multiple objects using mathematical calculations. It is used with
two aspects in mind: the physics approach and the entertainment approach.
The physics approach corresponds to simulation methods based on the various laws of physics, such as the
law of gravity. Such simulation methods are often mathematically complex to implement, and they almost always
take too long to compute to play in real time. Most of the time, simulation methods are required for science and
accurate visualization, while entertainment methods are used to convey an idea, motion, and even emotion. The
entertainment approach is more inclined towards traditional animation. Depending on the circumstances in
the scene, the suitable method is applied. For example, by using exaggeration we achieve the main
objective of entertainment: to entertain. Refer to Figures 8.24a and 8.24b.
Figure 8.24a: Dynamics applied to a ball Figure 8.24b: Pins will fall realistically by the
mathematical calculations when the ball hits them
Let us understand the physics and entertainment approaches with an example. If we want to create an animation where
a model is bouncing on the ground, we will use physics dynamics to create accurate bouncing movements.
This will give us a perfect bounce for the character according to the surface it is bouncing on. But to make it look
more entertaining, we might want to add traditional animation principles like squash-and-stretch and overlap to it.
Though squashing and stretching might look exaggerated, it adds a lot of entertainment value for the viewer.
Note
Motion capture animation is useful when both the entertainment and simulation approaches
fall short. For example, in creating facial animation, there are hundreds of detailed muscles
to control in order to convey any facial expression.
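As a small illustration of the physics approach, the sketch below simulates a bouncing ball with gravity and an energy-loss factor at each impact; an animator would then layer squash-and-stretch on top of these computed positions. The function and its parameters are our own illustrative choices:

```python
def simulate_bounce(height, restitution=0.8, dt=0.01, gravity=9.8):
    """Drop a ball from `height` and record the peak of each bounce.

    Pure physics dynamics: gravity accelerates the ball, and each impact
    keeps only `restitution` of the incoming speed, so the bounces decay.
    """
    y, vy = height, 0.0
    peaks = []
    rising = False
    while len(peaks) < 3:
        vy -= gravity * dt
        y += vy * dt
        if y <= 0:                      # impact with the ground
            y = 0.0
            vy = -vy * restitution      # energy lost on each bounce
            rising = True
        elif rising and vy <= 0:        # just passed the top of a bounce
            peaks.append(y)
            rising = False
    return peaks
```

Each recorded peak is lower than the last, matching how a real ball loses energy; this is the "accurate bounce as per the surface" that the entertainment approach would then exaggerate.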
■ Digital Compositing
Using different types of digital compositing software, we basically generate the mattes for superimposition
and track and stabilize the shots. When the effects are created and finalized, they are superimposed on the
footage to get the final result. Using the three methods mentioned above, we can create the following effects:
Water effect
Among all digital effects, creating a water effect is one of the most challenging tasks the artist will face.
Creating realistic looking digital water has always been a difficult task for both the artist and technical
director. The scene in which this effect is required needs to be analyzed as to what the final output should
convey. At times, the image must convey the mild-mannered peace that water brings. But at other times, the
images must convey the strength and awesomeness of water. Beginners or anybody from the non-technical
audience of the film might think that creating these kinds of effects must be some simple trick. The fact is
that creating realistic looking water is not really a trick. That is, despite all the development achieved so far
in creating the ideal mathematical model of digital water, there is no such easy formula to create good water.
Every new scene will require a new form of water and probably a new technique of creating it. Although
digital effects software provides us with general equations, functions, and tools to create wavelike water,
real success in directing the water flow with accurate water characteristics requires good visualization
power and lots of patience and skill. Refer to Figure 8.25.
Though water does not have a fixed shape, it has a definite volume, and while creating any kind of water we
must remember this key fact. Whether we are creating rainwater, a waterfall, or an ocean with lots
of ripples, we must not forget that the overall volume of the water must remain the same.
Burning effects: These kinds of effects give off spontaneous and smooth burning flames. Some real-life
examples of these effects would be wood burning in a fireplace, candle flames, and so on. If we observe these
natural fires, we will realize that there are sound effects associated with each kind of fire. While creating
these effects on the computer, the artist has to take into consideration the other aspects related to a
burning fire, like the sound of the fire, the effect of the surroundings, the density of the flames, and the particles
generated by the fire. These mock fire effects are very useful when it is difficult to shoot fire in real life due
to high cost or an inappropriate season. Refer to Figure 8.29.
8.6 Summary
In this session, Compositing and Rotoscoping, you learned that:
Digital compositing is the digitally manipulated integration of at least two source images to produce a new
image. This is a new, rapidly growing, and increasingly important art form. It is the process of assembling
multiple images to make a final image, typically for print, motion pictures or screen display.
Prior to computers, an animation stand called a Rotoscope was used to project a sequence of action frames
against a surface so that a set of animation frames could be traced or created. The same work can now be
done with digital images and special computer software.
The word "key" was derived from a digital mixer's capability of cutting an electronic keyhole in a video
picture. Into this keyhole is placed another video signal or a matte color, which results in one image being
placed over, or within, another.
Luminance Key detects the dark portions of a video picture and electronically replaces them with another
image, generally from another video source being fed into the digital mixer.
Most of the effects are created using digital tools, which are becoming more and more popular worldwide
for the following reasons:
● Less expensive
8.7 Exercise
1. The __________ and _________ are the two terms that we will hear commonly while making titles for any film.
2. Action Safe area amounts to about 60% of the total picture area.
a. True b. False
3. Title Safe area for safe title is inside the safe action area and amounts to about 80% of the total picture area.
a. True b. False
4. A line of titles that moves horizontally across the screen at the bottom is called _________.
5. Rotoscoping is a technique by which one shot is superimposed on another, resulting in a composite shot.
a. True b. False
8. The rotoscoping technique is used to remove wires from models or actors, touch up the seams in a composite,
add optical effects like lighting, clouds and smoke, and do color corrections.
a. True b. False
9. In the _____________ process, a large sheet of glass was placed in front of the camera, with appropriate
images painted on portions of the glass.
9 Types of Outputs
Learning Outcomes
In this session, you will learn to -
9.1 Introduction
In developing an idea for film, three primary questions need to be addressed: What are we going to film? How are
we going to film it? How are we going to structure it? Beginning with script development through the final wrap
shot, all aspects of production for film and television are studied, with special consideration of the in-school and
thesis projects. Independent film and television projects are used as the basis for a coordinated production outline,
script breakdown, shooting schedule, and budget program. Locations, lighting, travel, logistics, camera angles,
wardrobe, casting, set dressing, action props, and other elements affecting the budget and planning process are
also examined.
In the previous session, we learned about titling, compositing, rotoscoping and visual effects. We now know that
once the movie footage is ready, there are still many things to be done after shooting the scenes. Once
our movie is edited, polished and ready with all its effects, titles and credits, we are ready for our big opening.
Though not all films are made for theaters with a film print, there are still plenty of ways to deliver our movies
in suitable formats. The final step of making a movie is to show it. The key factor to remember here is to show
it to an audience we believe matches the film. If we make a documentary about wildlife in our area, most
teenagers will not like it; on the other hand, the local chapter of environmentalists or nature enthusiasts would
probably enjoy watching it. By avoiding groups that would turn the film down simply for its genre, we are well on our
way to making films that an audience can enjoy.
Outputting the movie in the right way is a major factor once the movie is done; filmmaking is incomplete without
an understanding of how digital films are output. In this session we are going to cover topics such as audio mixing,
DVD authoring and transferring the movie onto videotape. Let us begin with videotape masters.
Many times it is difficult to decide whether to create a film print directly or a videotape master. The most cautious
choice is to start small but keep our options open: create a videotape master and an audio mix as viewing copies to
pass around. When we are making a videotape master, we need to consider the format we shot on,
the format we wish to master to, and whether we are eventually going to make a film print.
If the footage is in digital format, and we are thorough with our NLE, we can save a lot of money by creating the master
ourselves. The final quality of the master depends a lot on how the original footage was captured. If the original
footage is of very low quality, then even the best NLE professional will be unable to get the best output.
Figure 9.2: S4 MidiLand 8200 v2.0, a 200-watt Dolby Digital/DTS Home Theater System
Note
The subwoofer channel uses only about a tenth of the bandwidth of a normal channel, so overall we
get a total of 5.1 channels.
● SDDS uses 7.1 channels, adding center-left and center-right speakers in the front.
Figure 9.3a: HPTV's 32-channel audio mixer. Figure 9.3b: The Tascam DA-88, an eight-track 16-bit
digital recorder using the Hi-8 recording medium.
After the sound is tweaked, it is time to mix. Our tracks will be sent through a large mixing board and out to a high-
quality sound recording format such as 24-track tape or DAT. The mixer will set the audio levels to suit our video
master player, and the resulting audio will be recorded. Once the entire project is checked and approved, the tracks
from the 24-track will be recorded back onto our videotape master, a process known as layback. If we want more than
one type of mix, such as a stereo mix and a DM & E mix, we need to make two videotape masters to lay back onto.
No matter how many audio tracks we have, we will want to start by mixing them down to eight tracks. A standard eight-
track configuration includes two tracks of checkerboarded sync production sound, a track of voice-over (if the project
has one), a track of ambience, two tracks of sound effects, a track of stereo-left music, and a track of
stereo-right music.
To create a DM & E mix, mix the sync production sound and voice-over down to one track, and the effects and
ambience tracks down to another. Leave the music as it is on the last two tracks. If we are creating a stereo
mix, we mix the dialogue, effects and stereo-left music to channel one, and the dialogue, effects and
stereo-right music to channel two. In any kind of audio mix, we need to balance the audio levels correctly,
or else the dialogue and effects might overpower the music.
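The eight-track layout and the two mixdowns described above can be sketched in Python. This is a toy illustration of which source ends up on which output; the track names and function names are our own, not part of any NLE's API:

```python
# Standard eight-track configuration, as described above.
EIGHT_TRACK_LAYOUT = [
    "sync production sound A",  # checkerboarded dialogue
    "sync production sound B",
    "voice-over",               # only if the project has one
    "ambience",
    "sound effects A",
    "sound effects B",
    "music left",
    "music right",
]

def dme_mix(tracks):
    """Fold the eight tracks into a DM & E (Dialogue, Music & Effects) mix."""
    return {
        "dialogue": [t for t in tracks
                     if "production sound" in t or t == "voice-over"],
        "effects": [t for t in tracks
                    if "effects" in t or t == "ambience"],
        "music left": [t for t in tracks if t == "music left"],
        "music right": [t for t in tracks if t == "music right"],
    }

def stereo_mix(tracks):
    """Fold the eight tracks into two channels: mono material goes to
    both channels, while stereo music keeps its own side."""
    mono = [t for t in tracks if "music" not in t]
    return {
        "channel 1": mono + ["music left"],
        "channel 2": mono + ["music right"],
    }
```

The sketch only captures the routing; balancing the levels once the sources are routed is, as the text notes, the part that takes real skill.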
DVD is short for Digital Versatile Disc (or Digital Video Disc). It is a family of optical disc formats used both for
pre-recorded content, especially movies, and as recordable media for consumer devices and computers, as well as
a family of data format standards for video, audio, and data storage (DVD-Video and DVD-Audio) for consumer
electronics products and computers.
DVD discs are the same diameter as CDs (120 mm, or 12 cm), and most formats hold 4.7 GB of data on a side.
A smaller mini-DVD disc is also used, especially in camcorders. With its massive storage capacity and versatile,
programmable interface structure, DVD provides a strong, high-quality delivery medium with a built-in, simple
authoring environment. In addition to its large data storage capacity, the DVD-Video system also benefits from
interactive controls such as menus, buttons, chapters and random access. Creating a DVD is not merely a matter
of compressing data and producing a high-quality output; it takes a lot of hard work and efficiency to get the best
results.
DVD Compression: Compression is the process of converting data into a more compact form for storage or
transmission. MPEG (mostly MPEG-2) is the compression system of choice for DVD-Video and Video CD.
MPEG (Moving Picture Experts Group) is the ISO committee responsible for defining the various
MPEG video specifications. Compressing video into the MPEG-2 format can take a long time, so we
will have to schedule time for this process. Depending on the speed of our computer and software, MPEG-2
compression can take anywhere from 1 to 3 hours per minute of video. Hardware MPEG-2 encoders work
much faster, but they are quite expensive compared to software compression.
● MPEG-1, originally defined in 1992, was aimed at full-screen video stored on CD-ROM. It has since
been incorporated into the Video CD specification and is used on CD-ROMs.
● MPEG-2 came later and was intended for digital television applications and is used for DVD-Video. It
supports interlaced video and variable bit rate (VBR) encoding.
● MPEG-3 was intended for HDTV but this was later incorporated into MPEG-2.
● MPEG-4 is intended for video conferencing, Internet distribution and similar applications using low
bandwidths.
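As a back-of-the-envelope illustration of what that compression has to achieve (our own arithmetic, not a figure from this book): the 4.7 GB that one DVD side holds fixes the average bit rate available for a film of a given length.

```python
def average_bitrate_mbps(capacity_gb, runtime_minutes):
    """Average total bit rate (video + audio + overhead), in megabits per
    second, that fits a program of the given length into the given capacity."""
    bits = capacity_gb * 1e9 * 8       # DVD "4.7 GB" uses decimal gigabytes
    seconds = runtime_minutes * 60
    return bits / seconds / 1e6

# A two-hour movie on a single 4.7 GB side:
rate = average_bitrate_mbps(4.7, 120)  # about 5.2 Mbps on average
```

The encoder's job is to make the picture look good within an average budget of roughly this size, which is why variable bit rate (VBR) encoding, mentioned above for MPEG-2, matters: it spends more bits on difficult scenes and fewer on easy ones.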
Authoring, in the video world, refers to the process by which already-encoded video files are transferred into a
specific format that describes how the data should be kept on storage media such as CD or DVD. The most
common use of the term is in DVD authoring, done with separate DVD authoring software.
DVD Authoring: The process of creating a DVD production. This involves creating the overall navigational
structure, preparing the multimedia assets (video, audio, images), designing the graphical look, laying out
the assets into tracks, streams and chapters, designing interactive menus, linking the elements into the
navigational structure, and building the final production to write to DVD, CD, hard disk or tape. Refer to
Figure 9.5.
Figure 9.5: Intel Pentium 4 processor, commonly used for complete DVD authoring
If we want to be able to watch our family videos in twenty years, we had better transfer our footage to DVD as soon
as possible. Refer to Figure 9.6.
Figure 9.6: (from left to right) Standard DVD player and Sony DVPNS400 DVD/CD/VCD player with DOLBY
Digital Decoder & DTS Pass Through
Conclusion: The major technical progress of the 1990s has been the beginning of the Digital Age.
Numerous breakthroughs in computer effects editing make it not only possible to alter the look of a film in a computer,
but also extremely cost-effective, as more productions use the computer to remove mistakes made during filming or to
expand the magnificence of a scene. Perhaps the most important step comes from the pioneer of the digital world,
George Lucas, who released Star Wars: Episode I in three theaters using completely digital projectors (no film reels
needed), prepared to shoot the next two films with completely digital cameras, and encouraged release in
completely digital theaters. It is now clear to Hollywood and the rest of the world that digital is the next evolution in
film.
9.6 Summary
In this session, Types of Outputs, you learned that:
It is often difficult to decide whether to directly create a film print or a videotape master. The most
cautious choice is to start small but keep our options open: create a videotape master and an audio mix as
viewing copies to pass around.
The number of masters we create will depend on the type of final output. If the project is a feature film,
we will create more than one master copy for distribution. We will need VHS viewing copies for film festivals,
streaming media output if we are publicizing promos and trailers on the Web, and a videotape online session
to produce a good videotape master.
If the videotape format is high-end, it usually has four audio tracks, whereas low-end and consumer
videotape formats have only two.
Dolby Digital, DTS (Digital Theater Systems) and SDDS (Sony Dynamic Digital Sound) are the currently
available digital surround sound formats.
Most video decks can record only two channels, but with Digital Betacam or a high-end Betacam SP deck
we can record up to four channels.
A standard eight-track configuration includes two tracks of checkerboarded sync production sound, a track of
voice-over (if the project has one), a track of ambience, two tracks of sound effects, a track of stereo-left
music, and a track of stereo-right music.
DVD provides a strong, high-quality delivery medium with a built-in, simple authoring environment. In addition
to its large data storage capacity, the DVD-Video system also benefits from interactive controls such as
menus, buttons, chapters and random access.
9.7 Exercise
1. The number of masters we will create will depend on our type of final output.
a. True b. False
a. True b. False
a. True b. False
4. Most digital audio in NLEs is recorded at a quality of 16 or 28 kHz, which is good for using tracks directly
from our NLE as a source for our mix.
a. True b. False
a. True b. False
Glossary
A
Anamorphic Widescreen Video
Vertical resolution improves when the anamorphic method of widescreen presentation is employed. This is where
the horizontal dimension is squeezed so that more of the vertical space can be used. The squeeze ratio is 1.78
to 1.33, since that is what has been used in Europe and Japan for a number of years now. Anamorphic video is
horizontally squeezing a widescreen image into a 1.33:1 image. The advantage of the anamorphic format is 33
percent more vertical detail in widescreen images over the letterboxed image. The DVD format is an anamorphic
widescreen capable format. A 1.78:1 anamorphic film original image on DVD has a potential of 640 vertical and 480
horizontal lines. In addition there is far better color resolution capability on this format than on a LaserDisc.
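The "33 percent" figure above is simple arithmetic (shown here as our own sketch, not the book's): a letterboxed widescreen image uses only part of the frame's height, while the anamorphic squeeze lets the picture use all of it.

```python
# Vertical-resolution gain of anamorphic over letterboxed widescreen:
# a letterboxed 16:9 image uses only (4/3) / (16/9) = 3/4 of the frame height,
# so the anamorphic version has 4/3 as many vertical lines on the picture.
gain = (16 / 9) / (4 / 3)             # exactly 4/3
extra_percent = (gain - 1) * 100      # about 33 percent more vertical detail
```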
Animation
A film in which inanimate objects or individual drawings are photographed frame by frame in order to create an
illusion of movement on the screen when the film is projected at the standard speed of 24 frames per second (fps).
By manipulating the objects or drawings minutely for each frame the filmmaker can make objects or characters in
the film appear to move as if they are “animated.” Also called animated film.
Aspect Ratio
The ratio of the visible picture width to its height. Standard television and computers have an aspect ratio of
4:3 (1.33). HDTV has aspect ratios of either 4:3 or 16:9 (1.78). Additional aspect ratios like 1.85:1 or 2.35:1 are used
in cinema.
Available lighting
The illumination that actually exists on location during filming, either natural (sunlight) or artificial (lamps, fires,
etc.).
B
Baby spot
Back Porch
The area of a composite video signal defined as the time between the end of the color burst and the start of active
video. Also loosely used to mean the total time from the rising edge of sync to the start of active video.
Balanced Audio
A method that uses three conductors for one audio signal: plus (+), minus (-) and ground. The ground
conductor is strictly for shielding and does not carry any signal. Also called "differential audio" and "balanced line."
Black Level
More commonly referred to as “brightness,” the black level is the level of light produced on a video screen. The level
of a picture signal corresponding to the maximum limit of black peaks at which level a video screen emits no light at
all (screen black). The bottom portion of the video waveform which contains the sync, blanking and control signals.
Blanking Interval
There are horizontal and vertical blanking intervals. Horizontal blanking interval is the time period allocated for
retrace of the signal from the right edge of the display back to the left edge to start another scan line. Vertical
blanking interval is the time period allocated for retrace of the signal from the bottom back to the top to start another
field or frame. Synchronizing signals occupy a portion of the blanking interval.
Blanking Level
Used to describe a voltage level (blanking level). The blanking level is the nominal voltage of a video waveform
during the horizontal and vertical periods, excluding the more negative voltage sync tips.
Blimp
A soundproof camera housing which prevents the noise of the camera’s motor from being recorded on the sound
track.
Breezeway
The area of a composite video signal defined as the time between the rising edge of the sync pulse and the start of
the color burst.
C
Cels
Transparent plastic sheets on which animators draw or letter images to be photographed frame by frame for an
animated film or to be superimposed over live action. Animation done from such drawings is called cel animation.
Chroma
The color portion of a video signal. This term is sometimes incorrectly referred to as “chrominance,” which is the
actual displayed color information.
Cinemascope
The trade name for wide-screen films photographed and projected with anamorphic lenses on the camera and the
projector.
Cinerama
A wide-screen motion picture process that employs three cameras and three projectors, a wide curved screen, and
stereophonic sound. Three separate images are projected simultaneously onto the curved screen, widening the
picture into the viewer’s peripheral vision.
Cinema verite
In French, literally, “cinema truth.” A style of documentary filmmaking in which the filmmaker interferes as little
as possible with events being filmed. Cinema verite, also called direct cinema, is characterized by direct and
spontaneous use of the camera (usually hand-held), long takes, naturalistic sound recording, and in-camera
editing.
Clamp
A circuit that forces a specific portion (either the back porch or the sync tip) of the video signal to a specific DC
voltage, to restore the DC level. Also called “DC restore.” A black level clamp to ground circuit forces the back-porch
voltage to be equal to zero volts. A peak clamp forces the sync-tip voltage to be equal to a specified voltage.
Clipping
If a signal passing through an amplifier or other electronic circuit exceeds that circuit's voltage or current limits, it will
appear on an oscilloscope as if the tops of waveforms had been clipped off by a pair of scissors, hence the term.
Clipped signals are generally distorted in a number of ways both audible and measurable.
Color Burst
The color burst, also commonly called the “color subcarrier,” is 8 to 10 cycles of the color reference frequency. It is
positioned between the rising edge of sync and the start of active video for a composite video signal.
Color Saturation
The amplitude of the color modulation on a standard video signal. The larger the amplitude of this modulation, the
more saturated (more intense) the color.
Component Video
A three-wire video interface that carries the video information in its basic RGB components or luma (brightness) and
two-color-difference signals.
Composite Video
A video signal that combines the luma (brightness), chroma (color), burst (color reference), and sync (horizontal and
vertical synchronizing signals) into a single waveform carried on a single wire pair.
Crane shot
A shot taken from a studio crane, a large mechanical arm that can move the camera and its operator smoothly and
noiselessly in any direction.
Crossover
An electronic circuit device, used most commonly in loudspeaker systems, that divides an input sound spectrum
into higher and lower bands about a specified frequency. Crossovers can either be passive or active. They are also
found in many home theatre and cinema sound processors and are used to direct low frequency sounds to the
subwoofer.
D
Deep-focus photography
A cinematographic technique which keeps objects in a shot clearly focused from close-up range to infinity. Also
called pan-focus photography.
Depth of field
The distance in front of the camera lens within which objects appear in sharp focus.
DLP
The Digital Light Processor is a micro mirror technology from Texas Instruments. It is also known as DMD or
Digital Micromirror Device. A DLP chip is made up of an array of mirrors. One of the configurations currently being
marketed has 848 elements per row and 600 rows. That’s 508,800 mirrors per chip. DLP is efficient in light output
both in active area of each element and reflectivity. The DLP imager requires a progressively scanned source to
drive it. That means that all DLP projectors capable of accepting an NTSC input have line doublers as part of their
processing electronics.
Differential Gain
Important measurement parameter for composite video signals. Not applicable in Y/C or component signals.
Differential gain is the amount of change in the color saturation (amplitude of the color modulation) for a change in
low-frequency luma (brightness) amplitude. Closely approximated by measuring the change in the amplitude of a
sine wave for a change in its DC level.
Differential Phase
Important measurement parameter for composite video signals. Not applicable in Y/C or component signals.
Differential phase is the change in hue (phase of the color modulation) for a change in low-frequency luma
(brightness) amplitude. Closely approximated by measuring the change in the phase of a sine wave for a change
in its DC level.
Documentary
A non-fiction film, usually photographed on location, using actual people rather than actors and actual events rather
than scripted stories.
Double exposure
The superimposition of two (or more) images on a single film strip. Also called multiple exposure.
Dubbing
Adding sound to a film after shots have been photographed and edited. Also, to insert dialogue, sometimes foreign,
into a film after it has been shot.
Dynamic Range
All audio systems are limited by inherent noise at low levels and by overload distortion at high levels. The usable
region between these two extremes is the dynamic range of the system. Otherwise defined as the range between
the loudest and softest sounds a sound format or system can reproduce with noise or distortion. Expressed in dB.
E
Emulsion
The chemical coating on film stock, which contains light-sensitive particles of metallic silver.
Epic
A film genre characterized by sweeping historical themes, heroic action, spectacular settings, period costumes, and
a large cast of characters.
Establishing shot
A camera shot, usually at long range, which identifies, or “establishes,” the location of a scene.
Ethnographic film
An anthropological film that records and perhaps comments on an ethnic group and its culture.
Expose
An investigative documentary that reveals, often in shocking ways, discreditable information or events.
F
Fast film
Film stock that is highly sensitive to light, usually with an exposure index of 100 or higher. Also called fast-speed
film. See slow film.
Fast motion
Shots photographed slower than the standard speed of 24 frames per second (fps) so that the action on the screen
appears faster than normal when projected at standard speed. See slow motion.
Feature film
Feminist criticism
Film analysis and criticism from a feminist perspective, concerned primarily with the social and political implications
of how women are depicted in films.
Fiction film
Any film that employs invented plot or characters; usually called narrative film.
Film criticism
The analysis and evaluation of films, often according to specific aesthetic or philosophical theories.
Film noir
In French, literally, “black film.” A type of film, mainly produced in Hollywood during the 1940s and 1950s, which
depicts “dark” themes, like crime and corruption in urban settings, in a visual style that features night scenes and
low-key lighting.
Film stock
Unexposed motion picture film with variable characteristics, such as gauge (16mm, 35mm, 70mm), color (black-
and-white, color), speed (fast film, slow film), grain (high-grain, or low-grain).
Filmography
Fisheye lens
An extreme wide-angle lens that distorts the image so that straight lines appear rounded at the edges of the
frame.
Flash-forward
A shot or sequence which depicts action that will occur after the film’s present time or that will be seen later in the
film.
Flashback
A shot or sequence which depicts action that occurred before the film’s present time.
Focal length
The distance from the center of the lens to the point on the film plane where light rays meet in sharp focus. A wide-
angle lens has a short focal length; a telephoto lens has a long focal length.
Following shot
A shot in which the camera pans or travels to keep a moving figure or object within the frame.
Footage
Formula
A familiar plot or pattern of dramatic action which is often repeated or imitated in films, for example, in genres like
gangster films and westerns.
Freeze-frame shot
A shot in which one frame is printed repeatedly in order to look like a still photograph when projected. Also called a
freeze shot.
Frequency
The speed of vibration of a sound wave, measured in cycles per second or hertz. Frequency determines pitch; the
faster the frequency, the higher the pitch. The human ear can hear frequencies in the range of 20 to 20,000 Hz.
Front Porch
The area of a composite video waveform between the end of the active video and the leading edge of sync.
G
Gauge
Grain
Minute crystals of light-sensitive silver halide within the emulsion on the film stock. Graininess is the speckle-like
appearance in a film image caused by coarse clumps of individual silver grains.
H
Hand-held camera
A shot where the camera operator, rather than a tripod or a mechanical vehicle, supports and moves the camera
during filming.
Hard lighting
Illumination, which creates stark contrast between light and shadow. See high-contrast lighting.
High-contrast lighting
A style of film lighting, which creates a stark contrast between bright light and heavy shadows.
Horizontal Frequency
The inverse of the time (or period) for one horizontal scan line.
Hum
The coupling of an unwanted frequency into other electrical signals. In audio, a “hum” can be heard; in video, it can
appear as waves in the picture. Often it is an audible disturbance caused by the power supply.
I
In-camera editing
Editing done within the camera itself by selectively starting and stopping the camera for each shot.
Insert
A shot of a detail edited into the main action of a scene. Also called an insert shot. See cutaway.
Intellectual montage
Editing intended to convey an abstract or intellectual concept by juxtaposing concrete images which suggest it.
Interlaced Scan
The process whereby each frame of a picture is created by first scanning half of the lines and then scanning the
second set of lines, which are interleaved between the first to complete the picture. Each half is referred to as a
field. Two fields make a frame.
Invisible editing
Editing made unobtrusive by carefully cutting on action or matching action between shots. Also called invisible
cutting.
IRE
An arbitrary unit of measurement equal to 1/100 of the excursion from blanking to reference white level. In NTSC
systems, 100 IRE equals 714mV and 1-volt p-p equals 140 IRE.
Iris
A circular masking device, so called because it resembles the iris of the human eye, which may be opened up or
shut down during a shot.
L
Limbo lighting
A style of film lighting, which eliminates background light and isolates the subject against a completely dark (or
neutral) field.
Live action
Film action with living people and real things, rather than action created by animation.
Long take
Loop
Film footage spliced tail to head in order to run continuously. Looping is sometimes used when actors dub lip-sync
sound to scenes which are already photographed. Also called film loop.
LaserDisc
Optical laser-read home video format, in use since 1978. The LD is the video source of choice for many home
theatre systems. The 425 horizontal line resolution (horizontal luminance or black and white detail) capability in
NTSC is far superior to standard VHS at 260 lines, but not to DVD’s 480 line potential. LaserDiscs feature two PCM
digital audio channels, and stereo analog FM audio.
Luma
The monochrome or black-and-white portion of a video signal. This term is sometimes incorrectly called “luminance,”
which refers to the actual displayed brightness.
M
Magnetic sound track
A sound track that is recorded on an iron oxide stripe at the edge of the film opposite the sprocket holes.
Master shot
A single shot, usually a long shot or a full shot, which provides an overview of the action in an entire scene.
Matching action
Cutting together different shots of an action on a common gesture or movement in order to make the action appear
continuous on the screen. Also called a matched cut. See also invisible editing.
Melodrama
A play or film based on a romantic plot and developed sensationally, with little regard for convincing motivation and
with strong appeal to the emotions of the audience.
Metaphor
A comparison between two otherwise unlike entities, usually achieved in films by montage.
Method acting
A naturalistic style of acting taught by the Russian actor-director, Konstantin Stanislavsky, where the actor identifies
closely with the character to be portrayed. Also called the Stanislavsky Method.
Mickey Mousing
Creating music that mimics or reproduces a film’s visual action, as, for example, in many Walt Disney cartoons.
Monochrome
The luma (brightness) portion of a video signal without the color information. Monochrome, commonly known as
black-and-white, predates current color television.
Monologue
A character speaking alone on screen or, without appearing to speak, articulating her or his thoughts in voice-over
as an interior monologue.
Montage
To assemble film images by editing shots together, often rapidly to condense passing time or events. In Europe
montage means “editing.” See also American montage, Russian montage, dynamic montage, narrative montage.
Motif
Multiple-image shot
A shot that includes two or more separately photographed images within the frame.
Multiscreen projection
Musical
A film genre that incorporates song and dance routines into the film story. Also called musical film.
N
Narration
Information or commentary spoken directly to the audience rather than indirectly through dialogue, often by an
anonymous off-screen voice. See voice-over.
Neo-realism
An Italian film movement after World War II characterized by starkly realistic, humanistic stories and documentary-
like camera style. Neo-realistic films were generally shot on location, using available lighting and non-professional
actors. Also called Italian Neo-realism.
Newsreel
Non-fiction film
Any film that does not employ invented plot or characters. See documentary.
Non-synchronous sound
Sound whose source is not apparent in a film scene or which is detached from its source in the scene; commonly
called off-screen sound. See synchronous sound.
O
Out-take
Any footage deleted from a film during editing; more specifically, a shot or scene that is removed from a film before
the final cut.
Overcrank
To run film stock through the camera faster than the standard speed of 24 frames per second (fps), producing slow
motion on the screen when the film is projected at standard speed. See undercrank.
Overhead shot
A camera shot from directly above the action. See bird’s-eye view.
Overlapping editing
P
PAL
Phase Alternation Line. PAL is used to refer to systems and signals that are compatible with this specific modulation
technique. Similar to NTSC, but uses subcarrier phase alternation to reduce the sensitivity to phase errors that would
be displayed as color errors. Commonly used with 625-line, 50 Hz scanning systems with a subcarrier frequency of
4.43362 MHz.
Pan-And-Scan
The technique used for transferring widescreen anamorphic films to standard full screen 1.33:1 aspect ratio video,
so as to avoid black bars at the top and bottom of a full screen 1.33:1 display screen. Since the format cannot show
the entire picture width, the transfer must continually move or pan from one part of the picture to another in order to
show all the on screen action. Frequently misused to describe full frame, non-widescreen anamorphic, spherical flat
photography without top and bottom black bars to properly frame the intended aspect ratio composition.
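How much of the original picture width survives the crop follows directly from the two aspect ratios. A small sketch (hypothetical helper name, not from the text):

```python
def visible_width_fraction(source_ar, target_ar):
    """Fraction of the source frame's width kept when cropping a wider
    source aspect ratio down to a narrower target at equal height."""
    return target_ar / source_ar

# A 2.35:1 anamorphic frame cropped to 1.33:1 keeps only ~57% of its width,
# which is why the transfer must pan across the frame to follow the action.
frac = visible_width_fraction(2.35, 1.33)
```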
Pixel
Picture element. A pixel is the smallest piece of display detail that has a unique brightness and color. In a digital image,
a pixel is an individual point in the image, represented by a certain number of bits to indicate the brightness.
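The bits-per-pixel figure determines the uncompressed size of an image. An illustrative sketch (hypothetical helper name):

```python
def image_size_bytes(width, height, bits_per_pixel=24):
    """Uncompressed image size; 24 bits per pixel is 8 bits each for R, G, B."""
    return width * height * bits_per_pixel // 8

image_size_bytes(1920, 1080)   # 6220800 bytes (~6 MB) for one 24-bit HD frame
```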
Pixilation
A type of film animation in which real objects or people are photographed frame by frame in order to make them
appear to move abruptly or magically when the film is projected. See also stop motion and trick film.
Point-of-view shot
A shot taken from the vantage point of a character in a film. Also called a first-person shot or subjective camera.
Postsynchronized sound
Sound added to images after they have been photographed and assembled; commonly called dubbing.
Process shot
A shot in which “live” foreground action is photographed against a background image projected on a translucent
screen.
Progressive Scan
The process whereby a picture is created by scanning all of the lines of a frame in one pass.
Property
Any movable item used on a theater or film set. Usually called a prop.
Puppet film
An animated film in which inanimate objects or figures are manipulated and photographed frame by frame in order
to make them appear to move when the film is projected.
Pure film
A type of experimental film that explores the purely visual possibilities of cinema rather than narrative possibilities.
Also called pure cinema.
R
Rack focus
To change the focus of a lens during a shot in order to call attention to specific images. Also called selective focus
or shift focus.
Raster
The collection of horizontal scan lines that makes up a picture on a display. A reference to it normally assumes that
the sync elements of the signal are included.
Reaction shot
A shot that shows a character’s reaction to what has occurred in the previous shot.
Realism
A style of filmmaking which endeavors to depict physical reality much as it appears in the everyday world. Typical
realistic techniques include the prominent use of long shots, eye-level camera angles, lengthy takes, naturalistic
lighting and sound effects, and unobtrusive editing.
Reflected Sound
Sound arriving at the listening location after bouncing off one or more of the surrounding surfaces. Because sound
waves lose energy according to the distance traveled and the number of reflections encountered, reflected sound
waves are always of less intensity than similar waves arriving directly from the source. The sum total of all reflected
waves determines the room’s reverberation time and acoustical character.
Reverse motion
Action that moves backward on the screen, achieved by reversing film footage during editing or by reverse printing
in an optical printer. Also called reverse action.
RGB
Stands for red, green, and blue. It is a component interface typically used in computer graphics systems.
Rough cut
An early version of a film in which shots and sequences are roughly assembled but not yet finely edited together
for the final cut.
Running time
Russian montage
A style of editing, typical of prominent Soviet filmmakers in the 1920s, which employs dynamic cutting techniques
to evoke strong emotional, and even physical, reactions to film images.
S
Sampling Rate
The rate at which a signal is sampled, given in Hz. A sampling rate of 44.1 kHz, the standard for CD audio, means
that 44,100 samples of the sound are taken per second.
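Sample counts follow directly from rate times duration. An illustrative sketch (hypothetical helper name):

```python
def total_samples(sample_rate_hz, duration_s, channels=1):
    """Number of samples needed for a clip of the given length."""
    return int(sample_rate_hz * duration_s) * channels

total_samples(44_100, 1)               # 44100 samples for one second of mono
total_samples(44_100, 60, channels=2)  # 5292000 samples for a minute of stereo
```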
Scene
A unit of film composed of one shot or several interrelated shots unified by a single location, incident, or set of
characters.
Science-fiction film
A film genre characterized by plot and action involving scientific fantasy. Also called sci-fi film.
Screen time
Screwball comedy
A type of Hollywood comic film characterized by zany characters, incongruous situations, and fast-breaking
events.
Scrim
Script
A set of written specifications for a motion picture production, usually delineating the film’s settings, action, dialogue,
camera coverage, lighting and sound effects, and music.
Selective sound
Semiology
A theory of film criticism which views cinema as a language or linguistic system that conveys meaning via signs or
symbolic codes. Also called semiotics.
Senior spot
Sequence
A unit of film composed of interrelated shots or scenes, usually leading up to a dramatic climax.
Setup
A reference black level 7.5 IRE above blanking level in NTSC analog systems. It is not used in PAL, digital, or HDTV
systems; in these systems, reference black is the same level as blanking.
Setting
Shock cut
A jarring transition between two actions occurring at different times or places. Also called a smash cut.
Shooting ratio
The amount of film footage shot compared to the length of the film’s final cut.
Shooting script
The script that the director and the actors follow during filming.
Shot
A single, continuous run of the camera; the images recorded on a strip of exposed film from the time the camera
starts until the time it stops.
Single-frame cinematography
Shooting film one frame at a time to speed up normal motion, to make lifeless objects appear to move, or to do
time-lapse photography. Also called single-framing.
Slapstick comedy
Slow film
Film stock that is relatively insensitive to light and that produces finer-grained images than fast film. Also called
slow-speed film.
Slow motion
Shots photographed faster than the standard speed of 24 frames per second (fps) so that the action on the screen
appears to move slower than normal when projected at standard speed. See fast motion.
Sound track
The optical or magnetic strip at the edge of the film which carries the sound. Also, any length of film carrying only
sound.
Star system
A system developed in the early days of Hollywood to market movies based on the appeal of popular actors and
actresses, “movie stars,” who were under contract with commercial motion picture studios to play leading roles in
their productions.
Stereophonic sound
Sound recorded on separate tracks with two or more microphones and played back on two or more loudspeakers
to reproduce and separate sounds more realistically.
Still
A photograph taken of a film scene for promotional purposes, not to be confused with a frame enlargement
reproduced from actual film footage.
Stop-motion photography
Filming real objects or live action by starting and stopping the camera, rather than by running the camera continuously,
in order to create pixilation, trick-film effects, or time-lapse photography. Also called stop-action photography.
Sub-text
Implicit meaning in a play or film which lies beneath the language of the text.
Subtitle
A written caption superimposed over action, usually at the bottom of the frame, to identify a scene or to translate
dialogue from a foreign language.
Superimposition
Surrealism
An avant-garde movement in the arts during the 1920s which endeavored to re-create unconscious experience with
shocking, dreamlike images. Surrealistic films rejected traditional notions of causality and emphasized incongruous,
irrational action instead.
S-Video
Commonly incorrectly used interchangeably with Y/C. Technically, a magnetic-tape modulation format.
Swish pan
Panning the camera so rapidly across a scene that the image blurs on the screen. Also called flash pan, whip pan,
zip pan.
Symbol
An object or image that has significance within a dramatic context beyond its literal meaning.
Synchronization
A precise match between film image and sound. Also called sync.
Synchronous sound
Sound whose source is apparent in a film scene and which matches the action.
Synecdoche
Sync Signals/Pulses
Sync signals, also known as sync pulses, are negative-going timing pulses in video signals that are used by video-
processing or display devices to synchronize the horizontal and vertical portions of the display.
T
Take
The shot resulting from one continuous run of the camera. A filmmaker generally films several “takes” of the same
scene and then selects the best one.
Timbre
The subjective tonal quality of a sound. The timbre of any musical or non-musical sound is determined largely
by the harmonic structure of the sound wave. Rich-sounding musical tones tend to have a great number of inner
harmonics, which contribute to their lush timbre, while thin-sounding musical tones tend to lack harmonics.
Timbre Matching
A type of equalization applied to the surround channels in home THX processors, based on the finding that the
loudspeakers used for the surrounds in THX-certified systems are different in design (dipole vs. direct radiator),
tend to be closer to the listener than the front loudspeakers, and therefore sound brighter. This equalization
compensates for the perceived effect.
Time-lapse cinematography
A type of cinematography in which the camera intermittently photographs the same object or scene over an extended
time period in order to speed up on the screen a lengthy process or action, such as the growth of a flower from a bud.
Treatment
A written description of a film story which may later be developed into a script. Also called film treatment.
Trick film
A film or a shot created by special camera or optical techniques, such as stop-motion photography or double
exposure.
Two-shot
Typecasting
Selecting an actor or actress for a film role because of his or her physical type, manner, or personality, or according
to a public image created by previous roles he or she has performed.
U
Undercrank
To run film stock through the camera slower than the standard speed of 24 frames per second (fps), producing fast
motion on the screen when the film is projected at standard speed. See overcrank.
Underground film
An independent film which emphasizes the filmmaker’s self-expression rather than commercial success. Underground
films frequently challenge or experiment with traditional cinematic form and technique, hence they are also called
avant-garde or experimental films.
V
Vertical Field Frequency
The inverse of the time (or period) to produce one field of video (half of a frame). In NTSC it is 59.94Hz.
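The inverse relationship between period and frequency, and the two-fields-per-frame split, can be checked with a line of arithmetic (illustrative only):

```python
field_frequency_hz = 59.94               # NTSC vertical field frequency
field_period_s = 1 / field_frequency_hz  # ~0.01668 s to produce one field
frame_rate_fps = field_frequency_hz / 2  # two fields per frame -> 29.97 fps
```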
Vertical Frame Frequency
The inverse of the time (or period) to produce one frame of video. Also called “refresh rate” or “vertical refresh
rate.”
Video Bandwidth
The minimum analog bandwidth required to reproduce the smallest amount of detail contained in the video signal.
Voice-over
An off-screen narrator’s voice accompanying images on the screen. Any off-screen voice.
W
Watt
A unit of electrical power, used to indicate the rate at which energy is produced or consumed by an electrical device.
One watt is one joule of energy per second.
Answer Key
Exercise 5.5
1. a 2. a
3. b 4. a
5. a 6. b
7. a 8. b
9. b 10. b
Colored Section
Figure 1.3a: Medium long shot (MLS). Most, if not all, of the actor’s body is included, but less of the surrounding
space is visible than in the LS
Figure 1.3b: Medium shot (MS). The actor is framed from the thigh or waist up
Figure 1.6: Extreme close-up (XCU). Any framing closer than a close-up is considered an XCU
Figure 1.7: Low angle in which the camera is lower than the filmed object
Figure 1.13: In deep focus shots, all planes of the image are in focus. In one shot from a commercial for GEICO
insurance, deep focus enables the viewer to see a pregnant woman in a wheelchair in the background while a young
man speaks on the phone in the foreground
Figure 3.5a: Manual focus allowed us to focus on the hand in this image. A large aperture kept the depth-of-field
shallow, so the background is out of focus
Figure 3.5b: With just a slight shift in focus, the head has been made sharp and the hand in the foreground soft
Figure 3.6b: A large aperture lets us throw the background out of focus
Figure 3.15a: This picture was shot with a wide-angle lens; the distance between the front and the rear of the piano
is elongated, giving the image an illusion of great depth
Figure 3.15b: The same piano as in the figure above has been shot with a telephoto lens. Compare how the distance
between the front and the rear of the piano appears
Figure 3.15c: Telephoto lenses are widely used in sports coverage, to get a “closer” view of the action
Figure 6.6a: The train passes across the screen from left to right. This shot establishes the on-screen
direction
Figure 6.6b: Now the train is going in the opposite direction. This will not match the scene action
Figure 6.7: The two sides of the Liberty. Some might say this is crossing the line. But such a cut is not
confusing; in fact, it makes a strong, informative beginning to a sequence
Figure 6.8: One image fades into another, creating a fade effect
Figure 8.7: Images from Jurassic Park, a popular example of the compositing techniques applied by
Hollywood
Figure 8.11: Greenscreen extraction, wire removals, and lots of rotoscoping make this a densely complicated shot