
WORLD COLLEGE OF TECHNOLOGY AND MANAGEMENT

FARRUKHNAGAR, GURUGRAM

Session 2023-2024

Computer Graphics & Multimedia Course File


MCA 1st semester
Course Code- 20MCA21C3

SUBMITTED BY
Anjali Dhamiwal
Assistant Professor
CSE Department
Syllabus of Computer Graphics and Multimedia
Lesson Plan
Name of the Faculty: Anjali Dhamiwal

Discipline MCA
Semester 1st
Subject Computer graphics and Multimedia
Lesson Plan Duration: 13 weeks (September - December)
Work Load (Lecture/Practical) per week (in hours): Lectures - 04, Practicals - 01

WEEK-WISE THEORY LECTURE TOPICS (including assignments and tests)

Week 1
 Lecture 1: Computer graphics, classification, applications of computer graphics
 Lecture 2: Display devices, random and raster scan systems
 Lecture 3: Graphics input devices
Week 2
 Lecture 5: Points, lines, circles and ellipses as primitives
 Lecture 6: Scan conversion algorithms for primitives
 Lecture 7: Fill area primitives including scan-line polygon filling
Week 3
 Lecture 8: Inside-outside test, boundary and flood fill, character generation
 Lecture 9: Line attributes, area-fill attributes, character attributes
 Lecture 10: Transformations and their numericals
Week 4
 Lecture 11: Homogeneous coordinates, composite transformations, reflection and shearing
 Lecture 12: Viewing pipeline and coordinate systems, window-to-viewport transformation
 Lecture 13: Clipping, including point clipping, line clipping and polygon clipping
Week 5
 Lecture 14: 3D display methods, polygon surfaces, tables, equations, meshes
 Lecture 15: Curved lines and surfaces, quadric surfaces, spline representation
 Lecture 16: Cubic spline interpolation methods
Week 6
 Lecture 17: Bezier curves and surfaces, B-spline curves and surfaces
 Lecture 18: 3D scaling, rotation and translation, composite transformation
 Lecture 19: Viewing pipeline and coordinates, parallel and perspective transformation
Week 7
 Lecture 20: View volume and general (parallel and perspective) projection transformations
 Lecture 21: Visible surface detection concepts, back-face detection, depth buffer method
 Lecture 22: Illumination, light sources, illumination methods (ambient, diffuse reflection)
Week 8
 Lecture 23: Color models and shading: flat, Gouraud and Phong
 Lecture 24: Concepts of multimedia, multimedia applications
 Lecture 25: Multimedia system architecture, evolving technologies for multimedia
Week 9
 Lecture 26: Defining objects for multimedia systems, compression and decompression
 Lecture 27: Multimedia I/O technologies, digital voice and audio
 Lecture 28: Video image and animation, full motion video
Week 10
 Lecture 29: Storage and retrieval technologies
 Lecture 30: Multimedia authoring
Week 11
 Lecture 31: Revision of Unit 1 and Unit 2
 Lecture 32: Test of Unit 1 and Unit 2
 Lecture 33: Revision of Unit 3 and Unit 4
Week 12
 Lecture 34: Test of Unit 3 and Unit 4
 Lecture 35: Question paper discussion
 Lecture 36: Question paper discussion
Unit 1
Basic of Computer Graphics
Displaying an image of arbitrary size on a computer screen is a difficult task; computer graphics simplifies it. Graphics on the computer are produced using various algorithms and techniques. This tutorial describes how a rich visual experience is provided to the user by explaining how all of this is processed by the computer. Computer graphics involves technology to access, transform, and present information in a visual form. The role of computer graphics is indispensable: in today's life it has become a common element in user interfaces, T.V. commercials, and motion pictures. Computer graphics is the creation of pictures with the help of a computer. The end product of computer graphics is a picture; it may be a business graph, a drawing, or an engineering design.

In computer graphics, two- or three-dimensional pictures can be created that are used for research. Many hardware devices and algorithms have been developed over time to improve the speed of picture generation. Computer graphics includes the creation and storage of models and images of objects; these models serve various fields such as engineering and mathematics. It is the use of computers to create and manipulate pictures on a display device, and it comprises software techniques to create, store, modify, and represent pictures.

Why are computer graphics used?

Suppose a shoe manufacturing company wants to show its shoe sales over five years. A vast amount of information would have to be stored, requiring a lot of time and memory, and the result would be hard for a common person to understand. In this situation graphics is a better alternative. Graphics tools such as charts and graphs represent data in pictorial form, and a picture can be understood easily with just a single look.

Interactive computer graphics works on the concept of two-way communication between the computer and the user. The computer receives signals from the input device, and the picture is modified accordingly; the picture changes as soon as the command is applied.

Classification
Computer graphics has been classified into two categories according to the application domain and
requirements. They are passive and interactive computer graphics.

1. Passive (Off-Line) Computer Graphics: The most common example of passive computer graphics is a static website, where the user has no control over the contents on the monitor. Here, development takes place independently, in offline mode.
2. Interactive Computer Graphics: This is also called on-line graphics. Displays are controlled by a mouse, trackball, joystick, etc. It is termed interactive computer graphics because the user can interact with the machine as per his requirements. Video games, dynamic websites, special effects in movies, and cartoons all make use of interactive computer graphics.

Computer graphics can be broadly divided into the following classes:

1. Business graphics, or the broader category of presentation graphics, which refers to graphics such as bar charts, pie charts, pictograms, and x-y charts used to present quantitative information to inform and convince the audience.
2. Scientific graphics, such as x-y plots, curve fitting, contour plots, and system or program flowcharts.

3. Scaled drawings, such as architectural representations and drawings of buildings, bridges, and machines.

4. Cartoons and artwork, including advertisements.

5. Graphical user interfaces (GUIs), the images which appear on almost all computer screens these days, designed to help the user utilize the software without having to refer to manuals or read a lot of text on the monitor.

Application of Computer Graphics


Some of the applications of computer graphics are:
1. Computer Art:

Using computer graphics we can create fine and commercial art, with the help of animation packages and paint packages. These packages provide facilities for designing object shapes and specifying object motion. Cartoon drawing, painting, and logo design can also be done.

2. Computer Aided Drawing:

The design of buildings, automobiles, and aircraft is done with the help of computer-aided drawing. This helps in providing minute details in the drawing and in producing more accurate and sharper drawings with better specifications.

3. Presentation Graphics:
For the preparation of reports summarising financial, statistical, mathematical, scientific, or economic data for research and managerial reports, and for the creation of bar graphs, pie charts, and time charts, the tools present in computer graphics can be used.

4. Entertainment:

Computer graphics finds a major part of its utility in the movie industry and the game industry. It is used for creating motion pictures, music videos, television shows, and cartoon animation films. In the game industry, where focus and interactivity are the key players, computer graphics helps in providing such features in an efficient way.

5. Education:

Computer-generated models are extremely useful for teaching a huge number of concepts and fundamentals in an easy-to-understand manner. Using computer graphics, many educational models can be created, through which more interest in the subject can be generated among students.

6. Training:

Specialised training systems, such as simulators, can be used to train candidates in a way that can be grasped in a short span of time with better understanding. Creating training modules using computer graphics is simple and very useful.

7. Visualisation:

Today the need to visualise things has increased drastically. The need for visualisation can be seen in many advanced technologies: data visualisation helps in finding insights from data, and checking and studying the behaviour of the processes around us needs appropriate visualisation, which can be achieved through the proper usage of computer graphics.
8. Image Processing:

Various kinds of photographs or images require editing in order to be used in different places. Processing existing images into refined ones for better interpretation is one of the many applications of computer graphics.
9. Machine Drawing:

Computer graphics is very frequently used for designing, modifying, and creating various parts of a machine, or the whole machine itself. The main reason for using computer graphics for this purpose is the precision and clarity obtained from such drawings, which is extremely desirable for the safe manufacturing of machines.

10.Graphical User Interface:

The use of pictures, images, icons, pop-up menus, and graphical objects helps in creating a user-friendly environment where working is easy and pleasant. Using computer graphics, we can create an atmosphere in which everything can be automated and anyone can get the desired action performed in an easy fashion.

Display Devices
The display device is an output device used to represent information in the form of images (visual form). Display systems are usually called video monitors or video display units (VDU). Display devices are designed to model, view, and display information, and the purpose of display technology is to simplify information sharing.
Today, the demand for high-quality displays is increasing.

There are some display devices given below:

1. Cathode-Ray Tube(CRT)
2. Color CRT Monitor
3. Liquid crystal display(LCD)
4. Light Emitting Diode(LED)
5. Direct View Storage Tubes(DVST)
6. Plasma Display
7. 3D Display

1. Cathode-Ray Tube (CRT): CRT stands for cathode-ray tube. It is the technology used in traditional computer monitors and televisions.

A cathode-ray tube is a particular type of vacuum tube that displays images when an electron beam strikes its phosphorescent surface.
Component of CRT

 Electron Gun: The electron gun is made up of several elements, mainly a heating filament (heater) and a cathode. The electron gun is the source of electrons, focused into a narrow beam directed at the face of the CRT.
 Focusing & Accelerating Anodes: These anodes are used to produce a narrow and sharply focused beam of electrons.
 Horizontal & Vertical Deflection Plates: These plates are used to guide the path of the electron beam. The plates produce an electric field that bends the electron beam as it travels through the area between them.
 Phosphor-coated Screen: The phosphor-coated screen produces bright spots when the high-velocity electron beam hits it.

There are two ways to represent an object on the screen:

1. Raster Scan: It is a scanning technique in which the electron beam sweeps across the screen, moving from top to bottom and covering one line at a time.

A raster scan display is based on controlling the intensity of pixels over a rectangular grid on the screen called a raster.

The picture definition is stored in a memory area called the refresh buffer or frame buffer. The frame buffer is also known as the raster or bitmap. Raster scan provides a refresh rate of 60 to 80 frames per second.
For example: television.
Beam refreshing involves two kinds of retracing:
1. Horizontal retrace
2. Vertical retrace

When the beam reaches the bottom right corner of the screen and returns to the top left to begin the next frame, this is called vertical retrace. The return of the beam from the right end of one scan line to the left end of the next line is called horizontal retrace.

Advantages:

1. Realistic image
2. Many colors can be produced
3. Dark scenes can be pictured

Disadvantages:

1. Lower resolution
2. Displays the picture line by line
3. More costly

2. Random Scan (Vector Scan):


It is also known as stroke-writing display or calligraphic display. In this, the electron beam points only to the area in which the picture is to be drawn.
It uses an electron beam like a pencil to make a line image on the screen. The image is constructed from a sequence of straight-line segments. Each line segment is drawn on the screen by moving the beam from one point to the other, where its x & y coordinates define each point.
After completing the picture drawing, the system cycles back to the first line and redraws all the lines of the picture 30 to 60 times per second.
Fig: A Random Scan display draws the lines of an object in a specific order

Advantages:

1. High resolution
2. Draws smooth lines

Disadvantages:

1. It can draw only wireframe images.

2. Complex scenes produce flicker.

2. Color CRT Monitor:

 It is similar to a CRT monitor.


 The basic idea behind the color CRT monitor is to combine three basic colors- Red, Green,
and Blue. By using these three colors, we can produce millions of different colors.
 The two basic color display producing techniques are:

1. Beam-Penetration Method: It is used with random-scan monitors for displaying pictures. Two phosphor layers, red and green, are coated inside the screen. The color shown depends on how far the electron beam penetrates the phosphor surface.

A powerful (fast) electron beam penetrates through the red layer and excites the green layer within.
A beam of slow electrons excites only the red layer.
A beam of medium-speed electrons emits a mixture of red and green light, displaying two more colors, orange and yellow.
Advantages:

1. Better resolution
2. Inexpensive (roughly half the cost of other color methods)

Disadvantages:

1. Only four possible colors


2. Time Consuming

Random and Raster Scan System


Random Scan System uses an electron beam which operates like a pencil to create a line image on the CRT screen. The picture is constructed out of a sequence of straight-line segments. Each line segment is drawn on the screen by directing the beam to move from one point on the screen to the next, where its x & y coordinates define each point. After drawing the picture, the system cycles back to the first line and redraws all the lines of the image 30 to 60 times each second. The process is shown in fig:

Random-scan monitors are also known as vector displays or stroke-writing displays or calligraphic
displays.

Advantages:
1. A CRT has the electron beam directed only to the parts of the screen where an image is to
be drawn.
2. Produce smooth line drawings.
3. High Resolution

Disadvantages:

1. Random-scan monitors cannot display realistic shaded scenes.

Raster Scan Display:


A raster scan display is based on intensity control of pixels in the form of a rectangular grid called a raster on the screen. Information about on and off pixels is stored in the refresh buffer or frame buffer. Televisions in our homes are based on the raster scan method. The raster scan system can store intensity information for each pixel position, so it is suitable for the realistic display of objects. Raster scan provides a refresh rate of 60 to 80 frames per second.

The frame buffer is also known as the raster or bitmap, and the positions in it are called picture elements or pixels. Beam refreshing is of two types: horizontal retrace and vertical retrace. When the beam finishes a frame at the bottom right of the screen and returns to the top left, this is called vertical retrace; the return of the beam from the end of one scan line to the beginning of the next is called horizontal retrace, as shown.

Types of scanning (travel of the beam) in raster scan:

1. Interlaced scanning
2. Non-interlaced scanning

In non-interlaced (progressive) scanning, each horizontal line of the screen is traced in sequence from top to bottom. In interlaced scanning, the odd-numbered lines are traced first by the electron beam, and then in the next cycle the even-numbered lines are traced. Interlacing reduces the flicker (fading of the displayed object) that occurs at low refresh rates: a non-interlaced display refreshed at 30 frames per second gives flicker, whereas an interlaced display effectively refreshes the screen 60 times per second, two fields per frame.

Advantages:
1. Realistic image
2. A million different colors can be generated
3. Shadowed scenes are possible.

Disadvantages:
1. Low Resolution
2. Expensive

Differentiate between Random and Raster Scan Display

Random Scan Raster Scan

1. It has high resolution. 1. Its resolution is low.

2. It is more expensive. 2. It is less expensive.

3. Any modification, if needed, is easy. 3. Modification is tough.

4. A solid pattern is tough to fill. 4. A solid pattern is easy to fill.

5. Refresh rate depends on the resolution. 5. Refresh rate does not depend on the picture.

6. Only the part of the screen containing the image is scanned. 6. The whole screen is scanned.

7. Beam penetration technology comes under it. 7. Shadow mask technology comes under it.

8. It does not use the interlacing method. 8. It uses interlacing.

9. It is restricted to line-drawing applications. 9. It is suitable for realistic display.


Graphics Input Devices
The input devices are the hardware used to transfer input to the computer. The data can be in the form of text, graphics, sound, and images. An output device displays data from the memory of the computer; output can be text, numeric data, lines, polygons, and other objects.

These Devices include:

1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner
Keyboard:

The most commonly used input device is the keyboard. Data is entered by pressing a set of keys. All keys are labeled. The standard keyboard with 101 keys is commonly called a QWERTY keyboard, after the layout of its top row of letter keys.

The keyboard has alphabetic as well as numeric keys. Some special keys are also available.

1. Numeric Keys: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
2. Alphabetic keys: a to z (lower case), A to Z (upper case)
3. Special Control keys: Ctrl, Shift, Alt
4. Special Symbol Keys: ; , " ? @ ~ ? :
5. Cursor Control Keys: ↑ → ← ↓
6. Function Keys: F1 F2 F3....F9.
7. Numeric Keyboard: It is on the right-hand side of the keyboard and used for fast entry of
numeric data.

Function of Keyboard:
1. Alphanumeric keyboards are used in CAD (computer-aided drafting).
2. Keyboards are available with special features like screen coordinate entry, menu selection, graphics functions, etc.
3. Special-purpose keyboards are available having buttons, dials, and switches. Dials are used to enter scalar values and real numbers, while buttons and switches are used to enter predefined function values.

Advantage:
1. Suitable for entering numeric data.
2. Function keys are a fast and effective method of using commands, with fewer errors.

Disadvantage:
1. Keyboard is not suitable for graphics input.

Mouse:

A mouse is a pointing device used to position the pointer on the screen. It is a small palm-sized box with two or three buttons on top. Movement of the mouse along the x-axis moves the cursor horizontally, and movement along the y-axis moves the cursor vertically on the screen. The mouse cannot be used to enter text; therefore, it is used in conjunction with a keyboard.

Advantage:
1. Easy to use
2. Not very expensive

Trackball

It is a pointing device similar to a mouse, mainly used in notebook or laptop computers instead of a mouse. It is a ball that is half embedded in the device; by rolling fingers over the ball, the pointer can be moved.

Advantage:
1. Trackball is stationary, so it does not require much space to use it.
2. Compact Size
Spaceball:

It is similar to a trackball, but it can move in six directions, whereas a trackball can move in only two. The movement is recorded by strain gauges, which respond to the pressure applied as the ball is pushed and pulled in various directions. The ball has a diameter of around 7.5 cm and is mounted on the base; about one-third of the ball sits inside the housing and the rest is outside.

Applications:
1. It is used for three-dimensional positioning of the object.
2. It is used to select various functions in the field of virtual reality.
3. It is applicable in CAD applications.
4. Animation is also done using spaceball.
5. It is used in the area of simulation and modeling.

Joystick:

A joystick is also a pointing device, used to change the cursor position on a monitor screen. It is a stick with a spherical ball at both its lower and upper ends, as shown in fig. The lower spherical ball moves in a socket, and the joystick can be moved in all four directions. The function of a joystick is similar to that of a mouse. It is mainly used in computer-aided design (CAD) and for playing computer games.

Light Pen

Light Pen (similar to a pen) is a pointing device used to select a displayed menu item or draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small tube. When its tip is moved over the monitor screen and the pen button is pressed, its photocell sensing element detects the screen location and sends the corresponding signal to the CPU.
Uses:
1. Light pens can be used to input coordinate positions by providing the necessary arrangements.
2. By detecting differences in background color or intensity, a light pen can be used as a locator.
3. It is used as a standard pick device with many graphics systems.
4. It can be used as a stroke input device.
5. It can be used as a valuator.

Digitizers:

The digitizer is an operator input device consisting of a large, smooth board (similar in appearance to a mechanical drawing board) and an electronic tracking device which can be moved over the surface to follow existing lines. The tracking device contains a switch that lets the user record the desired x & y coordinate positions. The coordinates can be entered into computer memory, or stored on an off-line storage medium such as magnetic tape.

Advantages:
1. Drawing can easily be changed.
2. It provides the capability of interactive graphics.

Disadvantages:
1. Costly
2. Suitable only for applications which require high-resolution graphics.
Touch Panels:

A touch panel is a type of display screen that has a touch-sensitive transparent panel covering the screen. A touch screen registers input when a finger or another object comes in contact with the screen.

When the wave signals travelling across the panel are interrupted by some contact with the screen, that location is recorded. Touch screens have long been used in military applications.

Voice Systems (Voice Recognition):

Voice Recognition is one of the newest, most complex input techniques used to interact with the
computer. The user inputs data by speaking into a microphone. The simplest form of voice
recognition is a one-word command spoken by one person. Each command is isolated with pauses
between the words.

Voice Recognition is used in some graphics workstations as input devices to accept voice
commands. The voice-system input can be used to initiate graphics operations or to enter data.
These systems operate by matching an input against a predefined dictionary of words and phrases.

Advantage:
1. More efficient device.
2. Easy to use
3. Unauthorized speakers can be identified

Disadvantages:
1. Very limited vocabulary
2. Voice of different operators can't be distinguished.

Image Scanner

It is an input device. Data or text written on paper is fed to the scanner, which converts the information on the paper into electronic format and stores it in the computer. The input documents can contain text, handwritten material, pictures, etc.

By storing the document in a computer, it becomes safe for a longer period of time; the document is preserved for the future, can be changed when needed, and can be printed when needed.
Scanning can be done for black-and-white or colored pictures. On the stored picture, 2D or 3D rotations, scaling, and other operations can be applied.

Types of image Scanner:

1. Flat Bed Scanner: It resembles a photocopy machine. It has a glass plate on top, covered by a lid. The document to be scanned is kept on the glass plate, and a light source passes along its underneath side, moving from left to right. The scanning is done line by line, and the process is repeated until the complete document is scanned. A 4" x 6" document can be scanned within 20-25 seconds.

2. Hand Held Scanner: It has a number of LEDs (light-emitting diodes) arranged in a small case. It is called a hand-held scanner because it is held in the hand while it performs the scanning. To scan, the scanner is moved over the document from top to bottom with its light on, dragging it very slowly. If the scanner is not dragged over the document properly, the conversion will not be correct.

Graphics Software and Standards


Graphics software is a type of computer program that is used to create and edit images. There is
a wide range of graphics software available on the market, ranging from simple programs that
allow users to create and edit basic images, to complex tools that can be used to create detailed
3D models and animations. Some of the most popular graphics software programs include Adobe
Photoshop, Corel Painter, and Autodesk Maya.
Characteristics:
 A graphics software program is a computer application used to create digital images.
 Graphics software programs can be used to create both vector and raster images.
 Common features of graphics software programs include the ability to create, edit, and save
images in a variety of formats.
 Some graphics software programs also offer features such as the ability to create animations
or 3D models.
 Popular examples of graphics software programs include Adobe Photoshop, GIMP, and
Inkscape.

Examples:

Some popular graphics software programs are Adobe Photoshop, Adobe Illustrator, and
CorelDRAW. These programs can be used to create and edit digital images, illustrations, and
logos. They offer a variety of features and tools that allow users to manipulate photos and
graphics to create custom designs.
 Adobe Photoshop is a popular graphics software used by photographers and graphic
designers.
 Adobe Illustrator is another popular graphics software used by graphic designers,
especially for creating vector illustrations.
 CorelDRAW is a graphics software used by both professionals and hobbyists.
 GIMP is a free and open source graphics software with capabilities similar to Photoshop.
 Inkscape is a free and open source vector graphics software used by graphic designers and
illustrators.

Components:

The graphics software components are the tools that you use to create and manipulate your
graphic images. These components include the following:
 Image editors: These are the tools that you use to create or edit your graphic images.
Common image editors include Photoshop, Illustrator, and Inkscape.
 Vector graphics editors: These are the tools that you use to create or edit vector graphics.
Common vector graphics editors include CorelDRAW and Inkscape.
 3D modeling software: This is the software that you use to create three-dimensional models.
Common 3D modeling software includes Maya, 3ds Max, and Cinema 4D.
 Animation software: This is the software that you use to create animations. Common
animation software includes Adobe After Effects, Apple Motion, and Autodesk Maya.
 Video editing software: This is the software that you use to edit videos. Common video
editing software includes Adobe Premiere Pro, Apple Final Cut Pro, and Avid Media
Composer.

Types:

 Vector graphics software: This type of software is used to create images made up of lines
and shapes, which can be scaled without losing quality. Vector graphics are often used for
logos, illustrations, and diagrams.
 Raster graphics software: This type of software is used to create images made up of pixels,
which cannot be scaled without losing quality. Raster graphics are often used for photos and
web graphics.

 3D graphics software: This type of software is used to create three-dimensional images and
animations. 3D graphics are often used for product visualization and gaming.

 Animation software: This type of software is used to create moving images, either by
animating existing graphics or by creating new ones from scratch. Animation software is
often used for movies, commercials, and video games.

Applications:

The applications are used by professionals in a variety of fields, including graphic design,
photography, video editing, and web design. There are a wide variety of graphics software
applications available, each with its own unique set of features and capabilities. It is important
to choose the right application for the specific task at hand.
 It can be used to create and edit logos, and other graphical elements.
 It can be used to create website layouts and design elements.
 It can be used to create illustrations, visual presentations, and digital art.
 It can be used to edit and enhance photos, images, and animation.
 It can be used to create and edit website designs, presentation slides, and marketing materials.

Advantages:

There are many advantages of using graphics software, including the ability to create high-quality images, edit images, and create custom graphics.
 Graphics software provides users with a wide range of tools to create, edit, and manipulate images.
 It is often easy to use and can be used by people with little or no experience in image editing.
 It can be used to create images for a wide range of purposes, including web design, advertising, and printing.
 It often provides a wide range of features, making it possible to create complex images with ease.
 It can often create images in a range of different formats, making it easy to share images with others.
 It can be used to create both vector and bitmap images.
 It offers a variety of features and options that allow users to create images that are both creative and professional.
 It is often used in conjunction with other software programs, such as word processors and spreadsheets, to create comprehensive documents and presentations.
Disadvantages:

 Many graphics software programs are expensive, and the cost (often including a recurring subscription) can be a barrier for some people who want to use them.
 It requires a lot of memory to store huge files.
 Some graphics software programs can be complex and difficult to use, especially for users who are not familiar with graphic design.
 It requires a powerful computer to work on projects smoothly.
 It can be time-consuming to create graphics.
 Some graphics software programs offer only limited functionality, which can be frustrating for users who want to do more with their images.

Graphics Primitives
A graphics primitive is a basic object that is essential for the creation or construction of complex images. Despite the great variety of graphics applications, graphics is constructed from a small set of basic elements. The most basic of these elemental structures is the pixel, short for picture element.

Points
A Point in geometry is defined as a location in the space that is uniquely defined by an ordered
triplet (x, y, z) where x, y, & z are the distances of the point from the X-axis, Y-axis, and Z-axis
respectively in the 3-Dimensions and is defined by ordered pair (x, y) in the 2-Dimensions
where, x and y are the distances of the point from the X-axis, and Y-axis, respectively. It is
represented using the dot and is named using capital English alphabets. The figure added below
shows a point P in the 3-D which is at a distance of x, y, and z from the X-axis, Y-axis, and Z-axis
respectively.
Collinear points

We define collinear points as the points that lie on the same line, i.e. a straight line can be passed
through the collinear point. Points A, B, and C shown in the image added below are the collinear
points. For points to be collinear there must be a minimum of three points lying on the same line.

Lines
A line in three-dimensional geometry is defined as a set of points in 3D that extends infinitely in both directions. We represent a line with L; in 3-D space, a line through the point (x1, y1, z1) with direction ratios l, m, n is given by the equation

L: (x – x1) / l = (y – y1) / m = (z – z1) / n

where (x, y, z) are the position coordinates of any variable point lying on the line.
In 3D we can also form a line by the intersection of two non-parallel planes.
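As a brief worked example, the line through the point (1, 2, 3) with direction ratios l = 2, m = 1, n = 2 is

L: (x – 1)/2 = (y – 2)/1 = (z – 3)/2

Setting each ratio equal to a parameter t gives the points (1 + 2t, 2 + t, 3 + 2t) on the line; for t = 1 this yields (3, 3, 5), which indeed satisfies the equation.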


Line Segment

A line segment is defined as the finite length of a line used to join two points in 2-D or 3-D. It is the shortest distance between two points. A line segment between two points A and B is denoted as AB.
A line has infinite length, whereas a line segment is a part of a line and has finite length.

Circle
A circle is defined as a set of points that all are the same distance from a
common point. The common point is known as the center and the distance from
the center of the circle to any point on its circumference is called the radius.

It is an eight-way symmetric figure which can be divided into four quadrants and
each quadrant has two octants. This symmetry helps in drawing a circle on a
computer by knowing only one point of any octant.
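This symmetry can be shown in a few lines of C. The sketch below is only an illustration: the plot() routine is a stand-in that prints coordinates instead of setting a pixel.

#include <stdio.h>

static void plot(int x, int y) { printf("(%d, %d)\n", x, y); }

/* Eight-way symmetry: one computed point (x, y) on a circle centred
   at (xc, yc) yields one pixel in each of the eight octants. */
static void plot8(int xc, int yc, int x, int y) {
    plot(xc + x, yc + y);  plot(xc - x, yc + y);
    plot(xc + x, yc - y);  plot(xc - x, yc - y);
    plot(xc + y, yc + x);  plot(xc - y, yc + x);
    plot(xc + y, yc - x);  plot(xc - y, yc - x);
}

int main(void) {
    plot8(0, 0, 1, 5);   /* mirrors a single first-octant point */
    return 0;
}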

How to define a circle in computer graphics?

There are two methods to define a circle in computer graphics, namely:

1. Direct or Polynomial method and


2. Polar coordinates method
Let us have a look at both these methods and learn about them in brief.

1. Direct or Polynomial Method

In this method, a circle is defined with the help of a polynomial equation, i.e.

(x - xc)² + (y - yc)² = r²

where (xc, yc) is the center of the circle and r is its radius.

For each value of x, the value of y can be calculated using

y = yc ± √(r² - (x - xc)²)

The initial point is x = xc - r, y = yc.

This is a very inefficient method, because for each point the values of x, xc and r are squared and subtracted, and then a square root is calculated, which leads to high time complexity.
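The polynomial method can be sketched in a few lines of C. The snippet below is a minimal illustration; the plot(x, y) routine is a stand-in, stubbed here to print coordinates instead of setting a pixel. It steps x across the circle and computes the two symmetric y values.

#include <stdio.h>
#include <math.h>

/* Stand-in pixel routine: stubbed to print the coordinates. */
static void plot(int x, int y) {
    printf("(%d, %d)\n", x, y);
}

/* Direct (polynomial) circle scan conversion: for each x in
   [xc - r, xc + r], solve (x - xc)^2 + (y - yc)^2 = r^2 for y. */
void circle_direct(int xc, int yc, int r) {
    for (int x = xc - r; x <= xc + r; x++) {
        double dy = sqrt((double)r * r - (double)(x - xc) * (x - xc));
        plot(x, (int)(yc + dy + 0.5));  /* upper half: y = yc + dy */
        plot(x, (int)(yc - dy + 0.5));  /* lower half: y = yc - dy */
    }
}

int main(void) {
    circle_direct(10, 10, 5);
    return 0;
}

Note how the square root appears once per column; this is exactly the per-point cost the paragraph above identifies as the method's weakness.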

Ellipses
This is an incremental method for scan converting an ellipse that is centered at the origin in standard position, i.e., with the major and minor axes parallel to the coordinate system axes. It is very similar to the midpoint circle algorithm. Because of the four-way symmetry property we need to consider only the part of the elliptical curve in the first quadrant.

Let's first rewrite the ellipse equation and define the function f that can be used to decide if the midpoint between two candidate pixels is inside or outside the ellipse:

f(x, y) = b²x² + a²y² − a²b²

with f(x, y) < 0 for points inside the ellipse and f(x, y) > 0 for points outside. Now divide the elliptical curve from (0, b) to (a, 0) into two parts at the point Q where the slope of the curve is −1.

The slope of the curve defined by f(x, y) = 0 is dy/dx = −f_x / f_y, where f_x and f_y are the partial derivatives of f(x, y) with respect to x and y. We have f_x = 2b²x and f_y = 2a²y, hence we can monitor the slope value during the scan conversion process to detect Q. Our starting point is (0, b).

Suppose that the coordinates of the last scan-converted pixel upon entering step i are (x_i, y_i). We are to select either T = (x_i + 1, y_i) or S = (x_i + 1, y_i − 1) to be the next pixel. The midpoint of T and S is used to define the following decision parameter:

p_i = f(x_i + 1, y_i − ½)
p_i = b²(x_i + 1)² + a²(y_i − ½)² − a²b²

If p_i < 0, the midpoint is inside the curve and we choose pixel T.
If p_i ≥ 0, the midpoint is outside or on the curve and we choose pixel S.

The decision parameter for the next step is:

p_{i+1} = f(x_{i+1} + 1, y_{i+1} − ½)
        = b²(x_{i+1} + 1)² + a²(y_{i+1} − ½)² − a²b²

Since x_{i+1} = x_i + 1, we have

p_{i+1} − p_i = b²[(x_{i+1} + 1)² − (x_i + 1)²] + a²[(y_{i+1} − ½)² − (y_i − ½)²]

p_{i+1} = p_i + 2b²x_{i+1} + b² + a²[(y_{i+1} − ½)² − (y_i − ½)²]

If T is the chosen pixel (p_i < 0), we have y_{i+1} = y_i. If S is the chosen pixel (p_i ≥ 0), we have y_{i+1} = y_i − 1. Thus we can express p_{i+1} in terms of p_i and (x_{i+1}, y_{i+1}):

p_{i+1} = p_i + 2b²x_{i+1} + b²                  if p_i < 0
p_{i+1} = p_i + 2b²x_{i+1} + b² − 2a²y_{i+1}     if p_i ≥ 0

The initial value for the recursive expression can be obtained by evaluating the original definition of p_i at (0, b):

p_1 = f(1, b − ½) = b² + a²(b − ½)² − a²b²
    = b² − a²b + a²/4

For the second part of the curve, suppose the pixel (x_j, y_j) has just been scan converted upon entering step j. The next pixel is either U = (x_j, y_j − 1) or V = (x_j + 1, y_j − 1). The midpoint of the horizontal line connecting U and V is used to define the decision parameter:

q_j = f(x_j + ½, y_j − 1)
q_j = b²(x_j + ½)² + a²(y_j − 1)² − a²b²

If q_j < 0, the midpoint is inside the curve and we choose pixel V.
If q_j ≥ 0, the midpoint is outside the curve and we choose pixel U.

The decision parameter for the next step is:

q_{j+1} = f(x_{j+1} + ½, y_{j+1} − 1)
        = b²(x_{j+1} + ½)² + a²(y_{j+1} − 1)² − a²b²

Since y_{j+1} = y_j − 1, we have

q_{j+1} − q_j = b²[(x_{j+1} + ½)² − (x_j + ½)²] + a²[(y_{j+1} − 1)² − (y_j − 1)²]

q_{j+1} = q_j + b²[(x_{j+1} + ½)² − (x_j + ½)²] − 2a²y_{j+1} + a²

If V is the chosen pixel (q_j < 0), we have x_{j+1} = x_j + 1. If U is the chosen pixel (q_j ≥ 0), we have x_{j+1} = x_j. Thus we can express q_{j+1} in terms of q_j and (x_{j+1}, y_{j+1}):

q_{j+1} = q_j + 2b²x_{j+1} − 2a²y_{j+1} + a²      if q_j < 0
q_{j+1} = q_j − 2a²y_{j+1} + a²                   if q_j ≥ 0

The initial value for the recursive expression is computed using the original definition of q_j and the coordinates (x_k, y_k) of the last pixel chosen for part 1 of the curve:

q_1 = f(x_k + ½, y_k − 1) = b²(x_k + ½)² + a²(y_k − 1)² − a²b²

Midpoint ellipse algorithm

The midpoint ellipse method is applied throughout the first quadrant in two parts. We take the start position at (0, ry) and step along the ellipse path in clockwise order throughout the first quadrant.

The ellipse function can be defined as:

fellipse(x, y) = ry²x² + rx²y² − rx²ry²

which has the following properties:

1. fellipse(x, y) < 0 means (x, y) is inside the ellipse boundary.
2. fellipse(x, y) > 0 means (x, y) is outside the ellipse boundary.
3. fellipse(x, y) = 0 means (x, y) is on the ellipse boundary.
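The two-part derivation above translates directly into code. The following is a compact C sketch of the midpoint ellipse algorithm under the notation used here (a = rx, b = ry, ellipse centred at the origin); the plot4() helper is invented for the example and prints each point mirrored into all four quadrants instead of drawing it.

#include <stdio.h>

/* Stand-in helper: mirror a first-quadrant point into all four
   quadrants; stubbed to print instead of setting pixels. */
static void plot4(int x, int y) {
    printf("(%d,%d) (%d,%d) (%d,%d) (%d,%d)\n", x, y, -x, y, x, -y, -x, -y);
}

/* Midpoint ellipse scan conversion for x^2/rx^2 + y^2/ry^2 = 1. */
void midpoint_ellipse(int rx, int ry) {
    long rx2 = (long)rx * rx, ry2 = (long)ry * ry;
    int x = 0, y = ry;

    /* Region 1: |slope| < 1. Initial p1 = b^2 - a^2*b + a^2/4. */
    long p = ry2 - rx2 * ry + rx2 / 4;
    while (2 * ry2 * x < 2 * rx2 * y) {       /* stop where slope = -1 */
        plot4(x, y);
        x++;
        if (p < 0) {
            p += 2 * ry2 * x + ry2;                     /* pixel T chosen */
        } else {
            y--;
            p += 2 * ry2 * x + ry2 - 2 * rx2 * y;       /* pixel S chosen */
        }
    }

    /* Region 2: q1 = f(xk + 1/2, yk - 1), integer-rounded via ry2/4. */
    long q = ry2 * ((long)x * x + x) + ry2 / 4
           + rx2 * (long)(y - 1) * (y - 1) - rx2 * ry2;
    while (y >= 0) {
        plot4(x, y);
        y--;
        if (q > 0) {
            q += rx2 - 2 * rx2 * y;                     /* pixel U chosen */
        } else {
            x++;
            q += 2 * ry2 * x - 2 * rx2 * y + rx2;       /* pixel V chosen */
        }
    }
}

int main(void) {
    midpoint_ellipse(8, 6);
    return 0;
}

The two loop-update pairs are exactly the incremental expressions for p_{i+1} and q_{j+1} derived above, evaluated with integer arithmetic only.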

Properties of ellipse

An ellipse is defined as the locus of a point that moves in a plane in such a manner that the ratio of its distance from a fixed point (the focus) in the same plane to its distance from a fixed straight line (the directrix) is always constant, and this constant is always less than unity.

If the distances from any point P = (x, y) on the ellipse to the two foci are labeled d1 and d2, then the general equation of the ellipse can be stated as d1 + d2 = constant.

Expressing the distances d1 and d2 in terms of the focal coordinates F1 and F2 gives

Ax² + By² + Cxy + Dx + Ey + F = 0

where A, B, C, D, E, and F are evaluated in terms of the focal coordinates and the dimensions of the major and minor axes of the ellipse.

Scan conversion algorithm


It is the process of representing graphics objects as a collection of pixels. Graphics objects are continuous; the pixels used are discrete. Each pixel can be in either an on or an off state.
The circuitry of the computer's video display device is capable of converting binary values (0, 1) into pixel-off and pixel-on information: 0 is represented by pixel off, and 1 by pixel on. Using this ability, a graphics computer represents pictures as patterns of discrete dots.

Any model of graphics can be reproduced with a dense matrix of dots or points. Most human beings think of graphics objects as points, lines, circles, and ellipses. Many algorithms have been developed for generating such graphical objects.

Advantages of developing algorithms for scan conversion

1. Algorithms can generate graphics objects at a faster rate.


2. Using algorithms memory can be used efficiently.
3. Algorithms can develop a higher level of graphical objects.

Examples of objects which can be scan converted

1. Point
2. Line
3. Sector
4. Arc
5. Ellipse
6. Rectangle
7. Polygon
8. Characters
9. Filled Regions

The process of converting is also called rasterization. The implementation of these algorithms varies from one computer system to another. Some algorithms are implemented in software; some are performed using hardware or firmware; and some use various combinations of hardware, firmware, and software.

Pixel or Pel:

The term pixel is a short form of picture element. It is also called a point or dot. It is the smallest picture unit accepted by display devices. A picture is constructed from hundreds of such pixels. Pixels are generated using commands; lines, circles, arcs, characters, and curves are drawn with closely spaced pixels. To display a digit or letter, a matrix of pixels is used.
The closer the dots or pixels are, the better the quality of the picture: closely spaced pixels make the picture crisper, and it will not appear jagged and unclear. So the quality of the picture is directly proportional to the density of pixels on the screen.

Pixels are also defined as the smallest addressable unit or element of the screen. Each pixel can be
assigned an address as shown in fig:

Different graphics objects can be generated by setting different intensities and different colors of pixels. Each pixel has a coordinate value, represented using a row and a column.

P(5, 5) is used to represent the pixel in the 5th row and the 5th column. Each pixel has an intensity value which is stored in a part of computer memory called the frame buffer (also called the refresh buffer). This memory is the storage area for pixel values, using which pictures are displayed; it is also called digital memory. Inside the buffer, the image is stored as a pattern of binary digits, 0 or 1, so an array of 0s and 1s represents the picture. In black-and-white monitors, black pixels are represented using 1s and white pixels using 0s.
In systems having one bit per pixel, the frame buffer is called a bitmap. In systems with multiple bits per pixel, it is called a pixmap.
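As a small illustration of this idea, a one-bit-per-pixel frame buffer can be modelled as a packed array of bits. In the C sketch below, all names (framebuf, set_pixel, get_pixel) are invented for the example; it packs an 8x8 bitmap into bytes, sets one pixel, and dumps the result.

#include <stdio.h>
#include <string.h>

#define WIDTH  8
#define HEIGHT 8

/* One-bit-per-pixel frame buffer (a "bitmap"): 8 pixels per byte. */
static unsigned char framebuf[WIDTH * HEIGHT / 8];

static void set_pixel(int x, int y, int on) {
    int bit = y * WIDTH + x;               /* linear bit index of (x, y) */
    if (on)  framebuf[bit / 8] |=  (1u << (bit % 8));
    else     framebuf[bit / 8] &= ~(1u << (bit % 8));
}

static int get_pixel(int x, int y) {
    int bit = y * WIDTH + x;
    return (framebuf[bit / 8] >> (bit % 8)) & 1;
}

int main(void) {
    memset(framebuf, 0, sizeof framebuf);
    set_pixel(3, 4, 1);                    /* turn one pixel on */
    /* dump the bitmap: '#' for on (1), '.' for off (0) */
    for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++)
            putchar(get_pixel(x, y) ? '#' : '.');
        putchar('\n');
    }
    return 0;
}

A pixmap generalizes this by storing several bits (an intensity or color value) per pixel instead of a single on/off bit.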

Fill area Primitives


Filled-area primitives are used to fill an area, image, or polygon with solid colors. Filling a polygon means highlighting its pixels with different solid colors. The following are two filled-area primitives:
1. Seed Fill Algorithm
2. Scan Fill Algorithm

Seed Fill Algorithm:

In the seed fill algorithm, we select a starting point, called the seed, inside the boundary of the polygon. The seed fill approach can be further classified into two algorithms: flood fill and boundary fill.
Flood Fill Algorithm:

In the flood-fill algorithm, a seed point is taken inside the polygon. The flood fill algorithm is used when the polygon has multiple color boundaries. In this method, the connected pixels of the previous color are reassigned the selected fill color. This can be done using two approaches: either 4-connected or 8-connected.

Algorithm:

void floodfill(int x, int y, int fillcolor, int previouscolor)
{
    if (getpixel(x, y) == previouscolor)
    {
        setpixel(x, y, fillcolor);
        floodfill(x + 1, y, fillcolor, previouscolor);
        floodfill(x - 1, y, fillcolor, previouscolor);
        floodfill(x, y + 1, fillcolor, previouscolor);
        floodfill(x, y - 1, fillcolor, previouscolor);
    }
}

getpixel() - returns the color of the specified pixel.
setpixel() - sets the specified pixel to the given color.

Advantages:

 It is an easy method for filling colors in computer graphics.

 It fills the same color everywhere inside the boundary.
Disadvantages:

 It fails with large-area polygons, because the recursion becomes very deep.

 It is a slow method of filling the area.
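To make the recursion concrete, here is a self-contained C sketch that runs the 4-connected flood fill on a small character grid instead of a real screen; the grid and the getpixel/setpixel helpers are stand-ins defined just for this example.

#include <stdio.h>

#define W 8
#define H 6

/* A tiny "screen": '#' is a boundary, '.' is the old interior color. */
static char grid[H][W + 1] = {
    "........",
    ".######.",
    ".#....#.",
    ".#....#.",
    ".######.",
    "........"
};

static char getpixel(int x, int y)          { return grid[y][x]; }
static void setpixel(int x, int y, char c)  { grid[y][x] = c; }

/* 4-connected flood fill: recolor every pixel reachable from (x, y)
   that currently has previouscolor. */
static void floodfill(int x, int y, char fillcolor, char previouscolor) {
    if (x < 0 || x >= W || y < 0 || y >= H) return;   /* stay on screen */
    if (getpixel(x, y) != previouscolor) return;
    setpixel(x, y, fillcolor);
    floodfill(x + 1, y, fillcolor, previouscolor);
    floodfill(x - 1, y, fillcolor, previouscolor);
    floodfill(x, y + 1, fillcolor, previouscolor);
    floodfill(x, y - 1, fillcolor, previouscolor);
}

int main(void) {
    floodfill(3, 3, '*', '.');              /* seed inside the box */
    for (int y = 0; y < H; y++) puts(grid[y]);
    return 0;
}

Note the added bounds check: without it the recursion would walk off the edge of the buffer. The recursion depth grows with the filled area, which is one reason the naive recursive version is slow and fails on large polygons, as noted above.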

Boundary Fill Algorithm:

The boundary fill algorithm uses polygon boundaries. It is also called an edge fill algorithm. In this method, a seed point is taken inside the polygon boundary, and the algorithm checks whether the adjacent or neighboring pixel is colored or not. If the adjacent pixel is not yet colored, it is filled. This can be done using two approaches: 4-connected or 8-connected.

Scan-line polygon filling


Polygon is an ordered list of vertices as shown in the following figure. For filling polygons with
particular colors, you need to determine the pixels falling on the border of the polygon and those
which fall inside the polygon. In this chapter, we will see how we can fill polygons using different
techniques.

Scan Line Algorithm

This algorithm works by intersecting the scanline with polygon edges and fills the polygon between pairs of intersections. The following steps depict how this algorithm works.

Step 1 − Find out the Ymin and Ymax from the given polygon.

Step 2 − Intersect the scanline with each edge of the polygon from Ymin to Ymax. Name each intersection point of the polygon. As per the figure shown above, they are named p0, p1, p2, p3.
Step 3 − Sort the intersection points in increasing order of the X coordinate, i.e. (p0, p1), (p1, p2), and (p2, p3).

Step 4 − Fill all those pairs of coordinates that are inside the polygon and ignore the alternate pairs.
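A compact C sketch of these steps follows; it is illustrative only, and the polygon, grid size, and the fill_scanlines name are invented for the example. For every scanline it computes the intersections with the polygon edges, sorts them by x, and fills between alternate pairs.

#include <stdio.h>

#define W 20
#define H 12

static char grid[H][W];

/* Scan-line fill of a polygon with vertices (vx[i], vy[i]), i = 0..n-1. */
static void fill_scanlines(const double *vx, const double *vy, int n) {
    for (int y = 0; y < H; y++) {
        double xs[32];
        int count = 0;
        double yc = y + 0.5;               /* sample scanline at pixel centre */
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;           /* edge from vertex i to vertex j */
            /* half-open crossing rule avoids double-counting vertices */
            if ((vy[i] <= yc && vy[j] > yc) || (vy[j] <= yc && vy[i] > yc)) {
                double t = (yc - vy[i]) / (vy[j] - vy[i]);
                xs[count++] = vx[i] + t * (vx[j] - vx[i]);
            }
        }
        /* sort intersections by x (insertion sort: few points per line) */
        for (int a = 1; a < count; a++)
            for (int b = a; b > 0 && xs[b - 1] > xs[b]; b--) {
                double tmp = xs[b]; xs[b] = xs[b - 1]; xs[b - 1] = tmp;
            }
        /* fill between alternate pairs (p0,p1), (p2,p3), ... */
        for (int k = 0; k + 1 < count; k += 2)
            for (int x = (int)(xs[k] + 0.5); x < (int)(xs[k + 1] + 0.5); x++)
                if (x >= 0 && x < W) grid[y][x] = '#';
    }
}

int main(void) {
    double vx[] = { 2, 17, 10 };
    double vy[] = { 2,  4, 10 };           /* a triangle */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) grid[y][x] = '.';
    fill_scanlines(vx, vy, 3);
    for (int y = 0; y < H; y++) { fwrite(grid[y], 1, W, stdout); putchar('\n'); }
    return 0;
}

Filling between alternate pairs of sorted intersections is what keeps the fill inside the polygon: the even-odd crossings alternate between entering and leaving it.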

Flood Fill Algorithm

Sometimes we come across an object where we want to fill the area and its boundary with different
colors. We can paint such objects with a specified interior color instead of searching for particular
boundary color as in boundary filling algorithm.

Instead of relying on the boundary of the object, it relies on the fill color. In other words, it replaces
the interior color of the object with the fill color. When no more pixels of the original interior color
exist, the algorithm is completed.

Once again, this algorithm relies on the Four-connect or Eight-connect method of filling in the
pixels. But instead of looking for the boundary color, it is looking for all adjacent pixels that are a
part of the interior.

Boundary Fill Algorithm

The boundary fill algorithm works as its name suggests: it picks a point inside an object and starts to fill until it hits the boundary of the object. The color of the boundary and the fill color must be different for this algorithm to work.

In this algorithm, we assume that color of the boundary is same for the entire object. The boundary
fill algorithm can be implemented by 4-connected pixels or 8-connected pixels.

4-Connected Polygon

In this technique 4-connected pixels are used as shown in the figure. We are putting the pixels
above, below, to the right, and to the left side of the current pixels and this process will continue
until we find a boundary with different color.
Algorithm

Step 1 − Initialize the values of the seed point (seedx, seedy), the fill color fcol, and the default (interior) color dcol.

Step 2 − Define the boundary values of the polygon.

Step 3 − Check if the current seed point is of the default color; if so, repeat steps 4 and 5 until the boundary pixels are reached.

Step 4 − Change the default color to the fill color at the seed point.

setPixel(seedx, seedy, fcol)

Step 5 − Recursively follow the procedure with the four neighborhood points.

FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)

Step 6 − Exit

There is a problem with this technique. Consider the case shown below, where we tried to fill the entire region but the image is filled only partially. In such cases, the 4-connected pixel technique cannot be used.
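For comparison with the flood-fill sketch earlier, here is a minimal 4-connected boundary fill in C; the grid and the boundary_fill name are invented for the example. Unlike flood fill, it stops at a boundary color instead of looking for the old interior color, so an interior containing mixed colors (the 'x' below) is still repainted.

#include <stdio.h>

#define W 8
#define H 6

/* 'B' is the boundary color; anything else inside may be recolored. */
static char grid[H][W + 1] = {
    "........",
    ".BBBBBB.",
    ".B..x.B.",
    ".B....B.",
    ".BBBBBB.",
    "........"
};

/* 4-connected boundary fill: recolor until the boundary color is hit. */
static void boundary_fill(int x, int y, char fill, char boundary) {
    if (x < 0 || x >= W || y < 0 || y >= H) return;
    char c = grid[y][x];
    if (c == boundary || c == fill) return;   /* stop at boundary or filled */
    grid[y][x] = fill;
    boundary_fill(x + 1, y, fill, boundary);
    boundary_fill(x - 1, y, fill, boundary);
    boundary_fill(x, y + 1, fill, boundary);
    boundary_fill(x, y - 1, fill, boundary);
}

int main(void) {
    boundary_fill(3, 3, '*', 'B');            /* seed inside the box */
    for (int y = 0; y < H; y++) puts(grid[y]);
    return 0;
}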
8-Connected Polygon

In this technique 8-connected pixels are used as shown in the figure. We are putting pixels above,
below, right and left side of the current pixels as we were doing in 4-connected technique.

In addition to this, we are also putting pixels in diagonals so that entire area of the current pixel is
covered. This process will continue until we find a boundary with different color.

Algorithm

Step 1 − Initialize the values of the seed point (seedx, seedy), the fill color fcolor, and the default color dcol.

Step 2 − Define the boundary values of the polygon.

Step 3 − Check if the current seed point is of the default color; if so, repeat steps 4 and 5 until the boundary pixels are reached.

If getpixel(x,y) = dcol then repeat step 4 and 5

Step 4 − Change the default color with the fill color at the seed point.

setPixel(seedx, seedy, fcol)

Step 5 − Recursively follow the procedure with the eight neighbourhood points.


FloodFill (seedx – 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
FloodFill (seedx – 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy - 1, fcol, dcol)
FloodFill (seedx – 1, seedy - 1, fcol, dcol)

Step 6 − Exit

The 4-connected pixel technique failed to fill the area as marked in the following figure which
won’t happen with the 8-connected technique.

Inside- Outside Test


In computer graphics, the inside-outside test is performed to check whether a given point lies inside a closed polygon or not. There are mainly two methods to determine whether a point is interior or exterior to a polygon:

1. Even-Odd / Odd-Even Rule or Odd Parity Rule

2. Winding Number Method


Even-Odd Rule / Odd Parity Rule
It is also known as the crossing number or ray casting algorithm. The algorithm follows a basic observation: every time a ray coming from infinity crosses the border of the polygon, it alternates between passing from outside to inside and from inside to outside. Hence, a point with an even number of crossings lies outside the polygon, and a point with an odd number of crossings lies inside it.

Algorithm:
1. Construct a line segment from the point to be examined to a point outside the polygon.

2. Count the number of intersections of the line segment with the polygon boundaries.

3. If the number of intersections is odd, the point lies inside the polygon.

4. Else, the point lies outside the polygon.

This test fails if the line segment intersects the polygon at a vertex. To handle this, a few modifications are made: look at the other endpoints of the two polygon edges that meet at that vertex.

 If the endpoints lie on the same side of the constructed line segment, an even number of intersections (two) is counted for that intersection point.
 If the endpoints lie on opposite sides of it, an odd number of intersections (one) is counted.
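The even-odd rule is short to code. Below is an illustrative C function (the names inside_polygon, vx, vy are invented for the example) that casts a horizontal ray from the test point to the right and toggles an inside flag on each edge crossing; the half-open comparison on y handles the shared-vertex case described above.

#include <stdio.h>

/* Even-odd (ray casting) point-in-polygon test.
   Casts a horizontal ray from (px, py) to +infinity and counts
   edge crossings; an odd count means the point is inside. */
static int inside_polygon(const double *vx, const double *vy, int n,
                          double px, double py) {
    int inside = 0;
    for (int i = 0, j = n - 1; i < n; j = i++) {
        /* half-open rule on y avoids double-counting shared vertices */
        if ((vy[i] > py) != (vy[j] > py)) {
            double xcross = vx[i]
                + (py - vy[i]) * (vx[j] - vx[i]) / (vy[j] - vy[i]);
            if (px < xcross) inside = !inside;   /* crossing on the right */
        }
    }
    return inside;
}

int main(void) {
    double vx[] = { 0, 10, 10, 0 };
    double vy[] = { 0, 0, 10, 10 };                    /* a square */
    printf("%d\n", inside_polygon(vx, vy, 4, 5, 5));   /* 1: inside  */
    printf("%d\n", inside_polygon(vx, vy, 4, 15, 5));  /* 0: outside */
    return 0;
}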

Boundary and floodfill


The flood fill algorithm is also known as a seed fill algorithm. It determines the area connected to a given node in a multi-dimensional array. The algorithm works by filling or recolouring a selected area whose interior may contain several different colours, up to the boundary of the image. It is often illustrated by a picture having a region bordered by several distinct colour regions. To paint such regions we replace the interior with a specified colour instead of discovering a particular boundary colour value; this is the reason the approach is known as the flood-fill algorithm. There are two methods that can be used for creating a continuous boundary by connecting pixels: the 4-connected and 8-connected approaches. In the 4-connected method, a pixel can have at most four neighbours, positioned to the right, left, above, and below the current pixel. In the 8-connected method it can have eight: the four diagonal neighbours are checked in addition. Either of the two methods can be used to repaint the interior points.

Boundary-fill algorithm: It follows an approach where the region filling begins from a point inside the region and paints the interior outward towards the boundary. If the boundary contains a single colour, the fill algorithm continues in the outward direction, pixel by pixel, until the boundary colour is encountered. The boundary-fill algorithm is most often implemented in interactive painting packages, where interior points are easily chosen. The boundary fill starts by accepting as input the coordinates of an interior point (x, y), a boundary colour, and a fill colour. Beginning from (x, y), the method checks neighbouring locations to identify whether they are of the boundary colour. If they are not of the boundary colour, they are painted with the fill colour, and their adjacent pixels are tested against the same condition. The process ends when all pixels up to the boundary colour of the region have been checked.
Difference Between Flood-fill and Boundary-fill Algorithm:

1. The flood-fill algorithm can process an image containing more than one boundary colour, whereas the boundary-fill algorithm can only process an image containing a single boundary colour.

2. The flood-fill algorithm is comparatively slower than the boundary-fill algorithm.

3. In the flood-fill algorithm, a new colour can be used to paint the interior portion, replacing the old one; in the boundary-fill algorithm, interior points are painted by continuously searching for the boundary colour.

4. The flood-fill algorithm requires a huge amount of memory, while memory consumption is relatively low in the boundary-fill algorithm.

5. Flood-fill algorithms are simple and efficient, while the complexity of the boundary-fill algorithm is high.

Character Generation
In the world of video production, a character generator (CG) is a software application that produces static or animated text for use in 2D and 3D videos; it can be used to create anything from simple lower-thirds text to full-blown 3D animations. A character generator is also a tool used to create digital characters for video games, movies, and other digital media. CGs are created by artists who design the characters and then use software to bring them to life. There are many different types of character generators, but the most common is the 3D character generator. This type of CG allows artists to create realistic-looking characters that can be used in movies and video games. 3D character generators are usually very expensive and require a lot of experience to use.
2D character generators are also common, but they are not as realistic as 3D CGs. 2D CGs are often used for cartoons and other types of artwork, and they are usually less expensive than 3D character generators and easier to use. No matter what type of character generator you use, the process of creating a digital character generally follows the same steps: first, the artist designs the character; then, they build the model using software; finally, they animate the character using motion capture or keyframing techniques.
Working of Character Generators:

A character generator, or CG, is a device that creates graphic images and animations for use in
video productions. The images are usually created from scratch by a team of artists, or they may
be taken from a pre-existing database of images. The animations are created by an animator, who
designs the movement of the characters and objects in the scene. The CG is used to generate the
images and animations that are then combined with live-action footage or other graphics to create
a final video production. Character generators are often used in television commercials, music
videos, video games, and movies.
Advantages of Character Generator:
There are many benefits to using a character generator when creating characters for your stories.
Perhaps the most obvious benefit is that it can save you a lot of time. If you’re not experienced
in drawing or creating digital art, it can be very time-consuming to create believable and detailed
characters. With a character generator, you can simply input your desired characteristics and
have a professional-looking character in minutes.

Types of Character Generators:

There are several different types of character generators, each with its own unique capabilities.
Here are a few of the most common:
1. 2D character generators create two-dimensional characters that can be used in a variety of
applications, such as video games or animated films.
2. 3D character generators create three-dimensional characters that can be used in a variety of
applications, such as video games or animated films.
3. Motion capture character generators use motion capture technology to record the movement
of real people and then generate realistic character animations from that data.
4. Facial recognition character generators use facial recognition algorithms to generate
characters that look like specific people or celebrities.

Line Attributes
Computer graphics is an important topic in the computer science domain. It is a type of coding practice that produces the required output images. One should have a good imagination to master computer graphics. Computer graphics programs are mainly written in the C or C++ programming language. Using either language, users can develop eye-catching output images; such programs generally use the computer's graphics card to produce the images. There are colour options as well. Using calculation and the proper use of programming knowledge, users can draw any structure, from a simple car to the Eiffel Tower. There are many inbuilt functions in computer graphics, and line is one of them.
Line Attributes In Computer Graphics:

The line is one of the major inbuilt functions in computer graphics; it helps to make images more
interactive and interesting. Many other inbuilt functions are present as well, such as circle, arc,
and ellipse, and all of them help to build up a structure. Proper use of these functions helps to
draw particular images.
Since line is a function, it takes some attributes or arguments that position the line. There are
mainly four coordinates: two for the starting point and two for the ending point.
Syntax:
line(int X1, int Y1, int X2, int Y2);
 int X1: The starting X coordinate of the line, i.e., the horizontal coordinate of the starting
point. It is always an integer.
 int Y1: The starting Y coordinate of the line, i.e., the vertical coordinate of the starting point.
It is always an integer.
 int X2: The ending X coordinate of the line, i.e., the horizontal coordinate of the ending
point. It is also an integer.
 int Y2: The ending Y coordinate of the line, i.e., the vertical coordinate of the ending point.
It is also an integer.
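As a minimal sketch, the following program draws a single line using the BGI graphics.h library
assumed throughout these notes (Turbo C or WinBGIm); the coordinates are arbitrary illustrative
values, and under classic Turbo C the third argument of initgraph may need the BGI driver path:

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;       /* auto-detect the graphics driver */
    initgraph(&gd, &gm, "");   /* enter graphics mode; Turbo C may need
                                  the BGI path, e.g. "C:\\TC\\BGI" */

    line(100, 100, 300, 200);  /* line from (100,100) to (300,200) */

    getch();                   /* wait for a key press */
    closegraph();              /* restore text mode */
    return 0;
}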

Associated Functions With Line Attribute:

The line function can only draw lines. Whenever there is a need to draw a line in computer
graphics, we take the help of the line function. But there is often also a need to customize the
line. Suppose the user wants to draw a red car: the lines must then be drawn in red. Or a line
may need to be thicker in some places, so we need to increase the line width there. None of this
can be done by the plain line function alone; for these purposes we need some more functions.
 setcolor(color): This function is needed to draw a colourful line. Using it along with the
line() function, users can draw a line in any colour. The colour name is provided as the
argument, and the function must be placed before the line function so that it takes effect
on the output.
 setlinestyle(int linestyle, unsigned pattern, int thickness): This function is used for two
main reasons. First, with its help we can draw different styles of line, such as a dotted line.
Second, it helps to make a thick line: the last argument sets the line width, while the first
two arguments are needed when we want some other line pattern.
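Below is a small illustrative sketch, again assuming the BGI graphics.h library, that combines
these attribute functions with line(); the specific colours, pattern constants, and coordinates
are demonstration choices only:

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");

    setcolor(RED);                             /* lines drawn after this are red */
    setlinestyle(DASHED_LINE, 0, THICK_WIDTH); /* dashed pattern, 3-pixel width */
    line(50, 50, 250, 50);                     /* a thick dashed red line */

    setcolor(GREEN);
    setlinestyle(DOTTED_LINE, 0, NORM_WIDTH);  /* dotted pattern, 1-pixel width */
    line(50, 100, 250, 100);                   /* a thin dotted green line */

    getch();
    closegraph();
    return 0;
}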

Area fill attributes


In computer graphics, there is a special facility known as the fill function, used to colour a
certain area. There are two components of the fill-area facility, which together are responsible
for colouring a bounded area. These components need to be placed in the proper sequence;
otherwise, there will be an issue in colouring the area.
1. setfillstyle(int pattern, int color): This is the main component of the fill-area facility. The
function has two arguments: a pattern argument and a color argument. There are thirteen
patterns available for this function, and all of them can be used to create new designs in
any bounded area. The color argument gives the colour of the design; any colour available
in computer graphics can be used there.

2. floodfill(int x, int y, int border_color): This is the other component of the fill-area facility,
used after the area has been declared. Two coordinates are passed, and they must be a point
inside the figure; that is, these coordinates indicate the bounded area that needs to be
coloured. The third argument is the border colour: filling spreads from the given point until
it reaches a boundary of that colour, and most of the time the border colour is white. The
numerical representation of the colour can also be used there.

Different Patterns Of The Setfillstyle() Function:

Different patterns are available in the setfillstyle() function, and they can be used along with a
bounded figure. After calling setfillstyle(), we need to call the floodfill() function, which
completes the implementation. Here, a rectangle is used as the bounded area, and the patterns
are demonstrated one by one.
In each case, the pattern in the setfillstyle() function should be chosen from the pattern list
mentioned below, and the programmer needs to specify the colour of the style; here, the GREEN
colour is used for demonstration purposes. After calling setfillstyle(), a bounded area is drawn
(here, a rectangle), and then the floodfill() function should be called. The programmer needs to
provide coordinates that lie inside the bounded area so that the inner area is coloured; otherwise,
the external area will be coloured instead.
EMPTY_FILL Pattern: This pattern does not colour anything; it is just like a null pattern. No
colour and no design appear inside the area, so this pattern is rarely used in computer graphics.
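As an illustrative sketch under the same BGI assumptions, the following fills a rectangle with the
SLASH_FILL pattern in GREEN; the pattern and coordinates are arbitrary demonstration choices:

#include <graphics.h>
#include <conio.h>

int main()
{
    int gd = DETECT, gm;
    initgraph(&gd, &gm, "");

    setfillstyle(SLASH_FILL, GREEN);  /* choose pattern and fill colour first */
    rectangle(100, 100, 250, 200);    /* bounded area, drawn with the current
                                         drawing colour (white by default)   */
    floodfill(150, 150, WHITE);       /* seed point inside the rectangle;
                                         filling stops at the white border   */

    getch();
    closegraph();
    return 0;
}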

Character Attributes
Bundled attributes for characters are a set of attributes that are used to define a character in a
computer graphic. These attributes can include the character’s height, weight, hair colour, eye
colour, and other physical features. They can also include the character’s personality traits, skills,
and other information that can help to define the character. Bundled attributes are a type of
attribute that can be applied to characters in computer graphics. These attributes are typically
used to adjust the appearance of a character and can be used to make a character look more
realistic or stylized. Bundled attributes can be found in many different software packages, and
can be used to create a wide variety of looks for characters.
Components:

There are four main components of bundled attributes for characters in computer graphics:
 Position: The position component of bundled attributes for characters defines the position
of the character in two-dimensional or three-dimensional space. The position is typically
defined by a pair of coordinates, such as (x, y) for two-dimensional space or (x, y, z) for
three-dimensional space. The position component can also be used to define the character’s
orientation in space, such as by specifying a rotation angle
.
 Size: The SizeComponent is one of the bundled attributes for characters. It essentially
determines how big or small a character is. The size of a character can impact various things
in the game, such as how much damage they can take, how much they can lift, and so on.
There are a variety of different sizes a character can be, from tiny to huge. The
SizeComponent is an important aspect of character creation and should be given careful
consideration.

 Shape: The Shape component of Bundled Attributes for Characters defines the physical form
of a character. This includes the character’s height, weight, body type, and other physical
features. The Shape component is important for determining how a character will interact
with the environment and other characters. For example, a character’s height may affect their
ability to reach certain objects or areas. Additionally, the character’s weight may impact their
movement speed and how much damage they can take in combat.

 Color: The colour consists of three components namely Red (R), Green (G) and Blue (B)
components. These components are combined in various proportions to form a particular
colour. Each of these components ranges from 0 to 1.

Colour systems are classified in terms of the three colours. The colour systems are:

 R, G, B System: The red, green and blue components are used in this system, and colours
are represented in terms of these three components. The R, G and B components are
combined in various proportions to form a particular colour; once a colour is specified in
terms of its R, G and B components, the colour is fully determined.

 H, S, V System: The Hue (H), saturation (S) and value (V) components are used in this
system. The Hue (H) component specifies the colour. The saturation (S) component specifies
the purity of the colour. The value (V) component specifies the brightness of the colour. For
more details on the HSV model, you can refer HSV Color Model in Computer Graphics.
 Y, I, Q System: The luminance (Y), chrominance I and chrominance Q components are used
in this system. The luminance (Y) component specifies the brightness of the colour, while
the chrominance I and Q components together specify its hue and saturation. The Y
component is a weighted function of the R, G and B components
(Y = 0.299R + 0.587G + 0.114B), and the I and Q components are likewise linear
combinations of R, G and B. A colour is thus represented by its luminance (Y) component
together with its chrominance I and Q components.

Applications:

The bundled attributes of characters can be used to create different looks for the characters. For
example, the bundled attributes of characters can be used to create a cartoon look or a realistic
look. They are used to create different styles for the characters. For example, the bundled
attributes of characters can be used to create a classic style or a modern style.
Unit 2
2D Transformation
Transformation means changing some graphics into something else by applying rules. We can
have various types of transformations such as translation, scaling up or down, rotation, shearing,
etc. When a transformation takes place on a 2D plane, it is called 2D transformation.

Transformations play an important role in computer graphics to reposition the graphics on the
screen and change their size or orientation.

Translation

A translation moves an object to a different position on the screen. You can translate a point in 2D
by adding the translation coordinates (tx, ty) to the original coordinates (X, Y) to get the new
coordinates (X′, Y′).

From the above figure, you can write that −

X’ = X + tx

Y’ = Y + ty

The pair (tx, ty) is called the translation vector or shift vector. The above equations can also be
represented using column vectors:

P = [X  Y]^T    P′ = [X′  Y′]^T    T = [tx  ty]^T

We can write it as −

P’ = P + T
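For example, translating the point P = (5, 8) by the translation vector T = (3, −2) gives
P′ = (5 + 3, 8 + (−2)) = (8, 6).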

Rotation

In rotation, we rotate the object by a particular angle θ (theta) about the origin. From the following
figure, we can see that the point P(X, Y) is located at angle φ from the horizontal X axis, at
distance r from the origin.

Let us suppose you want to rotate it by the angle θ. After rotating it to a new location, you will get
a new point P′(X′, Y′).
Using standard trigonometry, the original coordinates of point P(X, Y) can be represented as −

X = r cosφ ......(1)

Y = r sinφ ......(2)

In the same way we can represent the point P′(X′, Y′) as −

x′ = r cos(φ + θ) = r cosφ cosθ − r sinφ sinθ ......(3)

y′ = r sin(φ + θ) = r cosφ sinθ + r sinφ cosθ ......(4)

Substituting equations (1) and (2) into (3) and (4) respectively, we get

x′ = x cosθ − y sinθ

y′ = x sinθ + y cosθ

Representing the above equations in matrix form,

[X′]   [cosθ  −sinθ] [X]
[Y′] = [sinθ   cosθ] [Y]

P′ = R · P

where R is the rotation matrix

R = [cosθ  −sinθ]
    [sinθ   cosθ]

The rotation angle can be positive or negative.

For a positive (anticlockwise) rotation angle, we can use the above rotation matrix. However, for
a negative (clockwise) rotation angle, the matrix changes as shown below −

R = [cos(−θ)  −sin(−θ)]   [ cosθ  sinθ]
    [sin(−θ)   cos(−θ)] = [−sinθ  cosθ]     (since cos(−θ) = cosθ and sin(−θ) = −sinθ)
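For example, rotating the point (1, 0) by θ = 90° about the origin gives
x′ = 1·cos 90° − 0·sin 90° = 0 and y′ = 1·sin 90° + 0·cos 90° = 1, i.e., the point (0, 1), as
expected for an anticlockwise quarter turn.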

Scaling
To change the size of an object, scaling transformation is used. In the scaling process, you either
expand or compress the dimensions of the object. Scaling can be achieved by multiplying the
original coordinates of the object with the scaling factor to get the desired result.

Let us assume that the original coordinates are (X, Y), the scaling factors are (SX, SY), and the
produced coordinates are (X′, Y′). This can be mathematically represented as shown below −

X′ = X · SX and Y′ = Y · SY

The scaling factors SX and SY scale the object in the X and Y directions respectively. The above
equations can also be represented in matrix form as below −

[X′]   [SX  0 ] [X]
[Y′] = [0   SY] [Y]

OR

P′ = S · P

Where S is the scaling matrix. The scaling process is shown in the following figure.

If we provide values less than 1 to the scaling factor S, then we can reduce the size of the object.
If we provide values greater than 1, then we can increase the size of the object.
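For example, scaling the point (4, 6) with factors SX = 0.5 and SY = 2 gives
(4 × 0.5, 6 × 2) = (2, 12): the object is compressed in the X direction and stretched in Y.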
Reflection
Reflection is the mirror image of original object. In other words, we can say that it is a rotation
operation with 180°. In reflection transformation, the size of the object does not change.

The following figures show reflections with respect to X and Y axes, and about the origin
respectively.

Shear
A transformation that slants the shape of an object is called a shear transformation. There are
two shear transformations: X-shear and Y-shear. One shifts the X coordinate values and the other
shifts the Y coordinate values. However, in both cases only one coordinate changes its value and
the other preserves it. Shearing is also termed skewing.

X-Shear

The X-Shear preserves the Y coordinate and changes are made to X coordinates, which causes the
vertical lines to tilt right or left as shown in below figure.
The transformation matrix for X-shear can be represented as −

Xsh = [1  shx  0]
      [0   1   0]
      [0   0   1]

X′ = X + shx · Y

Y′ = Y

Y-Shear

The Y-Shear preserves the X coordinates and changes the Y coordinates which causes the
horizontal lines to transform into lines which slopes up or down as shown in the following figure.

The Y-shear can be represented in matrix form as −

Ysh = [1    0  0]
      [shy  1  0]
      [0    0  1]

Y′ = Y + shy · X

X′ = X
Matrix Representation
Matrix representation is a method used by a computer language to store matrices of more than
one dimension in memory. Fortran and C use different schemes for their native
arrays. Fortran uses "Column Major", in which all the elements for a given column are stored
contiguously in memory. C uses "Row Major", which stores all the elements for a given row
contiguously in memory. LAPACK defines various matrix representations in memory. There is
also Sparse matrix representation and Morton-order matrix representation. According to the
documentation, in LAPACK the unitary matrix representation is optimized.[1][2] Some languages
such as Java store matrices using Iliffe vectors. These are particularly useful for storing irregular
matrices. Matrices are of primary importance in linear algebra.

Homogeneous Coordinates
The rotation of a point, straight line or an entire image on the screen, about a point other than
origin, is achieved by first moving the image until the point of rotation occupies the origin, then
performing rotation, then finally moving the image to its original position.

The moving of an image from one place to another in a straight line is called a translation. A
translation may be done by adding to or subtracting from each point the amount by which the
picture is required to be shifted.

Translation of a point by a change of coordinates cannot be combined with other transformations
by simple matrix multiplication. Such a combination is essential if we wish to rotate an image
about a point other than the origin by translation, rotation, and again translation.

To combine these three transformations into a single transformation, homogeneous coordinates
are used. In the homogeneous coordinate system, two-dimensional coordinate positions (x, y) are
represented by triple coordinates.

Homogeneous coordinates are generally used in design and construction applications. Here we
perform translations, rotations and scaling to fit the picture into its proper position.

Example of representing coordinates in a homogeneous coordinate system: for a two-dimensional
geometric transformation, we can choose the homogeneous parameter h to be any non-zero value.
For our convenience we take it as one. Each two-dimensional position is then represented with
homogeneous coordinates (x, y, 1).
Following are the matrices for two-dimensional transformations in homogeneous coordinates
(written for column vectors, P′ = M · P with P = [x  y  1]^T):
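Translation:
T(tx, ty) = [1  0  tx]
            [0  1  ty]
            [0  0  1 ]

Rotation:
R(θ) = [cosθ  −sinθ  0]
       [sinθ   cosθ  0]
       [0      0     1]

Scaling:
S(sx, sy) = [sx  0   0]
            [0   sy  0]
            [0   0   1]

These are the standard homogeneous matrices; with them, P′ = T · P, P′ = R · P and P′ = S · P
respectively.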

Composite Transformations
A number of transformations, or a sequence of transformations, can be combined into a single
one, called a composition. The resulting matrix is called the composite matrix, and the process
of combining is called concatenation.

Suppose we want to perform rotation about an arbitrary point; we can then perform it by a
sequence of three transformations:

1. Translation
2. Rotation
3. Reverse Translation
The ordering of this sequence of transformations must not be changed. If points are represented
in column form, the composite transformation is performed by multiplying the matrices in order
from right to left: the output obtained from the previous matrix is multiplied by the newly
applied matrix.

Example showing composite transformations:

The enlargement is with respect to center. For this following sequence of transformations will be
performed and all will be combined to a single one

Step1: The object is kept at its position as in fig (a)

Step2: The object is translated so that its center coincides with the origin as in fig (b)

Step3: Scaling of an object by keeping the object at origin is done in fig (c)

Step4: Again translation is done. This second translation is called a reverse translation. It will
position the object back at its original location.

The above transformation can be represented as TV · S · TV^(−1), where TV is the translation that
moves the object's centre to the origin and S is the scaling matrix.


Advantage of composition or concatenation of matrix:

1. The transformations become compact.

2. The number of operations is reduced.
3. Rules for defining transformations in the form of equations are complex compared with
matrices.

Composition of two translations:

Let (tx1, ty1) and (tx2, ty2) be the translation vectors of two successive translations T1 and T2.
The matrices of T1 and T2 are given below. T1 and T2 are represented using homogeneous
matrices, and T will be the final transformation matrix obtained after multiplication.

The resultant matrix shows that two successive translations are additive:
T2 · T1 = T(tx1 + tx2, ty1 + ty2).

Composition of two rotations: two rotations are also additive, R(θ2) · R(θ1) = R(θ1 + θ2).

Composition of two scalings: the composition of two scalings is multiplicative,
S(sx2, sy2) · S(sx1, sy1) = S(sx1·sx2, sy1·sy2). Let S1 and S2 be the matrices to be multiplied.
General Pivot Point Rotation or Rotation about Fixed Point:

For this, rotation about the origin is used. The sequence of steps for rotating an object about an
arbitrary pivot point is given below:

1. Translate the object so that the pivot point moves to the origin, as shown in fig (b).
2. Rotate the object about the origin, as shown in fig (c).
3. Translate the object back to its original position; this is called the reverse translation, as
shown in fig (d).

The matrix multiplication of above 3 steps is given below
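Filling in this (standard) product for rotation by θ about a fixed pivot point (xr, yr):

T(xr, yr) · R(θ) · T(−xr, −yr) = [cosθ  −sinθ  xr(1 − cosθ) + yr·sinθ]
                                 [sinθ   cosθ  yr(1 − cosθ) − xr·sinθ]
                                 [0      0     1                     ]

As a quick check, applying this matrix to the pivot (xr, yr, 1) returns (xr, yr, 1) itself,
confirming that the pivot point stays fixed.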


Scaling relative to fixed point:

For this following steps are performed:

Step1: The object is kept at desired location as shown in fig (a)

Step2: The object is translated so that its center coincides with origin as shown in fig (b)

Step3: Scaling of object by keeping object at origin is done as shown in fig (c)

Step4: Again translation is done. This translation is called as reverse translation.


Viewing Pipelines and coordinate system
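A picture is displayed through a sequence of coordinate transformations known as the
two-dimensional viewing pipeline: modelling coordinates are transformed to world coordinates,
world coordinates to viewing coordinates, viewing coordinates to normalized (viewport)
coordinates, and finally to device coordinates on the output screen. Within this pipeline, the
window selects the part of the world-coordinate scene to be viewed, and the viewport specifies
where it appears on the display device; the window-to-viewport transformation described next
performs this mapping.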

Window-to-Viewport
Window to Viewport Transformation is the process of transforming 2D world-coordinate objects
to device coordinates. Objects inside the world or clipping window are mapped to the viewport
which is the area on the screen where world coordinates are mapped to be displayed.
General Terms:

 World coordinate – It is the Cartesian coordinate w.r.t which we define the diagram, like
Xwmin, Xwmax, Ywmin, Ywmax
 Device Coordinate –It is the screen coordinate where the objects are to be displayed, like
Xvmin, Xvmax, Yvmin, Yvmax
 Window –It is the area on the world coordinate selected for display.
 ViewPort –It is the area on the device coordinate where graphics is to be displayed.

Mathematical Calculation of Window to Viewport:

It may be possible that the size of the Viewport is much smaller or greater than the Window. In
these cases, we have to increase or decrease the size of the Window according to the Viewport
and for this, we need some mathematical calculations.
(xw, yw): A point on Window
(xv, yv): Corresponding point on Viewport
We have to calculate the point (xv, yv)

Now the relative position of the object in the Window and in the Viewport must be the same.
For the x coordinate,

(xv − Xvmin) / (Xvmax − Xvmin) = (xw − Xwmin) / (Xwmax − Xwmin)

For the y coordinate,

(yv − Yvmin) / (Yvmax − Yvmin) = (yw − Ywmin) / (Ywmax − Ywmin)

So, after solving for the x and y coordinates, we get

xv = Xvmin + (xw − Xwmin) · sx
yv = Yvmin + (yw − Ywmin) · sy

where sx = (Xvmax − Xvmin) / (Xwmax − Xwmin) is the scaling factor of the x coordinate and
sy = (Yvmax − Yvmin) / (Ywmax − Ywmin) is the scaling factor of the y coordinate.

Example: Let us assume,

 for the window, Xwmin = 20, Xwmax = 80, Ywmin = 40, Ywmax = 80.
 for the viewport, Xvmin = 30, Xvmax = 60, Yvmin = 40, Yvmax = 60.
 Now let a point (Xw, Yw) = (30, 80) lie on the window. We have to calculate that point on the
viewport, i.e. (Xv, Yv).
 First of all, calculate the scaling factor of the x coordinate Sx and the scaling factor of the y
coordinate Sy using the above-mentioned formulas.
Sx = (60 − 30) / (80 − 20) = 30 / 60
Sy = (60 − 40) / (80 − 40) = 20 / 40
 So, now calculate the point on the viewport (Xv, Yv).
Xv = 30 + (30 − 20) × (30 / 60) = 35
Yv = 40 + (80 − 40) × (20 / 40) = 60
 So, the point (Xw, Yw) = (30, 80) on the window will be (Xv, Yv) = (35, 60) on the viewport.
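The same calculation can be written as a small helper function; a minimal sketch in C (the struct
and function names are illustrative, not from the source):

#include <stdio.h>

/* illustrative window/viewport bounds, matching the example above */
typedef struct { float xmin, xmax, ymin, ymax; } Rect;

/* map a world point (xw, yw) in 'win' to (xv, yv) in 'view' */
void windowToViewport(Rect win, Rect view, float xw, float yw,
                      float *xv, float *yv)
{
    float sx = (view.xmax - view.xmin) / (win.xmax - win.xmin);
    float sy = (view.ymax - view.ymin) / (win.ymax - win.ymin);
    *xv = view.xmin + (xw - win.xmin) * sx;
    *yv = view.ymin + (yw - win.ymin) * sy;
}

int main()
{
    Rect win  = {20, 80, 40, 80};
    Rect view = {30, 60, 40, 60};
    float xv, yv;
    windowToViewport(win, view, 30, 80, &xv, &yv);
    printf("(%.0f, %.0f)\n", xv, yv);   /* prints (35, 60), as in the example */
    return 0;
}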

Line clipping
It is performed by using the line clipping algorithm. The line clipping algorithms are:

1. Cohen Sutherland Line Clipping Algorithm


2. Midpoint Subdivision Line Clipping Algorithm
3. Liang-Barsky Line Clipping Algorithm
Cohen Sutherland Line Clipping Algorithm:

In the algorithm, first of all it is detected whether the line lies completely inside the window,
completely outside it, or crosses it. All lines come under one of the following categories:

1. Visible
2. Not Visible
3. Clipping Case

1. Visible: If a line lies within the window, i.e., both endpoints of the line lie within the window,
the line is visible and will be displayed as it is.

2. Not Visible: If a line lies completely outside the window, it is invisible and rejected; such
lines are not displayed. Let A(x1, y1) and B(x2, y2) be the endpoints of the line, and let
xmin, xmax and ymin, ymax be the coordinates of the window. The line is invisible if both
of its endpoints lie on the same outside side of the window, i.e., if any one of the following
pairs of inequalities is satisfied:

x1 > xmax and x2 > xmax
y1 > ymax and y2 > ymax
x1 < xmin and x2 < xmin
y1 < ymin and y2 < ymin

3. Clipping Case: If the line is neither a visible case nor an invisible case, it is considered a
clipping case. First of all, the category of a line is found based on the nine regions given below.
All nine regions are assigned codes; each code is of 4 bits. If both endpoints of the line have all
code bits zero, then the line is considered completely visible.

The centre region has the code 0000, i.e., region 5 is the rectangular window.
The following figure shows lines of various types:

Line AB is a visible case
Line OP is an invisible case
Line PQ is an invisible case
Line IJ is a clipping candidate
Line MN is a clipping candidate
Line CD is a clipping candidate

Advantage of Cohen Sutherland Line Clipping:

1. It calculates end-points very quickly and rejects and accepts lines quickly.
2. It can clip pictures much larger than the screen size.

Algorithm of Cohen Sutherland Line Clipping:

Step1: Calculate the region codes of both endpoints of the line.

Step2: Perform the OR operation on both of these endpoint codes.

Step3: If the OR operation gives 0000

Then
        the line is considered completely visible
else
        perform the AND operation on both endpoint codes
        If AND ≠ 0000
                then the line is invisible
        else
                AND = 0000
                and the line is considered a clipping case.
Step4: If the line is a clipping case, find its intersection with the boundaries of the window.
The slope is
m = (y2 − y1) / (x2 − x1)

(a) If bit 1 is "1", the line intersects the left boundary of the rectangular window:
y3 = y1 + m(X − x1)
where X = Xwmin,
and Xwmin is the minimum value of the X coordinate of the window.

(b) If bit 2 is "1", the line intersects the right boundary:

y3 = y1 + m(X − x1)
where X = Xwmax,
and Xwmax is the maximum value of the X coordinate of the window.

(c) If bit 3 is "1", the line intersects the bottom boundary:

x3 = x1 + (Y − y1) / m
where Y = Ywmin,
and Ywmin is the minimum value of the Y coordinate of the window.

(d) If bit 4 is "1", the line intersects the top boundary:

x3 = x1 + (Y − y1) / m
where Y = Ywmax,
and Ywmax is the maximum value of the Y coordinate of the window.
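A minimal sketch of the region-code computation in C, using the bit assignment described above
(bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top); the constant names are illustrative:

/* region-code bits: bit 1 = left, bit 2 = right, bit 3 = bottom, bit 4 = top */
#define LEFT   1   /* 0001 */
#define RIGHT  2   /* 0010 */
#define BOTTOM 4   /* 0100 */
#define TOP    8   /* 1000 */

int regionCode(float x, float y,
               float xwmin, float xwmax, float ywmin, float ywmax)
{
    int code = 0;                    /* 0000 means inside the window */
    if (x < xwmin) code |= LEFT;
    else if (x > xwmax) code |= RIGHT;
    if (y < ywmin) code |= BOTTOM;
    else if (y > ywmax) code |= TOP;
    return code;
}

/* A line with endpoint codes c1, c2 is completely visible if (c1 | c2) == 0,
   invisible (trivially rejected) if (c1 & c2) != 0, and a clipping case
   otherwise, exactly as in Steps 2 and 3 above. */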

Polygon clipping
It is performed by processing the boundary of the polygon against each window edge. First the
entire polygon is clipped against one edge, then the resulting polygon is clipped against the
second edge, and so on for all four edges.

Four possible situations arise while processing each polygon edge:

1. If the first vertex is outside the window and the second vertex is inside, both the point of
intersection of the polygon edge with the window boundary and the second vertex are
added to the output list.
2. If both vertices are inside the window boundary, only the second vertex is added to the
output list.
3. If the first vertex is inside the window and the second is outside, only the point of
intersection of the polygon edge with the window boundary is added to the output list.
4. If both vertices are outside the window, nothing is added to the output list.
The following figures show the original polygon and the clipping of the polygon against the four
window edges.

Disadvantage of the Sutherland-Hodgman Algorithm:

This method requires a considerable amount of memory. First of all, the polygon is stored in its
original form. Then clipping against the left edge is done and the output is stored. Then clipping
against the right edge is done, then the top edge, and finally the bottom edge. The results of all
these operations are stored in memory, so memory is wasted on storing the intermediate polygons.
Polygon surfaces
The polygon surfaces are common in design and solid-modelling applications, since their
wireframe display can be done quickly to give a general indication of the surface structure.
Realistic scenes are then produced by interpolating shading patterns across the polygon surfaces
to illuminate them.

Tables
In this method, the surface is specified by the set of vertex coordinates and associated attributes.
As shown in the following figure, there are five vertices, from v1 to v5.

 Each vertex stores x, y, and z coordinate information which is represented in the table as
v1: x1, y1, z1.
 The Edge table is used to store the edge information of polygon. In the following figure,
edge E1 lies between vertex v1 and v2 which is represented in the table as E1: v1, v2.
 Polygon surface table stores the number of surfaces present in the polygon. From the
following figure, surface S1 is covered by edges E1, E2 and E3 which can be represented in
the polygon surface table as S1: E1, E2, and E3.
Meshes
The equation for plane surface can be expressed as −

Ax + By + Cz + D = 0

Where (x, y, z) is any point on the plane, and the coefficients A, B, C and D are constants
describing the spatial properties of the plane. We can obtain the values of A, B, C and D by
solving a set of three plane equations using the coordinate values of three non-collinear points in
the plane.
Let us assume that three vertices of the plane are (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3).

Let us solve the following simultaneous equations for ratios A/D, B/D, and C/D. You get the values
of A, B, C, and D.

A/D x1 + B/Dy1 + C/Dz1 = -1

A/D x2 + B/Dy2 + C/Dz2 = -1

A/D x3 + B/Dy3 + C/D z3 = -1

To obtain the above equations in determinant form, apply Cramer's rule to the above equations:

A = | 1  y1  z1 |
    | 1  y2  z2 |
    | 1  y3  z3 |

B = | x1  1  z1 |
    | x2  1  z2 |
    | x3  1  z3 |

C = | x1  y1  1 |
    | x2  y2  1 |
    | x3  y3  1 |

D = − | x1  y1  z1 |
      | x2  y2  z2 |
      | x3  y3  z3 |

For any point (x, y, z), with parameters A, B, C and D, we can say that −

 Ax + By + Cz + D ≠ 0 means the point is not on the plane.
 Ax + By + Cz + D < 0 means the point is inside the surface.
 Ax + By + Cz + D > 0 means the point is outside the surface.
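Equivalently, A, B, C and D can be computed directly from the cross product of two edge vectors
of the polygon; a small illustrative sketch in C (the function name is mine, not from the source):

#include <stdio.h>

/* plane coefficients from three non-collinear vertices:
   (A, B, C) = (v2 - v1) x (v3 - v1), then D = -(A*x1 + B*y1 + C*z1) */
void planeCoeffs(const float v1[3], const float v2[3], const float v3[3],
                 float *A, float *B, float *C, float *D)
{
    float ux = v2[0] - v1[0], uy = v2[1] - v1[1], uz = v2[2] - v1[2];
    float wx = v3[0] - v1[0], wy = v3[1] - v1[1], wz = v3[2] - v1[2];
    *A = uy * wz - uz * wy;    /* cross-product components */
    *B = uz * wx - ux * wz;
    *C = ux * wy - uy * wx;
    *D = -(*A * v1[0] + *B * v1[1] + *C * v1[2]);
}

int main()
{
    float v1[3] = {0, 0, 2}, v2[3] = {1, 0, 2}, v3[3] = {0, 1, 2};
    float A, B, C, D;
    planeCoeffs(v1, v2, v3, &A, &B, &C, &D);
    printf("A=%.0f B=%.0f C=%.0f D=%.0f\n", A, B, C, D);  /* plane z = 2 */
    return 0;
}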

Curved Lines and Surfaces


A bent line is called a curved line. If the curvature is not zero, we consider it a curve line, which
is generally smooth and continuous.
 Curved Line Images
We observe many objects in our surroundings that are in the shape of a curved line. Some
of them incorporate the following:
 Railways at the turning points,
 Track of a roller coaster,
 Paths of roads in hilly areas, and so on.
Apart from the real-life examples, we can also observe the curve-shaped lines in Maths; for
example, the graph of a quadratic polynomial including parabola, ogive curve, arrows, etc.

Curved Surface
In computer graphics, we often need to draw different types of objects onto the screen. Objects
are not flat all the time, and we need to draw curves many times to draw an object.
Types of Curves:
The curve is an infinitely large set of points. Each point has two neighbors except endpoints.
1. Implicit curves
2. Explicit curves
3. Parametric curves
4. Bezier curves
5. B-spline curves

Implicit Curves:
An implicit curve or surface is the set of zeros of a function of 2 or 3 variables. We use implicit
curve functions to define lines and planes. The implicit form provides no control over tangents at
connection points when joining several implicit functions, and implicit functions are hard to find
for many shapes. The idea is to use a function that states which points are on and which are off
the curve.

All lines: Ax + By + C = 0

In three dimensions, f(X, Y, Z) = 0 defines a surface.
 Any plane: Ax + By + Cz + D = 0, with constants A, B, C and D.
 A sphere centred at the origin with radius r: X^2 + Y^2 + Z^2 − r^2 = 0.
Curves in 3D are not so easily represented in implicit form; in general, we cannot solve for the
points that satisfy the implicit form.
 Implicit function form: f(x, y) = 0
 The implicit representation of a circle is: X^2 + Y^2 − R^2 = 0

Explicit curves:

 Do not allow multiple values for a given argument.

 Cannot describe vertical tangents, as infinite slopes are hard to represent.
 Cannot represent all curves (vertical lines, circles).
The explicit form gives the value of one variable, the dependent variable, in terms of the other,
the independent variable. The most familiar form of a curve in 2D is y = f(x), where y is the
dependent variable and x is the independent variable.

Mathematical function:

y = f(x) can be plotted as a curve,

e.g. y = 2x^5 + 3x^4

y = mx + c

Parametric curves:
Curves that have a parametric form are called parametric curves. A curve in the plane is said to
be parameterized if the set of coordinates on the curve, (x, y, z), is represented as functions of a
variable t. The variable t is called a parameter, and the relations between x, y, z and t are called
parametric equations. The parametric form of a curve is a function that assigns a position to each
value of the free parameter; that is, the parametric function is a vector-valued function. For a 2D
curve the output of the function is a 2D vector; in 3D it would be a 3-vector. It is simple and
flexible.
The parametric form is suitable for representing closed and multivalued curves. In parametric
curves, each coordinate of a point on a curve is represented as a function of a single parameter.
There are many curves that we cannot write down as a single equation in terms of x and y. The
position vector of a point on the curve is fixed by the value of the parameter. Since a point on a
parametric curve is specified by a single value of the parameter, the parametric form is
axis-independent, and the function of each coordinate can be defined independently.
E.g.: x = a cos t; y = a sin t

Bezier curves:

A Bezier curve is a particular kind of spline generated from a set of control points by forming a
set of polynomial functions; it was discovered by the French engineer Pierre Bezier. These
functions are computed from the coordinates of the control points, and the curve is generated
under the control of those points: the tangents at the ends are determined by the control points.
It is an approximating spline curve, defined by its defining polygon. It has a number of properties
that make it highly useful and convenient for curve and surface design.

Different types of Bezier curves are simple, quadratic, and cubic:

1. Simple curve: a simple Bezier curve is a straight line between two control points.
2. Quadratic curve: a quadratic Bezier curve is determined by three control points.
3. Cubic curve: a cubic Bezier curve is determined by four control points.

Properties of Bezier Curve:

1. Bezier curves are widely available and used in various CAD systems and in general graphics
packages such as GL.
2. The slope at the beginning of the curve is along the line joining the first two control points,
and the slope at the end of the curve is along the line joining the last two control points.
3. A Bezier curve always passes through the first and last control points, i.e. P(0) = P0 and
P(1) = Pn.
4. The curve lies entirely within the convex hull formed by the control points.
5. The degree of the polynomial defining the curve segment is one less than the number of
defining polygon points.

Bezier Curve for 3 Points:

Q(u) = P0·B0,2(u) + P1·B1,2(u) + P2·B2,2(u)
 B0,2(u) = 2C0 · u^0 · (1 − u)^(2−0) = (1 − u)^2
 B1,2(u) = 2C1 · u^1 · (1 − u)^(2−1) = 2u(1 − u)
 B2,2(u) = 2C2 · u^2 · (1 − u)^(2−2) = u^2
Q(u) = P0(1 − u)^2 + 2P1·u(1 − u) + P2·u^2
X(u) = (1 − u)^2·x0 + 2u(1 − u)·x1 + u^2·x2
Y(u) = (1 − u)^2·y0 + 2u(1 − u)·y1 + u^2·y2
Bezier curves exhibit global control: moving any control point alters the shape of the whole
curve.
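A minimal sketch in C that evaluates this three-point (quadratic) Bezier curve at sample
parameter values; the point type and control points are illustrative:

#include <stdio.h>

typedef struct { float x, y; } Point;

/* evaluate the quadratic Bezier Q(u) = (1-u)^2 P0 + 2u(1-u) P1 + u^2 P2 */
Point bezier3(Point p0, Point p1, Point p2, float u)
{
    float b0 = (1 - u) * (1 - u);   /* B0,2(u) */
    float b1 = 2 * u * (1 - u);     /* B1,2(u) */
    float b2 = u * u;               /* B2,2(u) */
    Point q;
    q.x = b0 * p0.x + b1 * p1.x + b2 * p2.x;
    q.y = b0 * p0.y + b1 * p1.y + b2 * p2.y;
    return q;
}

int main()
{
    Point p0 = {0, 0}, p1 = {50, 100}, p2 = {100, 0};  /* control points */
    float u;
    for (u = 0.0f; u <= 1.0f; u += 0.25f) {
        Point q = bezier3(p0, p1, p2, u);
        printf("u=%.2f -> (%.1f, %.1f)\n", u, q.x, q.y);
    }
    return 0;
}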
Different varieties of spline curves are used in graphics applications.
1. Hermit spline
2. Relaxed end spline
3. Cyclic spline
4. Anti cyclic spline
5. Normalized spline

B-spline curves:

The sum of the B-spline basis functions at any parameter value u equals 1:

Sum over i = 1 to n+1 of Ni,k(u) = 1

where n+1 = number of control points and k = order of the B-spline curve.
We can add or modify any number of control points to change the shape of the curve without
affecting the degree of the polynomial. Control points affect the shape of the curve only over the
range of parameter values where the associated basis function is non-zero. The polynomial curve
has degree (d − 1) and C^(d−2) continuity over the range of u, where each blending function is
defined over d subintervals of the total range of u. The selected set of subinterval endpoints u is
referred to as a knot vector. The basis functions are positive or zero for all parameter values,
and except for k = 1 each basis function has exactly one maximum value.

Quadric Surface
Quadric surfaces are natural 3D extensions of the so-called conics (ellipses, parabolas, and
hyperbolas), and they provide examples of fairly nice surfaces to use as examples in multivariable
calculus. The basic quadric surfaces are described by second-degree equations in x, y and z with
constants A, B and C; for example, an ellipsoid is given by x^2/A^2 + y^2/B^2 + z^2/C^2 = 1,
with the sphere as the special case A = B = C = r.

Spline Representation
o The spline command in AutoCAD is used to create a smooth curve that passes through
a set of predefined points.
o It creates a non-uniform curve passing through the points.
o Thus, a spline can be created by defining fit points or Control Vertices (CV) points.
o The control vertices define a control frame, which is used to control the shape of the spline,
while fit points coincide with the spline itself.
Let's understand by two examples.

Example 1: Using Fit points

The steps to create spline using fit points are listed below:

1. Select the Spline icon under the Draw interface from the ribbon panel, as shown below:

Or

Type SPL on the command line or command prompt and press Enter.

2. Click on the 'Method' option on the command line, as shown below:

3. Click on the 'Fit' option, as shown below:

4. Specify the fit points by clicking with the help of the mouse. It is shown below:

Continue specifying points and press Enter or Esc to exit.

The created spline is shown below:


Example 2: Using CV

The steps to create spline using Control Vertices (CV) are listed below:

1. Select the Spline icon under the Draw interface from the ribbon panel.

Or

Type SPL on the command line or command prompt and press Enter.

2. Click on the 'Method' option on the command line, as shown below:

3. Click on the 'CV' option, as shown below:

4. Specify the vertices by clicking with the help of the mouse. It is shown below:

Continue specifying points and press Enter or Esc to exit.

The created spline is shown below:


Cubic Spline interpolation methods
We estimate f(x) for arbitrary x by drawing a smooth curve through the points xi. If the desired x
is between the largest and smallest of the xi, this is called interpolation; otherwise, it is called
extrapolation.

Linear Interpolation:
Linear interpolation is a way of curve fitting the points using a linear polynomial, i.e., the
equation of a line. This is just like joining the points by drawing a line between each pair of
neighbouring points in the dataset.

Polynomial Interpolation:

Polynomial Interpolation is the way of fitting the curve by creating a higher degree polynomial
to join those points.
Spline Interpolation:
Spline interpolation, similar to polynomial interpolation, uses low-degree polynomials in each
of the intervals and chooses the polynomial pieces such that they fit smoothly together. The
resulting function is called a spline.
Cubic Spline Interpolation
Cubic spline interpolation is a way of finding a curve that connects data points with polynomials
of degree three or less. Splines are polynomial pieces that are smooth and continuous across the
given plot, with continuous first and second derivatives where they join.
We take a set of points [xi, yi] for i = 0, 1, …, n for the function y = f(x). The cubic spline
interpolant is a piecewise continuous curve passing through each of the values in the table.

 Following are the conditions for a spline of degree K = 3:

 The domain of S is the interval [a, b].
 S, S′, S″ are all continuous functions on [a, b].

Here Si(x) is the cubic polynomial that is used on the subinterval [xi, xi+1].
The main point about a spline is that it combines different polynomials rather than using a single
polynomial of degree n to fit all the points at once; it avoids high-degree polynomials and thereby
the potential problem of overfitting. These low-degree polynomials need to be such that the
spline they form is not only continuous but also smooth.
But for the spline to be smooth and continuous, the two consecutive polynomials Si(x) and
Si+1(x) must join at xi.

Or, Si (x) must be passed through two end-points:


Assume S″(xi) = Mi (i = 0, 1, 2, …, n). Since S(x) is a cubic polynomial, S″(x) is a linear
polynomial on [xi, xi+1], and S‴(x) is then constant there.
By applying the Taylor series:
Let x = xi+1:
Similarly, we apply the above equation on the range [xi−1, xi]:
Let hi = xi − xi−1.
Now we have n − 1 equations but n + 1 variables, i.e. M0, M1, M2, …, Mn−1, Mn. Therefore, we
need two more equations, for which we use additional boundary conditions.
Let us consider that we know S′(x0) = f0′ and S′(xn) = fn′, especially when S′(x0) and S′(xn) are
both 0. This is called the clamped boundary condition.

Similarly for Mn,
or,
combining the above equations into matrix form, we get the resulting tridiagonal linear system
for the Mi.
Implementation
We can use SciPy to perform spline interpolation, for example the CubicSpline and interp1d
functions of scipy.interpolate, to interpolate the function f(x) = 1/(1 + x^2).

Bezier Curves and Surfaces
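The properties of Bezier curves and the three-point (quadratic) Bezier formulation were given
earlier in this unit and are not repeated here. A Bezier surface is the natural two-parameter
extension of a Bezier curve: a rectangular mesh of control points defines the surface through
products of the Bezier blending functions in the two parameters u and v. The section below
extends these ideas to B-spline curves and surfaces.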

B- Spline Curve and Surface


The concept of the B-spline curve came about to resolve the disadvantages of the Bezier curve;
both curves are parametric in nature. With a Bezier curve we face a problem: when we change
the location of any control point, the shape of the whole curve changes. In a B-spline curve, by
contrast, only a specific segment of the curve shape is changed or affected by changing the
location of the corresponding control point.
In the B-spline curve, the control points impart local control over the curve shape, rather than
the global control of the Bezier curve.
B-spline curve shape before changing the position of control point P1 −
B-spline curve shape after changing the position of control point P1 −

You can see in the above figures that only the shape of segment 1 changes when we move the
control point P1, while the shape of segment 2 remains intact.

B-spline Curve :
As we saw above, B-spline curves are independent of the number of control points and are made
up by joining several segments smoothly, where the shape of each segment is decided by the
specific control points that fall in that segment's region. Consider the curve given below.

Attributes of this curve are −

 We have "n+1" control points above, so n + 1 = 8 and n = 7.

 Let us assume that the order of this curve is 'k'; the curve we get will then be of polynomial
degree "k − 1". Conventionally, the value of 'k' must be in the range 2 ≤ k ≤ n + 1. So let
us assume k = 4; the curve degree will then be k − 1 = 3.
 The total number of segments for this curve is calculated through the following formula −
Total no. of segments = n − k + 2 = 7 − 4 + 2 = 5.
Segments Control points Parameter

S0 P0,P1,P2,P3 0≤t≤2

S1 P1,P2,P3,P4 2≤t≤3

S2 P2,P3,P4,P5 3≤t≤4

S3 P3,P4,P5,P6 4≤t≤5

S4 P4,P5,P6,P7 5≤t≤6

Knots in B-spline Curve :

The points where two segments of the curve join each other are known as knots of a B-spline
curve. In the case of the cubic polynomial degree curve, the knots are "n + 4". But in other
common cases we have "n + k + 1" knots. So, for the above curve, the total number of knot
values will be −
Total knots = n + k + 1 = 7 + 4 + 1 = 12
These knot vectors could be of three types –
 Uniform (periodic)
 Open-Uniform
 Non-Uniform

B-spline Curve Equation :

The equation of the spline-curve is as follows –


Where Pi, k and t respectively represent the control points, the degree, and the parameter of the curve.
Properties of B-spline Curve :
 Each basis function has 0 or +ve value for all parameters.
 Each basis function has one maximum value except for k=1.
 The degree of B-spline curve polynomial does not depend on the number of control points
which makes it more reliable to use than Bezier curve.
 B-spline curve provides the local control through control points over each segment of the
curve.
 The sum of basis functions for a given parameter is one.
Unit 3
3D Scaling
Scaling transformation is performed to resize a 3D object: the dimensions of the object can be
scaled (altered) in any of the x, y, z directions through the scaling factors Sx, Sy, Sz. Matrix
representation of the scaling transformation:

Condition: The following sequence of steps occurs while performing a scaling transformation
about a fixed point −
 The fixed point is translated to the origin.
 The object is scaled.
 The fixed point is translated back to its original position.
Let a point in 3D space be P(x, y, z) over which we want to apply the scaling transformation,
and let the scaling factors be [Sx, Sy, Sz]. The new position of the point is obtained by applying
the scaling operation.

Note: If the scaling factors are equal (Sx = Sy = Sz), the 3D object is scaled uniformly in all of
the X, Y, Z directions. Problem: Consider the cube "OABCDEFG" with O(0, 0, 0), A(0, 4, 0),
B(0, 4, 4), C(4, 4, 0), D(4, 4, 4), E(4, 0, 0), F(0, 0, 4), G(4, 0, 4), and given scaling factors
Sx, Sy, Sz. Perform the scaling operation over the cube. Solution: We are asked to perform the
scaling transformation over the 3D object given below in Fig. 1:

Fig.1

Now, applying the matrix scaling transformation condition, we get the result; after performing
the scaling transformation successfully, Fig. 1 will look like Fig. 2 below −
Fig.2
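A minimal sketch in C of scaling a 3D point about a fixed point, combining the three steps above
(translate, scale, translate back); the names are illustrative:

#include <stdio.h>

/* scale point (x, y, z) by (sx, sy, sz) about the fixed point (fx, fy, fz) */
void scale3D(float *x, float *y, float *z,
             float sx, float sy, float sz,
             float fx, float fy, float fz)
{
    *x = fx + (*x - fx) * sx;  /* translate, scale, translate back in one step */
    *y = fy + (*y - fy) * sy;
    *z = fz + (*z - fz) * sz;
}

int main()
{
    float x = 4, y = 4, z = 4;               /* vertex D of the cube above */
    scale3D(&x, &y, &z, 2, 2, 2, 0, 0, 0);   /* uniform scaling about the origin */
    printf("(%.0f, %.0f, %.0f)\n", x, y, z); /* prints (8, 8, 8) */
    return 0;
}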

Rotation

It is a process of changing the angle of the object. Rotation can be clockwise or anticlockwise.
For rotation, we have to specify the angle of rotation and the rotation point. The rotation point is
also called a pivot point; it is the point about which the object is rotated.

Types of Rotation:

1. Clockwise
2. Anticlockwise (counter-clockwise)

A positive value of the rotation angle rotates an object in the counter-clockwise (anticlockwise)
direction.

A negative value of the rotation angle rotates an object in the clockwise direction.

When the object is rotated, then every point of the object is rotated by the same angle.

Straight Line: Straight Line is rotated by the endpoints with the same angle and redrawing the
line between new endpoints.

Polygon: Polygon is rotated by shifting every vertex using the same rotational angle.

Curved Lines: Curved Lines are rotated by repositioning of all points and drawing of the curve at
new positions.

Circle: Its rotation can be obtained by rotating the centre position through the specified angle.

Ellipse: Its rotation can be obtained by rotating major and minor axis of an ellipse by the desired
angle.
Matrix for rotation is a clockwise direction.

Matrix for rotation is an anticlockwise direction.

Matrix for homogeneous co-ordinate rotation (clockwise)


Matrix for homogeneous co-ordinate rotation (anticlockwise)

Rotation about an arbitrary point: If we want to rotate an object or point about an arbitrary
point, first of all, we translate the point about which we want to rotate to the origin. Then rotate
point or object about the origin, and at the end, we again translate it to the original place. We get
rotation about an arbitrary point.

Example: The point (x, y) is to be rotated

The (xc yc) is a point about which counterclockwise rotation is done

Step1: Translate point (xc yc) to origin

Step2: Rotation of (x, y) about the origin


Step3: Translation of center of rotation back to its original position

Example 1: Prove that 2D rotations about the origin are commutative, i.e. R1R2 = R2R1.

Solution: Let R1 and R2 be the rotation matrices for angles θ1 and θ2. Multiplying them in
either order gives a rotation by the combined angle: R1·R2 = R(θ1 + θ2) = R(θ2 + θ1) = R2·R1,
since the rotation angles simply add and the addition of angles is commutative. Hence 2D
rotations about the origin commute.
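A small illustrative sketch in C of rotation about an arbitrary pivot point, implementing the
three steps (translate, rotate, translate back); the function name is mine:

#include <stdio.h>
#include <math.h>

/* rotate point (x, y) by angle theta (radians, anticlockwise)
   about the pivot point (xc, yc) */
void rotateAboutPivot(float *x, float *y, float theta, float xc, float yc)
{
    float tx = *x - xc, ty = *y - yc;                /* 1: pivot to origin   */
    float rx = tx * cosf(theta) - ty * sinf(theta);  /* 2: rotate about origin */
    float ry = tx * sinf(theta) + ty * cosf(theta);
    *x = rx + xc;                                    /* 3: translate back    */
    *y = ry + yc;
}

int main()
{
    float x = 2, y = 1;
    rotateAboutPivot(&x, &y, 3.14159265f / 2, 1, 1); /* 90 deg about (1, 1) */
    printf("(%.2f, %.2f)\n", x, y);                  /* prints (1.00, 2.00) */
    return 0;
}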


Composite transformation
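Composite transformations in 3D follow the same scheme described for 2D in Unit 2: a sequence
of transformations is concatenated into a single composite matrix by multiplying the individual
4 × 4 homogeneous matrices in order from right to left, and rotation or scaling about an arbitrary
point is again performed as a translation to the origin, the rotation or scaling itself, and a reverse
translation.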


Parallel And Perspective Transformation
Parallel projection is used to display a picture in its true shape and size. When the projectors are
perpendicular to the view plane, it is called orthographic projection. The parallel projection is
formed by extending parallel lines from each vertex of the object until they intersect the plane of
the screen; the point of intersection is the projection of the vertex.

Parallel projections are used by architects and engineers for creating working drawings of an
object; complete representations require two or more views of the object using different planes.

1. Isometric projection: All the projectors make equal angles, generally of 30°.
2. Dimetric: Two of the projectors have equal angles with respect to two principal axes.
3. Trimetric: The direction of projection makes unequal angles with the principal axes.
4. Cavalier: All lines perpendicular to the projection plane are projected with no change in
length.
5. Cabinet: All lines perpendicular to the projection plane are projected to one half of their
length. This gives a realistic appearance of the object.
Difference Between Parallel and Perspective Projection in Computer Graphics

Here is a list of the differences between Parallel and Perspective Projection in Computer
Graphics.
Representation of objects: Parallel projection represents a given object in a different way, as we
would view it through a telescope; perspective projection represents the object in a
three-dimensional manner.

Shape and size of objects: Parallel projection does not alter the shape or the size of the given
object on the plane; in perspective projection, objects that are far away appear smaller in size,
while those near the viewer's eyes appear bigger.

Distance from centre of projection: In parallel projection the distance of the given object from
the centre of projection is infinite; in perspective projection it is finite.

Accuracy of view: Parallel projection can provide a user with an accurate view of the given
object; perspective projection cannot, as the shapes and sizes of the projection tend to differ
from the original.

Lines of projection: In parallel projection the projection lines are parallel to each other; in
perspective projection they are not parallel.

Projector: In parallel projection the projector is also parallel; in perspective projection it is not
parallel at all.

Types of projection: There are basically two types of parallel projection, oblique and
orthographic; there are basically three types of perspective projection: one-point, two-point and
three-point.

Realistic view: Parallel projection does not form a realistic view of the world and its objects;
perspective projection generates a very realistic view of the world and the objects present in it.
Projection transformation
Representing an n-dimensional object in n−1 dimensions is known as projection. It is the process
of converting a 3D object into a 2D one: we represent a 3D object on a 2D plane,
{(x, y, z) → (x, y)}. It is also defined as the mapping or transformation of the object onto the
projection plane or view plane. When geometric objects are formed by the intersection of lines
with a plane, the plane is called the projection plane and the lines are called projectors.
Types of Projections:
1. Parallel projections
2. Perspective projections

Center of Projection:

It is an arbitrary point from which the projection lines are drawn through each point of an object.
 If the COP is located at a finite point in 3D space, a perspective projection is the result.
 If the COP is located at infinity, all the lines are parallel and the result is a parallel projection.

Parallel Projection:

A parallel projection is formed by extending parallel lines from each vertex of the object until
they intersect the plane of the screen; it transforms the object to the view plane along parallel
lines. A projection is said to be parallel if the centre of projection is at an infinite distance from
the projection plane. A parallel projection preserves the relative proportions of objects, and
accurate views of the various sides of an object are obtained. The projection lines are parallel
to each other, extended from the object, and intersect the view plane. Because it preserves
relative proportions, it is used in drafting to produce scale drawings of 3D objects; however,
this is not a realistic representation. The point of intersection is the projection of the vertex.
Parallel projection is divided into two parts, and these two parts are subdivided further.

Orthographic Projections:

In orthographic projection, the direction of projection is normal to the projection plane; the
projection lines are parallel to each other and make an angle of 90° with the view plane.
Orthographic parallel projections are done by projecting points along parallel lines that are
perpendicular to the projection plane. Orthographic projections are most often used to produce
the front, side, and top views of an object, which are called elevations. Engineering and
architectural drawings commonly employ these orthographic projections. The transformation
equations for an orthographic parallel projection are straightforward. Some special orthographic
parallel projections involve the plan view and side elevations. We can also perform orthographic
projections that display more than one face of an object; such views are called axonometric
orthographic projections.

Oblique Projections:

Oblique projections are obtained by projecting along parallel lines that are not perpendicular to
the projection plane. An oblique projection shows the front and top surfaces, conveying the
three dimensions of height, width and depth. The front or principal surface of an object is parallel
to the plane of projection, which makes oblique projections effective for pictorial representation.
 Isometric Projections: Orthographic projections that show more than one side of an object
are called axonometric orthographic projections. The most common axonometric projection
is the isometric projection, in which the direction of projection makes equal angles with all
three principal axes. Parallelism of lines is preserved, but angles are not.
 Dimetric projections: The direction of projection makes equal angles with exactly two of
the principal axes.
 Trimetric projections: The direction of projection makes unequal angles with the three
principal axes.

Cavalier Projections:
All lines perpendicular to the projection plane are projected with no change in length. The
projection lines make an angle of 45 degrees with the projection plane, so the lengths of object
edges perpendicular to the plane do not change.

Cabinet Projections:

All lines perpendicular to the projection plane are projected at one half of their length. This
gives a more realistic appearance of the object. The projection lines make an angle of about
63.4 degrees with the projection plane, so lines perpendicular to the viewing surface are
projected at half their actual length.
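Both cases can be captured by one pair of equations, x' = x + L·z·cos(α) and y' = y + L·z·sin(α). The Python sketch below is illustrative: L and alpha_deg are the assumed oblique parameters, with L = 1 for cavalier and L = 0.5 for cabinet.

import math

def oblique_project(x, y, z, L=1.0, alpha_deg=45.0):
    # L scales the projected depth; alpha is the angle the projected
    # z-axis makes with the horizontal (commonly 30 or 45 degrees).
    a = math.radians(alpha_deg)
    return (x + L * z * math.cos(a), y + L * z * math.sin(a))

print(oblique_project(1, 1, 1, L=1.0))   # cavalier: depth preserved in full
print(oblique_project(1, 1, 1, L=0.5))   # cabinet: depth halved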
Perspective Projections:
 A perspective projection is produced by straight lines radiating from a common point
and passing through points on the object to the plane of projection.
 Perspective projection is a geometric technique used to produce a three-dimensional graphic
image on a plane, corresponding to what a person actually sees.
 Any set of parallel lines of the object that are not parallel to the projection plane are projected
into converging lines. Each different set of parallel lines has a separate vanishing point.
 Coordinate positions are transferred to the view plane along lines that converge to a point
called the projection reference point.
 Distances and angles are not preserved, and parallel lines do not remain parallel. Instead,
they all converge at a single point called the center of projection. There are three types of
perspective projections.

Two characteristics of perspective projection are the vanishing point and perspective
foreshortening. Due to foreshortening, objects and lengths appear smaller the farther they are
from the center of projection. The projectors are not parallel, and we specify a center of
projection (COP).
Different types of perspective projections:
 One point perspective projection: only one principal axis has a finite vanishing point.
This perspective projection is the simplest to draw.

 Two point perspective projection: exactly two principal axes have finite vanishing points.
This projection gives a better impression of depth.
 Three point perspective projection: all three principal axes have finite vanishing points.
This perspective projection is the most difficult to draw.

Perspective fore shortening:

The size of the perspective projection of an object varies inversely with the distance of the object
from the center of projection.
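A minimal Python sketch of this relationship, assuming the center of projection at the origin and the view plane at z = d (an illustrative distance): by similar triangles, x' = d·x/z and y' = d·y/z, so the projected size shrinks as z grows.

def perspective_project(x, y, z, d=1.0):
    # Similar triangles give the perspective divide: x' = d*x/z, y' = d*y/z.
    return (d * x / z, d * y / z)

# The same unit-length offset appears smaller as it moves away from the viewer:
print(perspective_project(1, 0, 2))   # (0.5, 0.0)
print(perspective_project(1, 0, 10))  # (0.1, 0.0) -- foreshortened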

Modelling
A 3D modelling system is a 2D modelling system extended with some additional primitives, and
it includes all types of user-defined objects. The standard coordinate system used is called the
world coordinate system, whereas a user-defined coordinate system is called a user coordinate
system.

It is of three types

1. Solid Modelling System


2. Surface Modelling System
3. Wireframe Models

Wireframe Models:

It has a lot of other names also i.e.

1. Edge vertex models


2. Stick figure model
3. Polygonal net
4. Polygonal mesh
5. Visible line detection method
A wireframe model consists of vertices, edges (lines) and polygons. An edge joins two vertices,
and a polygon is a combination of edges and vertices. The edges can be straight or curved. This
model is used to define computer models of parts, especially for computer-assisted drafting
systems.

Wireframe models are skeletons of lines, each line having two endpoints. The visibility or
appearance of a surface can be shown using a wireframe. If any hidden section exists, it is
removed or represented using dashed lines. Hidden-line methods or visible-line methods are
used to determine the hidden surfaces.
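A wireframe model can be sketched as a vertex table plus an edge table of index pairs; the data below (a unit square in the z = 0 plane) is illustrative.

vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]  # vertex table
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]                  # pairs of vertex indices

for (i, j) in edges:
    # A renderer would draw a line segment between these two endpoints.
    print("edge from", vertices[i], "to", vertices[j])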

Advantage

1. It is simple and easy to create.


2. It requires little computer time for creation.
3. It requires a short computer memory, so the cost is reduced.
4. Wireframe provides accurate information about deficiencies of the surface.
5. It is suitable for engineering models composed of straight lines.
6. The clipping process in the wireframe model is also easy.
7. For models that contain curved objects, a degree of roundness and smoothness can be suggested.

Disadvantage

1. It gives information only about the object's outline; it does not give any information about the
internal or complex parts.
2. Because only lines are used, the shape of the object can be lost in the clutter of lines.
3. Each curved line is represented as a collection of many short straight segments defined by
data points, so complexity increases.
Wireframe and solid
Wireframe modeling is used to represent a 3D object with curves and lines, whereas solid modeling
is used to create a complete representation of a surface or a wireframe model, through a series of
additive and subtractive operations. The object is projected into screen space and rendered by
drawing lines at the location of each edge. The term "wire frame" comes from designers using
metal wire to represent the three-dimensional shape of solid objects.
A wireframe is a two-dimensional illustration of a page's interface that specifically focuses on
space allocation and prioritization of content, functionalities available, and intended behaviors.
For these reasons, wireframes typically do not include any styling, color, or graphics.

Hidden surface
1. One of the most challenging problems in computer graphics is the removal of hidden parts
from images of solid objects.
2. In real life, the opaque material of these objects obstructs the light rays from hidden parts
and prevents us from seeing them.
3. In the computer generation, no such automatic elimination takes place when objects are
projected onto the screen coordinate system.
4. Instead, all parts of every object, including many parts that should be invisible are
displayed.
5. To remove these parts and create a more realistic image, we must apply a hidden-line or
hidden-surface algorithm to the set of objects.
6. These algorithms operate on different kinds of scene models, generate various forms of
output, or cater to images of different complexities.
7. All use some form of geometric sorting to distinguish visible parts of objects from those
that are hidden.
8. This is analogous to alphabetical sorting, which differentiates words near the beginning of
the alphabet from those near the end.
9. Geometric sorting locates objects that lie near the observer and are therefore visible.
10. Hidden line and Hidden surface algorithms capitalize on various forms of coherence to
reduce the computing required to generate an image.
11. Different types of coherence are related to different forms of order or regularity in the
image.
12. Scan line coherence arises because the display of a scan line in a raster image is usually
very similar to the display of the preceding scan line.
13. Frame coherence in a sequence of images designed to show motion recognizes that
successive frames are very similar.
14. Object coherence results from relationships between different objects or between separate
parts of the same objects.
15. A hidden surface algorithm is generally designed to exploit one or more of these coherence
properties to increase efficiency.
16. Hidden surface algorithm bears a strong resemblance to two-dimensional scan conversions.

Types of hidden surface detection algorithms

1. Object space methods


2. Image space methods

Object space methods: In this method, various parts of objects are compared; after the
comparison, visible, invisible or partly visible surfaces are determined. These methods generally
decide the visible surface. In the wireframe model, they are used to determine visible lines, so
these algorithms are line based instead of surface based. The method proceeds by determining
the parts of an object whose view is obstructed by other objects and drawing those parts in the
same colour.

Image space methods: Here the positions of the various pixels are determined. This approach is
used to locate the visible surface rather than a visible line. Each point is tested for its visibility:
if a point is visible, the pixel is on, otherwise off. The object closest to the viewer that is pierced
by the projector through a pixel is determined, and that pixel is drawn in the appropriate colour.

These methods are also called a Visible Surface Determination. The implementation of these
methods on a computer requires a lot of processing time and processing power of the computer.

The image space method requires more computations. Each object is defined clearly. Visibility of
each object surface is also determined.
Differentiate between Object space and Image space method

Object Space Method:

1. It is object based; it concentrates on the geometrical relations among the objects in the scene.
2. Here surface visibility is determined.
3. It is performed at the precision with which each object is defined; no resolution is considered.
4. Calculations are not based on the resolution of the display, so a change of object can be easily adjusted.
5. These methods were developed for vector graphics systems.
6. Object-based algorithms operate on continuous object data.
7. Vector displays used for the object method have a large address space.
8. Object precision is suitable for applications where accuracy is required.
9. The image can be enlarged without losing accuracy.
10. If the number of objects in the scene increases, computation time also increases.

Image Space Method:

1. It is a pixel-based method; it is concerned with the final image, i.e. what is visible within each raster pixel.
2. Here line visibility or point visibility is determined.
3. It is performed using the resolution of the display device.
4. Calculations are resolution based, so a change is difficult to adjust.
5. These methods were developed for raster devices.
6. These operate on discrete, pixel-level image data.
7. Raster systems used for image space methods have limited address space.
8. Image precision is suitable for applications where speed is required.
9. Enlarging the image requires recalculation, with some loss of accuracy.
10. In this method complexity increases with the complexity of the visible parts of the scene.
Visible surface detection concept
When we view a picture containing non-transparent objects and surfaces, we cannot see the
objects that lie behind the objects closer to the eye. We must remove these hidden surfaces to
get a realistic screen image. The identification and removal of these surfaces is called the
hidden-surface problem.

There are two approaches for removing hidden surface problems − Object-Space
method and Image-space method. The Object-space method is implemented in physical
coordinate system and image-space method is implemented in screen coordinate system.

When we want to display a 3D object on a 2D screen, we need to identify those parts of a screen
that are visible from a chosen viewing position.

Depth Buffer Z−Buffer Method

This method was developed by Catmull. It is an image-space approach. The basic idea is to test
the z-depth of each surface to determine the closest visible surface.

In this method each surface is processed separately, one pixel position at a time across the
surface. The depth values for a pixel position are compared, and the closest surface determines
the colour to be displayed in the frame buffer.

It is applied very efficiently to polygon surfaces, and surfaces can be processed in any order. To
distinguish the closer polygons from the farther ones, two buffers, named the frame buffer and
the depth buffer, are used.

The depth buffer is used to store a depth value for each (x, y) position as surfaces are processed,
with 0 ≤ depth ≤ 1.

The frame buffer is used to store the intensity (colour) value at each (x, y) position.

The z-coordinates are usually normalized to the range [0, 1]. A z value of 0 indicates the back
clipping plane and a z value of 1 indicates the front clipping plane.
Algorithm

Step 1 − Set the buffer values:

depthbuffer(x, y) = 0
framebuffer(x, y) = background colour

Step 2 − Process each polygon, one at a time:

For each projected (x, y) pixel position of the polygon, calculate the depth z.
If z > depthbuffer(x, y):
    compute the surface colour,
    set depthbuffer(x, y) = z,
    framebuffer(x, y) = surfacecolour(x, y)
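A minimal Python sketch of this per-pixel test, assuming the same [0, 1] depth convention as above (larger z is closer); the buffer layout (dictionaries keyed by pixel) and names are illustrative, a real implementation would use 2D arrays sized to the screen.

WIDTH, HEIGHT = 4, 4
BACKGROUND = (0, 0, 0)

depth_buffer = {(x, y): 0.0 for x in range(WIDTH) for y in range(HEIGHT)}
frame_buffer = {(x, y): BACKGROUND for x in range(WIDTH) for y in range(HEIGHT)}

def plot(x, y, z, color):
    # Keep the fragment only if it is closer than what is stored so far.
    if z > depth_buffer[(x, y)]:
        depth_buffer[(x, y)] = z
        frame_buffer[(x, y)] = color

plot(1, 1, 0.4, (255, 0, 0))  # red surface at depth 0.4
plot(1, 1, 0.7, (0, 0, 255))  # closer blue surface wins the pixel
print(frame_buffer[(1, 1)])   # (0, 0, 255)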

Advantages
 It is easy to implement.
 It reduces the speed problem if implemented in hardware.
 It processes one object at a time.

Disadvantages

 It requires large memory.


 It is a time-consuming process.

Scan-Line Method

It is an image-space method to identify visible surfaces. This method keeps depth information
for only a single scan-line. In order to have one scan-line of depth values, we must group and
process all polygons intersecting a given scan-line at the same time before processing the next
scan-line. Two important tables, the edge table and the polygon table, are maintained for this.

The Edge Table − It contains coordinate endpoints of each line in the scene, the inverse slope of
each line, and pointers into the polygon table to connect edges to surfaces.

The Polygon Table − It contains the plane coefficients, surface material properties, other surface
data, and may be pointers to the edge table.
To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed.
The active list stores only those edges that cross the scan-line in order of increasing x. Also a flag
is set for each surface to indicate whether a position along a scan-line is either inside or outside
the surface.

Pixel positions across each scan-line are processed from left to right. At the left intersection with
a surface, the surface flag is turned on and at the right, the flag is turned off. You only need to
perform depth calculations when multiple surfaces have their flags turned on at a certain scan-line
position.

Area-Subdivision Method
The area-subdivision method takes advantage of area coherence by locating those view areas
that represent part of a single surface. We divide the total viewing area into smaller and smaller
rectangles until each small area is the projection of part of a single visible surface or of no
surface at all.

Continue this process until the subdivisions are easily analyzed as belonging to a single surface
or until they are reduced to the size of a single pixel. An easy way to do this is to successively
divide the area into four equal parts at each step. There are four possible relationships that a
surface can have with a specified area boundary.

 Surrounding surface − One that completely encloses the area.


 Overlapping surface − One that is partly inside and partly outside the area.
 Inside surface − One that is completely inside the area.
 Outside surface − One that is completely outside the area.
The tests for determining surface visibility within an area can be stated in terms of these four
classifications. No further subdivisions of a specified area are needed if one of the following
conditions is true −

 All surfaces are outside surfaces with respect to the area.


 Only one inside, overlapping or surrounding surface is in the area.
 A surrounding surface obscures all other surfaces within the area boundaries.

Back-Face Detection
A fast and simple object-space method for identifying the back faces of a polyhedron is based on
the "inside-outside" tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A,
B, C and D if Ax + By + Cz + D < 0. When an inside point is along the line of sight to the surface,
the polygon must be a back face: we are inside that face and cannot see the front of it from our
viewing position.

We can simplify this test by considering the normal vector N to a polygon surface, which has
Cartesian components (A, B, C).

In general, if V is a vector in the viewing direction from the eye or camera position, then this
polygon is a back face if

V.N > 0

Furthermore, if object descriptions are converted to projection coordinates and the viewing
direction is parallel to the viewing z-axis, then

V = (0, 0, Vz) and V·N = Vz·C

so we only need to consider the sign of C, the z component of the normal vector N.

In a right-handed viewing system with the viewing direction along the negative zv axis, the
polygon is a back face if C < 0. Also, we cannot see any face whose normal has z component
C = 0, since the viewing direction is grazing that polygon. Thus, in general, we can label any
polygon as a back face if its normal vector has a z component value

C <= 0
Similar methods can be used in packages that employ a left-handed viewing system. In these
packages, plane parameters A, B, C and D can be calculated from polygon vertex coordinates
specified in a clockwise direction.

Also, back faces have normal vectors that point away from the viewing position and are
identified by C >= 0 when the viewing direction is along the positive Zv axis. By examining
parameter C for the different planes defining an object, we can immediately identify all the back
faces.

Back-face detection
When we project 3-D objects on a 2-D screen, we need to detect the faces that are hidden on
2D.
Back-Face detection, also known as Plane Equation method, is an object space method in
which objects and parts of objects are compared to find out the visible surfaces. Let us consider
a triangular surface whose visibility needs to be decided. The idea is to check whether the triangle
is facing away from the viewer or not. If it is, we discard it for the current frame and
move on to the next one. Each surface has a normal vector. If this normal vector is pointing in
the direction of the center of projection, then it is a front face and can be seen by the viewer. If
this normal vector is pointing away from the center of projection, then it is a back face and can
not be seen by the viewer.
Algorithm for a left-handed system:
1) Compute N for every face of the object.
2) If C (the z component of N) > 0,
       it is a back face: don't draw it;
   else
       it is a front face: draw it.
The Back-face detection method is very simple. For the left-handed system, if the Z component
of the normal vector is positive, then it is a back face. If the Z component of the vector is
negative, then it is a front face.
Algorithm for a right-handed system:
1) Compute N for every face of the object.
2) If C (the z component of N) < 0,
       it is a back face: don't draw it;
   else
       it is a front face: draw it.
Thus, for the right-handed system, if the Z component of the normal vector is negative, then it
is a back face. If the Z component of the vector is positive, then it is a front face.
Back-face detection can identify all the hidden surfaces in a scene that contain non-overlapping
convex polyhedra.
Recalling the polygon surface equation :
Ax + By + Cz + D < 0
While determining whether a surface is a back face or a front face, we must also consider the
viewing direction. The normal of the surface is given by:
N = (A, B, C)
A polygon is a back face if Vview·N > 0. But it should be kept in mind that after application of
the viewing transformation, the viewer is looking down the negative z-axis. Therefore, a polygon
is a back face if:
(0, 0, -1)·N > 0
or if C < 0
The viewer will also be unable to see a surface with C = 0; therefore, we identify a polygon
surface as a back face if C <= 0.

Considering case (a),
V·N = |V||N| cos(angle)
If 0 <= angle < 90, then cos(angle) > 0 and V·N > 0.
Hence, it is a back face.
Considering case (b),
V·N = |V||N| cos(angle)
If 90 < angle <= 180, then cos(angle) < 0 and V·N < 0.
Hence, it is a front face.

Limitations :

1) This method works fine for convex polyhedra, but not necessarily for concave
polyhedra.

2) This method can only be used on solid objects modeled as a polygon mesh.
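To make the test concrete, here is a minimal Python sketch for a right-handed system viewing down the negative z-axis, as described above; the counter-clockwise vertex ordering convention and the helper names are assumptions for illustration.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def is_back_face(v0, v1, v2):
    # Normal N = (A, B, C) from two edge vectors of the (planar) polygon,
    # with vertices listed counter-clockwise when seen from the front.
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    normal = cross(e1, e2)
    # With the viewer looking along -z, the face is a back face if C <= 0.
    return normal[2] <= 0

print(is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # False: faces the viewer
print(is_back_face((0, 0, 0), (0, 1, 0), (1, 0, 0)))  # True: faces away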

A-Buffer Method


The A-buffer method in computer graphics is a general hidden-face detection mechanism suited
to medium-scale virtual-memory computers. It is also known as the anti-aliased, area-averaged,
accumulation-buffer method. It extends the depth-buffer (Z-buffer) algorithm: where the
depth-buffer method can only be used for opaque objects, the A-buffer method also handles
transparent objects. Although the A-buffer method requires more memory, different surface
colours can be correctly composed using it. Being a descendant of the Z-buffer algorithm, each
position in the buffer can reference a linked list of surfaces. The key data structure in the
A-buffer is the accumulation buffer.
Each position in the A buffer has 2 fields :

1) Depth field
2) Surface data field or Intensity field

The depth field stores a positive or negative real number. The surface data field can store surface
intensity information or a pointer to a linked list of the surfaces that contribute to that pixel
position.

If the value of the depth field is >= 0, the number stored at that position is the depth of a single
surface overlapping the corresponding pixel area. The second field, the intensity field, then
stores the RGB components of the surface colour at that point and the percentage of pixel
coverage.

Multiple-surface contributions to the pixel intensity are indicated by a depth < 0. The intensity
field then stores a pointer to a linked list of surface data.

The A-buffer method is slightly more costly than the Z-buffer method because it requires more
memory. It proceeds just like the depth-buffer algorithm, except that depth and opacity are used
together to determine the final colour of a pixel, which allows the A-buffer method to show
transparent objects.

The surface buffer in the A buffer method includes :

1. Depth
2. Surface Identifier
3. Opacity Parameter
4. Percent of area coverage
5. RGB intensity components
6. Pointer to the next surface

The other advantage of the A-buffer method is that it provides anti-aliasing in addition to what
the Z-buffer does. In the A-buffer method, each pixel is made up of a group of sub-pixels. The
final colour of a pixel is computed by summing up the contributions of all of its sub-pixels.
Because this accumulation takes place at the sub-pixel level, the A-buffer method gets the name
accumulation buffer.
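The two-field entry can be sketched as a small data structure; the class and field names below are illustrative assumptions, following the surface-buffer list above.

class Surface:
    def __init__(self, depth, rgb, opacity, coverage, next_surface=None):
        self.depth = depth          # depth of this surface at the pixel
        self.rgb = rgb              # RGB intensity components
        self.opacity = opacity      # opacity parameter (1.0 = opaque)
        self.coverage = coverage    # percent of pixel area covered
        self.next = next_surface    # pointer to the next surface in the list

class ABufferEntry:
    def __init__(self):
        self.depth = 0.0    # >= 0: single surface; < 0: multiple surfaces
        self.data = None    # RGB data, or the head of a linked surface list

entry = ABufferEntry()
entry.depth = -1.0  # multiple transparent surfaces contribute to this pixel
entry.data = Surface(0.3, (255, 0, 0), 0.5, 1.0,
                     Surface(0.8, (0, 0, 255), 1.0, 1.0))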

Light sources
In the world of computer graphics, light sources play an essential role in bringing realism and
visual appeal to the computer-generated environment.

Whether you are looking to recreate the sunset or the diffuse reflection of a cloudy day,
understanding different types of light sources can help you achieve the desired effects. Each type
of light source has its own unique characteristics and uses.

Types of light sources

The most common types of light sources are explained below.


Ambient light source
 Provides uniform light in all directions, modeling inter-reflections.
 Does not have a specific direction.
 Ensures that no part of the scene is entirely in the dark.
 Does not contribute to the shadow of an object.
Directional light source
 Emits parallel rays as if the source is infinitely far away from all the surfaces in the scene
 Direction of light is constant for all surfaces in the scene
o Light position is not important.
o Viewer position is not important.
o Surface angle is important.
 Creates shadows based on its direction despite its infinite distance.



Point light source
 Emits light equally in all directions from a single point, similar to a light bulb
 Direction of the light is different for different points
o Light position is important.
o Viewer position is important.
o Surface angle important.



 A normalized vector is calculated for every point we light by using the formula given
below.
l = (b − c) / ∥b − c∥

Where,

 b is the point of the light source


 c is the point on the surface



 Light attenuation is the decrease in the brightness or intensity of the light with distance
from the source.
o As light travels through a medium or space, it encounters particles or objects that
can scatter or absorb light, decreasing its intensity with increasing distance from
the source.
o So, the objects may appear less bright when they are farther away from a light
source.
 Light attenuation is calculated as given below.
atten = 1 / fatten = 1 / ∥b − c∥
 The light is brightest at the source b and diminishes as we move away.
 It gives the scene a more realistic appearance.
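Putting the light vector and attenuation formulas together, here is a minimal Python sketch of diffuse shading from a point light; the helper names are illustrative, and the inverse-distance falloff follows the formula above (many renderers use an inverse-square or polynomial falloff instead).

import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def point_light_diffuse(b, c, n, light_intensity=1.0):
    # b: light position, c: surface point, n: unit surface normal.
    to_light = tuple(bi - ci for bi, ci in zip(b, c))
    l = normalize(to_light)                      # l = (b - c) / ||b - c||
    dist = math.sqrt(sum(x * x for x in to_light))
    atten = 1.0 / dist                           # atten = 1 / ||b - c||
    n_dot_l = max(0.0, sum(ni * li for ni, li in zip(n, l)))
    return light_intensity * atten * n_dot_l

print(point_light_diffuse(b=(0, 5, 0), c=(0, 0, 0), n=(0, 1, 0)))  # 0.2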
Spotlights
 Spotlights are point light sources.
 Their intensity falls off directionally.
 A cutoff angle defines the cone of the spotlight.



Area light source
 Defines a 2D emissive surface, like a disc or polygon.
 Example: fluorescent light panels.

Illumination method
An illumination model, also known as a shading model or lighting model, is used to
calculate the intensity of light that is reflected at a given point on a surface. The lighting
effect depends on three factors:
1. Light Source :
The light source is the light-emitting source. There are three types of light sources:
1. Point sources – the source emits rays in all directions (a bulb in a room).
2. Parallel sources – can be considered as a point source that is far from the
surface (the sun).
3. Distributed sources – rays originate from a finite area (a tubelight).
Their position, electromagnetic spectrum and shape determine the lighting effect.
2. Surface :
When light falls on a surface, part of it is reflected and part of it is absorbed. The
surface structure decides the amount of reflection and absorption of light. The
position of the surface and the positions of all the nearby surfaces also determine the
lighting effect.
3. Observer :
The observer's position and sensor spectrum sensitivities also affect the lighting
effect.

1. Ambient Illumination :
Assume you are standing on a road, facing a building with a glass exterior; sun rays
falling on that building are reflected from it and then fall on the object under
observation. That is ambient illumination. In simple words, ambient illumination is
illumination where the source of light is indirect.
The reflected intensity Iamb of any point on the surface is:

Iamb = Ka * Ia

where Ia is the intensity of the ambient light and Ka is the ambient reflection
coefficient of the surface (0 <= Ka <= 1).

2. Diffuse Reflection :
Diffuse reflection occurs on surfaces that are rough or grainy. In this reflection the
brightness of a point depends upon the angle between the light source direction and
the surface normal.
The reflected intensity Idiff of a point on the surface is:

Idiff = Kd * Il * cos(θ) = Kd * Il * (N·L)

where Il is the intensity of the light source, Kd is the diffuse reflection coefficient,
N is the unit surface normal and L is the unit vector towards the light source.

3. Specular Reflection :
When light falls on a shiny or glossy surface, most of it is reflected back; such
reflection is known as specular reflection.
The Phong model is an empirical model for specular reflection which provides the
formula for calculating the reflected intensity Ispec:

Ispec = Ks * Il * (R·V)^ns

where Ks is the specular reflection coefficient, ns is the shininess exponent, R is the
direction of ideal specular reflection and V is the unit vector towards the viewer.
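Summing the three terms gives the total reflected intensity. Below is a hedged Python sketch of that sum; the coefficient values are illustrative assumptions, and all vectors are assumed to be unit length.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def illuminate(N, L, R, V, Ia, Il, Ka=0.1, Kd=0.7, Ks=0.5, ns=32):
    i_amb = Ka * Ia                               # ambient term
    i_diff = Kd * Il * max(0.0, dot(N, L))        # diffuse (Lambert) term
    i_spec = Ks * Il * max(0.0, dot(R, V)) ** ns  # specular (Phong) term
    return i_amb + i_diff + i_spec

# Light directly overhead, viewer aligned with the reflection direction:
print(illuminate(N=(0, 1, 0), L=(0, 1, 0), R=(0, 1, 0), V=(0, 1, 0),
                 Ia=1.0, Il=1.0))  # 0.1 + 0.7 + 0.5 = 1.3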

Color model
The colour spaces in image processing aim to facilitate the specification of colours in some
standard way.
Different colour models are used in different fields, for example in hardware and in
applications for creating animation.
Let us see each colour model and its applications.
 RGB
 CMYK
 HSV
 YIQ
RGB: The RGB colour model is the most common colour model used in digital image
processing and OpenCV. A colour image consists of 3 channels, one channel for each colour.
Red, green and blue are the main colour components of this model, and all other colours are
produced by proportional mixtures of these three. A value of 0 represents black, and as the
value increases the colour intensity increases.

Properties:
 This is an additive colour model. The colours are added to the black.
 3 main channels: Red, Green and Blue.
 Used in DIP, openCV and online logos.

Colour combination:
Green(255) + Red(255) = Yellow
Green(255) + Blue(255) = Cyan
Red(255) + Blue(255) = Magenta
Red(255) + Green(255) + Blue(255) = White

CMYK: The CMYK colour model is widely used in printers. It stands for Cyan, Magenta,
Yellow and Black (the key). It is a subtractive colour model: the point (1, 1, 1) represents black
and (0, 0, 0) represents white. Because the model is subtractive, a channel value is subtracted
from 1 to move from the most intense to the least intense colour value.

1 − RGB = CMY
Cyan is the negative of red.
Magenta is the negative of green.
Yellow is the negative of blue.
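The 1 − RGB relationship can be shown directly; the sketch below assumes channels normalized to [0, 1].

def rgb_to_cmy(r, g, b):
    # CMY = 1 - RGB: each channel is the complement of its RGB counterpart.
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    # Converting back is the same subtraction.
    return (1 - c, 1 - m, 1 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0): cyan is its negative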
HSV: The image consists of three channels: hue, saturation and value. This colour model does
not use the primary colours directly; it uses colour in the way humans perceive it. The HSV
colour model is represented by a cone.
Hue is the colour component. Since a cone represents the HSV model, the hue represents
different colours in different angle ranges:
Red falls between 0 and 60 degrees in the HSV cone.
Yellow falls between 61 and 120 degrees in the HSV cone.
Green falls between 121 and 180 degrees in the HSV cone.
Cyan falls between 181 and 240 degrees in the HSV cone.
Blue falls between 241 and 300 degrees in the HSV cone.
Magenta falls between 301 and 360 degrees in the HSV cone.
Saturation, as the name suggests, describes the percentage of the colour. This value often lies
in the 0 to 1 range, with 0 being grey and 1 being the pure primary colour; saturation thus
describes how far a colour is from grey.
Value represents the intensity (brightness) of the chosen colour, as a percentage from 0 to 100:
0 is black, and 100 is the brightest and fully reveals the colour.
The HSV model is used in histogram equalization and in converting grayscale images to RGB
colour images.
YIQ: YIQ is the colour model most widely used in television broadcasting. Y stands for the
luminance part and IQ for the chrominance part. In black-and-white television, only the
luminance part (Y) was broadcast; the Y value is similar to the grayscale value. The colour
information is represented by the IQ part.
There exists a standard formula to convert RGB into YIQ and vice versa.
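For reference, the commonly used NTSC conversion can be sketched as follows; channels are assumed normalized to [0, 1] and the coefficients are the standard (rounded) NTSC values.

def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    i = 0.596 * r - 0.274 * g - 0.322 * b   # chrominance (in-phase)
    q = 0.211 * r - 0.523 * g + 0.312 * b   # chrominance (quadrature)
    return (y, i, q)

print(rgb_to_yiq(1.0, 1.0, 1.0))  # white: Y = 1.0, I and Q are (near) 0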

YIQ model is used in the conversion of grayscale images to RGB colour images.

Shading
Shading is concerned with the implementation of the illumination model at the pixel points or
polygon surfaces of the graphics objects. In particular, in Computer Graphics, Shading is a method
used to create or enhance the illusion of depth in an image by varying the amount of darkness in
the image. It can also be used to make some objects appear to be in front of or behind other objects
in the image. You need to know that there are various types of shading - Flat Shading (a simple
and fast method to specify the color for an object), Gouraud Shading (implemented to improve the
smooth transitions of the color on round objects), and Phong Shading (an interpolation technique
for surface shading). Let's discuss Shading and all its types in detail in this video.

Shading is referred to as the implementation of the illumination model at the pixel points or
polygon surfaces of the graphics objects.

Shading model is used to compute the intensities and colors to display the surface. The shading
model has two primary ingredients: properties of the surface and properties of the illumination
falling on it. The principal surface property is its reflectance, which determines how much of the
incident light is reflected. If a surface has different reflectance for the light of different
wavelengths, it will appear to be colored.

The illumination of an object is also significant in computing intensity. The scene may have
background illumination that is uniform from all directions, called diffuse illumination.
Shading models determine the shade of a point on the surface of an object in terms of a number of
attributes. The shading model can be decomposed into three parts: a contribution from diffuse
illumination, contributions from one or more specific light sources, and a transparency effect. Each
of these effects contributes a shading term E, and the terms are summed to find the total energy
coming from a point on the object. This is the energy a display should generate to present a realistic
image of the object. The energy comes not from a single point on the surface but from a small area
around the point.

The simplest form of shading considers only diffuse illumination:

Epd = Rp Id

where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse
illumination falling on the entire scene, and Rp is the reflectance coefficient at P, which ranges
from 0 to 1. The shading contribution from specific light sources will cause the shade of a surface
to vary as its orientation with respect to the light sources changes, and will also include specular
reflection effects. Consider a point P on a surface with light arriving at an angle of incidence i,
the angle between the surface normal Np and a ray to the light source. If the energy Ips arriving
from the light source is reflected uniformly in all directions, called diffuse reflection, we have

Eps = (Rp cos i) Ips

This equation shows the reduction in the intensity of a surface as it is tipped obliquely to the light
source. If the angle of incidence i exceeds 90°, the surface is hidden from the light source and we
must set Eps to zero.

Constant Intensity Shading

A fast and straightforward method for rendering an object with polygon surfaces is constant-
intensity shading, also called flat shading. In this method, a single intensity is calculated for each
polygon, and all points over the surface of the polygon are displayed with the same intensity
value. Constant shading can be useful for quickly displaying the general appearance of a curved
surface.
In general, flat shading of polygon facets provides an accurate rendering for an object if all of the
following assumptions are valid:

The object is a polyhedron and is not an approximation of an object with a curved surface.

All light sources illuminating the object are sufficiently far from the surface so that N·L and the
attenuation function are constant over the surface (where N is the unit normal to the surface and
L is the unit direction vector to the point light source from a position on the surface).

The viewing position is sufficiently far from the surface so that V·R is constant over the surface
(where V is the unit vector pointing to the viewer from the surface position and R represents a
unit vector in the direction of ideal specular reflection).

Gouraud shading

This intensity-interpolation scheme, developed by Gouraud and usually referred to as Gouraud
shading, renders a polygon surface by linearly interpolating intensity values across the surface.
Intensity values for each polygon are matched with the values of adjacent polygons along the
common edges, thus eliminating the intensity discontinuities that can occur in flat shading.

Each polygon surface is rendered with Gouraud Shading by performing the following calculations:

1. Determine the average unit normal vector at each polygon vertex.

2. Apply an illumination model to each vertex to determine the vertex intensity.
3. Linearly interpolate the vertex intensities over the surface of the polygon.

At each polygon vertex, we obtain a normal vector by averaging the surface normals of all
polygons sharing that vertex, as shown in the figure.

Thus, for any vertex position V, we acquire the unit vertex normal with the calculation
Once we have the vertex normals, we can determine the intensity at the vertices from a lighting
model.

The next step is to interpolate intensities along the polygon edges. For each scan line, the
intensities at the intersections of the scan line with polygon edges are linearly interpolated from
the intensities at the edge endpoints. For example, suppose the polygon edge with endpoint
vertices at positions 1 and 2 is intersected by the scan line at point 4. A fast method for obtaining
the intensity at point 4 is to interpolate between intensities I1 and I2 using only the vertical
displacement of the scan line.

Similarly, the intensity at the right intersection of this scan line (point 5) is interpolated from the
intensity values at vertices 2 and 3. Once these bounding intensities are established for a scan line,
an interior point (such as point P in the figure) is interpolated from the bounding intensities
at points 4 and 5.

Incremental calculations are used to obtain successive edge intensity values between scan lines
and to obtain successive intensities along a scan line as shown in fig:
Once the intensity at an edge position (x, y) has been obtained, the intensity along this edge for
the next scan line, y − 1, can be obtained with a simple incremental calculation.

Similar calculations are used to obtain intensities at successive horizontal pixel positions along
each scan line.

When surfaces are to be rendered in colour, the intensity of each colour component is calculated
at the vertices. Gouraud shading can be combined with a hidden-surface algorithm to fill in the
visible polygons along each scan line. An example of an object shaded with the Gouraud method
is shown in the figure.

Gouraud shading removes the intensity discontinuities associated with the constant-shading
model, but it has some other deficiencies. Highlights on the surface are sometimes displayed with
anomalous shapes, and the linear intensity interpolation can cause bright or dark intensity streaks,
called Mach bands, to appear on the surface. These effects can be reduced by dividing the
surface into a greater number of polygon faces or by using other methods, such as Phong shading,
that require more calculations.
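The interpolation steps above can be sketched in a few lines; the intensity values and interpolation parameters below are illustrative, with point numbering following the description above (points 4 and 5 bound the span containing P).

def lerp(i_a, i_b, t):
    # Linear interpolation: t = 0 gives i_a, t = 1 gives i_b.
    return i_a + t * (i_b - i_a)

i1, i2, i3 = 0.9, 0.3, 0.6          # intensities at vertices 1, 2, 3
i4 = lerp(i1, i2, 0.5)              # scan line halfway down edge 1-2
i5 = lerp(i2, i3, 0.5)              # scan line halfway down edge 2-3
ip = lerp(i4, i5, 0.25)             # interior point a quarter across the span
print(i4, i5, ip)                   # 0.6 0.45 0.5625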
Phong Shading

A more accurate method for rendering a polygon surface is to interpolate normal vectors and
then apply the illumination model at each surface point. This method, developed by Phong Bui
Tuong, is called Phong shading or normal-vector interpolation shading. It displays more realistic
highlights on a surface and greatly reduces the Mach-band effect.

A polygon surface is rendered using Phong shading by carrying out the following steps:

1. Determine the average unit normal vector at each polygon vertex.


2. Linearly interpolate the vertex normals over the surface of the polygon.
3. Apply an illumination model along each scan line to calculate projected pixel intensities
for the surface points.

The surface normal is interpolated along a polygon edge between two vertices, as shown in the figure.

Incremental methods are used to evaluate normals between scan lines and along each scan line. At
each pixel position along a scan line, the illumination model is applied to determine the surface
intensity at that point.

Intensity calculations using an approximated normal vector at each point along the scan line
produce more accurate results than the direct interpolation of intensities, as in Gouraud Shading.
The trade-off, however, is that Phong shading requires considerably more calculations.
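A minimal sketch of the difference from Gouraud shading: the normal, not the intensity, is interpolated, and it must be re-normalized before the illumination model is applied at each pixel. The vectors below are illustrative.

import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def interpolate_normal(n_a, n_b, t):
    # Linearly interpolate two unit normals, then re-normalize, since the
    # interpolated vector is generally not unit length.
    blended = tuple(a + t * (b - a) for a, b in zip(n_a, n_b))
    return normalize(blended)

n1 = normalize((0.0, 1.0, 0.0))
n2 = normalize((1.0, 1.0, 0.0))
print(interpolate_normal(n1, n2, 0.5))
# At each pixel, the lighting model (ambient + diffuse + specular, as
# sketched earlier) would be evaluated with this interpolated normal.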
Unit 4
Multimedia
The words "multi" and "media" are combined to form the word multimedia. The word "multi"
signifies "many." Multimedia is a type of medium that allows information to be easily
transferred from one location to another. Multimedia is the presentation of text, pictures, audio,
and video with links and tools that allow the user to navigate, engage, create, and communicate
using a computer. Multimedia refers to the computer-assisted integration of text, drawings, still
and moving images(videos) graphics, audio, animation, and any other media in which any type
of information can be expressed, stored, communicated, and processed digitally.
To begin, a computer must be present to coordinate what you see and hear, as well as to interact
with. Second, there must be interconnections between the various pieces of information. Third,
you’ll need navigational tools to get around the web of interconnected data. Multimedia is being
employed in a variety of disciplines, including education, training, and business.

Categories of Multimedia

1. Linear Multimedia

It is also called non-interactive multimedia. In the case of linear multimedia, the end-user cannot
control the content of the application; it has practically no interactivity of any kind. Some
multimedia projects, such as movies, present material in a linear fashion from beginning
to end. A linear multimedia application lacks all the features with the help of which a user can
interact with the application, such as the ability to choose different options, click on
icons, control the flow of the media, or change the pace at which the media is displayed. Linear
multimedia works very well for providing information to a large group of people, such as at
training sessions, seminars, workplace meetings, etc.

2. Non-Linear Multimedia

In non-linear multimedia, the end-user is given navigational control to rove through the
multimedia content at will. The user can control the access of the application: non-linear
multimedia offers the user interactivity to control the movement of data. Examples are computer
games, websites, self-paced computer-based training packages, etc.

Multimedia Application
Multimedia indicates that, in addition to text, graphics/drawings, and photographs, computer
information can be represented using audio, video, and animation. Multimedia is used in
1. Education

In the subject of education, multimedia is becoming increasingly popular. It is often used to


produce study materials for pupils and to ensure that they have a thorough comprehension of
various disciplines. Edutainment, which combines education and entertainment, has become
highly popular in recent years. This system gives learning in the form of enjoyment to the user.

2. Entertainment

The usage of multimedia in films creates a unique auditory and video impression. Today,
multimedia has completely transformed the art of filmmaking around the world. Multimedia is
the only way to achieve difficult effects and actions.
The entertainment sector makes extensive use of multimedia. It’s particularly useful for creating
special effects in films and video games. The most visible illustration of the emergence of
multimedia in entertainment is music and video apps. Interactive games become possible thanks
to the use of multimedia in the gaming business. Video games are more interesting because of
the integrated audio and visual effects.
3. Business

Marketing, advertising, product demos, presentation, training, networked communication, etc.


are applications of multimedia that are helpful in many businesses. The audience can quickly
understand an idea when multimedia presentations are used. It gives a simple and effective
technique to attract visitors’ attention and effectively conveys information about numerous
products. It’s also utilized to encourage clients to buy things in business marketing.
4. Technology & Science

In the sphere of science and technology, multimedia has a wide range of applications. It can
communicate audio, films, and other multimedia documents in a variety of formats. Only
multimedia can make live broadcasting from one location to another possible.
It is beneficial to surgeons, who can rehearse intricate procedures such as brain tumour removal
and reconstructive surgery using images made from imaging scans of the human body. Plans can
be produced more efficiently to cut expenses and complications.
5. Fine Arts

Multimedia artists work in the fine arts, combining approaches employing many media and
incorporating viewer involvement in some form. For example, a variety of digital mediums can
be used to combine movies and operas.
Digital artist is a new word for these types of artists. Digital painters make digital paintings,
matte paintings, and vector graphics of many varieties using computer applications.
6. Engineering
Multimedia is frequently used by software engineers in computer simulations for military or
industrial training. It is also used in software interfaces, created as a collaboration between
creative experts and software engineers, and for visualizing and presenting detailed calculations.

Architecture of Multimedia
Multimedia encompasses a large variety of technologies and the integration of multiple
architectures interacting in real time. All of these multimedia capabilities must integrate with the
standard user interfaces such as Microsoft Windows. The following figure describes the
architecture of a multimedia workstation environment.

The right side shows the new architectural entities required for supporting multimedia
applications. For each special device, such as scanners, video cameras, VCRs and sound
equipment, a software device driver is needed to provide the interface from an application to the
device. The GUI requires control extensions to support applications such as full-motion video.
High Resolution Graphics Display
The various graphics standards, such as MDA, CGA and XGA, have demonstrated the increasing
demand for higher resolutions for GUIs. Combined graphics and imaging applications require
functionality at three levels, provided by three classes of single-monitor architecture:

(i) VGA mixing: In VGA mixing, the image acquisition memory serves as the display source
memory, thereby fixing its position and size on screen.

(ii) VGA mixing with scaling: The use of scaler ICs allows sizing and positioning of images in
pre-defined windows. Resizing the window causes the image to be retrieved again.

(iii) Dual-buffered VGA mixing/scaling: Double-buffer schemes maintain the original image
in a decompression buffer and the resized image in a display buffer.

The IMA Architectural Framework

The Interactive Multimedia Association has a task group to define an architectural framework for
multimedia to provide interoperability. The task group has concentrated on desktops and
servers. The desktop focus is to define interchange formats, which allow multimedia objects
to be displayed on any workstation.
The architectural approach taken by the IMA is based on defining interfaces to a multimedia
interface bus. This bus would be the interface between systems and multimedia sources,
providing streaming I/O services, including filters and translators.

Network Architecture for Multimedia Systems:

Multimedia systems need special networks, because large volumes of images and video messages
are being transmitted.

Asynchronous Transfer Mode (ATM) technology simplifies transfers across LANs and WANs.

Task-based multilevel networking

Higher classes of service require more expensive components in the workstations as well as in the
servers supporting the workstation applications.
Rather than impose this cost on all workstations, an alternative approach is to adjust the class of
service to the specific requirements of the user. The class of service can also be adjusted
according to the type of data being handled at a given time.

Technologies of Multimedia
Multimedia technology applies interactive computer elements, such as graphics, text, video,
sound, and animation, to deliver a message. If you have a knack for computer work and are
interested in digital media, read on to discover career and education opportunities available in
this growing specialty.

The definition of multimedia technology includes interactive, computer-based applications that


allow people to communicate ideas and information with digital and print elements.
Professionals in the field use computer software to develop and manage online graphics and
content. The work that media technology specialists produce is used in various media, such as
training programs, web pages, and news sites.

Important Facts About Multimedia Technology

Median Salary (2018): $72,520 (multimedia artists and animators)
Key Skills: Time management; organization; problem-solving; communication
Similar Occupations: Art directors; graphic designers; web designers and developers
Job Outlook: 8% (for multimedia artists and animators)


Multimedia Database
A multimedia database is a collection of interrelated multimedia data that includes text,
graphics (sketches, drawings), images, animations, video, audio, etc., often comprising vast
amounts of multisource multimedia data. The framework that manages different types of
multimedia data that can be stored, delivered and utilized in different ways is known as a
multimedia database management system. There are three classes of multimedia database:
static media, dynamic media and dimensional media.
Content of Multimedia Database management system :
1. Media data – The actual data representing an object.
2. Media format data – Information such as sampling rate, resolution, encoding scheme etc.
about the format of the media data after it goes through the acquisition, processing and
encoding phase.
3. Media keyword data – Keywords description relating to the generation of data. It is also
known as content descriptive data. Example: date, time and place of recording.
4. Media feature data – Content dependent data such as the distribution of colors, kinds of
texture and different shapes present in data.

Types of multimedia applications based on data management characteristics are :


1. Repository applications – A Large amount of multimedia data as well as meta-data(Media
format date, Media keyword data, Media feature data) that is stored for retrieval purpose,
e.g., Repository of satellite images, engineering drawings, radiology scanned pictures.
2. Presentation applications – They involve delivery of multimedia data subject to temporal
constraint. Optimal viewing or listening requires DBMS to deliver data at certain rate
offering the quality of service above a certain threshold. Here data is processed as it is
delivered. Example: Annotating of video and audio data, real-time editing analysis.
3. Collaborative work using multimedia information – It involves executing a complex task
by merging drawings, changing notifications. Example: Intelligent healthcare network.

There are still many challenges to multimedia databases, some of which are :
1. Modelling – Work in this area can draw on both database and information retrieval
techniques; documents constitute a specialized area and deserve special consideration.
2. Design – The conceptual, logical and physical design of multimedia databases has not yet
been addressed fully as performance and tuning issues at each level are far more complex as
they consist of a variety of formats like JPEG, GIF, PNG, MPEG which is not easy to convert
from one form to another.
3. Storage – Storage of multimedia database on any standard disk presents the problem of
representation, compression, mapping to device hierarchies, archiving and buffering during
input-output operation. In DBMS, a ”BLOB”(Binary Large Object) facility allows untyped
bitmaps to be stored and retrieved.
4. Performance – For an application involving video playback or audio-video synchronization,
physical limitations dominate. The use of parallel processing may alleviate some problems
but such techniques are not yet fully developed. Apart from this multimedia database
consume a lot of processing time as well as bandwidth.
5. Queries and retrieval –For multimedia data like images, video, audio accessing data
through query opens up many issues like efficient query formulation, query execution and
optimization which need to be worked upon.

Areas where multimedia database is applied are :


 Documents and record management : Industries and businesses that keep detailed records
and variety of documents. Example: Insurance claim record.
 Knowledge dissemination : Multimedia database is a very effective tool for knowledge
dissemination in terms of providing several resources. Example: Electronic books.
 Education and training : Computer-aided learning materials can be designed using
multimedia sources which are nowadays very popular sources of learning. Example: Digital
libraries.
 Marketing, advertising, retailing, entertainment and travel. Example: a virtual tour of cities.
 Real-time control and monitoring : Coupled with active database technology, multimedia
presentation of information can be very effective means for monitoring and controlling
complex tasks Example: Manufacturing operation control.

Compression and decompression


Compression software works by using mathematical equations to scan file data and look for
repeating patterns. The software then replaces these repeating patterns with smaller pieces of
data, or code, that take up less room. Once the compression software has identified a repeating
pattern, it replaces that pattern with a smaller code that also shows the locations of the pattern.
For example, in a picture, compression software replaces every instance of the color red with a
code for red that also indicates everywhere in the picture red occurs.
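As a toy illustration of this pattern-replacement idea, here is a run-length encoding sketch in Python; real compressors, such as those behind .zip, use far more sophisticated dictionary and entropy coding, so this is illustrative only.

def rle_encode(data):
    # Replace each run of repeated symbols with a (symbol, count) pair.
    encoded = []
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        encoded.append((data[i], run))
        i += run
    return encoded

print(rle_encode("RRRRRGGBBBB"))  # [('R', 5), ('G', 2), ('B', 4)]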

Types of Compression

Compressed files usually end with .zip, .sit or .tar. These suffixes are called extensions, and they
indicate different compression formats, that is, different types of software used to compress the
files. For PCs, .zip is the most common; .sit is often used with Macs and .tar with Linux. When
you see a file with one of these extensions, it may be either a single large file or a group of files
bundled together.

Lossless Compression

Lossless compression is a way to compress files without losing any data. This method shoves
the data closer together by replacing it with a type of shorthand, and it can reduce file sizes by
around half. The .zip format uses lossless compression. With this form, the file decompresses to
an exact duplicate of the original file, with the same quality. However, it cannot compress files
to a really small size, making it less useful for very large files.
Lossy Compression

Lossy compression can make files up to 80 percent smaller. Lossy compression software
removes some redundant data from a file, and because data is removed, the quality of the
decompressed file is lower than that of the original. This method compresses graphic, audio and
video files, and the slight damage to quality may not be very noticeable. JPEG uses lossy
compression, which is why files converted to JPEG lose some quality. MP3 also uses lossy
compression to fit a great deal of music in a small space, although the sound quality is lower
than with WAV, which uses lossless compression.

Decompression

In order to use a compressed file, you must first decompress it. The software used to decompress
depends on how the file was compressed in the first place. To decompress a .zip file you need
software, such as WinZip. To decompress a .sit file, you need the Stuffit Expander program.
WinZip does not decompress .sit files, but one version of StuffIt Expander can decompress both
.zip and .sit files. Files ending in .sea or .exe are called self-extracting files. These are
compressed files that do not require any special software to decompress. Just click on the file
and it will automatically decompress and open.

Full motion
A full-motion video (FMV) is the rapid display of a series of images by a computer in such a way
that the person viewing it perceives fluid movement. An FMV can consist of live
action, animation, computer-generated imagery or a combination of those formats. It typically
includes sound and can include text superimposed over the video.
An FMV is pre-recorded or pre-rendered and is stored as compressed data on a disk, such as a
compact disc (CD), a digital video disc (DVD) or a computer's hard disk. Compression is used in
order to decrease the amount of disk space needed to store the data, which is then decompressed
as the video is played back.

As in the projection of motion pictures, full-motion video images must be displayed at a rate of at
least 24 frames per second for the video to appear to be seamless and smooth. Most full-motion
videos are displayed at 30 frames per second, the same rate that television images are transmitted.
If the computer system on which the FMV is being stored or viewed is not able to decompress and
display the data quickly enough that at least 24 frames per second can be shown, the video will
appear to be choppy.

The most common use of the term "full-motion video" refers to the use of pre-recorded or pre-
rendered videos in games for computers or video-game consoles. Full-motion video technology
also can be used to display movies, television shows, instructional videos or educational videos on
a computer. The special features on some movie DVDs include short games that include the use
of full-motion video
Digital voice and audio
Digital Audio
Sound is made up of continuous analog sine waves that tend to repeat, depending on the music or
voice. The analog waveforms are converted into digital format by an analog-to-digital converter
(ADC) using a sampling process.
Sampling process
Sampling is a process where the analog signal is sampled over time at regular intervals to obtain
the amplitude of the analog signal at the sampling time.

Sampling rate
The regular interval at which the sampling occurs is called the sampling rate.

Digital Voice
Speech is analog in nature and is converted to digital form by an analog-to-digital converter (ADC).
An ADC takes an input signal from a microphone and converts the amplitude of the sampled
analog signal to an 8, 16 or 32 bit digital value.
The four important factors governing the ADC process are sampling rate, resolution, linearity and
conversion speed.
Sampling Rate: The rate at which the ADC takes a sample of an analog signal.
Resolution: The number of bits utilized for conversion determines the resolution of the ADC.
Linearity: Linearity implies that the sampling is linear at all frequencies and that the amplitude
truly represents the signal.
Conversion Speed: It is the speed at which the ADC converts the analog signal into digital signals.
It must be fast enough.
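
To make the sampling and resolution ideas concrete, here is a small hedged Python sketch that simulates an ADC by sampling a sine wave at regular intervals and quantizing each sample to a given bit depth (the example signal and parameter values are made up for illustration):

import math

def adc_sample(signal, duration_s, sampling_rate_hz, bits):
    # Sample an analog signal at regular intervals and quantize each
    # sample to a signed integer of the given resolution (bit depth).
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit signed samples
    samples = []
    n = int(duration_s * sampling_rate_hz)
    for i in range(n):
        t = i / sampling_rate_hz          # the regular sampling interval
        amplitude = signal(t)             # analog value in [-1.0, 1.0]
        samples.append(round(amplitude * levels))  # quantization step
    return samples

# A 440 Hz sine tone, sampled for 1 ms at 8 kHz with 8-bit resolution.
tone = lambda t: math.sin(2 * math.pi * 440 * t)
print(adc_sample(tone, 0.001, 8000, 8))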

VOICE Recognition System


Voice Recognition Systems can be classified into three types:

1. Isolated-word Speech Recognition.

2. Connected-word Speech Recognition.

3. Continuous Speech Recognition.

1. Isolated-word Speech Recognition.

It provides recognition of a single word at a time. The user must separate every word by a pause.
The pause marks the end of one word and the beginning of the next word.

Stage 1: Normalization

The recognizer's first task is to carry out amplitude and noise normalization to minimize the
variation in speech due to ambient noise, the speaker's voice, the speaker's distance from and
position relative to the microphone, and the speaker's breath noise.

Stage2: Parametric Analysis

It is a preprocessing stage that extracts relevant time-varying sequences of speech parameters. This
stage serves two purposes: (i) It extracts time-varying speech parameters. (ii) It reduces the amount
of data by extracting only the relevant speech parameters.

Training mode: In the training mode of the recognizer, the new frames are added to the reference list.
Recognizer mode: If the recognizer is in recognizer mode, then dynamic time warping is applied
to the unknown patterns to average out the phoneme (the smallest distinguishable sound; spoken
words are constructed by concatenating basic phonemes) time duration. The unknown pattern is
then compared with the reference patterns.

Concept of multimedia
Multimedia consists of the following 5 components:
1. Text
Characters are used to form words, phrases, and paragraphs in the text. Text appears in all
multimedia creations of some kind. The text can be in a variety of fonts and sizes to match the
multimedia software’s professional presentation. Text in multimedia systems can communicate
specific information or serve as a supplement to the information provided by the other media.
2. Graphics
Non-text information, such as a sketch, chart, or photograph, is represented digitally. Graphics
add to the appeal of the multimedia application. In many circumstances, people dislike reading
big amounts of material on computers. As a result, pictures are more frequently used than words
to clarify concepts, offer background information, and so on. Graphics are at the heart of any
multimedia presentation. The use of visuals in multimedia enhances the effectiveness and
presentation of the concept. Windows Picture, Internet Explorer, and other similar programs are
often used to see visuals. Adobe Photoshop is a popular graphics editing program that allows
you to effortlessly change graphics and make them more effective and appealing.
3. Animations
A sequence of still photographs is being flipped through. It’s a set of visuals that give the
impression of movement. Animation is the process of making a still image appear to move. A
presentation can also be made lighter and more appealing by using animation. In multimedia
applications, the animation is quite popular. The following are some of the most regularly used
animation viewing programs: Fax Viewer, Internet Explorer, etc.
4. Video
Photographic images that appear to be in full motion and are played back at speeds of 15 to 30
frames per second. The term video refers to a moving image that is accompanied by sound, such
as a television picture. Of course, text can be included in videos, either as captioning for spoken
words or as text embedded in an image, as in a slide presentation. The following programs are
widely used to view videos: RealPlayer, Windows Media Player, etc.
5. Audio
Any sound, whether it’s music, conversation, or something else. Sound is the most serious aspect
of multimedia, delivering the joy of music, special effects, and other forms of entertainment.
Decibels are a unit of measurement for volume and sound pressure level. Audio files are used as
part of the application context as well as to enhance interaction. Audio files must occasionally
be distributed using plug-in media players when they appear within online applications and
webpages. MP3, WMA, Wave, MIDI, and RealAudio are examples of audio formats. The
following programs are widely used to play audio files: RealPlayer, Windows Media Player, etc.
Hypermedia messaging
HYPER MEDIA MESSAGING
Messaging is one of the major multimedia applications. Messaging started out as a simple text-
based electronic mail application. Multimedia components have made messaging much more
complex. We see how these components are added to messages.
Mobile Messaging
Mobile messaging represents a major new dimension in the user's interaction with the messaging
system. With the emergence of remote access from users using personal digital assistants and
notebook computers, made possible by wireless communications developments supporting wide-
ranging access using wireless modems and cellular telephone links, mobile messaging has
significantly influenced messaging paradigms.
Hypermedia messaging is not restricted to the desktops; it is increasingly being used on the road
through mobile communications in metaphors very different from the traditional desktop
metaphors.

Hypermedia Message Components


A hypermedia message may be a simple message in the form of text with an embedded graphic,
sound track, or video clip, or it may be the result of analysis of material from books, CD-ROMs,
and other on-line applications. An authoring sequence for a message based on such analysis may
consist of the following components.

1. The user may have watched some video presentation on the material and may want to
attach a part of that clip in the message. While watching it, the user marks possible quotes
and saves an annotated copy.

2. Some pages of the book are scanned as images. The images provide an illustration or a
clearer analysis of the topic

3. The user writes the text of the message using a word processor. The text summarizes the
highlights of the analysis and presents conclusions.

These three components must be combined in a message using an authoring tool provided by the
messaging system. The messaging system must prompt the user to enter the name of the addressee
for the message.
The message system looks up the name in an online directory and converts it to an electronic
address as well as routing information before sending the message. The user is now ready to
compose the message. The first step is to copy the word-processed text report prepared in step 3
above into the body area of the message, or to use the text editor provided by the messaging system.
The user then marks the spots where the images are referenced and uses the link and embed
facilities of the authoring tool to link in references to the images. The user also marks one or more
spots for video clips and again uses the link and embed facilities to add the video clips to the
message.
When the message is fully composed, the user signs it (electronic signature) and mails the
message to the addressee (recipient). The addressing system must ensure that the images and video
clips referenced in the message are also transferred to a server "local" to the recipient.

Text Messages

In earlier days, messaging systems used a limited subset of plain ASCII text. Later, messaging
systems were designed to allow users to communicate using short messages. Then, new messaging
standards have added on new capabilities to simple messages. They provide various classes of
service and delivery reports.
Integrated multimedia message standard
Let us review some of the Integrated Multimedia MessageStandards in detail.
Vendor Independent Messaging (VIM)
VIM interface is designed to facilitate messaging between VIM-enabled electronic mail systems
as well as other applications. The VIM interface makes mail and message services available
through a well-defined interface. A messaging service enables its clients to communicate with each
other in a store-and-forward manner. VIM-aware applications may also use one or more address
books.
Address books are used to store information about users, groups, applications, and so on.
VIM Messages:
VIM defines messaging as a store-and-forward method of application-to-application and
program-to-program data exchange. The objects transported by a messaging system are called
messages. The message, along with the address, is sent to the messaging system. The messaging
system providing VIM services accepts the responsibility for routing and delivering the message to
the message container of the recipient.

Message Definition:
Each message has a message type. The message type defines the syntax of the message and the
type of information that can be contained in the message. A VIM message consists of a message
header and may contain one or more message items. The message header consists of header
attributes: recipient address, originator address, time/date, priority, and so on. A message item is a
block of arbitrary-sized (meaning any size) data of a defined type. The contents of the data block
are defined by the data-item type. The actual items in a message and their syntax and semantics are
defined by the message type. The message may also contain file attachments. VIM allows the
nesting of messages, meaning one message may be enclosed in another message.
A VIM message can be digitally signed so that we can ensure that the message received is without
any modification during the transit.
Mail Message: It is a message of a well-defined type that must include a message header and
may include note parts, attachments, and other application-defined components. End users can see
their mail messages through their mail programs.
Message Delivery: If a message is delivered successfully, a delivery report is generated and sent to
the sender of the message if the sender requested the delivery report. If a message is not delivered,
a non-delivery report is sent to the sender.

Distributed multimedia system


If the multimedia systems are supported by multiuser system, then we call those multimedia
systems as distributed multimedia systems. A multi user system designed to support multimedia
applications for a large number of users consists of a number of system components. A typical
multimedia application environment consists of the following components:

1. Application software.
2. Container object store.
3. Image and still video store.
4. Audio and video component store.
5. Object directory service agent.
6. Component service agent.
7. User interface and service agent.
8. Networks (LAN and WAN).
Application Software
The application software performs a number of tasks related to a specific business process. A
business process consists of a series of actions that may be performed by one or more users.
The basic tasks combined to form an application include the following:
(1) Object Selection - The user selects a database record or a hypermedia document from a file
system, database management system, or document server.
(2) Object Retrieval - The application retrieves the base object.
(3) Object Component Display - Some document components are displayed automatically when
the user moves the pointer to the field or button associated with the multimedia object.
(4) User Initiated Display - Some document components require user action before
playback/display.
(5) Object Display Management and Editing: Component selection may invoke a component
control subapplication which allows a user to control playback or edit the component object.
Document store
A document store is necessary for applications that require storage of large volumes of documents.
The following describes some characteristics of document stores.

1. Primary Document Storage: A file systems or database that contains primary document
objects (container objects). Other attached or embedded documents and multimedia objects may
be stored in the document server along with the container object.

2. Linked Object Storage: Embedded components, such as text and formatting information, and
linked components, such as pointers to the image, audio, and video components contained in a
document, may be stored on separate servers.

3. Linked Object Management: Link information contains the name of the component, service
class or type, general attributes such as size, duration of play for isochronous objects and hardware,
and software requirements for rendering.

Image and still video store


An image and still video store is a database system optimized for storage of images. Most systems
employ optical disk libraries. Optical disk libraries consist of multiple optical disk platters that are
played back by automatically loading the appropriate platter into the drive under device driver
control.
The characteristics of image and still video stores are as follows:
 Compressed information
 Multi-image documents
 Related annotation
 Large volumes
 Migration between high-volume media, such as an optical disk library, and high-speed media,
such as magnetic cache storage
 Shared access: the server software managing the server has to be able to manage the different
requirements.

Audio and video Full motion video store

Audio and video objects are isochronous. The following lists some characteristics of audio and
full-motion video object stores:

(i) Large-capacity file system: A compressed video object can be as large as six to ten megabytes
for one minute of video playback.
(ii) Temporary or permanent storage: Video objects may be stored temporarily on client
workstations, on servers providing disk caches, and on multiple audio or video object servers.
(iii) Migration to high-volume/lower-cost media.
(iv) Playback isochronicity: Playing back a video object requires a consistent speed without breaks.
(v) Multiple shared access: Objects being played back in a stream mode must be accessible by
other users.

Object Directory Service Agent

The directory service agent is a distributed service that provides a directory of all multimedia objects
on the server tracked by that element of the directory service agent.

The following describes various services provided by a directory service Agent.


(1) Directory Service: It lists all multimedia objects by class and server location.

(2) Object Assignment: The directory service agent assigns a unique identification to each
multimedia object.

(3) Object Status Management: The directory service must track the current usage status of
each object.

(4) Directory Service Domains: The directory service should be modular to allow setting
up domains constructed around groups of servers that form the core operating environment
for a group of users.

(5) Directory Service Server Elements: Each multimedia object server must have a directory
service element that resides on either the server or some other resource.

(6) Network Access: The directory service agent must be accessible from any workstation
on the network.

Component Service Agent


A service is provided to the multimedia user workstation by each multimedia component. This
service consists of retrieving objects, managing playback of objects, storing objects, and so on.
The characteristics of services provided by each multimedia component are object creation service,
playback service, component object service agent, service agents on servers, and multifaceted
services (component objects may exist in several forms, such as compressed or uncompressed).

User Interface Service Agent


It resides on each user workstation. It provides direct services to the application software for the
management of the multimedia object display windows, creation and storage of multimedia
objects, and scaling and frame shedding for rendering of multimedia objects.
The services provided by user interface service agents are window management, object creation
and capture, object display and playback, services on workstations, and display software. The
user interface service agent is the client side of the service agents. The user interface agent manages
all redirection, since objects are located by a look-up mechanism in the directory service agent.
Distributed client server operation
The agents so far we have discussed combine to form a distributed client-server system for
multimedia applications. Multimedia applications require functionality beyond the traditional
client server architecture. Most client-server systems were designed to connect a client across a
network to a server that provided database functions. In this case, the client-server link was firmly
established over the network. There was only one copy of the object on the specified server. With
the development of distributed work group computing, the picture has changed for the clients and
servers. Actually in this case, there is a provision of custom views in large databases. The
advantage of several custom views is the decoupling between the physical data and user. The
physical organization of the data can be changed without affecting the conceptual schema by
changing the distributed data dictionary and the distributed data repository.

Clients in Distributed Work Group Computing


Clients in distributed workgroup computing are the end users with workstations running
multimedia applications. The client systems interact with the data servers in any of the
following ways:

1. Request specific textual data.

2. Request specific multimedia objects embedded or linked in retrieved container objects.

3. Require activation of a rendering server application to display/playback multimedia
objects.

4. Create and store multimedia objects on servers.

5. Request directory information on locations of objects on servers.

Servers in Distributed Workgroup Computing


Servers store data objects. They provide storage for a variety of object classes, and they transfer
objects on demand to clients. They provide hierarchical storage for moving unused objects to
optical disk libraries or optical tape libraries. They provide system administration functions for
backing up stored data. They provide the function of direct high-speed LAN and WAN server-to-
server transport for copying multimedia objects.
Middleware in Distributed Workgroup Computing
The middleware is like an interface between the back-end database and front-end clients. The
primary role of middleware is to link the back-end database to front-end clients in a highly flexible
and loosely connected network model. Middleware provides the glue for dynamically redirecting
client requests to appropriate servers that are on-line.
Multimedia Object Servers
The resources where information objects are stored are known as servers. Other users (clients)
can share the information stored in these resources through the network.

Types of Multimedia Servers

Each object type of multimedia systems would have its own dedicated server optimized for
the type of data maintained in the object. A network would consist of some combination
of the following types of servers.

(1) Data-processing servers (RDBMSs and ODBMSs).
(2) Document database servers.
(3) Document imaging and still-video servers.
(4) Audio and voice mail servers.
(5) Full-motion video servers.

Data-processing servers are traditional database servers that contain alphanumeric data. In a
relational database, data fields are stored in columns in a table. In an object-oriented database,
these fields become attributes of the object. The database serves the purpose of organizing the
data and providing rapid indexed access to it. The DBMS can interpret the contents of any column
or attribute for performing a search.
Important Questions

1. Explain Computer Graphics and its uses


Ans- It is difficult to display an image of any size on the computer screen. This method is simplified
by using computer graphics. Graphics on the computer are produced by using various algorithms and
techniques. A rich visual experience is provided to the user by having the computer process all of
these. Computer Graphics involves technology to access visual information. The process transforms
and presents information in a visual form. The role of computer graphics is indispensable. In today's
life, computer graphics has now become a common element in user interfaces, T.V. commercials and
motion pictures. Computer Graphics is the creation of pictures with the help of a computer. The end
product of computer graphics is a picture; it may be a business graph, a drawing, or an engineering
design. In computer graphics, two- or three-dimensional pictures can be created that are used for
research. Many hardware devices and algorithms have been developed for improving the speed of
picture generation with the passage of time. It includes the creation and storage of models and images
of objects. These models are used in various fields such as engineering, mathematics and so on.

Some of the applications of computer graphics are:


1. Computer Art:

Using computer graphics we can create fine and commercial art, which includes animation
packages and paint packages. These packages provide facilities for designing object shapes and
specifying object motion. Cartoon drawing, paintings, and logo design can also be done.

2. Computer Aided Drawing:

Designing of buildings, automobiles, and aircraft is done with the help of computer-aided drawing;
this helps in providing minute details to the drawing and producing more accurate and sharp
drawings with better specifications.

3. Presentation Graphics:

For the preparation of reports or for summarising financial, statistical, mathematical, scientific, or
economic data for research reports and managerial reports; moreover, the creation of bar graphs,
pie charts, and time charts can be done using the tools present in computer graphics.

4. Entertainment:
Computer graphics finds a major part of its utility in the movie industry and game industry.
It is used for creating motion pictures, music videos, television shows, and cartoon animation films.
In the game industry, where focus and interactivity are the key players, computer graphics
helps in providing such features in an efficient way.
5. Education:

Computer-generated models are extremely useful for teaching a huge number of concepts and
fundamentals in an easy-to-understand-and-learn manner. Using computer graphics, many
educational models can be created through which more interest can be generated among the
students regarding the subject.

6. Training:

Specialised systems for training, like simulators, can be used for training candidates in a way
that can be grasped in a short span of time with better understanding. Creation of training
modules using computer graphics is simple and very useful.

7. Visualisation:

Today the need to visualise things has increased drastically; the need for visualisation can be
seen in many advanced technologies. Data visualisation helps in finding insights from the data; to
check and study the behaviour of the processes around us, we need appropriate visualisation, which
can be achieved through proper usage of computer graphics.

8. Image Processing:

Various kinds of photographs or images require editing in order to be used in different places.
Processing of existing images into refined ones for better interpretation is one of the many
applications of computer graphics.

9. Machine Drawing:

Computer graphics is very frequently used for designing, modifying, and creating various parts
of a machine and the whole machine itself; the main reason behind using computer graphics for
this purpose is that the precision and clarity we get from such drawings is ultimate and extremely
desired for the safe manufacturing of machines using these drawings.

10. Graphical User Interface:

The use of pictures, images, icons, pop-up menus, and graphical objects helps in creating a user-
friendly environment where working is easy and pleasant; using computer graphics we can create
such an atmosphere where everything can be automated and anyone can get the desired action
performed in an easy fashion.
2. Explain Random Scan Display
Ans- In a Random-Scan Display the electron beam is directed only to the areas of the screen where a
picture has to be drawn. It is also called a vector display, as it draws a picture one line at a time. It
can draw and refresh component lines of a picture in any specified sequence. A pen plotter is an
example of a random-scan device. The number of lines regulates the refresh rate on random-scan
displays. An area of memory called the refresh display file stores the picture definition as a set of line-
drawing commands. The system returns back to the first line command in the list after all the
drawing commands have been processed. High-quality vector systems can handle around 100,000
short lines at this refresh rate. Faster refreshing can burn the phosphor; to avoid this, every refresh
cycle is delayed to prevent a refresh rate greater than 60 frames per second. Suppose we want to
display a square ABCD on the screen. The commands will be:
 Draw a line from A to B
 Draw a line from B to C
 Draw a line from C to D
 Draw a line from D to A
Random-Scan Display Processors: Input in the form of an application program is stored in the
system memory along with graphics package. Graphics package translates the graphic
commands in application program into a display file stored in system memory. This display file
is then accessed by the display processor to refresh the screen. The display processor cycles
through each command in the display file program. Sometimes the display processor in a
random-scan system is referred to as a Display Processing Unit / Graphics Controller. The structure
of a simple random-scan system is shown below:

ADVANTAGES:
 Higher resolution as compared to raster scan display.
 Produces smooth line drawing.
 Less Memory required.
DISADVANTAGES:
 Realistic images with different shades cannot be drawn.
 Colour limitations.
3. Explain
a. points
A Point in geometry is defined as a location in the space that is uniquely defined by an ordered
triplet (x, y, z) where x, y, & z are the distances of the point from the X-axis, Y-axis, and Z-
axis respectively in the 3-Dimensions and is defined by ordered pair (x, y) in the 2-Dimensions
where, x and y are the distances of the point from the X-axis, and Y-axis, respectively. It is
represented using the dot and is named using capital English alphabets. The figure added below
shows a point P in the 3-D which is at a distance of x, y, and z from the X-axis, Y-axis, and Z-
axis respectively.

b. circle

Circle is an eight-way symmetric figure. The shape of circle is the same in all quadrants. In each
quadrant, there are two octants. If the calculation of the point of one octant is done, then the other
seven points can be calculated easily by using the concept of eight-way symmetry.

For drawing, consider the circle at the origin. If a point is P1(x, y), then the other seven points will
be (y, x), (y, -x), (x, -y), (-x, -y), (-y, -x), (-y, x), and (-x, y).
So we will calculate only a 45° arc, from which the whole circle can be determined easily.
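
A hedged Python sketch of the eight-way symmetry idea (plot_pixel is a stand-in for whatever pixel-setting routine a real rasterizer would provide, not a library function):

def plot_eight_symmetric(xc, yc, x, y, plot_pixel):
    # Given one computed point (x, y) on a circle centered at (xc, yc),
    # set the eight symmetric pixels, so only a 45-degree arc is computed.
    plot_pixel(xc + x, yc + y)
    plot_pixel(xc + y, yc + x)
    plot_pixel(xc + y, yc - x)
    plot_pixel(xc + x, yc - y)
    plot_pixel(xc - x, yc - y)
    plot_pixel(xc - y, yc - x)
    plot_pixel(xc - y, yc + x)
    plot_pixel(xc - x, yc + y)

# Example: collect the eight mirror images of one octant point.
points = []
plot_eight_symmetric(0, 0, 3, 1, lambda px, py: points.append((px, py)))
print(points)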

c. ellipses

This is an incremental method for scan converting an ellipse that is centered at the origin in
standard position, i.e., with the major and minor axes parallel to the coordinate system axes. It is
very similar to the midpoint circle algorithm. Because of the four-way symmetry property we
need only consider the elliptical curve in the first quadrant.

Let's first rewrite the ellipse equation and define the function f that can be used to decide if the
midpoint between two candidate pixels is inside or outside the ellipse:

f(x, y) = b^2*x^2 + a^2*y^2 - a^2*b^2 (f < 0 inside the ellipse, f > 0 outside)

Now divide the elliptical curve from (0, b) to (a, 0) into two parts at point Q where the slope of the
curve is -1.

The slope of the curve defined by f(x, y) = 0 is dy/dx = -fx/fy = -(2*b^2*x)/(2*a^2*y), where fx and fy
are the partial derivatives of f(x, y) with respect to x and y.

d. Input Graphics

The input devices are the hardware that is used to transfer input to the computer. The
data can be in the form of text, graphics, and sound. Output devices display data from the
memory of the computer. Output can be text, numeric data, lines, polygons, and other objects.
These devices include:

1. Keyboard
2. Mouse
3. Trackball
4. Spaceball
5. Joystick
6. Light Pen
7. Digitizer
8. Touch Panels
9. Voice Recognition
10. Image Scanner

e. polygon Filling

In this technique 4-connected pixels are used, as shown in the figure. We put the pixels
above, below, to the right, and to the left side of the current pixel, and this process continues
until we find a boundary with a different color.
Algorithm

Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.

Step 2 − Define the boundary values of the polygon.

Step 3 − Check if the current seed point is of the default color; then repeat steps 4 and 5 till the
boundary pixels are reached.

If getpixel(x, y) = dcol then repeat step 4 and 5

Step 4 − Change the default color with the fill color at the seed point.

setPixel(seedx, seedy, fcol)

Step 5 − Recursively follow the procedure with four neighborhood points.

FloodFill (seedx - 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)

Step 6 − Exit

There is a problem with this technique. Consider the case as shown below where we tried to fill
the entire region. Here, the image is filled only partially. In such cases, 4-connected pixels
technique cannot be used.
8-Connected Polygon

In this technique 8-connected pixels are used, as shown in the figure. We put pixels above,
below, and to the right and left of the current pixel, as we were doing in the 4-connected technique.

In addition to this, we are also putting pixels in diagonals so that entire area of the current pixel is
covered. This process will continue until we find a boundary with different color.

Algorithm

Step 1 − Initialize the value of the seed point (seedx, seedy), fcol and dcol.

Step 2 − Define the boundary values of the polygon.

Step 3 − Check if the current seed point is of the default color; then repeat steps 4 and 5 till the
boundary pixels are reached.

If getpixel(x,y) = dcol then repeat step 4 and 5

Step 4 − Change the default color with the fill color at the seed point.

setPixel(seedx, seedy, fcol)

Step 5 − Recursively follow the procedure with eight neighbourhood points.


FloodFill (seedx - 1, seedy, fcol, dcol)
FloodFill (seedx + 1, seedy, fcol, dcol)
FloodFill (seedx, seedy - 1, fcol, dcol)
FloodFill (seedx, seedy + 1, fcol, dcol)
FloodFill (seedx - 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy + 1, fcol, dcol)
FloodFill (seedx + 1, seedy - 1, fcol, dcol)
FloodFill (seedx - 1, seedy - 1, fcol, dcol)

Step 6 − Exit

The 4-connected pixel technique failed to fill the area as marked in the following figure which
won’t happen with the 8-connected technique.
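
The following hedged Python sketch implements the flood-fill idea above on a simple 2D grid of color values; it uses an explicit stack instead of recursion (to avoid deep recursion on large regions) and takes the neighbor offsets as a parameter so the same function covers both the 4-connected and 8-connected variants:

# Flood fill on a grid of color values. neighbors selects 4- or 8-connectivity.
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
N8 = N4 + [(-1, 1), (1, 1), (1, -1), (-1, -1)]

def flood_fill(grid, seedx, seedy, fcol, dcol, neighbors=N4):
    rows, cols = len(grid), len(grid[0])
    if grid[seedy][seedx] != dcol:
        return                       # seed is not of the default color
    stack = [(seedx, seedy)]
    while stack:
        x, y = stack.pop()
        if 0 <= x < cols and 0 <= y < rows and grid[y][x] == dcol:
            grid[y][x] = fcol        # replace default color with fill color
            for dx, dy in neighbors:
                stack.append((x + dx, y + dy))

grid = [[0, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]
flood_fill(grid, 0, 0, 7, 0, N4)     # fill the 4-connected region of zeros
print(grid)                          # [[7, 7, 1], [7, 1, 1], [1, 1, 0]]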

4. Explain Transformation with the help of Examples (2D and 3D)

Ans- 2D Translation is a transformation technique that changes the position of each point in an
object or a coordinate system by a specified distance in the x and y directions.

Applying 2D translation, we can say:

X’ = X + tx
Y’ = Y + ty

(tx, ty) represents the shift or the translation vector. The equations can be expressed using column
vectors for efficient representation and computation:

P = [X, Y]^T, P' = [X', Y']^T, T = [tx, ty]^T

This can also be written as:

P’ = P + T

2D Scaling in Computer Graphics

2D Scaling in Computer Graphics involves resizing objects or coordinate systems in a 2D plane.


It allows us to change the size of each point in the object or coordinate system by applying scaling
factors in the x and y directions.

To perform 2D scaling, we utilize scaling factors: sx for the x-axis and sy for the y-axis. These
factors determine how much each coordinate should be scaled along its respective axis.

If a scaling factor (sx or sy) is greater than 1, the object is enlarged and moves away from the
origin. A scaling factor of 1 leaves the object unchanged, while a scaling factor less than 1 shrinks
the object and moves it closer to the origin.

The equations for scaling are X' = X * sx and Y' = Y * sy, where X and Y are the original
coordinates of a point, and X' and Y' are the scaled coordinates after the transformation.

These equations can also be represented in matrix form as:

[X', Y'] = [X, Y] . [[sx, 0], [0, sy]]

or, more compactly,

P' = P . S

where S is the scaling matrix shown above.
2D Reflection in Computer Graphics

2D reflection is a transformation technique that involves flipping or mirroring an object or


coordinate system across a specific axis in a 2D plane. It allows us to change the orientation of
each point in the object or coordinate system in relation to the reflection axis.

The figures depict the X and Y axes and the origin:

2D Shearing in Computer Graphics

2D Shearing transformation slants or distorts an object or coordinate system along either the x-
axis or y-axis in a 2D plane. It involves shifting the position of points in a specific direction based
on their original coordinates.

To shear the given image along the x-axis, we use the shearing parameter shx.
The equation will be:

X' = X + Y * shx

Y' = Y

To shear the provided image along the y-axis, we utilize the shearing parameter shy.

The equations are now:

Y' = X * shy + Y

X' = X

(The corresponding shearing matrices along the x-axis and the y-axis follow directly from these
pairs of equations.)
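
A hedged Python sketch tying the above together: it applies translation, scaling, and x-shear to a point using 3x3 homogeneous-coordinate matrices (a common convention; the helper names are illustrative, not from any library):

# 2D transformations with 3x3 homogeneous matrices (column-vector convention:
# P' = M . P, with P = [x, y, 1]^T).

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def shear_x(shx):
    # X' = X + Y*shx, Y' = Y
    return [[1, shx, 0], [0, 1, 0], [0, 0, 1]]

p = [2, 3, 1]                         # the point (2, 3) in homogeneous form
print(mat_vec(translate(5, -1), p))   # [7, 2, 1]
print(mat_vec(scale(2, 2), p))        # [4, 6, 1]
print(mat_vec(shear_x(1), p))         # [5, 3, 1]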


Difference between 2d and 3d Transformation in Computer Graphics

The difference between 2D and 3D transformations in computer graphics lies in the dimensionality
of the space in which the transformations are applied. The prime difference between 2d and 3d
Transformation in Computer Graphics are listed in the table below:

Aspect               | 2D Transformations                          | 3D Transformations
Dimension            | Two-dimensional space (x and y axes)        | Three-dimensional space (x, y, and z axes)
Representation       | Objects are represented on a flat surface   | Objects are represented in a 3D environment
                     | (e.g., a computer screen)                   |
Types                | Translation, rotation, scaling, shearing,   | Translation, rotation, scaling, shearing,
                     | reflection                                  | reflection, perspective projection, etc.
Coordinates Affected | Only x and y coordinates                    | x, y, and z coordinates
Depth                | No depth information (z-coordinate is       | Depth information allows objects with
                     | constant)                                   | volume and depth
Realism              | Limited to flat, 2D representations         | Enables more realistic 3D graphics and animations
Applications         | GUIs, image processing, 2D animations, CAD  | 3D modeling, animation, virtual reality, simulations, game development
5. Explain the DDA algorithm
DDA stands for Digital Differential Analyzer. It is an incremental method of scan conversion of
line. In this method calculation is performed at each step but by using results of previous steps.

Suppose at step i, the pixel is (xi, yi).

The line equation for step i:

yi = m*xi + b ...................... equation 1

The next value will be:

yi+1 = m*xi+1 + b .................. equation 2

where m = ∆y/∆x = (y2 - y1)/(x2 - x1)

yi+1 - yi = ∆y ..................... equation 3
xi+1 - xi = ∆x ..................... equation 4

yi+1 = yi + ∆y, and since ∆y = m*∆x,
yi+1 = yi + m*∆x
xi+1 = xi + ∆x, and since ∆x = ∆y/m,
xi+1 = xi + ∆y/m

Case 1: When |m| < 1 (assume that x1 < x2):

x = x1, y = y1, set ∆x = 1
yi+1 = yi + m, x = x + 1
until x = x2

Case 2: When |m| > 1 (assume that y1 < y2):

x = x1, y = y1, set ∆y = 1
xi+1 = xi + 1/m, y = y + 1
until y = y2

Advantage:
1. It is a faster method than directly using the line equation.
2. It does not use multiplication in the inner loop.
3. It allows us to detect the change in the value of x and y, so plotting the same point twice is
not possible.
4. This method gives an overflow indication when a point is repositioned.
5. It is an easy method because each step involves just two additions.
Disadvantage:
1. It involves floating-point additions, and rounding off is done. Accumulation of round-off
errors causes the plotted points to drift away from the true line.
2. Rounding-off operations and floating-point operations consume a lot of time.
3. It is more suitable for generating a line using software, but it is less suited for hardware
implementation.

DDA Algorithm:

Step1: Start Algorithm

Step2: Declare x1,y1,x2,y2,dx,dy,x,y as integer variables.

Step3: Enter value of x1,y1,x2,y2.

Step4: Calculate dx = x2-x1

Step5: Calculate dy = y2-y1

Step6: If ABS (dx) > ABS (dy)

Then step = abs (dx)
Else step = abs (dy)

Step7: xinc=dx/step
yinc=dy/step
assign x = x1
assign y = y1

Step8: Set pixel (x, y)

Step9: x = x + xinc
y = y + yinc
Set pixels (Round (x), Round (y))

Step10: Repeat step 9 until x = x2

Step11: End Algorithm
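
A hedged, runnable Python version of the DDA steps above (returning the list of plotted pixels instead of drawing to a screen):

def dda_line(x1, y1, x2, y2):
    # Rasterize a line with the DDA method: divide the larger delta into
    # unit steps and increment both coordinates by fixed floating-point amounts.
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))      # step count from the dominant axis
    if steps == 0:
        return [(x1, y1)]              # degenerate case: a single point
    xinc, yinc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # round to the nearest pixel
        x += xinc
        y += yinc
    return pixels

print(dda_line(2, 2, 8, 5))
# [(2, 2), (3, 2), (4, 3), (5, 4), (6, 4), (7, 4), (8, 5)]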


6. Explain Breshman’s Algorithm with an Example
This algorithm is used for scan converting a line. It was developed by Bresenham. It is an efficient
method because it involves only integer addition and subtraction operations. These operations can
be performed very rapidly, so lines can be generated quickly.

In this method, next pixel selected is that one who has the least distance from true line.

The method works as follows:

Assume a pixel P1'(x1', y1'), then select subsequent pixels as we work our way to the right, one pixel
position at a time in the horizontal direction toward P2'(x2', y2').

Once a pixel is chosen at any step, the next pixel is

1. Either the one to its right (lower bound for the line)

2. Or the one to its right and up (upper bound for the line)

The line is best approximated by those pixels that fall the least distance from the true path between
P1' and P2'.

To choose the next one between the bottom pixel S and the top pixel T:
If S is chosen
We have xi+1=xi+1 and yi+1=yi
If T is chosen
We have xi+1=xi+1 and yi+1=yi+1

The actual y coordinate of the line at x = xi + 1 is

y = m(xi + 1) + b

The distance from S to the actual line in the y direction:

s = y - yi

The distance from T to the actual line in the y direction:

t = (yi + 1) - y

Now consider the difference between these two distance values:

s - t

When (s - t) < 0 ⟹ s < t

The closest pixel is S.

When (s - t) ≥ 0 ⟹ s ≥ t

The closest pixel is T.

This difference is

s - t = (y - yi) - [(yi + 1) - y] = 2y - 2yi - 1

Substituting m by ∆y/∆x and introducing the decision variable di = ∆x(s - t), we get

di = ∆x(2m(xi + 1) + 2b - 2yi - 1)
   = 2∆y*xi - 2∆x*yi + c

where c = 2∆y + ∆x(2b - 1).

We can write the decision variable di+1 for the next step:

di+1 = 2∆y*xi+1 - 2∆x*yi+1 + c

di+1 - di = 2∆y*(xi+1 - xi) - 2∆x*(yi+1 - yi)

Since xi+1 = xi + 1, we have

di+1 = di + 2∆y - 2∆x*(yi+1 - yi)

Special Cases

If the chosen pixel is the top pixel T (i.e., di ≥ 0) ⟹ yi+1 = yi + 1

di+1 = di + 2∆y - 2∆x

If the chosen pixel is the bottom pixel S (i.e., di < 0) ⟹ yi+1 = yi

di+1 = di + 2∆y

Finally, we calculate d1:

d1 = ∆x[2m(x1 + 1) + 2b - 2y1 - 1]
d1 = ∆x[2(mx1 + b - y1) + 2m - 1]

Since mx1 + b - y1 = 0 and m = ∆y/∆x, we have

d1 = 2∆y - ∆x

Advantage:

1. It involves only integer arithmetic, so it is simple.

2. It avoids the generation of duplicate points.

3. It can be implemented using hardware because it does not use multiplication and division.

4. It is faster as compared to DDA (Digital Differential Analyzer) because it does not involve
floating point calculations like DDA Algorithm.

Disadvantage:

1. This algorithm is meant for basic line drawing only; anti-aliasing is not a part of Bresenham's
line algorithm. So, to draw smooth lines, you should look into a different algorithm.

Bresenham's Line Algorithm:

Step1: Start Algorithm

Step2: Declare variable x1,x2,y1,y2,d,i1,i2,dx,dy

Step3: Enter value of x1,y1,x2,y2


Where x1,y1are coordinates of starting point
And x2,y2 are coordinates of Ending point
Step4: Calculate dx = x2-x1
Calculate dy = y2-y1
Calculate i1=2*dy
Calculate i2=2*(dy-dx)
Calculate d=i1-dx

Step5: Consider (x, y) as the starting point and xend as the maximum possible value of x.
If dx < 0
Then x = x2
y = y2
xend=x1
If dx > 0
Then x = x1
y = y1
xend=x2

Step6: Generate point at (x,y)coordinates.

Step7: Check if whole line is generated.


If x > = xend
Stop.

Step8: Calculate co-ordinates of the next pixel


If d < 0
Then d = d + i1
If d ≥ 0
Then d = d + i2
Increment y = y + 1

Step9: Increment x = x + 1

Step10: Draw a point of latest (x, y) coordinates

Step11: Go to step 7

Step12: End of Algorithm
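
A hedged Python implementation of the integer-only decision-variable logic derived above (handling the |m| < 1, left-to-right case that the derivation assumes):

def bresenham_line(x1, y1, x2, y2):
    # Bresenham rasterization for gentle slopes (0 <= dy <= dx, x1 <= x2),
    # the case covered by the derivation above: only integer add/subtract.
    dx, dy = x2 - x1, y2 - y1
    i1, i2 = 2 * dy, 2 * (dy - dx)   # increments for d < 0 and d >= 0
    d = i1 - dx                      # d1 = 2*dy - dx
    x, y = x1, y1
    pixels = [(x, y)]
    while x < x2:
        if d < 0:
            d += i1                  # keep y: bottom pixel S chosen
        else:
            d += i2                  # step up: top pixel T chosen
            y += 1
        x += 1
        pixels.append((x, y))
    return pixels

print(bresenham_line(2, 2, 8, 5))
# [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5), (8, 5)]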


7. Explain B-Spline and Bezier Curve
The B-spline curve came to resolve the disadvantages of the Bezier curve; as we all know, both
curves are parametric in nature. With a Bezier curve we face a problem: when we change the
location of any control point, the shape of the whole curve changes. But in a B-spline curve,
only a specific segment of the curve shape gets changed or affected by changing the
corresponding location of a control point.
In the B-spline curve, the control points impart local control over the curve shape, rather than
the global control of the Bezier curve.

B-spline curve shape before changing the position of control point P1 –

B-spline curve shape after changing the position of control point P1 –

You can see in the above figure that only the shape of segment 1 changes, as we have only changed
the control point P1, and the shape of segment 2 remains intact.

B-spline Curve :

As we saw above, B-spline curves are independent of the number of control points and are
made up of several segments joined together smoothly, where each segment's shape is decided by
the specific control points that fall in that segment's region. Consider the curve given below.

Attributes of this curve are –
 We have “n+1” control points in the above, so, n+1=8, so n=7.
 Let’s assume that the order of this curve is ‘k’, so the curve that we get will be of a polynomial
degree of “k-1”. Conventionally it’s said that the value of ‘k’ must be in the range: 2 ≤ k ≤
n+1. So, let us assume k=4, so the curve degree will be k-1 = 3.
 The total number of segments for this curve will be calculated through the following formula

Total no. of seg = n – k + 2 = 7 – 4 + 2 = 5.

Segments Control points Parameter

S0 P0,P1,P2,P3 0≤t≤2

S1 P1,P2,P3,P4 2≤t≤3

S2 P2,P3,P4,P5 3≤t≤4

S3 P3,P4,P5,P6 4≤t≤5

S4 P4,P5,P6,P7 5≤t≤6

Knots in B-spline Curve :

The points between two segments of a curve that join each other are known as knots in a
B-spline curve. In the case of the cubic polynomial degree curve, the knots are "n+4". But in
other common cases, we have "n+k+1" knots. So, for the above curve, the total number of knots
will be –
Total knots = n+k+1 = 7 + 4 + 1 = 12
These knot vectors could be of three types –
 Uniform (periodic)
 Open-Uniform
 Non-Uniform

B-spline Curve Equation: The equation of the spline curve is as follows –

P(t) = Σ (i = 0 to n) Pi * Ni,k(t)

where Pi, k, and t correspondingly represent the control points, order, and parameter of the curve,
and Ni,k(t) are the B-spline basis (blending) functions.
Properties of B-spline Curve :
 Each basis function has 0 or +ve value for all parameters.
 Each basis function has one maximum value except for k=1.
 The degree of B-spline curve polynomial does not depend on the number of control points
which makes it more reliable to use than Bezier curve.
 B-spline curve provides the local control through control points over each segment of the
curve.
 The sum of basis functions for a given parameter is one.
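
The basis functions Ni,k(t) in the equation above are usually computed with the Cox-de Boor recursion; a hedged Python sketch (with an illustrative uniform knot vector) is shown below:

def basis(i, k, t, knots):
    # Cox-de Boor recursion for the B-spline basis function N(i,k) at t.
    # k is the order (degree k-1); knots is a non-decreasing knot vector.
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    d1 = knots[i + k - 1] - knots[i]
    if d1 > 0:
        value += (t - knots[i]) / d1 * basis(i, k - 1, t, knots)
    d2 = knots[i + k] - knots[i + 1]
    if d2 > 0:
        value += (knots[i + k] - t) / d2 * basis(i + 1, k - 1, t, knots)
    return value

# Uniform knot vector for n = 7 (8 control points) and order k = 4:
# n + k + 1 = 12 knots, matching the count computed above.
knots = list(range(12))
weights = [basis(i, 4, 4.5, knots) for i in range(8)]
print(round(sum(weights), 6))   # 1.0 inside the valid range (partition of unity)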

Bezier Curve
A Bezier curve is a particular kind of spline generated from a set of control points by forming
a set of polynomial functions, devised by the French engineer Pierre Bezier. These functions
are computed from the coordinates of the control points. These curves can be generated under
the control of other points. Tangents defined by the control points are used to generate the curve.
It is an approximating spline curve. A Bezier curve is defined by its defining polygon. It has a
number of properties that make it highly useful and convenient for curve and surface design.

Different types of curves are Simple, Quadratic, and Cubic.


1. Simple curve: A simple (linear) Bezier curve is a straight line between two control points.

Simple

2. Quadratic curve: A quadratic Bezier curve is determined by three control points.

Quadratic

3. Cubic curve: A cubic Bezier curve is determined by four control points.

Properties of Bezier Curve:

1. Bezier curves are widely available and used in various CAD systems and in general graphics
packages such as GL.
2. The slope at the beginning of the curve is along the line joining the first two control points,
and the slope at the end of the curve is along the line joining the last two points.
3. A Bezier curve always passes through the first and last control points, i.e., P(0) = P0 and
P(1) = Pn.
4. The curve lies entirely within the convex hull formed by the control points.
5. The degree of the polynomial defining the curve segment is one less than the number of
defining polygon points.

Bezier Curve for 3 Points:

Q(u) = P0*B0,2(u) + P1*B1,2(u) + P2*B2,2(u)
 B0,2(u) = 2C0 * u^0 * (1-u)^(2-0) = (1-u)^2
 B1,2(u) = 2C1 * u^1 * (1-u)^(2-1) = 2u(1-u)
 B2,2(u) = 2C2 * u^2 * (1-u)^(2-2) = u^2
Q(u) = P0*(1-u)^2 + P1*2u(1-u) + P2*u^2
X(u) = (1-u)^2*x0 + 2u(1-u)*x1 + u^2*x2
Y(u) = (1-u)^2*y0 + 2u(1-u)*y1 + u^2*y2
 Bezier curves exhibit global control: moving a control point alters the shape of
the whole curve.
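
A hedged Python sketch that evaluates the quadratic Bezier equations above by sampling the parameter u over [0, 1]:

def quadratic_bezier(p0, p1, p2, samples=5):
    # Evaluate Q(u) = (1-u)^2*P0 + 2u(1-u)*P1 + u^2*P2 at evenly
    # spaced parameter values, returning the curve points.
    points = []
    for i in range(samples):
        u = i / (samples - 1)
        b0, b1, b2 = (1 - u) ** 2, 2 * u * (1 - u), u ** 2
        x = b0 * p0[0] + b1 * p1[0] + b2 * p2[0]
        y = b0 * p0[1] + b1 * p1[1] + b2 * p2[1]
        points.append((x, y))
    return points

# The curve starts at P0, ends at P2, and is pulled toward P1.
print(quadratic_bezier((0, 0), (2, 4), (4, 0)))
# [(0.0, 0.0), (1.0, 1.5), (2.0, 2.0), (3.0, 1.5), (4.0, 0.0)]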
Different varieties of spline curves are used in graphics applications.
1. Hermit spline
2. Relaxed end spline
3. Cyclic spline
4. Anti cyclic spline
5. Normalized spline

7. Explain Color Models


There are many color models. Some of them are RGB, CMYK, YIQ, HSV, and HLS.
These color spaces are directly related to saturation and brightness. All of these color spaces
can be derived from RGB information captured by devices such as cameras and scanners.

RGB Color Space

RGB stands for Red, Green, and Blue. This color space is widely used in computer graphics.
RGB are the main colors from which many other colors can be made. RGB can be represented in
3-dimensional form:
The table below is a 100% RGB color bar containing values for a 100% amplitude, 100% saturated
video test signal.

CMYK Color Model

CMYK stands for Cyan, Magenta, Yellow, and Black. The CMYK color model is used in
electrostatic and ink-jet plotters which deposit pigmentation on paper. In this model, the
specified color is subtracted from white light rather than being added to blackness. It follows the
Cartesian coordinate system, and its subset is a unit cube.
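
Since CMY is subtractive, converting from normalized RGB is commonly done with the complement formula; a hedged Python sketch follows (the black-extraction step for the K channel is one common variant, not the only one):

def rgb_to_cmyk(r, g, b):
    # Convert normalized RGB (0..1) to CMYK using the subtractive
    # complement C=1-R, M=1-G, Y=1-B, then pulling out a black component.
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)                # the shared "blackness"
    if k == 1:
        return 0.0, 0.0, 0.0, 1.0   # pure black
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # red  -> (0.0, 1.0, 1.0, 0.0)
print(rgb_to_cmyk(0.5, 0.5, 0.5))   # gray -> (0.0, 0.0, 0.0, 0.5)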

HSV Color Model

HSV stands for Hue, Saturation, and Value (brightness). It is a hexcone subset of the cylindrical
coordinate system. The human eye can distinguish about 128 different hues and 130 different
saturations, and the number of distinguishable value levels ranges from about 16 (for blue) to
23 (for yellow).
HLS Color Model

HLS stands for Hue, Lightness, Saturation. It is a double-hexcone subset. The maximum saturation
of a hue is at S = 1 and L = 0.5. It is conceptually easy for people who want to view white as a point.

Introduction to JPEG Compression

JPEG is an image compression standard which was developed by the "Joint Photographic Experts
Group". In 1992, it was accepted as an international standard. JPEG is a lossy image compression
method. JPEG compression uses the DCT (Discrete Cosine Transform) method for the coding
transformation. It allows a tradeoff between storage size and image quality; the degree of
compression can be adjusted.

Following are the steps of JPEG Image Compression-


Step 1: The input image is divided into small blocks of 8x8 dimensions. Each block sums up
to 64 units; each unit of the image is called a pixel.

Step 2: JPEG uses the [Y, Cb, Cr] model instead of the [R, G, B] model. So in the 2nd step, RGB
is converted into YCbCr.

Step 3: After the conversion of colors, the data is forwarded to the DCT. The DCT uses a cosine
function and does not use complex numbers. It converts the information in a block of pixels from
the spatial domain to the frequency domain.

DCT Formula

Step 4: Humans are unable to see fine details of the image because they lie at high
frequencies. The matrix after DCT conversion therefore needs to preserve only the values at the
lowest frequencies, up to a certain point. Quantization is used to reduce the number of bits per sample.

There are two types of Quantization:

1. Uniform Quantization
2. Non-Uniform Quantization
Step 5: The zigzag scan is used to map the 8x8 matrix to a 1x64 vector. Zigzag scanning is used
to group the low-frequency coefficients at the top of the vector and the high-frequency coefficients
at the bottom. The zigzag scan also helps in dealing with the large number of zeros in the
quantized matrix.

Step 6: The next step is vectoring; differential pulse code modulation (DPCM) is applied to the DC
component. DC components are large and varied, but they are usually close to the previous value.
DPCM encodes the difference between the DC component of the current block and that of the
previous block.

Step 7: In this step, Run Length Encoding (RLE) is applied to the AC components. This is done
because the AC components contain a lot of zeros. It encodes them as pairs of (skip, value), in
which skip is the number of zeros and value is the next non-zero component.

Step 8: In this step, the DC components are coded using Huffman coding.
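
As an illustration of step 5, here is a hedged Python sketch of the zigzag scan that flattens a block into a vector ordered from low to high frequency (a common index-sorting formulation, shown on a 4x4 block for brevity; real JPEG uses 8x8):

def zigzag(block):
    # Flatten an NxN block in zigzag order: traverse anti-diagonals
    # (constant i+j); odd diagonals run top-right to bottom-left (i rising),
    # even diagonals run bottom-left to top-right (j rising), so the
    # low-frequency coefficients near the top-left come first.
    n = len(block)
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        key=lambda p: (p[0] + p[1],
                       p[0] if (p[0] + p[1]) % 2 else p[1])
    )
    return [block[i][j] for i, j in order]

block = [[ 1,  2,  6,  7],
         [ 3,  5,  8, 13],
         [ 4,  9, 12, 14],
         [10, 11, 15, 16]]
print(zigzag(block))   # [1, 2, ..., 16]: the block is numbered in zigzag order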


8. What is shading and what are the different types of shading?
Shading is referred to as the implementation of the illumination model at the pixel points or
polygon surfaces of the graphics objects.

Shading model is used to compute the intensities and colors to display the surface. The shading
model has two primary ingredients: properties of the surface and properties of the illumination
falling on it. The principal surface property is its reflectance, which determines how much of
the incident light is reflected. If a surface has different reflectance for the light of different
wavelengths, it will appear to be colored.

An object's illumination is also significant in computing intensity. The scene may have
illumination that is uniform from all directions, called diffuse illumination.

Shading models determine the shade of a point on the surface of an object in terms of a number
of attributes. The shading model can be decomposed into three parts: a contribution from
diffuse illumination, the contribution from one or more specific light sources, and a transparency
effect. Each of these effects contributes a shading term E, and the terms are summed to find the total
energy coming from a point on an object. This is the energy a display should generate to present
a realistic image of the object. The energy comes not from a point on the surface but from a small
area around the point.

The simplest form of shading considers only diffuse illumination:

Epd = Rp * Id

where Epd is the energy coming from point P due to diffuse illumination, Id is the diffuse
illumination falling on the entire scene, and Rp is the reflectance coefficient at P, which
ranges from 0 to 1. The shading contribution from specific light sources will cause the shade
of a surface to vary as its orientation with respect to the light sources changes, and will also
include specular reflection effects. In the above figure, consider a point P on a surface, with light
arriving at an angle of incidence i, the angle between the surface normal Np and a ray to
the light source. If the energy Ips arriving from the light source is reflected uniformly in all
directions, called diffuse reflection, we have

Eps = (Rp cos i) * Ips

This equation shows the reduction in the intensity of a surface as it is tipped obliquely to the
light source. If the angle of incidence i exceeds 90°, the surface is hidden from the light source
and we must set Eps to zero.
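
A hedged numeric sketch of the diffuse term above, computing Eps = (Rp cos i) * Ips from a surface normal and a light direction (the vectors and constants are made-up example values):

import math

def diffuse_energy(normal, to_light, reflectance, light_energy):
    # Eps = (Rp * cos i) * Ips, where cos i is the angle between the
    # unit surface normal and the unit vector toward the light source.
    # If i exceeds 90 degrees (cos i < 0), the surface is hidden: Eps = 0.
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return [c / length for c in v]
    n, l = normalize(normal), normalize(to_light)
    cos_i = sum(a * b for a, b in zip(n, l))
    return reflectance * max(cos_i, 0.0) * light_energy

# Light arriving at 60 degrees incidence: cos i = 0.5.
print(diffuse_energy([0, 0, 1], [math.sin(math.pi/3), 0, math.cos(math.pi/3)],
                     reflectance=0.8, light_energy=1.0))   # about 0.4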

Gouraud shading is a method used in computer graphics to simulate the differing effects of light
and color across the surface of an object. Intensity interpolation is used to develop Gouraud
shading. By intensity interpolation, the intensity of each and every pixel is calculated. Gouraud
shading shades a polygon by linearly interpolating intensity values across the surface. By this,
if we know the intensities of two points, then we are able to find the intensity of any point in
between them.
By Gouraud shading, we can overcome the discontinuous intensity values of each polygon, as
they are matched with the values of adjacent polygons along the common edges.
Each polygon surface is rendered with Gouraud shading by performing the following
calculations:
 The first step is to determine the average unit normal vector at each polygon vertex.

Calculating the average unit normal vector at point p: point p is attached to four polygons,
so the average unit normal vector = (N1+N2+N3+N4)/|N1+N2+N3+N4|;
for n polygons —> (summation over k of Nk) / |summation over k of Nk| (where k runs from 1
to n).
 Apply an illumination model at each vertex to calculate the vertex intensity.
 Linearly interpolate the vertex intensities over the surface of the polygon; for each scanline,
the intensity at the intersection with a polygon edge is linearly interpolated.

In the above example, the vertex positions and intensities of vertices 1, 2, and 3 are given. By linear
interpolation, we can find the intensity at point 4 (from points 1 and 2) and at point 5 (from points
3 and 2).

I1: intensity at vertex 1
I2: intensity at vertex 2
I3: intensity at vertex 3
I4: intensity at point 4
I5: intensity at point 5
Now we are able to find the intensity at point p (from points 4 and 5).

Then we take y-1 as our next scanline.

Similar calculations are used to obtain intensities at successive horizontal pixel positions along
each scan line. Incremental interpolation of intensity values along a polygon edge is used for
successive scan lines. When the surfaces are to be rendered in color, the intensities of each color
component are calculated at the vertices. Gouraud shading can be combined with a hidden-surface
algorithm to fill in the visible polygons along each scan line.
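
A hedged sketch of the interpolation step: given vertex intensities, it linearly interpolates along two edges to get the scanline endpoints (I4, I5) and then across the scanline to get the pixel intensity Ip (the vertex coordinates and intensities are made-up example values):

def lerp(a, b, t):
    # Linear interpolation between a and b with parameter t in [0, 1].
    return a + (b - a) * t

def gouraud_scanline_intensity(y, v1, v2, v3, x):
    # v = (x, y, intensity). Edge 1-2 and edge 3-2 are intersected by the
    # scanline at height y, giving I4 and I5; Ip is interpolated between them.
    (x1, y1, i1), (x2, y2, i2), (x3, y3, i3) = v1, v2, v3
    t12 = (y - y1) / (y2 - y1)
    x4, i4 = lerp(x1, x2, t12), lerp(i1, i2, t12)   # point 4 on edge 1-2
    t32 = (y - y3) / (y2 - y3)
    x5, i5 = lerp(x3, x2, t32), lerp(i3, i2, t32)   # point 5 on edge 3-2
    return lerp(i4, i5, (x - x4) / (x5 - x4))       # point p on the scanline

# Triangle with intensities 0.2, 1.0, 0.6 at its vertices.
v1, v2, v3 = (0, 0, 0.2), (5, 10, 1.0), (10, 0, 0.6)
print(gouraud_scanline_intensity(5, v1, v2, v3, x=5))   # about 0.7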

Advantages :

 Gouraud shading discards the intensity discontinuities associated with the constant (flat)
shading model.

Disadvantages :

 Highlights on the surface are sometimes displayed with anomalous shapes.

 The linear intensity interpolation can cause bright or dark intensity streaks, called Mach bands,
to appear on the surface. The Mach band effect can be reduced by breaking the surface into a
greater number of smaller polygons, or by using other methods such as Phong shading, which
requires more calculations.
 Sharp drops in intensity values on the polygon surface cannot be displayed.
9. Explain Multimedia and different types of multimedia
Ans- The word multi and media are combined to form the word multimedia. The word “multi”
signifies “many.” Multimedia is a type of medium that allows information to be easily
transferred from one location to another. Multimedia is the presentation
of text, pictures, audio, and video with links and tools that allow the user to navigate, engage,
create, and communicate using a computer. Multimedia refers to the computer-assisted
integration of text, drawings, still and moving images (videos), graphics, audio, animation, and
any other media in which any type of information can be expressed, stored, communicated, and
processed digitally.
Categories of Multimedia

1. Linear Multimedia
It is also called non-interactive multimedia. In the case of linear multimedia, the end-user
cannot control the content of the application. It has literally no interactivity of any kind. Some
multimedia projects, like movies, present material in a linear fashion from beginning
to end. A linear multimedia application lacks all the features with the help of which a user can
interact with the application, such as the ability to choose different options, click on
icons, control the flow of the media, or change the pace at which the media is displayed.
Linear multimedia works very well for providing information to a large group of people, such as
at training sessions, seminars, workplace meetings, etc.

2. Non-Linear Multimedia
In non-linear multimedia, the end-user is allowed navigational control to rove through the
multimedia content at his own desire. The user can control the access of the application. Non-
linear media offers user interactivity to control the movement of data. Examples include computer
games, websites, self-paced computer-based training packages, etc.

Applications of Multimedia
Multimedia indicates that, in addition to text, graphics/drawings, and photographs, computer
information can be represented using audio, video, and animation. Multimedia is used in:

1. Education
Multimedia is becoming increasingly popular in the field of education. It is often used to
produce study materials for pupils and to ensure that they gain a thorough comprehension of
various disciplines. Edutainment, which combines education and entertainment, has become
highly popular in recent years; it delivers learning to the user in an enjoyable form.
2. Entertainment
The usage of multimedia in films creates a unique audio and video impression. Today,
multimedia has completely transformed the art of filmmaking around the world. Many difficult
effects and actions are practical only with multimedia techniques.
The entertainment sector makes extensive use of multimedia. It’s particularly useful for creating
special effects in films and video games. The most visible illustration of the emergence of
multimedia in entertainment is music and video apps. Interactive games become possible thanks
to the use of multimedia in the gaming business. Video games are more interesting because of
the integrated audio and visual effects.
3. Business
Marketing, advertising, product demos, presentation, training, networked communication, etc.
are applications of multimedia that are helpful in many businesses. The audience can quickly
understand an idea when multimedia presentations are used. It gives a simple and effective
technique to attract visitors’ attention and effectively conveys information about numerous
products. It’s also utilized to encourage clients to buy things in business marketing.
4. Technology & Science
In the sphere of science and technology, multimedia has a wide range of applications. It can
communicate audio, films, and other multimedia documents in a variety of formats. Only
multimedia can make live broadcasting from one location to another possible.
It is beneficial to surgeons, who can rehearse intricate procedures such as brain surgery
and reconstructive surgery using images made from imaging scans of the human body. Plans can
be produced more efficiently to cut expenses and complications.
5. Fine Arts
Multimedia artists work in the fine arts, combining approaches employing many media and
incorporating viewer involvement in some form. For example, a variety of digital mediums can
be used to combine movies and operas.
Digital artist is a new word for these types of artists. Digital painters make digital paintings,
matte paintings, and vector graphics of many varieties using computer applications.
6. Engineering
Multimedia is frequently used by software engineers in computer simulations for military or
industrial training. It is also used for software interfaces, created as a partnership between
creative professionals and software engineers. Such simulations depend on multimedia to present
the results of detailed calculations visually.
Components of Multimedia
Multimedia consists of the following 5 components:

1. Text
Characters are used to form words, phrases, and paragraphs in the text. Text of some kind
appears in almost all multimedia creations. The text can be in a variety of fonts and sizes to
match the multimedia software’s professional presentation. Text in multimedia systems can
communicate specific information or serve as a supplement to the information provided by the
other media.

2. Graphics
Non-text information, such as a sketch, chart, or photograph, represented digitally. Graphics
add to the appeal of the multimedia application. In many circumstances, people dislike reading
large amounts of text on computers. As a result, pictures are frequently used instead of words
to clarify concepts, offer background information, and so on. Graphics are at the heart of any
multimedia presentation. The use of visuals in multimedia enhances the effectiveness and
presentation of the concept. Programs such as Windows Picture Viewer and Internet Explorer are
often used to view graphics. Adobe Photoshop is a popular graphics editing program that allows
you to effortlessly change graphics and make them more effective and appealing.

3. Animations
Animation is a sequence of still images flipped through in rapid succession: a set of visuals
that gives the impression of movement. Animation is the process of making a still image appear
to move. A presentation can also be made lighter and more appealing by using animation. In
multimedia applications, animation is quite popular. Commonly used animation viewing programs
include Fax Viewer, Internet Explorer, etc.

4. Video
Video consists of photographic images that are played back at speeds of 15 to 30 frames per
second so that they appear to be in full motion. The term video refers to a moving image that is
accompanied by sound, such as a television picture. Of course, text can be included in videos,
either as captioning for spoken words or as text embedded in an image, as in a slide
presentation. Programs widely used to view videos include RealPlayer, Windows Media Player, etc.

5. Audio
Any sound, whether it’s music, conversation, or something else. Sound is a crucial aspect
of multimedia, delivering the joy of music, special effects, and other forms of entertainment.
Decibels are the unit of measurement for volume and sound pressure level. Audio files are used as
part of the application context as well as to enhance interaction. Audio files must occasionally
be distributed using plug-in media players when they appear within online applications and
webpages. MP3, WMA, WAV, MIDI, and RealAudio are examples of audio formats. Programs widely
used to play audio include RealPlayer, Windows Media Player, etc.
Some of the advantages of multimedia are:
 It is interactive and integrated: The digitization process integrates all of the different
media. The ability to receive immediate feedback enhances interactivity.
 It is quite user-friendly: The user does not expend much energy, as they can sit and watch
the presentation, read the text, and listen to the audio.
 It is flexible: Because it is digital, this media can be easily shared and adapted to suit
various settings and audiences.
 It appeals to a variety of senses: Multimedia makes extensive use of the user’s senses, for
example hearing, seeing and speaking.
 Available for all types of audiences: It can be utilized for a wide range of audiences, from a
single individual to a group of people.
11. Define
a. Hyper messaging
Hypermedia messaging is one of the major multimedia applications. Messaging started out as a
simple text-based electronic mail application. Multimedia components have made messaging much
more complex. We see below how these components are added to messages. A hypermedia message may
be a simple message in the form of text with embedded graphics, a sound track, or a video clip,
or it may be the result of analysis of material from books, CD-ROMs, and other on-line
applications. An authoring sequence for a message based on such analysis may combine several of
these components.
Mobile Messaging
Mobile messaging represents a major new dimension in the user's interaction with the messaging
system. With the emergence of remote access by users of personal digital assistants and
notebook computers, made possible by wireless communications developments (wireless modems and
cellular telephone links supporting wide-ranging access), mobile messaging has significantly
influenced messaging paradigms.
Hypermedia messaging is not restricted to the desktops; it is increasingly being used on the road
through mobile communications in metaphors very different from the traditional desktop
metaphors.

b. digital Voice and Audio

Digital Voice
Speech is analog in nature and is converted to digital form by an analog-to-digital converter (ADC).
An ADC takes an input signal from a microphone and converts the amplitude of the sampled
analog signal to an 8, 16 or 32 bit digital value.
The four important factors governing an ADC are:
Sampling Rate: The rate at which the ADC takes a sample of an analog signal.
Resolution: The number of bits utilized for conversion determines the resolution
of ADC.

Linearity: Linearity implies that the sampling is linear at all frequencies and that the amplitude
truly represents the signal.

Conversion Speed: It is the speed at which the ADC converts the analog signal into digital
values. It must be fast enough to keep up with the chosen sampling rate.
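
As a toy sketch of how sampling rate and resolution interact, the code below samples an assumed
pure sine tone and quantizes each sample to n bits. Real ADCs do this in hardware; the function
name and the sample values are invented for this illustration.

import math

# Toy A/D conversion: sample a 1 kHz sine tone and quantize each sample
# to n bits. Purely illustrative; real ADCs work in hardware.
def sample_and_quantize(freq_hz, sample_rate, n_bits, duration_s):
    levels = 2 ** n_bits                 # resolution: number of steps
    codes = []
    for k in range(int(sample_rate * duration_s)):
        t = k / sample_rate              # sampling instant
        amp = math.sin(2 * math.pi * freq_hz * t)    # analog value in [-1, 1]
        codes.append(round((amp + 1) / 2 * (levels - 1)))
    return codes

# Telephone-quality settings: 8 kHz sampling, 8-bit resolution.
print(sample_and_quantize(1000, 8000, 8, 0.001))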
An audio format defines the quality and loss characteristics of audio data. Different types of
audio formats are used depending on the application. Audio formats are broadly divided into
three categories:
1. Uncompressed Format
2. Lossy Compressed format
3. Lossless Compressed Format

Uncompressed Audio Format:

 PCM –
It stands for Pulse-Code Modulation. It represents raw analog audio signals in digital form.
To convert an analog signal into a digital signal, it has to be sampled at regular intervals;
hence PCM has a sampling rate and a bit depth (the bits used to represent each sample). It is
an exact representation of the analog sound and does not involve compression. It is the most
common audio format used in CDs and DVDs (see the data-rate sketch after this list).
 WAV –
It stands for Waveform Audio File Format, it was developed by Microsoft and IBM in 1991.
It is just a Windows container for audio formats. That means that a WAV file can contain
compressed audio. Most WAV files contain uncompressed audio in PCM format. It is just a
wrapper. It is compatible with both Windows and Mac.
 AIFF –
It stands for Audio Interchange File Format. It was developed by Apple for Mac systems in
1988. Like WAV files, AIFF files can contain multiple kinds of audio. It usually contains
uncompressed audio in PCM format and is just a wrapper for the PCM encoding. It is
compatible with both Windows and Mac.
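
For intuition, the arithmetic below works out the data rate of standard uncompressed CD audio
(44.1 kHz sampling, 16-bit resolution, stereo PCM); the figures follow directly from those
parameters.

# Uncompressed PCM data rate for CD-quality audio.
sample_rate = 44_100          # samples per second
bit_depth = 16                # bits per sample (resolution)
channels = 2                  # stereo

bits_per_second = sample_rate * bit_depth * channels
print(bits_per_second / 1000, "kbps")                      # 1411.2 kbps
print(bits_per_second * 60 / 8 / 1_000_000, "MB/minute")   # ~10.6 MB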

Lossy Compressed Format:

It is a form of compression that loses data during the compression process, but the difference
in quality is usually not noticeable to the ear.

 MP3 –
It stands for MPEG-1 Audio Layer 3. It was released in 1993 and quickly became popular; it is
the most popular audio format for music files. The main aim of MP3 is to remove all those
sounds that are not hearable, or are less noticeable, by human ears, thereby making the music
file smaller. MP3 is close to a universal format, compatible with almost every device.
 AAC –
It stands for Advanced Audio Coding. It was developed in 1997, after MP3. The compression
algorithm used by AAC is much more complex and advanced than MP3, so when comparing
a particular audio file in MP3 and AAC formats at the same bitrate, the AAC one will
generally have better sound quality. It is the standard audio compression method used by
YouTube, Android, iOS, iTunes, and PlayStations.
 WMA –
It stands for Windows Media Audio. It was released in 1999. It was designed to remove some
of the flaws of the MP3 compression method. In terms of quality it is better than MP3, but it
is not widely used.
Lossless compression:

This method reduces file size without any loss in quality. However, it cannot shrink files as
much as lossy compression: a losslessly compressed file is typically two to three times (or
more) the size of its lossy counterpart. A rough size comparison appears after this list.
 FLAC –
It stands for Free Lossless Audio Codec. It can compress a source file by up to 50% without
losing data. It is most popular in its category and is open-source.
 ALAC –
It stands for Apple Lossless Audio Codec. It was developed by Apple and launched in 2004,
and was made open-source in 2011.
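
The sketch below compares approximate file sizes for a three-minute stereo track, assuming a
typical 128 kbps MP3 and a FLAC that halves the PCM size; both figures are assumptions about
typical settings, not fixed properties of the formats.

# Approximate file sizes for a 3-minute stereo track (assumed bitrates).
seconds = 180
pcm_bps  = 44_100 * 16 * 2      # uncompressed CD-quality PCM
flac_bps = pcm_bps * 0.5        # FLAC often achieves ~50% compression
mp3_bps  = 128_000              # a common lossy bitrate

for name, bps in [("PCM", pcm_bps), ("FLAC", flac_bps), ("MP3", mp3_bps)]:
    print(name, round(bps * seconds / 8 / 1_000_000, 1), "MB")
# PCM ~31.8 MB, FLAC ~15.9 MB, MP3 ~2.9 MB: lossless sits between
# uncompressed and lossy sizes.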

c. Full motion Video


Full-motion video (FMV) refers to a computer system capable of displaying full video images and
sound. Depending on the compression used and the computer hardware, the FPS (frames per second)
can vary; playback below about 24 fps appears choppy.
A full-motion video (FMV) is the rapid display of a series of images by a computer in such a way
that the person viewing it perceives fluid movement. An FMV can consist of live
action, animation, computer-generated imagery or a combination of those formats. It typically
includes sound and can include text superimposed over the video.
An FMV is pre-recorded or pre-rendered and is stored as compressed data on a disk, such as a
compact disc (CD), a digital video disc (DVD) or a computer's hard disk. Compression is used in
order to decrease the amount of disk space needed to store the data, which is then decompressed
as the video is played back.
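
To see why compression is unavoidable, the sketch below computes the raw data rate of
uncompressed video at an assumed 640 x 480 resolution, 24-bit colour and 30 fps; the numbers
follow directly from those assumed parameters.

# Raw (uncompressed) video data rate: the reason FMV is stored compressed.
width, height = 640, 480      # assumed frame size
bytes_per_pixel = 3           # 24-bit RGB colour
fps = 30                      # frames per second

bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second / 1_000_000, "MB/s")                 # ~27.6 MB/s
print(bytes_per_second * 60 / 1_000_000_000, "GB/minute")   # ~1.66 GB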

FMV features and benefits

You can move the video player anywhere on the computer display, resize it, minimize it, and
close it. The video player can also be linked to a map display, enabling display of the video
footprint, sensor location, and field of view on the map.

d. Multimedia I/O Technologies


Multimedia Input and Output Devices

Wide ranges of input and output devices are available for multimedia.
Image Scanners: Image scanners are the scanners by which documents or a manufactured part
are scanned. The scanner acts as the camera eye and takes a photograph of the document, creating
an unaltered electronic pixel representation of the original.

Sound and Voice: When voice or music is captured by a microphone, it generates an electrical
signal. This electrical signal has analog sinusoidal waveforms. To digitize, this signal is converted
into digital voice using an analog-to-digital converter.

Full-Motion Video: It is the most important and most complex component of Multimedia
System. Video Cameras are the primary source of input for full-motion video.

Pen Driver: It is a pen device driver that interacts with the digitizer to receive all digitized
information about the pen location and builds pen packets for the recognition context manager.
Recognition Context Manager: It is the main part of the pen system. It is responsible for
coordinating Windows pen applications with the pen. It works with the recognizer, dictionary, and
display driver to recognize and display pen-drawn objects.
Recognizer: It recognizes handwritten characters and converts them to ASCII.
Dictionary: A dictionary is a dynamic link library (DLL); the Windows pen computing
system uses this dictionary to validate the recognition results.
Display Driver: It interacts with the graphics device interface and display hardware. When a user
starts writing or drawing, the display driver paints the ink trace on the screen.
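
The flow through these components can be pictured with a small sketch. Every class and method
name below is invented for illustration and does not correspond to any real pen-computing API.

# Toy model of the pen-input pipeline described above.
# All class and method names are invented for illustration.

class Recognizer:
    def recognize(self, strokes):
        # Map digitized strokes to an ASCII character (stubbed here).
        return "A"

class Dictionary:
    def __init__(self, words):
        self.words = set(words)

    def validate(self, text):
        # Accept single characters, or words found in the dictionary.
        return len(text) == 1 or text in self.words

class DisplayDriver:
    def paint(self, text):
        print("displaying:", text)

class RecognitionContextManager:
    """Coordinates the recognizer, dictionary and display driver."""
    def __init__(self):
        self.recognizer = Recognizer()
        self.dictionary = Dictionary(["hello", "pen"])
        self.display = DisplayDriver()

    def on_pen_packet(self, strokes):
        text = self.recognizer.recognize(strokes)
        if self.dictionary.validate(text):
            self.display.paint(text)

# The pen driver would deliver digitized pen packets like this:
RecognitionContextManager().on_pen_packet(strokes=[(0, 0), (1, 1)])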
Video and Image Display Systems: Display System Technologies
There are a variety of display system technologies employed for decoding compressed data for
display. For VGA screens, mixing and scaling technologies are used:
VGA mixing: Images from multiple sources are mixed in the image acquisition memory.
VGA mixing with scaling: Scaler ICs are used for sizing and positioning of images in
predefined windows.
Dual-buffered VGA mixing/scaling: With dual buffering, the original image is protected from
loss; in this technology, a separate buffer is used to maintain the original image.

Visual Display Technology Standards

MDA: Monochrome Display Adapter
 It was introduced by IBM in 1981.
 It displays 80 x 25 columns and rows of text.
 It could not display bitmap graphics.

CGA: Color Graphics Adapter
 It was introduced in 1981.
 It was designed to display both text and bitmap graphics; it supported RGB color display.
 It could display graphics at a resolution of 640 x 200 pixels.
 It displays both 40 x 25 and 80 x 25 rows and columns of text characters.

MGA: Monochrome Graphics Adapter
 It was introduced in 1982.
 It could display both text and graphics.
 It could display at a resolution of 720 x 350 for text and 720 x 338 for graphics.
 MDA is the compatible mode for this standard.

EGA: Enhanced Graphics Adapter
 It was introduced in 1984.
 It emulated both the MDA and CGA standards.
 It allowed the display of both text and graphics in 16 colors at a resolution of 640 x 350 pixels.

PGA: Professional Graphics Adapter
 It was introduced in 1985.
 It could display bitmap graphics at 640 x 480 resolution with 256 colors.
 The compatible mode of this standard is CGA.

VGA: Video Graphics Array
 It was introduced by IBM in 1988.
 It offers CGA and EGA compatibility.
 It displays both text and graphics.
 It generates analog RGB signals to display 256 colors.
 It remains the basic standard for most video display systems.

SVGA: Super Video Graphics Adapter
 It was developed by VESA (Video Electronics Standards Association).
 Its goal is to display at higher resolutions than VGA, with higher refresh rates, to minimize flicker.

e. Animation

Animation refers to movement on the screen of the display device, created by displaying a
sequence of still images. Animation is the technique of designing, drawing, making layouts and
preparing photographic series that are integrated into multimedia and gaming products.
Animation connects the exploitation and management of still images to generate the illusion of
movement. A person who creates animations is called an animator. Animators use various computer
technologies to capture the pictures and then animate them in the desired sequence. Animation
is the process of creating a scene through the rapid display of pictures and motion. When we hear
the word animation, we think of cartoons like Doraemon, Shin-chan, etc. In earlier times,
animation was done by continuously moving pictures of characters and scenes by hand, or with
puppets. Nowadays, with the help of many tools, it is possible to create the characters
and scenes in 2D or 3D and produce the animation.
There are many tools created by developers to make animation, such as Blender, Maya,
etc. Animation can be of various types, like 2D animation, 3D animation, paper animation,
traditional animation, puppet animation, etc.

The term "animation" covers several topics in today's society, which is full of creativity
and visualization. Everyone immediately conjures up images of cartoons and various Disney
shows when they hear this word. Children love animated films like those from Disney,
Doraemon, etc. All cartoons and animated images are a sort of animation created by combining
thousands of individual images and playing them out in a predetermined order.

When we think back a few decades, all animation was produced by hand or by painting, and certain
puppet-like structures were made to display the animation. These, however, are real-world
animation techniques; in the current technological era, digital animation continues to advance.
