CG Presentation Week 2 (APIs, GPU & OpenGL Installation)


Computer Graphics

Week-2

Presentation by: Ms. Ifrah Mansoor


APIs, GPU, Graphics APIs
APIs
• An Application Programming Interface (API) is a way for two or more computer programs to communicate with each other.

• There are many types of APIs, but when it comes to GPUs, people generally refer to them as graphics APIs.

• Each time you use an app like Facebook, send an instant message, or check the weather on your phone, you are using an API.
• A public API is open and available for use by any outside developer or business. It is also called an external API.

• An internal or private API is intended only for use within the enterprise, to connect systems and data within the business.

• A partner API is only available to specifically selected and authorized outside developers/partners.

• Composite APIs generally combine two or more APIs to craft a sequence of related or interdependent operations.
GPU
• Graphics processing unit: a specialized processor originally designed to accelerate graphics rendering.

• The GPU's main role is to render images. To do this, it requires space to hold the information needed to build the full completed image, so it uses RAM (Random Access Memory) to store this data.

• The data consists of each pixel of the image, together with its colour and its location on the screen.
Types of GPUs:
There are two types of GPU, integrated and discrete:
• Integrated GPU:
• The term integrated graphics refers to a computer where the GPU is built on the same chip as the CPU.
• Integrated GPUs use the system RAM, rather than having their own RAM like discrete GPUs.

• Discrete GPU:
• A discrete GPU is a dedicated graphics card, completely separate from the CPU. The graphics card has its own RAM to store image data.
Graphics API
What is a graphics API?
• A graphics API is a collection of documented libraries and commands that communicate with the computer hardware to create 2D and 3D applications.

• A graphics API is a type of API that tells graphics hardware how to draw something on the screen.
What are the most widely supported graphics APIs?
• Two graphics APIs have had widespread support: Microsoft's DirectX and OpenGL (originally maintained by the OpenGL ARB, now by the Khronos Group). For PC gaming, DirectX was typically used, as Windows dominated the personal computer market.

• Microsoft DirectX is a collection of application programming interfaces for handling tasks related to multimedia, especially game programming and video, on Microsoft platforms.
Graphics APIs
• OpenGL - A popular graphics API supported on desktop platforms.
• Direct3D (a subset of DirectX) - A graphics API supported on Microsoft Windows platforms.
• OpenGL ES - A popular graphics API supported on mobile platforms.
• Metal - A graphics API supported on Apple platforms.
• OpenCL - An API for general-purpose parallel computing across CPUs and GPUs, supported on both desktop and mobile platforms.
APIs in Computer Graphics
• These APIs for 3D computer graphics are particularly popular:
• Direct3D (a subset of DirectX)
• Glide.
• Mantle developed by AMD.
• Metal developed by Apple.
• OpenGL and the OpenGL Shading Language.
• OpenGL ES 3D API for embedded devices.
• QuickDraw 3D developed by Apple Computer starting in 1995, abandoned
in 1998.
• RenderMan.
Graphics Functions
[Slides showing the graphics function system and a list of graphics functions; images not reproduced.]
Installation of OpenGL
Installation of OpenGL in Visual Studio

• We will be using Visual Studio.

• First, download Visual Studio 2019 or 2022.
Installation of OpenGL in Visual Studio
Now select the required components as shown in the image below and click Install while it downloads.
Installation of OpenGL in Visual Studio
• Create a new C++, Windows, console-based Empty Project.
Installation of OpenGL in Visual Studio

• Apply configurations.
Installation of OpenGL in Visual Studio
• Select the Tools option from the menu bar.

• Select NuGet Package Manager > Package Manager Console.

• The Package Manager Console window will open at the bottom of the project screen.

• The NuGet Package Manager contains reusable code that other developers have made available for use in your projects.
Installation of OpenGL in Visual Studio

• Run the given command in the Package Manager Console:
Install-Package nupengl.core

• nupengl.core is the library package that allows users to access the OpenGL libraries in Visual Studio.
Implementing OpenGL with C++
Implementing OpenGL with C++
• Right-Click on the
created Project and
select Add option.

• Click New Item and add


new C++ file to the
created Project.

• Note: For every new project


you will have to install
nupengl.core library.
Implementing OpenGL with C++
• Now write a basic C++ program for drawing a red-colored square using OpenGL.
• Step 1: Include libraries
#include <windows.h> // for MS Windows; mandatory for OpenGL on Windows
#include <GL/glut.h> // GLUT header (from freeglut); it also includes glu.h and gl.h
// Note: GL/glut.h already pulls in GL/gl.h and GL/glu.h, so those headers do
// not need to be included separately, and conio.h, stdio.h, math.h and
// string.h are not required for this example.
Implementing OpenGL with C++
• Step 2: Write display function
void display() {
glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Set background color to black and opaque
glClear(GL_COLOR_BUFFER_BIT); // Clear the color buffer (background)

// Draw a red 1x1 square centered at the origin
glBegin(GL_QUADS); // Each set of 4 vertices forms a quad
glColor3f(1.0f, 0.0f, 0.0f); // Red
glVertex2f(-0.5f, -0.5f); // x, y
glVertex2f(0.5f, -0.5f);
glVertex2f(0.5f, 0.5f);
glVertex2f(-0.5f, 0.5f);
glEnd();

glFlush(); // Render now
}
Implementing OpenGL with C++
• Step 3: Create Main function

int main(int argc, char** argv) {
glutInit(&argc, argv); // Initialize GLUT
glutInitWindowSize(320, 320); // Set the window's initial width & height
glutInitWindowPosition(50, 50); // Position the window's initial top-left corner
glutCreateWindow("OpenGL Setup Test"); // Create a window with the given title (size/position must be set before this call)
glutDisplayFunc(display); // Register display callback handler for window re-paint
glutMainLoop(); // Enter the event-processing loop
return 0;
}
Implementing OpenGL with C++
Graphics Pipeline
Computer Graphics Pipeline
• In computer graphics, a graphics pipeline is a rendering pipeline.
• The rendering pipeline is the sequence of steps that OpenGL takes when rendering objects.
• Simply put, a graphics pipeline is a conceptual model describing the steps a graphics system needs to perform to render a 3D scene to a 2D screen.
Computer Graphics Pipeline
• Once a 3D model has been created, for instance in a video game or any other 3D
computer animation, the graphics pipeline is the process of turning that 3D model
into what the computer displays.
• Because the steps required for this operation depend on the software and hardware
used and the desired display characteristics, there is no universal graphics pipeline
suitable for all cases.
• However, graphics application programming interfaces (APIs) such
as Direct3D and OpenGL were created to unify similar steps and to control the
graphics pipeline of a given hardware accelerator.
Structure
• A graphics pipeline can be divided into three main parts: Application,
Geometry and Rasterization.

1. Application
• The application step is executed by the software on the main processor (CPU).

• In the application step, changes are made to the scene as required, for example by user interaction by means of input devices, or during an animation.

• In a modern game engine such as Unity, the programmer deals almost exclusively with the application step, and uses a high-level language such as C#, as opposed to C or C++.

• The new scene with all its primitives, usually triangles, lines and points, is then passed on to the next step in the pipeline.
2. Geometry
• The geometry step is the first stage in computer graphics systems that perform image generation based on geometric models. It is responsible for the majority of the operations on polygons and their vertices, and can be divided into the following five tasks.
• How these tasks are organized as actual parallel pipeline steps depends on the particular implementation.
2. Geometry

a. Camera Transformation
• In addition to the objects, the scene also defines a virtual camera or viewer that indicates the position and direction of view relative to which the scene is rendered.
• The scene is transformed so that the camera is at the origin looking along the Z axis.
• The resulting coordinate system is called the camera coordinate system, and the transformation is called the camera transformation or view transformation.

[Figure. Left: position and direction of the virtual viewer (camera), as defined by the user. Right: position of the objects after the camera transformation. The light gray area is the visible volume.]
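The camera transformation above can be sketched in a few lines of C++. This is a simplified sketch under a strong assumption: the camera's axes are taken to be aligned with the world axes, so only the translation part is shown (a full view transform would also apply the inverse of the camera's rotation). The names `Vec3`, `worldToCamera` and `eye` are illustrative, not from the slides.

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

// Sketch of the view transformation: move the camera to the origin by
// subtracting its position `eye` from every world-space point. Rotation
// into the camera's basis is omitted (camera assumed axis-aligned).
Vec3 worldToCamera(const Vec3& p, const Vec3& eye) {
    return { p.x - eye.x, p.y - eye.y, p.z - eye.z };
}
```

After this step, a point that coincides with the camera position maps to the origin, as the slide describes.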
2. Geometry
b. Lighting
• Often a scene contains light sources placed at different positions to make the lighting of the objects appear more realistic.
• In addition, a general (ambient) lighting is applied to all surfaces; it is the diffuse and thus direction-independent brightness of the scene.
• The sun is a directed light source, which can be assumed to be infinitely far away.
2. Geometry
c. Projection
• The 3D projection step transforms the view volume into a cube with the corner-point coordinates (-1, -1, 0) and (1, 1, 1); occasionally other target volumes are used.
• This step is called projection even though it transforms a volume into another volume, since the resulting Z coordinates are not stored in the image but are only used for Z-buffering in the later rasterization step.
• To limit the number of displayed objects, two additional clipping planes (near and far) are used; the visible volume is therefore a truncated pyramid (frustum).
• The parallel or orthogonal projection is used, for example, for technical representations, because it has the advantage that all parallels in object space are also parallel in image space, and that surfaces and volumes are the same size regardless of the distance from the viewer.
• Maps use, for example, an orthogonal projection (a so-called orthophoto), but oblique images of a landscape cannot be used in this way; although they can technically be rendered, they seem so distorted that we cannot make any use of them.

[Figure: a view frustum.]
2. Geometry
What is Z-buffering in Projection?
• A depth buffer, also known as a z-buffer, is a type of data
buffer used in computer graphics to represent depth information of
objects in 3D space from a particular perspective.
• Depth buffers are an aid to rendering a scene to ensure that the
correct polygons properly occlude other polygons.
• Z-buffering was first described in 1974 by Wolfgang Straßer in his
PhD thesis on fast algorithms for rendering occluded objects.
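The depth-buffer idea described above can be sketched in a few lines of C++: keep the nearest depth seen so far at each pixel, and accept a new fragment only if it is closer. The `DepthBuffer` type and its method names are illustrative, not from any real API.

```cpp
#include <cassert>
#include <limits>
#include <vector>

// Minimal z-buffer sketch: one depth value per pixel, initialized to
// "infinitely far away". A fragment passes the depth test only when it is
// closer than everything already drawn at that pixel, so nearer polygons
// correctly occlude farther ones regardless of draw order.
struct DepthBuffer {
    int w, h;
    std::vector<double> depth;
    DepthBuffer(int w, int h)
        : w(w), h(h), depth(w * h, std::numeric_limits<double>::infinity()) {}

    // Returns true (and records d) if the fragment at (x, y) is visible.
    bool testAndSet(int x, int y, double d) {
        double& stored = depth[y * w + x];
        if (d < stored) { stored = d; return true; }
        return false;
    }
};
```

A farther fragment drawn after a nearer one is simply rejected, which is exactly the occlusion behaviour the slide describes.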
2. Geometry
d. Clipping
• Any procedure that identifies those portions of a picture that are either inside or outside of a specified region of space is referred to as a clipping algorithm (or simply clipping).
• Everything outside the window is discarded.
• Clipping algorithms can be applied in world co-ordinates, so that only the contents of the window interior are mapped to device co-ordinates.
• Alternatively, the complete world co-ordinate picture can be mapped first to device co-ordinates and then clipped against the viewport.
2. Geometry
d. Clipping
• World co-ordinate clipping removes those primitives outside the window from further consideration, thus eliminating the processing necessary to transform those primitives to device space.
• Viewport clipping can reduce calculations by allowing concatenation of the viewing and geometric transformation matrices.
• But viewport clipping does require that the transformation to device co-ordinates be performed for all objects, including those outside the window.
2. Geometry
Clipping Example: Line Clipping
• Line clipping against rectangles.

[Figure: a line from (x0, y0) to (x1, y1) clipped against a window bounded by xmin, xmax, ymin and ymax.]

The problem: given a set of 2D lines or polygons and a window, clip the lines or polygons to the regions that are inside the window.
Clipping is tricky!

• Clipping a triangle (3 vertices in) against a window can produce a polygon with 6 vertices out.
• Clipping a single polygon (1 polygon in) can produce 2 polygons out.
2. Geometry

e. Window-Viewport transformation
• In order to output the image to any target area (viewport) of the screen, another transformation, the window-viewport transformation, must be applied.
• This is a shift, followed by scaling.
• The resulting coordinates are the device coordinates of the output device.
• The viewport contains 6 values: the height and width of the window in pixels, the upper-left corner of the window in window coordinates (usually 0, 0), and the minimum and maximum values for Z (usually 0 and 1).
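The shift-then-scale described above can be sketched directly. This is an illustrative sketch: the window is taken as the world rectangle [wxmin, wxmax] x [wymin, wymax], and the viewport as a vw x vh pixel region with top-left corner (vx, vy); all parameter names are assumptions, not from the slides.

```cpp
#include <cassert>

struct Pt { double x, y; };

// Window-to-viewport transformation: shift the point to the window origin,
// scale into viewport units, then shift to the viewport origin.
Pt windowToViewport(Pt p,
                    double wxmin, double wymin, double wxmax, double wymax,
                    double vx, double vy, double vw, double vh) {
    double sx = vw / (wxmax - wxmin);  // horizontal scale factor
    double sy = vh / (wymax - wymin);  // vertical scale factor
    return { vx + (p.x - wxmin) * sx, vy + (p.y - wymin) * sy };
}
```

For example, with a [-1, 1] x [-1, 1] window and a 320 x 320 viewport at the origin, the window's center lands at pixel (160, 160).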
3. Rasterization
• Raster Images
• These are the types of images that are produced when scanning or photographing an object.
• Raster images are compiled using pixels, or tiny dots, containing unique color and tonal information that come together to create the image.
• Since raster images are pixel-based, they are resolution dependent.
3. Rasterization
• What is rasterization using the top-left rule?
• A pixel sample is covered by a triangle if:
• its center lies completely inside the triangle, or
• its center lies exactly on a triangle edge (or on multiple edges, in the case of corners) that is (or, in the case of corners, all are) either a top or a left edge.
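The top-left rule can be sketched with edge functions. This is a sketch under stated assumptions: screen coordinates with y growing downward, and triangle vertices wound so every edge function is non-negative inside; the helper names (`edgeFn`, `isTopOrLeft`, `covered`) are illustrative. A sample exactly on a shared edge is then owned by exactly one of the two adjacent triangles, so it is never drawn twice.

```cpp
#include <cassert>

struct P { double x, y; };

// Signed-area edge function: > 0 when p is strictly on the interior side
// of the directed edge a -> b (for the winding assumed above).
double edgeFn(P a, P b, P p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Top-left rule (y grows downward): an edge owns samples lying exactly on
// it only if it is a top edge (horizontal, pointing right) or a left edge
// (pointing up the screen, i.e. toward smaller y).
bool isTopOrLeft(P a, P b) {
    bool top  = (a.y == b.y) && (b.x > a.x);
    bool left = (b.y < a.y);
    return top || left;
}

// A sample is covered if it is strictly inside every edge, or on an edge
// that the top-left rule assigns to this triangle.
bool covered(P v0, P v1, P v2, P p) {
    P e[3][2] = { {v0, v1}, {v1, v2}, {v2, v0} };
    for (auto& ed : e) {
        double w = edgeFn(ed[0], ed[1], p);
        if (w < 0) return false;                          // outside this edge
        if (w == 0 && !isTopOrLeft(ed[0], ed[1])) return false;
    }
    return true;
}
```

With the triangle (0,0), (1,0), (0,1), a sample on the top or left edge is covered, while a sample on the diagonal (a right edge) is not, matching the rule stated above.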
THANK YOU
