Computer Science Past Papers Solved
(20)
(i) Cache Memory (ii) Static & Dynamic RAM (iii) Instruction Cycle
(iv) Buses & their types (v) Segment Registers (vi) Instruction Pipelining
Answers:
Cache Memory:
Cache memory is a small, fast type of volatile computer memory that provides high-speed data
access to the processor and stores frequently used computer programs, applications and data.
As a temporary store, cache makes data retrieval easier and more efficient. It is the fastest
memory in a computer, and is typically integrated directly into the processor chip or placed
on a separate chip close to the CPU, between the processor and main random access memory (RAM).
Static RAM:
Data is stored in transistors, which require a constant power supply. Because the power is
continuous, SRAM does not need to be refreshed to remember the data being stored. SRAM is called
static because no action, i.e. refreshing, is needed to keep the data intact. It is used in cache memories.
Dynamic RAM:
Data is stored in capacitors. The capacitors that store data in DRAM gradually discharge; when the
charge is gone, the data is lost. A periodic refresh is therefore required for DRAM to function. DRAM is
called dynamic because constant action, i.e. refreshing, is needed to keep the data intact. It is used
to implement main memory.
Instruction Cycle:
A program residing in the memory unit of the computer consists of a series of instructions. The
program is executed on the computer by going through a cycle for each instruction.
In the basic computer, each instruction cycle includes the following procedures:
1. Fetch the instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory (if the instruction has an indirect address).
4. Execute the instruction.
After these four procedures are done, control switches back to the first step and
repeats the same process for the next instruction. The cycle therefore continues until a Halt
condition is met. The figure shows the phases contained in the instruction cycle.
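The cycle above can be sketched in C++ as a toy machine. The `Opcode` set, the `Machine` struct and the single-accumulator design are invented purely for illustration, not a real instruction set:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy illustration: each "instruction" is an opcode plus an operand
// address, and the cycle repeats fetch -> decode -> execute until a
// HALT opcode is met.
enum Opcode : uint8_t { LOAD = 0, ADD = 1, STORE = 2, HALT = 3 };

struct Instruction { Opcode op; std::size_t addr; };

struct Machine {
    std::vector<int> memory;
    int acc = 0;          // accumulator
    std::size_t pc = 0;   // program counter

    void run(const std::vector<Instruction>& program) {
        while (pc < program.size()) {
            Instruction ir = program[pc++];   // 1. fetch (and advance PC)
            switch (ir.op) {                  // 2. decode
                // 3. the effective address here is simply ir.addr
                case LOAD:  acc = memory[ir.addr]; break;  // 4. execute
                case ADD:   acc += memory[ir.addr]; break;
                case STORE: memory[ir.addr] = acc; break;
                case HALT:  return;           // Halt condition met
            }
        }
    }
};
```

The loop body mirrors the four procedures: fetch, decode, read the (here trivial) effective address, execute.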
Bus and its types:
Bus
A bus is a high-speed internal connection. Buses are used to send control signals and data
between the processor and other components.
Address bus - carries memory addresses from the processor to other components such as
primary storage and input/output devices. The address bus is unidirectional.
Data bus - carries the data between the processor and other components. The data bus is
bidirectional.
Control bus - carries control signals from the processor to other components. The control
bus also carries the clock's pulses. The control bus is unidirectional.
Segment Registers
Segments are specific areas defined in a program for containing data, code and stack. There are
three main segments −
Code Segment − It contains all the instructions to be executed. A 16-bit Code Segment
register or CS register stores the starting address of the code segment.
Data Segment − It contains data, constants and work areas. A 16-bit Data Segment
register or DS register stores the starting address of the data segment.
Stack Segment − It contains data and return addresses of procedures or subroutines. It is
implemented as a 'stack' data structure. The Stack Segment register or SS register stores
the starting address of the stack.
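In real-mode 8086 addressing, the 20-bit physical address is formed from a 16-bit segment register and a 16-bit offset as physical = segment × 16 + offset; a one-line sketch:

```cpp
#include <cassert>
#include <cstdint>

// Real-mode 8086: physical address = segment * 16 + offset.
// Shifting the segment left by 4 bits is the same as multiplying by 16.
uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return (static_cast<uint32_t>(segment) << 4) + offset;
}
```

For example, CS = 0x1000 with an offset of 0x0010 yields physical address 0x10010.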
Instruction Pipeline
Pipeline processing can occur not only in the data stream but in the instruction stream as well.
Most digital computers with complex instructions require an instruction pipeline to overlap
operations such as fetching, decoding and executing instructions.
In general, the computer needs to process each instruction with the following sequence of steps:
1. Fetch the instruction from memory.
2. Decode the instruction.
3. Calculate the effective address.
4. Fetch the operands from memory.
5. Execute the instruction.
6. Store the result in the proper place.
The five states that are being used in this process model are:
1. Running: It means a process that is currently being executed. Assuming that there is only
a single processor in the execution model below, at most one process at a time can be in
the Running state.
2. Ready: It means a process that is prepared to execute when given the opportunity by the
OS.
3. Blocked/Waiting: It means that a process cannot continue executing until some event
occurs, such as the completion of an input/output operation.
4. New: It means a new process that has been created but has not yet been admitted by the
OS for its execution. A new process is not loaded into the main memory, but its process
control block (PCB) has been created.
5. Exit/Terminate: A process or job that has been released by the OS, either because it is
completed or is aborted for some issue.
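The five states and their legal transitions form a small state machine. The `valid_transition` helper below is a hypothetical sketch of the model described above, not code from any particular OS:

```cpp
#include <cassert>

enum class State { New, Ready, Running, Blocked, Exit };

// Returns true if the transition is allowed in the five-state model.
bool valid_transition(State from, State to) {
    switch (from) {
        case State::New:     return to == State::Ready;    // admitted by OS
        case State::Ready:   return to == State::Running;  // dispatched
        case State::Running: return to == State::Ready     // preempted
                                 || to == State::Blocked   // waits for an event (e.g. I/O)
                                 || to == State::Exit;     // completed or aborted
        case State::Blocked: return to == State::Ready;    // awaited event occurred
        case State::Exit:    return false;                 // terminal state
    }
    return false;
}
```

Note that a Blocked process cannot go straight to Running: it must first become Ready and be dispatched.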
(b) Explain multi level feedback queue scheduling algorithm.
Multilevel Feedback Queue (MLFQ) CPU scheduling is like Multilevel Queue (MLQ) scheduling,
but here processes can move between the queues, which makes it much more efficient than
multilevel queue scheduling.
It is more flexible.
It allows different processes to move between different queues.
It prevents starvation by moving a process that waits too long in a lower-priority queue
to a higher-priority queue.
Selecting the best scheduler requires some means of choosing values for parameters such as
the number of queues and the time quantum of each queue.
It incurs more CPU overhead.
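A minimal, hypothetical sketch of MLFQ behaviour under the description above: three queues, demotion after a process uses a full time slice, and aging-based promotion to prevent starvation. All names and parameter values here are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

struct Process { int id; int remaining; int waited; };

struct MLFQ {
    std::vector<std::deque<Process>> levels{3};  // 0 = highest priority
    int quantum = 2;       // time slice per turn
    int aging_limit = 4;   // turns waited before promotion

    void add(int id, int burst) { levels[0].push_back({id, burst, 0}); }

    // One scheduling decision: run the front process of the highest
    // non-empty queue for one quantum. Returns its id, or -1 if idle.
    int step() {
        age();
        for (std::size_t q = 0; q < levels.size(); ++q) {
            if (levels[q].empty()) continue;
            Process p = levels[q].front();
            levels[q].pop_front();
            p.remaining -= quantum;
            p.waited = 0;
            if (p.remaining > 0) {
                // Used its whole slice: demote one level (if possible).
                std::size_t next = std::min(q + 1, levels.size() - 1);
                levels[next].push_back(p);
            }
            return p.id;
        }
        return -1;
    }

    // Promote processes that waited too long; this prevents starvation.
    void age() {
        for (std::size_t q = 1; q < levels.size(); ++q)
            for (std::size_t i = 0; i < levels[q].size(); )
                if (++levels[q][i].waited >= aging_limit) {
                    levels[q - 1].push_back(levels[q][i]);
                    levels[q].erase(levels[q].begin() + i);
                } else {
                    ++i;
                }
    }
};
```

A long process is gradually demoted while short processes finish quickly at the top level; aging moves waiting processes back up so none starves.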
Communication channels are of the following three types:
1. Simplex channel
2. Half duplex channel
3. Full duplex channel
1. Simplex Channel-
A simplex communication channel can send the signals only in one direction.
Thus, the entire bandwidth of the channel can be used during transmission.
Example-
Radio station
2. Half Duplex Channel-
A half duplex communication channel can send signals in both directions, but in only
one direction at a time.
It may be considered as a simplex communication channel whose transmission direction
can be switched.
Example-
Walkie-Talkie
3. Full Duplex Channel-
A full duplex communication channel can send signals in both directions at the same
time.
Full duplex communication channels greatly increase the efficiency of communication.
Example-
Telephone
The maximum number of bits a channel can carry per unit of time is called its capacity.
SECTION - II
Q.4. (a) What are Virtual Functions? And how they can be utilized for
polymorphism?
Use C++ for writing example program. (10)
(b) Explain with examples ANY TWO: (10)
(i) Inheritance & Aggregation (ii) Data Hiding & Encapsulation
(iii) Constructors & Destructors (iv) Class, Object and Abstraction
What are Virtual Functions? And how they can be utilized for
polymorphism?
A virtual function is a member function of a base class that is overridden in a derived class.
Classes that have virtual functions are called polymorphic classes.
Virtual functions are bound at runtime, hence the name runtime polymorphism. The use of
virtual functions allows the program to decide at runtime which function is to be called, based
on the type of the object pointed to by the pointer.
In C++, the member function of a class is selected at runtime using virtual function. The function
in the base class is overridden by the function with the same name of the derived class.
C++ virtual function : Syntax
The keyword virtual is used for defining virtual function.
class class_name
{
public:
    virtual return_type func_name( args... )
    {
        // function definition
    }
};
// C++ program to demonstrate runtime polymorphism
// using a virtual function
#include <iostream>

class Shape {
public:
    virtual void calculate()
    {
        std::cout << "Area of the shape" << std::endl;
    }
    virtual ~Shape() {}
};

class Rectangle : public Shape {
public:
    void calculate()
    {
        std::cout << "Area of rectangle = length * breadth" << std::endl;
    }
    ~Rectangle() {}
};

class Square : public Shape {
public:
    void calculate()
    {
        std::cout << "Area of square = side * side" << std::endl;
    }
    ~Square() {}
};

int main()
{
    Shape* S;          // base class pointer

    Rectangle r;
    S = &r;            // points to a Rectangle
    S->calculate();    // calls Rectangle::calculate() at runtime

    Square sq;
    S = &sq;           // points to a Square
    S->calculate();    // calls Square::calculate() at runtime

    return 0;
}
Aggregation: create new functionality by taking other classes and combining them into a new
class, then attach a common interface to this new class for interoperability with other code.
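A rough sketch of aggregation under this description; the `Car` and `Engine` names are invented for illustration. The combining class holds a pointer to a part that can exist independently of it:

```cpp
#include <cassert>

// The Engine can exist on its own, outside any Car.
class Engine {
public:
    explicit Engine(int hp) : horsepower_(hp) {}
    int horsepower() const { return horsepower_; }
private:
    int horsepower_;
};

// Aggregation: Car is built by combining an Engine it does not own
// exclusively, and exposes a common interface over it.
class Car {
public:
    explicit Car(const Engine* e) : engine_(e) {}
    int horsepower() const { return engine_->horsepower(); }
private:
    const Engine* engine_;  // aggregated, not owned
};
```

If the `Car` owned and fully contained the `Engine`'s lifetime, the relationship would be composition instead of aggregation.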
Data hiding
It is associated with data security.
It also helps conceal the complexities of the application.
It focuses on hiding/restricting the data usage.
It is considered as a process and a technique.
This data is always private and inaccessible.
Encapsulation
It can be defined as the wrapping up of data into a single module.
This will hide the complicated and confidential information about the application.
This encapsulated data can be private or public, depending on the requirement.
It is considered as a sub-process in the bigger process of data hiding.
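A small sketch showing both ideas together, using an invented `Account` class: the balance and the operations on it are wrapped into one module (encapsulation), and the field itself is private so it can only change through the validated interface (data hiding):

```cpp
#include <cassert>

class Account {
public:
    // The public interface validates every change to the hidden data.
    bool deposit(int amount) {
        if (amount <= 0) return false;  // invalid amounts are rejected
        balance_ += amount;
        return true;
    }
    int balance() const { return balance_; }
private:
    int balance_ = 0;  // hidden: inaccessible from outside the class
};
```

Callers can never set `balance_` directly, so the invariant "balance only grows by positive deposits" cannot be broken from outside.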
(iv) Class, Object and Abstraction
Class:
The building block of C++ that leads to Object-Oriented programming is a Class. It is a user-defined data
type, which holds its own data members and member functions, which can be accessed and used by
creating an instance of that class. A class is like a blueprint for an object.
Object:
This is the basic unit of object-oriented programming. That is, both the data and the functions
that operate on that data are bundled as a unit called an object.
Abstraction:
Abstraction is the concept of object-oriented programming that “shows” only essential attributes and
“hides” unnecessary information. The main purpose of abstraction is hiding the unnecessary details
from the users. Abstraction is selecting data from a larger pool to show only relevant details of the
object to the user. It helps in reducing programming complexity and efforts. It is one of the most
important concepts of OOPs.
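As a sketch of these three ideas with an invented `Student` class: the class is the blueprint, each constructed object is an instance with its own data, and abstraction means callers see only the essential interface, not the storage details:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Class: a blueprint holding its own data members and member functions.
class Student {
public:
    Student(std::string name, int roll)
        : name_(std::move(name)), roll_(roll) {}

    // Abstraction: only the essential attributes are exposed.
    int roll() const { return roll_; }
    const std::string& name() const { return name_; }

private:
    std::string name_;  // how the data is stored is hidden
    int roll_;
};
```

Each `Student` object created from this class carries its own `name_` and `roll_` values.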
Q.5. (a) Write and explain algorithm for Binary Search. (8)
(b) Explain ANY THREE: (12)
(i) Stack & Queue (ii) Tree & Graph (iii) Linked List & Array
(iv) Algorithm & Program (v) Complexity of Algorithm
(a) Write and explain algorithm for Binary Search
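The binary search answer itself does not appear in the text that follows; a standard sketch of the algorithm is:

```cpp
#include <cassert>
#include <vector>

// Binary search over a sorted array: repeatedly compare the key with
// the middle element and discard the half that cannot contain it.
// Returns the index of key, or -1 if the key is absent.
int binary_search(const std::vector<int>& a, int key) {
    int low = 0, high = static_cast<int>(a.size()) - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // midpoint, written to avoid overflow
        if (a[mid] == key) return mid;
        if (a[mid] < key) low = mid + 1;   // key can only be in the right half
        else high = mid - 1;               // key can only be in the left half
    }
    return -1;
}
```

At each step the search space halves, which is why the running time is O(log n) for n elements; the array must already be sorted.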
Queue is a linear data structure in which elements can be inserted only from one side of the list
called rear, and the elements can be deleted only from the other side called the front. The queue
data structure follows the FIFO (First In First Out) principle, i.e. the element inserted at first in
the list, is the first element to be removed from the list. The insertion of an element in a queue is
called an enqueue operation and the deletion of an element is called a dequeue operation. In a
queue, we always maintain two pointers: the front pointer points to the element that was
inserted first and is still present in the list, and the rear pointer points to the element that
was inserted last.
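The enqueue/dequeue behaviour described above can be demonstrated with the standard `std::queue` adaptor, where `push()` is the enqueue at the rear and `pop()` is the dequeue from the front:

```cpp
#include <cassert>
#include <queue>

// Returns the first element dequeued, demonstrating FIFO order.
int demo_fifo() {
    std::queue<int> q;
    q.push(10);            // enqueue at the rear
    q.push(20);
    int first = q.front(); // the element that was inserted first
    q.pop();               // dequeue from the front
    return first;
}
```

Because the queue is FIFO, the first value pushed (10) is the first one out, even though 20 was pushed afterwards.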
A graph is a collection of two sets V and E where V is a finite non-empty set of vertices and E is
a finite non-empty set of edges.
For Example:
G = {{V1, V2, V3, V4, V5, V6}, {E1, E2, E3, E4, E5, E6, E7}}
Tree:
A tree is a non-linear data structure consisting of nodes connected by edges, with no cycles,
in which one node is designated as the root.
The main differences between a graph and a tree are:
Types of edges: In a graph, edges can be directed or undirected; in a tree, they are always directed.
Loop formation: A graph can contain a cycle; a tree cannot contain any cycle.
Traversal: For graph traversal, we use Breadth-First Search (BFS) and Depth-First Search (DFS);
we traverse a tree using in-order, pre-order, or post-order traversal methods.
Applications: Graphs are used for finding the shortest path in networking; trees are used for
game trees and decision trees.
Array: Arrays store elements in contiguous memory locations, resulting in easily calculable
addresses for the elements stored and this allows faster access to an element at a specific index.
Linked List: Linked lists are less rigid in their storage structure and elements are usually not
stored in contiguous locations, hence they need to be stored with additional tags giving a
reference to the next element.
This difference in the data storage scheme decides which data structure would be more suitable
for a given situation.
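The difference can be seen in code: indexing a `std::vector` (array) is direct address arithmetic, while reaching the same position in a `std::list` means following links from the head. The helper names are invented for illustration:

```cpp
#include <cassert>
#include <iterator>
#include <list>
#include <vector>

// Contiguous storage: the address of element 2 is computed directly,
// so access is O(1).
int third_of_vector(const std::vector<int>& v) {
    return v[2];
}

// Linked storage: we must follow two next-pointers from the head,
// so access to position i is O(i).
int third_of_list(const std::list<int>& l) {
    auto it = l.begin();
    std::advance(it, 2);
    return *it;
}
```

Both return the same element; only the cost of reaching it differs, which is the trade-off the paragraph above describes.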
Complexities of an Algorithm
The complexity of an algorithm measures the amount of time and space required by an
algorithm for an input of size (n). The complexity of an algorithm can be divided into two types:
time complexity and space complexity.
Asymptotic notation is a mathematical tool that calculates the required time in terms of input size and
does not require the execution of the code.
The following 3 asymptotic notations are mostly used to represent the time complexity of
algorithms:
Big-O Notation (Ο) – Big-O notation specifically describes the worst-case scenario.
Omega Notation (Ω) – Omega(Ω) notation specifically describes the best-case scenario.
Theta Notation (θ) – This notation bounds a function from both above and below, and is often
used to describe the average-case complexity of an algorithm.
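One way to see these growth rates concretely is to count basic operations; the helper names below are invented for this sketch. Doubling n doubles the first count but quadruples the second, which is the difference between O(n) and O(n²):

```cpp
#include <cassert>
#include <cstddef>

// A single loop over n items performs n basic operations: O(n).
std::size_t linear_ops(std::size_t n) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; ++i) ++ops;
    return ops;
}

// Two nested loops over n items perform n * n operations: O(n^2).
std::size_t quadratic_ops(std::size_t n) {
    std::size_t ops = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j) ++ops;
    return ops;
}
```

Asymptotic notation describes exactly this scaling behaviour as n grows, independent of any particular machine.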
The process encompasses the entire range of activities, from initial customer inception to
software production and maintenance. It's also known as the Software Development Life
Cycle (SDLC). Let's take a look at each of the steps involved in a typical software
engineering process.
Step 1: Understanding Customer Requirements
This step is also known as the ''requirements collection'' step. It's all about
communicating with the customer before building the software, so you get to know their
requirements thoroughly. It's usually conducted by a business analyst or product analyst.
A Customer Requirement Specification (CRS) document is written from a customer's
perspective and describes, in a simple way, what the software is going to do.
Step 2: Requirement Analysis: Is the Project Feasible?
This stage involves exploring issues related to the financial, technical, operational, and
time management aspects of software development. It's an essential step towards creating
functional specifications and design. It's usually done by a team of product managers,
business analysts, software architects, developers, HR, and finance managers.
Step 3: Creating a Design
Once the analysis stage is over, it's time to create a blueprint for the software. Architects
and senior developers create a high-level design of the software architecture, along with a
low-level design describing how each and every component in the software should work.
Step 4: Coding, Testing, and Installation
Next, software developers implement the design by writing code. After all the code
developed by different teams is integrated, test engineers check if the software meets the
required specifications, so that developers can debug code. The release engineer then
deploys the software on a server.
Step 5: Keeping it Going: Maintenance
Maintenance is the application of each of the previous steps to the existing modules in the
software in order to modify or add new features, depending on what the customer needs.
The following figure illustrates all the stages of the software engineering process:
Waterfall model
V model
Incremental model
RAD model
Agile model
Iterative model
Prototype model
Spiral model
Spiral Model
The spiral model is a risk driven iterative software process model. The spiral model delivers
projects in loops. Unlike other process models, its steps aren’t activities but phases for
addressing whatever problem has the greatest risk of causing a failure.
It was designed to include the best features of the waterfall model and introduces risk assessment.
You develop the concept in the first few cycles, and then it evolves into an implementation.
Though this model is great for managing uncertainty, it can be difficult to have stable
documentation. The spiral model can be used for projects with unclear needs or projects still in
research and development.
E-R model and Relational model are two types of data models present in DBMS. Let’s have a
brief look of them:
1. E-R Model : E-R model stands for Entity-Relationship model. ER Model is used to model the
logical view of the system from data perspective which consists of these components: Entity,
Entity Type, Entity Set. An Entity may be an object with a physical existence – a particular
person, car, house, or employee – or it may be an object with a conceptual existence – a
company, a job, or a university course. An Entity Type defines a collection of similar entities,
and the set of all entities of a given type is called an entity set; e.g., E1 is an entity of
Entity Type Student, and the set of all students is called the entity set.
2. Relational model : Relational Model was proposed by E.F. Codd to model data in the form of
relations or tables. After designing the conceptual model of Database using ER diagram, we
need to convert the conceptual model in the relational model which can be implemented using
any RDBMS language like Oracle SQL, MySQL etc. Consider a relation STUDENT with
attributes ROLL_NO, NAME, ADDRESS, PHONE and AGE, shown in Table 1 (STUDENT).
i) Computer Graphics
Graphics are defined as any sketch, drawing, or diagram that pictorially represents some
meaningful information. Computer Graphics is used where a set of images needs to be
manipulated, or where an image is created in the form of pixels and drawn on the computer.
Computer Graphics can be used in digital photography, film, entertainment, electronic gadgets,
and all other core technologies which are required. It is a vast subject and area in the field of
computer science. Computer Graphics can be used in UI design, rendering, geometric objects,
animation, and many more. In most areas, computer graphics is abbreviated as CG. There are
several tools used for the implementation of Computer Graphics. The most basic is the <graphics.h>
header file in Turbo-C; Unity is used for advanced work, and even OpenGL can be used for its
implementation. It was invented in 1960 by the researchers Verne Hudson and William Fetter
of Boeing.
Computer graphics involves:
- The manipulation and representation of images or data in a graphical manner.
- The various technologies required for their creation and manipulation.
- Digital synthesis and its manipulation.
Applications
Computer-aided design – used for design in engineering and architectural systems: electrical,
automobile, electro-mechanical, mechanical and electronic devices, for example gears and bolts.
Computer Art – e.g. MS Paint.
Presentation Graphics – used to summarize financial, statistical, scientific or economic data,
for example bar charts and line charts.
Entertainment – used in motion pictures, music videos and television gaming.
Education and training – used to understand the operations of complex systems; also used in
specialized systems such as simulators for training captains, pilots and so on.
Visualization – used to study trends and patterns, for example analyzing satellite photos of the earth.
A common characteristic in pixel art is the low overall colour count in the image. Pixel art as a medium
mimics a lot of traits found in older video game graphics, rendered by machines which were capable of
only outputting a limited number of colours at once. Additionally, many pixel artists are of the opinion
that in most cases, using a large number of colours, especially when very similar to each other in value,
is unnecessary, and detracts from the overall cleanliness of the image, making it look messier. Many
experienced pixel artists recommend not using more colours than necessary.
Vector Graphics:
Vector graphics is a form of computer graphics in which visual images are created directly from
geometric shapes defined on a Cartesian plane, such as points, lines, curves and polygons. The
associated mechanisms may include vector display and printing hardware, vector data models
and file formats, as well as the software based on these data models (especially graphic design
software, computer-aided design, and geographic information systems). Vector graphics is an
alternative to raster or bitmap graphics, with each having advantages and disadvantages in
specific situations.[1]
While vector hardware has largely disappeared in favor of raster-based monitors and printers, [2]
vector data and software continues to be widely used, especially when a high degree of
geometric precision is required, and when complex information can be decomposed into simple
geometric primitives. Thus, it is the preferred model for domains such as engineering,
architecture, surveying, 3D rendering, and typography, but is entirely inappropriate for
applications such as photography and remote sensing, where raster is more effective and
efficient. Some application domains, such as geographic information systems (GIS) and graphic
design, use both vector and raster graphics at times, depending on purpose.
Vector graphics are based on the mathematics of analytic or coordinate geometry, and are not
related to other mathematical uses of the term vector. This can lead to some confusion in
disciplines in which both meanings are used.
Computer Animation:
Generally, computer animation is a visual digital display technology that processes moving
images on screen. In simple words, it can be defined as the art of giving life, energy and
emotions to non-living or inanimate objects via computers. It can be presented in the form of a
video or movie. Computer animation has the ability to make a still image appear alive. The key
concept behind computer animation is to play the defined images at a fast enough rate that the
viewer interprets them as continuous motion.
Computer animation is a sub-part of computer graphics and animation.
Nowadays, animation can be seen in many areas around us. It is used in movies, films and
games, education, e-commerce, computer art, training etc. It is a big part of the entertainment
industry, as most sets and backgrounds are built up through VFX and animation.
Rendering:
Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic
image from a 2D or 3D model by means of a computer program. The resulting image is referred
to as the render. Multiple models can be defined in a scene file containing objects in a strictly
defined language or data structure. The scene file contains geometry, viewpoint, texture, lighting,
and shading information describing the virtual scene. The data contained in the scene file is then
passed to a rendering program to be processed and output to a digital image or raster graphics
image file. The term "rendering" is analogous to the concept of an artist's impression of a scene.
The term "rendering" is also used to describe the process of calculating effects in a video editing
program to produce the final video output.
Rendering is one of the major sub-topics of 3D computer graphics, and in practice it is always
connected to the others. It is the last major step in the graphics pipeline, giving models and
animation their final appearance. With the increasing sophistication of computer graphics since
the 1970s, it has become a more distinct subject.
Rendering has uses in architecture, video games, simulators, movie and TV visual effects, and
design visualization, each employing a different balance of features and techniques. A wide
variety of renderers are available for use. Some are integrated into larger modeling and
animation packages, some are stand-alone, and some are free open-source projects. On the
inside, a renderer is a carefully engineered program based on multiple disciplines, including light
physics, visual perception, mathematics, and software development.
SECTION – I
Q.2. (a) List OSI Seven Layers in order and briefly describe functions of each layer (10+6+4)
(b) What is the difference between IPv4 and IPv6? Why was IPv6 developed when
IPv4 was already available and implemented?
(c) What is the difference between a physical address, logical address, domain and port number?
(a) List OSI Seven Layers in order and briefly describe functions of each layer
OSI stands for Open Systems Interconnection. It has been developed by ISO – ‘International
Organization for Standardization‘, in the year 1984.
1. Physical Layer (Layer 1):
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form of
bits. It is responsible for transmitting individual bits from one node to the next. When receiving
data, this layer will get the signal received and convert it into 0s and 1s and send them to the
Data Link layer, which will put the frame back together.
1. Bit synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both sender and receiver thus providing
synchronization at bit level.
2. Bit rate control: The Physical layer also defines the transmission rate i.e. the number of
bits sent per second.
3. Physical topologies: Physical layer specifies the way in which the different
devices/nodes are arranged in a network, i.e. bus, star, or mesh topology.
4. Transmission mode: Physical layer also defines the way in which the data flows
between the two connected devices. The various transmission modes possible are
simplex, half-duplex and full-duplex.
* Hub, Repeater, Modem, Cables are Physical Layer devices.
** Network Layer, Data Link Layer, and Physical Layer are also known as Lower
Layers or Hardware Layers.
2. Data Link Layer (DLL) (Layer 2):
The data link layer is responsible for the node-to-node delivery of the message. The main
function of this layer is to make sure data transfer is error-free from one node to another, over the
physical layer. When a packet arrives in a network, it is the responsibility of DLL to transmit it
to the Host using its MAC address.
Data Link Layer is divided into two sublayers:
1. Logical Link Control (LLC)
2. Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on the
frame size of the NIC (Network Interface Card). The DLL also encapsulates the sender's and
receiver's MAC addresses in the header.
1. Framing: Framing is a function of the data link layer. It provides a way for a sender to transmit a
set of bits that are meaningful to the receiver. This can be accomplished by attaching special bit
patterns to the beginning and end of the frame.
2. Physical addressing: After creating frames, the Data link layer adds physical addresses (MAC
address) of the sender and/or receiver in the header of each frame.
3. Error control: Data link layer provides the mechanism of error control in which it detects and
retransmits damaged or lost frames.
4. Flow Control: The data rate must be constant on both sides else the data may get corrupted
thus, flow control coordinates the amount of data that can be sent before receiving
acknowledgement.
5. Access control: When a single communication channel is shared by multiple devices, the MAC
sub-layer of the data link layer helps to determine which device has control over the channel at
a given time.
3. Network Layer (Layer 3):
The network layer works for the transmission of data from one host to the other located in
different networks. It also takes care of packet routing i.e. selection of the shortest path to
transmit the packet, from the number of routes available. The sender & receiver’s IP addresses
are placed in the header by the network layer.
1. Routing: The network layer protocols determine which route is suitable from source to
destination. This function of the network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork uniquely, the
network layer defines an addressing scheme. The sender & receiver’s IP addresses are
placed in the header by the network layer. Such an address distinguishes each device
uniquely and universally.
4. Transport Layer (Layer 4):
The transport layer provides services to the application layer and takes services from the network
layer. The data in the transport layer is referred to as Segments. It is responsible for the End to
End Delivery of the complete message. The transport layer also provides the acknowledgement
of the successful data transmission and re-transmits the data if an error is found.
At sender’s side: Transport layer receives the formatted data from the upper layers, performs
Segmentation, and also implements Flow & Error control to ensure proper data transmission.
It also adds Source and Destination port numbers in its header and forwards the segmented data
to the Network Layer.
Note: The sender needs to know the port number associated with the receiver’s application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web application makes a request to a web server, it typically uses port number
80, because this is the default port assigned to web applications. Many applications have default
ports assigned.
At receiver’s side: Transport Layer reads the port number from its header and forwards the Data
which it has received to the respective application. It also performs sequencing and reassembling
of the segmented data.
A. Connection-Oriented Service: It is a three-phase process that includes:
– Connection Establishment
– Data Transfer
– Termination / disconnection
In this type of transmission, the receiving device sends an acknowledgement back to the source
after a packet or group of packets is received. This type of transmission is reliable and secure.
B. Connectionless service: It is a one-phase process and includes Data Transfer. In this type of
transmission, the receiver does not acknowledge receipt of a packet. This approach allows for
much faster communication between devices. Connection-oriented service is more reliable than
connectionless Service.
5. Session Layer (Layer 5):
1. Session establishment, maintenance, and termination: The layer allows the two processes to
establish, use and terminate a connection.
2. Synchronization: This layer allows a process to add checkpoints which are considered
synchronization points into the data. These synchronization points help to identify the error so
that the data is re-synchronized properly, and ends of the messages are not cut prematurely and
data loss is avoided.
3. Dialog Controller: The session layer allows two systems to start communication with each other
in half-duplex or full-duplex.
**All the below 3 layers(including Session Layer) are integrated as a single layer in the TCP/IP
model as “Application Layer”.
**Implementation of these 3 layers is done by the network application itself. These are also
known as Upper Layers or Software Layers.
6. Presentation Layer (Layer 6):
The presentation layer is also called the Translation layer. The data from the application layer is
extracted here and manipulated as per the required format to transmit over the network.
The functions of the presentation layer are:
1. Translation: for example, from ASCII to EBCDIC.
2. Encryption/Decryption: data encryption translates the data into another form or code.
3. Compression: reduces the number of bits that need to be transmitted on the network.
7. Application Layer (Layer 7):
At the very top of the OSI Reference Model stack of layers, we find the Application layer which
is implemented by the network applications. These applications produce the data, which has to
be transferred over the network. This layer also serves as a window for the application services
to access the network and for displaying the received information to the user.
The primary function of IPv6 is to allow for more unique TCP/IP address identifiers to be
created, now that we’ve run out of the 4.3 billion created with IPv4. This is one of the main
reasons why IPv6 is such an important innovation for the Internet of Things (IoT). Internet-
connected products are becoming increasingly popular, and while IPv4 addresses couldn’t meet
the demand for IoT products, IPv6 gives IoT products a platform to operate on for a very long
time.
There are dozens of reasons why IPv6 is superior to IPv4 (and why this new internet protocol is
important for companies to understand), but we’re zeroing in on IPv6 for IoT. Let’s take a look
at three of the distinct advantages it offers.
1. Security
With billions of new smart products being created every day, security is an important thought in
the back of all IoT engineers’ minds. Organizations and individuals have learned of the real and
imminent threat that hackers pose in past years, but the IoT brings up a whole new line of
security intricacies. Hacking a secure network and harvesting millions of credit card numbers is
terrible—but if someone with ill intentions was to hack into a smart city, or a neighborhood of
smart houses, the outcome could be far more catastrophic. You can tell why IoT security is very
important—and the good news is that IPv6 offers better security solutions than its predecessor,
largely due to IPSec.
For one thing, IPv6 can run end-to-end encryption. While this technology was retrofitted into
IPv4, it remains an extra option that is not universally used. The encryption and integrity-
checking used in current virtual private networks (VPNs) are a standard component in IPv6,
available for all connections and supported by all compatible devices and systems. Widespread
adoption of IPv6 will therefore make “man-in-the-middle” attacks—i.e., thinking that you’re
logging into a secure bank site when you’re actually walking into a cyber “trap”—significantly
more difficult.
IPv6 also supports more-secure name resolution. The Secure Neighbor Discovery (SEND)
protocol is capable of enabling cryptographic confirmation that a host is who it claims to be at
the time of the connection. This renders Address Resolution Protocol (ARP) poisoning and other
naming-based attacks more difficult. And while IPv6 isn’t a replacement for application- or
service-layer verification, it still offers an improved level of trust in connections. With IPv4, it’s
fairly easy for an attacker to redirect traffic between two legitimate hosts and manipulate the
conversation or at least observe it—but IPv6 makes this very difficult.
These added security features depend entirely on proper design and implementation of IPv6, and
the more complex, flexible infrastructure of IPv6 makes this process harder. Nevertheless, a
properly configured IPv6 network will be significantly more secure than an IPv4 one.
2. Scalability
According to a report from Gartner, 25 billion "things" will be connected to the internet by the
year 2020. That is a remarkable estimate, considering the same report notes that 4.9 billion
devices were connected in 2015. This projected increase of roughly 400% in only five years sheds
some light on how much exponential IoT growth we can expect to see in the next 10, 20, or even
50 years.
Given these numbers, it is easy to understand why IPv6 (and its trillions upon trillions of new
addresses) is important for IoT devices. Creators of IoT products that are connected over
TCP/IP can rest assured that a unique identifier will be available for their devices for a
long, long time.
3. Connectability
With billions of new IoT devices entering the market each year, connectability, i.e., allowing
network-connected devices to "speak" to each other, is vital.
With IPv4, there were quite a few issues with allowing IoT products to speak with one another.
Network Address Translation (NAT) posed one of these major issues. NAT was created as a
workaround for organizations that needed multiple people and devices to be able to work off of
the same IPv4 address. Not only does NAT pose a security issue, but it also creates a difficult
problem for IoT products. IPv6 allows IoT products to be
uniquely addressable without having to work around all of the traditional NAT and firewall
issues. Larger and more advanced host devices have all sorts of tools to make working with
firewalls and NAT routers easier, but small IoT endpoints do not. By using IPv6, many of these
issues become easier for TCP/IP enabled IoT devices to handle.
Logical Address: The IP address of a system is called its logical address. This address is the
combination of a Net ID and a Host ID, and it is used by the network layer to identify a
particular network (from source to destination) among the networks. The address can change when
the host's position on the network changes, which is why it is called a logical address.
Physical Address: Each system has a NIC (Network Interface Card) through which two systems are
physically connected to each other with cables. The address of the NIC is called the physical
address or MAC address. It is assigned by the manufacturer of the card and is used by the data
link layer.
Port Address: Many applications run on a computer, and each application runs with a (logical)
port number assigned by the kernel of the OS. This port number is called the port address.
Domain Name:
When referring to an Internet address or name, a domain or domain name is the location of a website.
For example, the domain name "google.com" points to the IP address "216.58.216.164". Generally,
it is easier to remember a name than a long string of numbers. A domain name contains a maximum
of sixty-three characters, with a minimum of one character, and is entered after the protocol in
the URL (for example, the "google.com" in "https://google.com").
What is Shadowing
Shadowing is a concept of the OOP (Object Oriented Programming) paradigm. Using shadowing, we
can provide a new implementation for a base class member without overriding it, which means
that the original implementation of the base class member is shadowed (hidden) by the new
implementation provided in the derived class.
Polymorphism: The word polymorphism means having many forms. In simple words, we can
define polymorphism as the ability of a message to be displayed in more than one form.
A person can have different characteristics at the same time. A man may simultaneously be a
father, a husband, and an employee, so the same person possesses different behaviour in
different situations. This is called polymorphism. An operation may exhibit different
behaviours in different instances; the behaviour depends upon the types of data used in the
operation.
Example: Suppose we have to write a function to add some integers; sometimes there are 2
integers, and sometimes there are 3. We can write Addition methods with the same name but
different parameters, and the appropriate method will be called according to the parameters
passed.
Copy Constructor:
A copy constructor is a member function that initializes an object using another object of the same class.
In simple terms, a constructor which creates an object by initializing it with an object of the same class,
which has been created previously is known as a copy constructor.
The process of initializing members of an object through a copy constructor is known as copy
initialization. It is also called member-wise initialization, because the copy constructor
initializes one object with an existing object of the same class on a member-by-member basis.
The copy constructor can be defined explicitly by the programmer; if the programmer does not
define one, the compiler provides it.
Example:
// C++ program to demonstrate the working
// of a COPY CONSTRUCTOR
#include <iostream>
using namespace std;

class Point {
private:
    int x, y;

public:
    Point(int x1, int y1)
    {
        x = x1;
        y = y1;
    }

    // Copy constructor: initializes a new Point from an existing one
    Point(const Point& p1)
    {
        x = p1.x;
        y = p1.y;
    }

    int getX() { return x; }
    int getY() { return y; }
};

int main()
{
    Point p1(10, 15); // Normal constructor is called here
    Point p2 = p1;    // Copy constructor is called here

    cout << "p1.x = " << p1.getX() << ", p1.y = " << p1.getY() << endl;
    cout << "p2.x = " << p2.getX() << ", p2.y = " << p2.getY() << endl;
    return 0;
}
Primary Key:
A primary key is a field (or combination of fields) that uniquely identifies each record in a
table. A table can have only one primary key, which may consist of a single field or multiple
fields. When multiple fields are used as a primary key, they are called a composite key. If a
table has a primary key defined on any field(s), you cannot have two records with the same
value in that field(s).
Alternate Key:
An Alternate Key (or Secondary Key) is a candidate key that has not been selected to be the
primary key. In other words, a candidate key not chosen as the primary key is called an
alternate or secondary key.