IBM Daksh Question


1) Booting process

2) Virtualization options
3) DHCP, how it works
4) DNS operation
5) Port numbers of DNS and DHCP
6) What are client OS and server OS, with examples
7) Work experience (basic details, why did you leave, and reason to join Concentrix)
8) POST and its operation
9) Boot process, and the steps we can see after it starts
10) BIOS
11) Difference between device drivers and firmware

Concept of Booting
Booting
Booting is basically the process of starting the computer. When the
CPU is first switched on, there is nothing inside the memory. In order to
start the computer, the operating system is loaded into the main memory,
and then the computer is ready to take commands from the user.

What happens in the Process of Booting?


Booting happens when you start the computer, either when we turn on
the power or when the computer restarts. The system BIOS (Basic
Input/Output System) makes the peripheral devices active. Further, it
directs the boot device to load the operating system into the main
memory.

Boot Devices
Booting can be done either through hardware (pressing the start button)
or by giving software commands. Therefore, a boot device is a device
that loads the operating system. Moreover, it contains the instructions
and files which start the computer. Examples are the hard drive, floppy
disk drive, CD drive, etc. Among them, the hard drive is the most used
one.

Types of Booting
There are two types of booting:
Cold Booting

A cold boot is also called a hard boot. It is the process when we first
start the computer. In other words, when the computer is started from its
initial state by pressing the power button, it is called a cold boot. The
instructions are read from the ROM and the operating system is loaded
into the main memory.

Warm Booting

A warm boot is also called a soft boot. It refers to when we restart the
computer. Here, the computer does not start from the initial state. When
the system gets stuck, it sometimes needs to be restarted while it is on.
In that condition, a warm boot takes place. The restart button or the
CTRL+ALT+DELETE keys are used for a warm boot.

Steps of Booting
We can describe the boot process in six steps:

1. The Startup

It is the first step and involves switching the power on, which supplies
electricity to the main components like the BIOS and the processor.

2. BIOS: Power On Self Test

It is an initial test performed by the BIOS. Further, this test performs an
initial check on the input/output devices, computer’s main memory,
disk drives, etc. Moreover, if any error occurs, the system produces a
beep sound.

3. Loading of OS
In this step, the operating system is loaded into the main memory. The
operating system starts working and executes all the initial files and
instructions.

4. System Configuration

In this step, the drivers are loaded into the main memory. Drivers are
programs that help in the functioning of the peripheral devices.

5. Loading System Utilities

System utilities are basic functioning programs, for example, volume
control, antivirus, etc. In this step, system utilities are loaded into the
memory.

6. User Authentication

If any password has been set up in the computer system, the system
checks for user authentication. Once the user enters the login ID and
password correctly, the system finally starts.

Frequently Asked Questions (FAQs)


Q1. What is booting?

A1. The starting up of the computer is known as booting. It initiates all
the devices before starting any work on the computer. Moreover, the
operating system is loaded into the main memory.

Q2. What is BIOS?


A2. BIOS stands for Basic Input/Output System. It helps in the
functioning of all the input/output devices. Further, it also helps to start
and initiate the working of all devices during the boot process.

Q3. What are the boot devices?

A3. Boot devices are the devices that have the operating system loaded
inside them during the boot process. Common devices are the hard
drive, disk drive, floppy drive, etc.

Q4. What are the types of booting?

A4. There are two types of boot:

1. Cold Boot/Hard Boot
2. Warm Boot/Soft Boot

Q5. Why do we need booting?

A5. We perform booting so that the operating system, along with the initial
files and instructions, loads into the main memory. As a result, the
computer starts.

Q6. What are the basic steps of booting?

A6. Basic steps are:

1. The start-up
2. Power On Self Test
3. Loading OS
4. System Configuration
5. Loading system utilities
6. User authentication

What is virtualization?

Virtualization is the process of running a virtual instance of a computer system
in a layer abstracted from the actual hardware. Most commonly, it refers to
running multiple operating systems on a computer system simultaneously. To
the applications running on top of the virtualized machine, it can appear as if
they are on their own dedicated machine, where the operating system,
libraries, and other programs are unique to the guest virtualized system and
unconnected to the host operating system which sits below it.

There are many reasons why people utilize virtualization in computing. To
desktop users, the most common use is to be able to run applications meant
for a different operating system without having to switch computers or reboot
into a different system. For administrators of servers, virtualization also offers
the ability to run different operating systems but, perhaps more importantly, it
offers a way to segment a large system into many smaller parts, allowing the
server to be used more efficiently by a number of different users or
applications with different needs. It also allows for isolation, keeping programs
running inside of a virtual machine safe from the processes taking place in
another virtual machine on the same host.

What is a hypervisor?
A hypervisor is a program for creating and running virtual machines.
Hypervisors have traditionally been split into two classes: type one, or "bare
metal" hypervisors that run guest virtual machines directly on a system's
hardware, essentially behaving as an operating system. Type two, or "hosted"
hypervisors behave more like traditional applications that can be started and
stopped like a normal program. In modern systems, this split is less prevalent,
particularly with systems like KVM. KVM, short for kernel-based virtual
machine, is a part of the Linux kernel that can run virtual machines directly,
although you can still use a system running KVM virtual machines as a normal
computer itself.
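Whether a Linux host can run KVM guests depends on hardware virtualization support (the vmx CPU flag on Intel, svm on AMD) and on the kernel exposing the /dev/kvm device. The Python sketch below simply checks for those two signs; it assumes a Linux host with a readable /proc/cpuinfo and is only an illustration, not part of any hypervisor's API.

    import os

    def check_kvm_support():
        """Check for CPU virtualization extensions (Intel VT-x / AMD-V)
        and for the /dev/kvm device node that the KVM module exposes."""
        cpu_flags = set()
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags"):
                        cpu_flags.update(line.split(":", 1)[1].split())
                        break
        except OSError:
            pass  # not running on Linux, or /proc unavailable

        hw_virt = "vmx" in cpu_flags or "svm" in cpu_flags  # Intel or AMD extensions
        kvm_node = os.path.exists("/dev/kvm")               # KVM module loaded
        return hw_virt, kvm_node

    if __name__ == "__main__":
        hw, kvm = check_kvm_support()
        print("Hardware virtualization flags present:", hw)
        print("/dev/kvm available:", kvm)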

What is a virtual machine?


A virtual machine is the emulated equivalent of a computer system that runs
on top of another system. Virtual machines may have access to any number
of resources: computing power, through hardware-assisted but limited access
to the host machine's CPU and memory; one or more physical or virtual disk
devices for storage; a virtual or real network interface; as well as any devices
such as video cards, USB devices, or other hardware that are shared with the
virtual machine. If the virtual machine is stored on a virtual disk, this is often
referred to as a disk image. A disk image may contain the files for a virtual
machine to boot, or, it can contain any other specific storage needs.

What is the difference between a container and a virtual machine?
You may have heard of Linux containers, which are conceptually similar to
virtual machines, but function somewhat differently. While both containers and
virtual machines allow for running applications in an isolated environment,
allowing you to stack many onto the same machine as if they are separate
computers, containers are not full, independent machines. A container is
actually just an isolated process that shares the same Linux kernel as the host
operating system, as well as the libraries and other files needed for the
execution of the program running inside of the container, often with a network
interface such that the container can be exposed to the world in the same way
as a virtual machine. Typically, containers are designed to run a single
program, as opposed to emulating a full multi-purpose server.
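Because a container is just an isolated process on the host kernel, there is no separate guest operating system to inspect; telling whether code is running inside a container usually relies on traces left by the container runtime. The Python sketch below shows one common heuristic (Docker's /.dockerenv marker file and container-related names in /proc/1/cgroup); it is an approximation for illustration, not a guaranteed test.

    import os

    def likely_in_container():
        """Heuristic check: containers share the host kernel, so we look for
        traces the container runtime leaves behind, not for a guest OS."""
        if os.path.exists("/.dockerenv"):          # marker file created by Docker
            return True
        try:
            with open("/proc/1/cgroup") as f:      # cgroup membership of PID 1
                data = f.read()
            return any(word in data for word in ("docker", "kubepods", "lxc"))
        except OSError:
            return False

    print("Probably inside a container:", likely_in_container())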

DHCP defined and how it works


If Dynamic Host Configuration Protocol (DHCP) didn’t exist, network
administrators would have to manually parcel out IP addresses from
the available pool, which would be prohibitively time consuming,
inefficient, and error prone. Fortunately, DHCP does exist.

What is DHCP and how does it work?


DHCP is an under-the-covers mechanism that automates the
assignment of IP addresses to fixed and mobile hosts that are
connected by wire or wirelessly.

When a device wants access to a network that’s using DHCP, it sends
a request for an IP address that is picked up by a DHCP server. The
server responds by delivering an IP address to the device, then
monitors the use of the address and takes it back after a specified
time or when the device shuts down. The IP address is then returned
to the pool of addresses managed by the DHCP server to be reassigned
to another device as it seeks access to the network.

While the delegation of IP addresses is the central function of the
protocol, DHCP also assigns a variety of related networking
parameters including the subnet mask, default gateway address, and
domain name server (DNS). DHCP is an IETF standard built on top of
the older BOOTP (bootstrap protocol), which has become obsolete
because it only works on IPv4 networks.

Benefits of DHCP
DHCP provides a range of benefits to network administrators:

Reliable IP address configuration


You can’t have two users with the same IP address because it would
create a conflict where one or both devices could not connect to the
network. DHCP eliminates human error so that address conflicts,
configuration errors, or simple typos are minimized.

Reduced network administration

DHCP provides centralized and automated TCP/IP configuration. By
deploying a DHCP relay agent, a DHCP server is not needed on every
subnet.

Mobility

DHCP efficiently handles IP address changes for users on portable
devices who move to different locations on wired or wireless
networks.

IP address optimization

DHCP not only assigns addresses, it automatically takes them back
and returns them to the pool when they are no longer being used.

Efficient change management

DHCP makes it simple for an organization to change its IP address
scheme from one range of addresses to another. DHCP enables
network administrators to make those changes without disrupting end
users.

DHCP components
When working with DHCP, it’s important to understand all of its
components. Below is a list of them and what they do:

DHCP server

This is a networked device running the DHCP service that holds IP
addresses and related configuration information. This is most typically
a server or a router but could be anything that acts as a host, such as
an SD-WAN appliance.
DHCP client

This endpoint software requests and receives configuration
information from a DHCP server. This can be installed on a computer,
mobile device, IoT endpoint or anything else that requires connectivity
to the network. Most are configured to receive DHCP information by
default.

IP address pool

The range of IP addresses that are available to DHCP clients is the IP
address pool. Addresses are typically handed out sequentially from lowest
to highest.

Subnet

IP networks can be partitioned into segments known as subnets.
Subnets help keep networks manageable.

Lease

The length of time for which a DHCP client holds the IP address
information is known as the lease. When a lease expires, the client
must renew it.

DHCP relay

A router or host that listens for client messages being broadcast on
that network and then forwards them to a configured server is the
DHCP relay. The server then sends responses back to the relay agent
that passes them along to the client. This can be used to centralize
DHCP servers instead of having a server on each subnet.

Assigning IP addresses
The existential question associated with DHCP is how does an end
user connect to the network in the first place without having an IP
address?
The answer is that there’s a complex system of back-and-forth
requests and acknowledgments. First, all modern device operating
systems include a DHCP client, which is typically enabled by default.
In order to request an IP address, the client device sends out a
broadcast message—DHCPDISCOVER. The network directs that
request to the appropriate DHCP server.

DHCP server functionality is typically assigned to a physical server
plus a backup. Other devices can also act as DHCP servers, such as
SD-WAN appliances or wireless access points.

The server then determines the appropriate IP address and sends an
OFFER packet to the client, which responds with a REQUEST packet.
In the final step in the process, the server sends an ACK packet
confirming that the client has been given an IP address.

This is all done quickly and automatically and without the need for the
end user to take any action. The catch is that the IP address isn’t
permanent. It’s only good for a specified period of time, known as the
lease time.
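To make the DISCOVER/OFFER/REQUEST/ACK exchange and the address pool concrete, here is a small Python simulation of a DHCP-style server handing out and reclaiming leases. It models only the message flow described above, not the real wire protocol; the class and method names are invented for illustration.

    import ipaddress
    import time

    class ToyDhcpServer:
        """Toy model of the DHCP exchange: DISCOVER -> OFFER -> REQUEST -> ACK."""

        def __init__(self, pool_cidr="192.168.1.0/24", lease_seconds=3600):
            net = ipaddress.ip_network(pool_cidr)
            self.free = [str(ip) for ip in net.hosts()]   # available addresses
            self.leases = {}                              # mac -> (ip, expiry)
            self.lease_seconds = lease_seconds

        def discover(self, mac):
            """Client broadcasts DHCPDISCOVER; the server answers with an OFFER."""
            ip = self.free.pop(0)
            return {"type": "OFFER", "ip": ip, "lease": self.lease_seconds}

        def request(self, mac, offered_ip):
            """Client REQUESTs the offered address; the server ACKs and records the lease."""
            expiry = time.time() + self.lease_seconds
            self.leases[mac] = (offered_ip, expiry)
            return {"type": "ACK", "ip": offered_ip, "expires_at": expiry}

        def release(self, mac):
            """Lease expired or host shut down: return the address to the pool."""
            ip, _ = self.leases.pop(mac)
            self.free.append(ip)

    server = ToyDhcpServer()
    offer = server.discover("aa:bb:cc:dd:ee:ff")             # DISCOVER -> OFFER
    ack = server.request("aa:bb:cc:dd:ee:ff", offer["ip"])   # REQUEST -> ACK
    print(ack)                                               # the granted lease
    server.release("aa:bb:cc:dd:ee:ff")                      # address goes back to the pool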

Controlling lease time


If all DHCP did was assign IP addresses permanently, it wouldn’t be
dynamic, it would be static. Static addresses are appropriate for some
devices, such as network printers. However, under the DHCP protocol,
every time the DHCP server assigns an address there is an associated
lease time. When the lease expires, the client can no longer use the IP
address and is essentially kicked off the network.

The protocol is designed so active clients automatically contact the
DHCP server halfway through the lease period to renew the lease. If
the server doesn’t respond immediately, the client continues to ask
the DHCP server for a lease renewal until it is approved.

Typically, when a host shuts down, the lease is automatically
terminated, in order to free up its IP address so it can be used by
another client on the network.

DHCP networking functionality


In addition to providing the client with the ability to connect to
network and internet resources through the IP address, the DHCP
server assigns additional networking parameters that provide
efficiency and security. These include:

Default gateway

This gateway is responsible for transferring data back and forth
between the local network and Internet, or between local subnets.

Subnet mask

IP networking uses a subnet mask to separate the host address and
the network address portions of an IP address.
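The split that the subnet mask produces can be seen with Python's standard ipaddress module; the address and mask below are arbitrary example values.

    import ipaddress

    # An interface configured as 192.168.10.37 with mask 255.255.255.0 (/24).
    iface = ipaddress.ip_interface("192.168.10.37/255.255.255.0")

    print(iface.network)          # 192.168.10.0/24 -> the network portion
    print(iface.ip)               # 192.168.10.37   -> the full host address
    print(iface.network.netmask)  # 255.255.255.0   -> the subnet mask itself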

DNS server

Translates domain names (networkworld.com) into IP addresses,
which are represented by long strings of numbers.
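That translation sits behind every ordinary lookup; for example, Python's standard socket module can ask the system's configured DNS resolver for the addresses of a name (the domain here is just an example).

    import socket

    # Ask the system's configured DNS resolver to translate a name to addresses.
    name = "networkworld.com"
    addresses = {info[4][0] for info in socket.getaddrinfo(name, None)}
    print(name, "->", addresses)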

Scopes and user classes of IP addresses


DHCP assigns addresses dynamically, but not randomly. Since DHCP
connects hosts to the network and also assigns networking
parameters, there are scenarios in which a network administrator
might want to assign certain sets of subnet parameters to specific
groups of users.

A scope is a consecutive range of IP addresses that a DHCP server
can draw on to fulfill an IP address request from a DHCP client. By
defining one or more scopes on the DHCP server, the server can
manage the distribution and assignment of IP addresses to DHCP
clients. Under the DHCP protocol, network admins can set unlimited
numbers of scopes, as needed.

A class is a subset of a scope. Classes are useful if the network
administrator wants to assign groups of devices to one segment of a
larger scope, for example, SD-WAN clients for employees working
remotely.
DHCP security concerns
With DHCP, the initial assignment of an IP address is designed to be
fast and efficient. The tradeoff is that the DHCP protocol doesn’t
require authentication. Of course, enterprises have set up strong
authentication requirements for users to access resources once they
are on the network, but that still leaves the DHCP server itself as a
weak link in the security chain.

An attacker could take over or spoof the DHCP server and hand out
bad information to legitimate end users, sending them to a fake site.
Or it could hand out legitimate IP addresses to unauthorized users.
This could lead to man-in-the-middle attacks and denial of service
attacks.

The DHCP specification does address some of these issues. There is a
relay-agent information option that enables network engineers to tag
DHCP messages as they arrive. This tag can be used to control
network access. In addition, network administrators can use 802.1x
authentication (network access control) to help secure DHCP.

What is DNS and how does it work?


The Domain Name System resolves the names of internet sites
with their underlying IP addresses adding efficiency and even
security in the process.
The Domain Name System (DNS) is one of the foundations of the
internet, yet most people outside of networking probably don’t realize
they use it every day to do their jobs, check their email or waste time
on their smartphones.

At its most basic, DNS is a directory of names that match with
numbers. The numbers, in this case, are IP addresses, which
computers use to communicate with each other. Most descriptions of
DNS use the analogy of a phone book, which is fine for people over the
age of 30 who know what a phone book is.

If you’re under 30, think of DNS like your smartphone’s contact list,
which matches people’s names with their phone numbers and email
addresses. Then multiply that contact list by everyone else on the
planet.

A brief history of DNS


When the internet was very, very small, it was easier for people to
match specific IP addresses with specific computers, but that
didn’t last for long as more devices and people joined the growing
network. It's still possible to type a specific IP address into a browser
to reach a website, but then, as now, people wanted an address made
up of easy-to-remember words, of the sort that we would recognize as
a domain name (like networkworld.com) today. In the 1970s and early
'80s, those names and addresses were assigned by one person
— Elizabeth Feinler at Stanford – who maintained a master list of
every Internet-connected computer in a text file called HOSTS.TXT.

This was obviously an untenable situation as the Internet grew, not
least because Feinler only handled requests before 6 p.m. California
time, and took time off for Christmas. In 1983, Paul Mockapetris, a
researcher at USC, was tasked with coming up with a compromise
among multiple suggestions for dealing with the problem. He basically
ignored them all and developed his own system, which he dubbed DNS.
While it's obviously changed quite a bit since then, at a fundamental
level it still works the same way it did nearly 40 years ago.

How DNS servers work


The DNS directory that matches name to numbers isn’t located all in
one place in some dark corner of the internet. With more than 332
million domain names listed at the end of 2017, a single directory
would be very large indeed. Like the internet itself, the directory is
distributed around the world, stored on domain name servers
(generally referred to as DNS servers for short) that all communicate
with each other on a very regular basis to provide updates and
redundancies.
Authoritative DNS servers vs. recursive DNS
servers
When your computer wants to find the IP address associated with a
domain name, it first makes its request to a recursive DNS server, also
known as recursive resolver. A recursive resolver is a server that is
usually operated by an ISP or other third-party provider, and it knows
which other DNS servers it needs to ask to resolve the name of a site
with its IP address. The servers that actually have the needed
information are called authoritative DNS servers.

DNS servers and IP addresses


Each domain can correspond to more than one IP address. In fact,
some sites have hundreds or more IP addresses that correspond with a
single domain name. For example, the server your computer reaches
for www.google.com is likely completely different from the server
that someone in another country would reach by typing the same site
name into their browser.

Another reason for the distributed nature of the directory is the
amount of time it would take for you to get a response when you were
looking for a site if there was only one location for the directory,
shared among the millions, probably billions, of people also looking for
information at the same time. That’s one long line to use the phone
book.

What is DNS caching?


To get around this problem, DNS information is shared among many
servers. But information for sites visited recently is also cached
locally on client computers. Chances are that you use google.com
several times a day. Instead of your computer querying the DNS name
server for the IP address of google.com every time, that information is
saved on your computer so it doesn’t have to access a DNS server to
resolve the name with its IP address. Additional caching can occur on
the routers used to connect clients to the internet, as well as on the
servers of the user’s Internet Service Provider (ISP). With so much
caching going on, the number of queries that actually make it to DNS
name servers is a lot lower than it would seem.
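A local DNS cache is essentially a table of recent answers with expiry times. The Python sketch below illustrates the idea with a fixed time-to-live; a real resolver honors the TTL carried in each DNS record rather than a constant, and the 300-second value here is an arbitrary example.

    import socket
    import time

    _cache = {}  # hostname -> (address, expiry_time)

    def cached_lookup(name, ttl=300):
        """Return a cached address if it is still fresh; otherwise ask the
        system resolver and remember the answer for `ttl` seconds."""
        now = time.time()
        if name in _cache and _cache[name][1] > now:
            return _cache[name][0]               # cache hit, no DNS query needed
        address = socket.gethostbyname(name)     # cache miss: query the resolver
        _cache[name] = (address, now + ttl)
        return address

    print(cached_lookup("google.com"))  # first call queries DNS
    print(cached_lookup("google.com"))  # second call is served from the cache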

How do I find my DNS server?


Generally speaking, the DNS server you use will be established
automatically by your network provider when you connect to the
internet. If you want to see which servers are your primary
nameservers — generally the recursive resolver, as described above —
there are web utilities that can provide a host of information about
your current network connection. Browserleaks.com is a good one,
and it provides a lot of information, including your current DNS servers.
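On Linux and most Unix-like systems, the configured resolvers are also listed locally in /etc/resolv.conf, so a few lines of Python can print them; this is platform-specific (on Windows, `ipconfig /all` shows the same information).

    # Print the nameserver entries from /etc/resolv.conf (Linux/Unix only).
    with open("/etc/resolv.conf") as f:
        for line in f:
            if line.strip().startswith("nameserver"):
                print(line.split()[1])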

Can I use 8.8.8.8 DNS?


It's important to keep in mind, though, that while your ISP will set a
default DNS server, you're under no obligation to use it. Some users
may have reason to avoid their ISP's DNS — for instance, some ISPs
use their DNS servers to redirect requests for nonexistent addresses
to pages with advertising.

If you want an alternative, you can instead point your computer to a
public DNS server that will act as a recursive resolver. One of the most
prominent public DNS servers is Google's; its IP address is 8.8.8.8.
Google's DNS services tend to be fast, and while there are certain
questions about the ulterior motives Google has for offering the free
service, they can't really get any more information from you than they
already get from Chrome. Google has a page with detailed
instructions on how to configure your computer or router to connect
to Google's DNS.
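As a quick illustration, a lookup can be sent straight to 8.8.8.8 without changing any system settings by using the third-party dnspython package (`pip install dnspython`); the queried domain below is just an example.

    import dns.resolver  # third-party: pip install dnspython

    resolver = dns.resolver.Resolver(configure=False)  # ignore the system's resolv.conf
    resolver.nameservers = ["8.8.8.8"]                 # Google's public DNS

    answer = resolver.resolve("example.com", "A")      # ask for IPv4 records
    for record in answer:
        print(record.address)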

How DNS adds efficiency


DNS is organized in a hierarchy that helps keep things running quickly
and smoothly. To illustrate, let’s pretend that you wanted to visit
networkworld.com.

The initial request for the IP address is made to a recursive resolver,
as discussed above. The recursive resolver knows which other DNS
servers it needs to ask to resolve the name of a site
(networkworld.com) with its IP address. This search leads to a root
server, which knows all the information about top-level domains, such
as .com, .net, .org and all of those country domains like .cn (China) and
.uk (United Kingdom). Root servers are located all around the world, so
the system usually directs you to the closest one geographically.

Once the request reaches the correct root server, it goes to a top-level
domain (TLD) name server, which stores the information for the
second-level domain, the words used before you get to
the .com, .org, .net (for example, that information for
networkworld.com is “networkworld”). The request then goes to the
Domain Name Server, which holds the information about the site and
its IP address. Once the IP address is discovered, it is sent back to the
client, which can now use it to visit the website. All of this takes mere
milliseconds.
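The same delegation chain (root server, then TLD server, then the domain's own authoritative server) can be modeled as a toy lookup table in Python. Every server name and address below is invented purely for illustration; a real recursive resolver sends actual DNS queries at each step.

    # Toy model of the DNS hierarchy: root -> TLD -> authoritative name server.
    # All names, servers, and addresses here are invented for illustration.
    SERVERS = {
        "root":                    {"com": "tld-com-server"},
        "tld-com-server":          {"networkworld.com": "ns.networkworld-example"},
        "ns.networkworld-example": {"networkworld.com": "203.0.113.10"},
    }

    def resolve(name):
        """Walk the hierarchy the way a recursive resolver would."""
        tld = name.rsplit(".", 1)[1]             # "com"
        tld_server = SERVERS["root"][tld]        # step 1: root points at the TLD server
        auth_server = SERVERS[tld_server][name]  # step 2: TLD server points at the domain's server
        return SERVERS[auth_server][name]        # step 3: authoritative server returns the address

    print(resolve("networkworld.com"))  # -> 203.0.113.10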

Because DNS has been working for the past 30-plus years, most
people take it for granted. Security also wasn’t considered when
building the system, so hackers have taken full advantage of this,
creating a variety of attacks.

DNS reflection attacks


DNS reflection attacks can swamp victims with high-volume messages
from DNS resolver servers. Attackers request large DNS files from all
the open DNS resolvers they can find and do so using the spoofed IP
address of the victim. When the resolvers respond, the victim receives
a flood of unrequested DNS data that overwhelms their machines.

DNS cache poisoning


DNS cache poisoning can divert users to malicious Web sites.
Attackers manage to insert false address records into the DNS so
when a potential victim requests an address resolution for one of the
poisoned sites, the DNS responds with the IP address for a different
site, one controlled by the attacker. Once on these phony sites,
victims may be tricked into giving up passwords or suffer malware
downloads.

DNS resource exhaustion


DNS resource exhaustion attacks can clog the DNS infrastructure of
ISPs, blocking the ISP’s customers from reaching sites on the internet.
This can be done by attackers registering a domain name and using
the victim’s name server as the domain’s authoritative server. So if a
recursive resolver can’t supply the IP address associated with the site
name, it will ask the name server of the victim. Attackers generate
large numbers of requests for their domain and toss in non-existent
subdomains to boot, which leads to a torrent of resolution requests
being fired at the victim’s name server, overwhelming it.

What is DNSSEC?


DNS Security Extensions is an effort to make communication among
the various levels of servers involved in DNS lookups more secure. It
was devised by the Internet Corporation for Assigned Names and
Numbers (ICANN), the organization in charge of the DNS system.

ICANN became aware of weaknesses in the communication between
the DNS top-level, second-level and third-level directory servers that
could allow attackers to hijack lookups. That would allow the
attackers to respond to requests for lookups to legitimate sites with
the IP address for malicious sites. These sites could upload malware
to users or carry out phishing and pharming attacks.

DNSSEC would address this by having each level of DNS server
digitally sign its requests, which ensures that the requests sent in by
end users aren’t commandeered by attackers. This creates a chain of
trust so that at each step in the lookup, the integrity of the request is
validated.

In addition, DNSSEC can determine if domain names exist, and if one
doesn’t, it won’t let that fraudulent domain be delivered to innocent
requesters seeking to have a domain name resolved.

As more domain names are created, and more devices continue to join
the network via internet of things devices and other “smart” systems,
and as more sites migrate to IPv6, maintaining a healthy DNS
ecosystem will be required. The growth of big data and analytics
also brings a greater need for DNS management.
SIGRed: A wormable DNS flaw rears its head
The world got a good look recently at the sort of chaos weaknesses in
DNS could cause with the discovery of a flaw in Windows DNS servers.
The potential security hole, dubbed SIGRed, requires a complex
attack chain, but can exploit unpatched Windows DNS servers to
potentially install and execute arbitrary malicious code on clients. And
the exploit is "wormable," meaning that it can spread from computer
to computer without human intervention. The vulnerability was
considered alarming enough that U.S. federal agencies were given
only a few days to install patches.

DNS over HTTPS: A new privacy landscape


As of this writing, DNS is on the verge of one of its biggest shifts in its
history. Google and Mozilla, who together control the lion's share of
the browser market, are encouraging a move towards DNS over
HTTPS, or DoH, in which DNS requests are encrypted by the same
HTTPS protocol that already protects most web traffic. In Chrome's
implementation, the browser checks to see if the DNS servers support
DoH, and if they don't, it reroutes DNS requests to Google's 8.8.8.8.

It's a move not without controversy. Paul Vixie, who did much of the
early work on the DNS protocol back in the 1980s, calls the move a
"disaster" for security: corporate IT will have a much harder time
monitoring or directing DoH traffic that traverses their network, for
instance. Still, Chrome is omnipresent and DoH will soon be turned on
by default, so we'll see what the future holds.

What is BIOS and Its Function?


BIOS (basic input/output system) is the program a
computer's microprocessor uses to start the computer system after it is
powered on. It also manages data flow between the computer's operating
system (OS) and attached devices, such as the hard disk, video
adapter, keyboard, mouse and printer.

History of BIOS
The term BIOS was first coined in 1975 by American computer scientist Gary Kildall.
It was incorporated into IBM's first personal computer in 1981 and, in the years to
come, gained popularity within other PCs, becoming an integral part of computers for
some time. However, BIOS' popularity has waned in favor of a newer technology:
Unified Extensible Firmware Interface (UEFI). Intel announced a plan in 2017 to
retire support for legacy BIOS systems by 2020, replacing them with UEFI.

Uses of BIOS
The main use of BIOS is to act as a middleman between OSes and the hardware they
run on. BIOS is theoretically always the intermediary between the microprocessor and
I/O device control information and data flow. However, in some cases, BIOS can
arrange for data to flow directly to memory from devices, such as video cards, that
require faster data flow to be effective.

How does BIOS work?


BIOS comes included with computers, as firmware on a chip on the motherboard. In
contrast, an OS like Windows or iOS can either be pre-installed by the manufacturer
or vendor or installed by the user. BIOS is a program that is made accessible to the
microprocessor on an erasable programmable read-only memory (EPROM) chip.
When users turn on their computer, the microprocessor passes control to the BIOS
program, which is always located at the same place on EPROM.

When BIOS boots up a computer, it first determines whether all of the necessary
attachments are in place and operational. Any piece of hardware containing files the
computer needs to start is called a boot device. After testing and ensuring boot devices
are functioning, BIOS loads the OS -- or key parts of it -- into the computer's random
access memory (RAM) from a hard disk or diskette drive (the boot device).
The 4 functions of BIOS
BIOS identifies, configures, tests and connects computer hardware to the OS
immediately after a computer is turned on. The combination of these steps is called
the boot process.

These tasks are each carried out by BIOS' four main functions:

1. Power-on self-test (POST). This tests the hardware of the computer before
loading the OS.

2. Bootstrap loader. This locates the OS.

3. Software/drivers. This locates the software and drivers that interface with the OS
once running.

4. Complementary metal-oxide semiconductor (CMOS) setup. This is a
configuration program that enables users to alter hardware and system settings.
CMOS is the name of BIOS' non-volatile memory.
Accessing BIOS
With BIOS, the OS and its applications are freed from having to understand exact
details, such as computer hardware addresses, about the attached I/O devices. When
device details change, only the BIOS program needs to be changed. Sometimes, this
change can be made during system setup.

Users can access BIOS and configure it through BIOS Setup Utility. Accessing BIOS
Setup Utility varies somewhat depending on the computer being used. However, the
following steps generally enable users to access and configure BIOS through Setup
Utility:

 Reset or power off the computer.

 When the computer turns back on, look for a message that says "entering setup"
or something similar. Accompanying that message will be a key that the user
should press to enter system configuration. Here's an example message a user
might see: "Press [key] to enter BIOS setup." Some keys often used as prompts
are Del, Tab, Esc and any of the function keys (F1-F12).

 Upon seeing the prompt, quickly press the key specified.

Once in BIOS Setup Utility, users can change hardware settings, manage memory
settings, change the boot order or boot device, and reset the BIOS password, among
other configuration tasks.

BIOS security
BIOS security is a somewhat overlooked component of cybersecurity; however, it
should still be managed to prevent hackers from executing malicious code on the
OS. Security group Cylance, in 2017, showed how modern BIOS security flaws could
enable ransomware programs inside a motherboard's UEFI and exploit other PC BIOS
vulnerabilities.

Another unique exploit involving the manipulation of BIOS was Plundervolt.


Plundervolt could be used to mess with a computer's power supply at the time data
was being written to memory, causing errors that lead to security gaps. Intel released a
BIOS patch to defend against it.

BIOS manufacturers
BIOS, in its beginnings, was originally owned by IBM. However, some companies,
such as Phoenix Technologies, have reverse-engineered IBM's original version to
create their own. Phoenix, in doing this, allowed other companies to create clones of
the IBM PC and, more importantly, create non-IBM computers that work with BIOS.
One company that did this was Compaq.

Today, many manufacturers produce motherboards with BIOS chips in them. Some
examples are the following:

 AMI

 Asus
 Foxconn

 Hewlett Packard (HP)

 Ricoh

Knowing the motherboard manufacturer is important because users may want to
update their BIOS and chipset drivers -- the drivers that enable the OS to work with
other devices in the computer, such as a video card -- to the most recent versions.
Driver updates may improve computer performance or patch recent BIOS-level
security vulnerabilities. Each manufacturer has a unique way of updating these
drivers.

Difference Between Device Driver and Firmware
A typical computer consists of hardware, software and firmware.
These components work together to make the computer work in a
way it’s designed to work. Hardware is any physical device that
you can actually see and touch, whether internal to the computer
or external devices attached to the computer. We use numerous
hardware devices with a computer, such as printer, scanner, mice,
keyboard, monitor, disk drive, audio card, video card,
and modem are all examples of hardware devices. Software is a
set of instructions that tell the computer how to work and execute
specific tasks. Unlike hardware which describes the physical
aspects of a computer, software is anything that can be stored
electronically and it’s an immaterial part that runs a computer.
Device drivers are also software. Firmware is also software, but
programmed on a hardware device.
What is a Device Driver?
A device driver is a particular type of software program that enables
hardware devices to interact with the operating system. It is a software
application that acts as an intermediary between a piece of
hardware and an application or the operating system. A computer
operates a great many kinds of devices, most of which fit into the
general category of storage devices, transmission devices, and
human-interface devices. A device communicates with a computer
system through its associated device driver. So, a device
driver communicates with the hardware device via a connection
point, or port – for example, a serial port. Device drivers are
specific to the operating system and allow the kernel of the OS to
communicate with hardware devices without worrying about the
details of how they actually work. A device driver presents a
uniform device-access interface to the I/O subsystem, much like
the system calls which provide a standard interface between the
application program and the operating system.
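The "uniform device-access interface" idea can be sketched in Python: callers talk to every device through the same read/write methods, while each driver hides the device-specific details behind that interface. The class names and the pretend serial device below are invented for illustration; real device drivers are kernel code, typically written in C.

    from abc import ABC, abstractmethod

    class DeviceDriver(ABC):
        """Uniform device-access interface: callers only see read() and write()."""

        @abstractmethod
        def read(self, nbytes: int) -> bytes: ...

        @abstractmethod
        def write(self, data: bytes) -> int: ...

    class SerialPortDriver(DeviceDriver):
        """Pretend driver for a serial port; the port-specific details stay in here."""
        def __init__(self, port: str):
            self.port = port
            self._buffer = b"hello from " + port.encode()

        def read(self, nbytes: int) -> bytes:
            return self._buffer[:nbytes]

        def write(self, data: bytes) -> int:
            print(f"[{self.port}] sending {data!r}")
            return len(data)

    def copy_through(driver: DeviceDriver):
        """The 'OS side' works against the interface, not any specific device."""
        driver.write(driver.read(16))

    copy_through(SerialPortDriver("/dev/ttyS0"))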

What is a Firmware?
Firmware is a set of instructions programmed into a hardware
device, typically in non-volatile memory such as read-only
memory or flash memory. Firmware is a special form of software
that enables a device to perform functions without the need of
installing additional software. It refers to computer programs and
data loaded into a class of memory that cannot be dynamically
modified by the computer during processing. Firmware includes
the internal set of instructions used by a hardware device for
initiation and operation, often encoded in non-volatile memory. A
basic input output system (BIOS) chip is a common example of a
firmware. The computer programs and data contained in
firmware are classified as software. Firmware is typically stored in
the read-only memory of a hardware device, and can be erased
and rewritten. Firmware updates often require specialized
standalone applications or custom boot mechanisms, and can require
extensive research, as each vendor provides one or more versions
of firmware updates for its devices.

Difference between Device Driver and Firmware
Basics

– A device driver is a particular type of software program that
enables the operating system to communicate with and control
devices. Device drivers are pretty much device-specific meaning
they are written and distributed by the manufacturer of a
particular device. Firmware, on the other hand, is a special form
of software that enables a device to perform functions without the
need of installing additional software. Firmware is program code
stored in a hardware device, typically in non-volatile memory
such as read-only memory or flash memory.

Functionality

– Device drivers are operating system-specific and hardware-dependent
programs that enable the operating system and other software
programs to access hardware functions without worrying about
the details on how the hardware devices work. Without a device
driver, the OS won’t be able to communicate with a hardware
device. Firmware, on the other hand, is software permanently
etched into a hardware device that enables the device to perform
functions like basic input/output tasks, without the need of
installing additional software. It carries out the integral functions
of hardware devices.

Purpose

– The purpose of a device driver is to ensure the smooth functioning
of the hardware device for which it is intended to work and to also
allow it to be used with different operating systems. For example,
a graphics driver enables the OS to communicate with and control
your graphics card or video card or on-board graphics. Firmware,
on the other hand, is a software program that gives life to the
hardware device, programming it to give instructions in order to
communicate with other devices and perform functions like basic
input/output tasks.

Summary of Device Driver vs. Firmware
The main difference between a device driver and a firmware is
their intended purpose. Device drivers enable operating system
and other software programs to access hardware functions
without worrying about the details on how the hardware devices
actually work. Firmware is also software, in the context that it is
program code. The difference lies in how the program code is
stored. Firmware includes the internal set of instructions used by
a hardware device for initiation and operation, often encoded in
non-volatile memory.
