
IT Questions Asked in IBPS (SO) IT Officer 2016 Exam

Dear Aspirants,
Today IBPS conducted the exam for IT Officer. The exam was conducted in 2 slots.
Here we are providing the IT questions (both shifts) of the IBPS (SO) IT Officer Exam
2016. These questions are very helpful for the upcoming exams.
Morning Shift IT Officer Questions:
(1) One question related to the home page.
(2) Which of the following options is correct about search engines?
(3) What is the difference between LAN and WAN?
(4) One question related to URLs.
(5) Which link is used to go from an existing page to another?
(6) What is the use of inserting a row?
(7) One question asked about the use of a router.
(8) Choose the correct option about a data directory.
(9) Which of the following options is correct about RISC?
(10) Which of the following options is a programming language?
(11) One question related to two semantic software development tools.
(12) Question asked about the use of the delete button in a DBMS.
(13) Full form of SQL.
(14) What is the full form of DSCI in IT?
(15) Which of the following is a data sub-query language?
(16) What is the logical design phase?
(17) The round button on web pages is called?
(18) UDP and TCP belong to which layer?
Afternoon Shift IT Officer Questions:
(1) Which normal form is not concerned with functional dependency?
(2) One question related to: Kernel
(3) One question related to: SELECT / Operating System
(4) ARP determines which address?
(5) Stealing someone's identity and representing oneself as that person is called?
(6) In Java, variables are called using?
(7) Which of the following is an example of a class declaration in C++?
(8) Web layout is present in?
(9) One question related to: Normal Form (NF)
(10) One question related to: UNIX
(11) Maximum number of programs that can be run on a 32-bit Linux OS.
(12) A question relating to the kernel and application programs.
(13) What is the shortcut key to create a new folder?
(14) How is data present in a DLL?
A computer network is an interconnected set of autonomous systems that permits
distributed processing of information.
Five components:

 Sender computer
 Sender equipment (modem)
 Communication channel (telephone cables)
 Receiver equipment (modem)
 Receiver computer

There are two aspects of computer networks.

 Hardware: It includes the physical connections (using adapters, cables, routers, bridges etc.).
 Software: It includes the set of protocols (a protocol is nothing but a set of rules).

Methods of Message Delivery: A message can be delivered in the following ways:

 Unicast: One device sends the message to one other device, addressed specifically to it.
 Broadcast: One device sends the message to all other devices on the network. The message is sent to an address reserved for this purpose.
 Multicast: One device sends the message to a certain group of devices on the network.

Types of Networks:

There are mainly three types of networks based on their coverage area: LAN, MAN and WAN.
LAN (Local Area Network):
A LAN is a privately owned network within a single building or campus. A local area network is relatively small, with a maximum span of about 10 km.
Characteristics of Networking:

 Topology: The geometrical arrangement of the computers or nodes.
 Protocols: The rules by which the nodes communicate.
 Medium: The transmission medium through which the nodes communicate.

MAN (Metropolitan Area Network)
A MAN spans less than about 50 km and provides regional connectivity within a campus or a small geographical area. An example of a MAN is the cable television network in a city.
WAN (Wide Area Network)
A Wide Area Network (WAN) is a group communication technology that places no limit on distance. A wide area network spans a large geographical area, often a country.
Internet: It is also known as the network of networks. The Internet is a system of linked networks that are worldwide in scope and facilitate data communication services such as remote login, file transfer, electronic mail, the World Wide Web, newsgroups etc.
Network Topology
Network topology is the arrangement of the various elements of a computer network. Essentially it is the topological structure of a network, and may be depicted physically or logically. Physical topology refers to the placement of the network's various components, including device location and cable installation, while logical topology shows how data flows within a network, regardless of its physical design.
The common network topologies include the following:

 Bus Topology: In a bus topology, each node is directly connected to a common cable.

Note:

1. In a bus topology, the message first travels along the bus; only then can one user communicate with another.
2. The drawback of this topology is that if the network cable breaks, the entire network goes down.

 Star Topology: In this topology, each node has a dedicated set of wires connecting it to a central network hub. Since all traffic passes through the hub, the hub becomes a central point for isolating network problems and gathering network statistics.
 Ring Topology: A ring topology features a logically closed loop. Data packets travel in a single direction around the ring from one network device to the next. Each network device acts as a repeater to keep the signal strong enough as it travels.
 Mesh Topology: In a mesh topology, each system is connected to every other system in the network.
 Note:
o In a bus topology, the message first travels along the bus; only then can one user communicate with another.
o In a star topology, the message first goes to the hub and then on to the other user.
o In a ring topology, communication between users passes around the ring from node to node.
o In a mesh topology, any user can directly communicate with any other user.
 Tree Topology: In this type of network topology, a central root node is connected to two or more nodes that are one level lower in the hierarchy.
Hardware/Networking Devices
Networking hardware may also be known as network equipment or computer networking devices.
Network Interface Card (NIC): The NIC provides a physical connection between the networking cable and the computer's internal bus. NICs come in three basic varieties: 8-bit, 16-bit and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable.
Repeater: Repeaters are used to connect together two Ethernet segments of any media type. In larger designs, signal quality begins to deteriorate as segments exceed their maximum length. Signal transmission is always accompanied by energy loss, so periodic refreshing of the signals is required.
Hubs: Hubs are actually multi-port repeaters. A hub takes any incoming signal and repeats it out on all ports.
Bridges: When the size of the LAN becomes difficult to manage, it is necessary to break up the network. The function of the bridge is to connect separate networks together. Bridges do not forward bad or misaligned packets.
Switch: Switches are an expansion of the concept of bridging. Cut-through switches examine only the packet's destination address before forwarding it onto its destination segment, while a store-and-forward switch accepts and analyzes the entire packet before forwarding it to its destination. It takes more time to examine the entire packet, but this allows the switch to catch certain packet errors and keep them from propagating through the network.
Routers: Router forwards packets from one LAN (or WAN) network to another. It is
also used at the edges of the networks to connect to the Internet.
Gateway: A gateway acts like an entrance between two different networks. In organisations, the gateway is the computer that routes the traffic from a workstation to the outside network that is serving web pages. The ISP (Internet Service Provider) is the gateway for Internet service at homes.
Data Transfer Modes: There are mainly three modes of data transfer.

 Simplex: Data transfer in only one direction, e.g., radio broadcasting.
 Half Duplex: Data transfer in both directions, but not simultaneously, e.g., talk-back radio.
 Full Duplex or Duplex: Data transfer in both directions simultaneously, e.g., telephone.
Data representation: Information comes in different forms such as text, numbers,
images, audio and video.

 Text: Text is represented as a bit pattern. The number of bits in a pattern depends on the number of symbols in the language.
 ASCII: The American National Standards Institute developed a code called the American Standard Code for Information Interchange. This code uses 7 bits for each symbol (a short sketch after this list illustrates these encodings).
 Extended ASCII: To make the size of each pattern 1 byte (8 bits), the ASCII bit patterns are augmented with an extra 0 at the left.
 Unicode: To represent symbols belonging to languages other than English, a code with much greater capacity is needed. Unicode uses 16 bits and can represent up to 65,536 symbols.
 ISO: The International Organization for Standardization, known as ISO, has designed a code using a 32-bit pattern. This code can represent up to 4,294,967,296 symbols.
 Numbers: Numbers are also represented by bit patterns. ASCII is not used to represent numbers; the number is directly converted to a binary number.
 Images: Images are also represented by bit patterns. An image is divided into a matrix of pixels, where each pixel is a small dot. Each pixel is assigned a bit pattern. The size and value of the pattern depend on the image. The size of the pixel depends on what is called the resolution.
 Audio: Audio is a representation of sound. Audio is by nature different from text, numbers or images: it is continuous, not discrete.
 Video: Video can be produced either as a continuous entity or as a combination of images.
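As a rough illustration (the snippet below is mine and not part of the original notes), the following Python lines show the bit patterns these encodings produce for a character:

    text = "A"
    ascii_bits = format(ord(text), "07b")          # 7-bit ASCII pattern for 'A'
    extended_ascii = ord(text).to_bytes(1, "big")  # 1 byte (8 bits), padded on the left
    utf16_units = "A\u0939".encode("utf-16-be")    # 16-bit Unicode units for 'A' and Devanagari 'ha'
    print(ascii_bits)                              # 1000001
    print(extended_ascii)                          # b'A'
    print(utf16_units.hex())                       # 00410939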

OSI Model
The Open System Interconnection (OSI) model is a reference tool for understanding
data communication between any two networked systems. It divides the
communication processes into 7 layers. Each layer performs specific functions to
support the layers above it and uses services of the layers below it.
Physical Layer: The physical layer coordinates the functions required to transmit a
bit stream over a physical medium. It deals with the mechanical and electrical
specifications of interface and transmission medium. It also defines the procedures
and functions that physical devices and interfaces have to perform for transmission
to occur.
Data Link Layer: The data link layer transforms the physical layer, a raw
transmission facility, to a reliable link and is responsible for Node-to-Node delivery.
It makes the physical layer appear error free to the upper layer (i.e, network layer).
Network Layer: Network layer is responsible for source to destination delivery of a
packet possibly across multiple networks (links). If the two systems are connected
to the same link, there is usually no need for a network layer. However, if the two
systems are attached to different networks (links) with connecting devices between
networks, there is often a need of the network layer to accomplish source to
destination delivery.
Transport Layer: The transport layer is responsible for source-to-destination (end-to-end) delivery of the entire message. The network layer does not recognise any relationship between the packets delivered; it treats each packet independently, as though each packet belonged to a separate message, whether or not it does. The transport layer ensures that the whole message arrives intact and in order.
Session Layer: The session layer is the network dialog controller. It establishes,
maintains and synchronises the interaction between communicating systems. It also
plays important role in keeping applications data separate.
Presentation Layer: This layer is responsible for how an application formats data to be sent out onto the network. This layer basically allows an application to read (or understand) the message.
Application Layer: This layer provides services directly to the user, such as electronic mail, file transfer and remote login.
Ethernet
It is basically a LAN technology which strikes a good balance between speed, cost
and ease of installation.

 Ethernet topologies are generally bus and/or bus-star topologies.
 Ethernet networks are passive, which means Ethernet hubs do not reprocess or alter the signal sent by the attached devices.
 Ethernet technology uses a broadcast topology with baseband signalling and a control method called Carrier Sense Multiple Access/Collision Detection (CSMA/CD) to transmit data.
 The IEEE 802.3 standard defines Ethernet protocols for the Open Systems Interconnection (OSI) Media Access Control (MAC) sublayer and physical layer network characteristics.
 The IEEE 802.2 standard defines protocols for the Logical Link Control (LLC) sublayer.

Ethernet refers to the family of local area network (LAN) implementations that
include three principal categories.

 Ethernet and IEEE 802.3: LAN specifications that operate at 10 Mbps over
coaxial cable.
 100-Mbps Ethernet: A single LAN specification, also known as Fast
Ethernet, which operates at 100 Mbps over twisted-pair cable.
 1000-Mbps Ethernet: A single LAN specification, also known as Gigabit
Ethernet, that operates at 1000 Mbps (1 Gbps) over fiber and twisted-pair
cables.

IEEE Standards

 IEEE 802.1: Standards related to network management.


 IEEE 802.2: Standard for the data link layer in the OSI Reference Model
 IEEE 802.3: Standard for the MAC layer for bus networks that use CSMA/CD.
(Ethernet standard)
 IEEE 802.4: Standard for the MAC layer for bus networks that use a token-
passing mechanism (token bus networks).
 IEEE 802.5: Standard for the MAC layer for token-ring networks.
 IEEE 802.6: Standard for Metropolitan Area Networks (MANs).

FLOW CONTROL:
Flow control coordinates the amount of data that can be sent before receiving an acknowledgement (ACK). It is one of the most important duties of the data link layer.
ERROR CONTROL: Error control in the data link layer is based on ARQ (Automatic Repeat reQuest), which is the retransmission of data.

 The term error control refers to methods of error detection and retransmission.
 Anytime an error is detected in an exchange, specified frames are retransmitted. This process is called ARQ.

Error Control (Detection and Correction)

Many factors, including line noise, can alter or wipe out one or more bits of a given data unit.
 Reliable systems must have a mechanism for detecting and correcting such errors.
 Error detection and correction are implemented either at the data link layer or the transport layer of the OSI model.

Error Detection
Error detection uses the concept of redundancy, which means adding extra bits for
detecting errors at the destination.

Note: The checking function verifies that the received bit stream passes the checking criteria; if it does, the data portion of the data unit is accepted, otherwise it is rejected.
Vertical Redundancy Check (VRC)
In this technique, a redundant bit, called a parity bit, is appended to every data unit, so that the total number of 1s in the unit (including the parity bit) becomes even. If the number of 1s in the data is already even, then the parity bit will be 0.
Some systems may use odd parity checking, where the number of 1s should be odd. The principle is the same; only the calculation is different.
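A minimal Python sketch of even-parity VRC (the function name and sample bits are my own, purely illustrative):

    def even_parity_bit(bits):
        # return the parity bit that makes the total number of 1s even
        return "0" if bits.count("1") % 2 == 0 else "1"

    data = "1100001"                      # 7-bit data unit with three 1s
    sent = data + even_parity_bit(data)   # "11000011": now the count of 1s is even

    # Receiver side: the unit passes the check if the number of 1s is even.
    print(sent, sent.count("1") % 2 == 0)   # 11000011 True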
Checksum
There are two algorithms involved in this process, checksum generator at sender
end and checksum checker at receiver end.
The sender follows these steps:

 The data unit is divided into k sections, each of n bits.
 All sections are added together using 1's complement arithmetic to get the sum.
 The sum is complemented and becomes the checksum.
 The checksum is sent with the data.

The receiver follows these steps (a short sketch of both sides follows the list):

 The received unit is divided into k sections, each of n bits.
 All sections are added together using 1's complement arithmetic to get the sum.
 The sum is complemented.
 If the result is zero, the data are accepted; otherwise they are rejected.
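The sketch below (my own helper names; 16-bit sections are assumed) follows these sender and receiver steps with Python integers:

    def ones_complement_sum(sections, bits=16):
        mask = (1 << bits) - 1
        total = 0
        for s in sections:
            total += s
            total = (total & mask) + (total >> bits)   # wrap the carry around (1's complement)
        return total

    def make_checksum(sections, bits=16):
        return ~ones_complement_sum(sections, bits) & ((1 << bits) - 1)

    data = [0x4500, 0x0030, 0x4422]              # k sections of n = 16 bits each
    checksum = make_checksum(data)               # sender: complement of the sum

    # Receiver: add all sections plus the checksum and complement the result;
    # zero means the data are accepted.
    check = ~ones_complement_sum(data + [checksum]) & 0xFFFF
    print(hex(checksum), check == 0)             # 0x76ad True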

Cyclic Redundancy Check (CRC): CRC is based on binary division. A sequence of redundant bits, called the CRC or the CRC remainder, is appended to the end of a data unit, so that the resulting data unit becomes exactly divisible by a second, predetermined binary number. At its destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be intact and is therefore accepted.
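A toy Python sketch of this modulo-2 division (the data unit and divisor below are made-up examples, not from the notes):

    def crc_remainder(data_bits, divisor):
        k = len(divisor) - 1
        dividend = [int(b) for b in data_bits + "0" * k]   # append k zero bits
        div = [int(b) for b in divisor]
        for i in range(len(data_bits)):
            if dividend[i]:                                # XOR the divisor in where the leading bit is 1
                for j in range(len(div)):
                    dividend[i + j] ^= div[j]
        return "".join(str(b) for b in dividend[-k:])      # the remainder is the CRC

    data = "100100"
    divisor = "1101"                                       # the predetermined binary number
    crc = crc_remainder(data, divisor)                     # "001"
    codeword = data + crc                                  # data unit actually transmitted

    # Destination: dividing the received unit by the same number leaves no remainder.
    print(crc, crc_remainder(codeword, divisor))           # 001 000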
Error Correction: Error correction in the data link layer is implemented simply: anytime an error is detected in an exchange, a negative acknowledgement (NAK) is returned and the specified frames are retransmitted. This process is called Automatic Repeat reQuest (ARQ). Retransmission of data happens in three cases: damaged frame, lost frame and lost acknowledgement.
Stop and Wait ARQ: Includes retransmission of data in case of lost or damaged frames. For retransmission to work, the following features are added to the basic flow control mechanism.

 If an error is discovered in a data frame, indicating that it has been corrupted in transit, a NAK frame is returned. NAK frames, which are numbered, tell the sender to retransmit the last frame sent.
 The sender device is equipped with a timer. If an expected acknowledgement is not received within an allotted time period, the sender assumes that the last data frame was lost in transit and sends it again.

Sliding Window ARQ: To cover retransmission of lost or damaged frames, three features are added to the basic flow control mechanism of sliding window.

 The sending device keeps copies of all transmitted frames until they have been acknowledged.
 In addition to ACK frames, the receiver has the option of returning a NAK frame if the data have been received damaged. The NAK frame tells the sender to retransmit a damaged frame. Here, both ACK and NAK frames must be numbered for identification. ACK frames carry the number of the next frame expected. NAK frames, on the other hand, carry the number of the damaged frame itself. If the last ACK was numbered 3, an ACK 6 acknowledges the receipt of frames 3, 4 and 5 as well. If data frames 4 and 5 are received damaged, both NAK 4 and NAK 5 must be returned.
 Like stop and wait ARQ, the sending device in sliding window ARQ is equipped with a timer to enable it to handle lost acknowledgements.

Go-back-n ARQ: In this method, if one frame is lost or damaged, all frames sent since the last frame acknowledged are retransmitted.
Selective Reject ARQ: In this method, only the specific damaged or lost frame is retransmitted. If a frame is corrupted in transit, a NAK is returned and the frame is resent out of sequence. The receiving device must be able to sort the frames it has and insert the retransmitted frame into its proper place in the sequence.
Flow Control
One important aspect of the data link layer is flow control. Flow control refers to a set of procedures used to restrict the amount of data the sender can send before waiting for an acknowledgement.

Stop and Wait: In this method, the sender waits for an acknowledgement after every frame it sends. Only when an acknowledgment has been received is the next frame sent. This process continues until the sender transmits an End of Transmission (EOT) frame.

 We can have two ways to manage data transmission when a fast sender wants to transmit data to a slow receiver:
 The receiver sends information back to the sender giving it permission to send more data, i.e., feedback or acknowledgment based flow control.
 Limit the rate at which senders may transmit data without using feedback from the receiver, i.e., rate-based flow control.

Advantages of Stop and Wait:

It is simple, and each frame is checked and acknowledged.
Disadvantages of Stop and Wait:

 It is inefficient if the distance between devices is long.
 The time spent waiting for ACKs between frames can add a significant amount to the total transmission time.

Sliding Window: In this method, the sender can transmit several frames before needing an acknowledgement. The sliding window refers to imaginary boxes at both the sender and the receiver. This window can hold frames at either end and provides the upper limit on the number of frames that can be transmitted before an acknowledgement is required.
 The frames in the window are numbered modulo-n, which means they are numbered from 0 to n - 1. For example, if n = 8, the frames are numbered 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1... and so on. The size of the window is (n - 1); in this case the size of the window is 7.
 In other words, the window cannot cover the whole modulus (8 frames); it covers one frame less, that is 7.
 When the receiver sends an ACK, it includes the number of the next frame it expects to receive. When the receiver sends an ACK containing the number 5, it means all frames up to number 4 have been received. (A small sketch of this numbering follows the list.)
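A tiny numeric sketch of this modulo-n numbering (values chosen only for illustration):

    n = 8                     # frames are numbered modulo n: 0 .. 7, then wrap around
    window_size = n - 1       # the window covers one frame less than the modulus

    print([seq % n for seq in range(20)])   # 0,1,...,7,0,1,...,7,0,1,2,3
    ack = 5                                 # an ACK carries the number of the NEXT expected frame
    print("frames up to", (ack - 1) % n, "acknowledged")   # frames up to 4 acknowledged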

Switching
Whenever we have multiple devices, we have the problem of how to connect them to make one-to-one communication possible. Two solutions are given below:

 Install a point-to-point connection between each pair of devices (an impractical and wasteful approach when applied to a very large network).
 For a large network, we can go for switching. A switched network consists of a series of interlinked nodes, called switches.

Classification of Switching
Circuit Switching: It creates a direct physical connection between two devices such as phones or computers.
Space Division Switching: Separates the paths in the circuit from each other spatially.
Time Division Switching: Uses time division multiplexing to achieve switching.
Circuit switching was designed for voice communication; in a telephone conversation, for example, once a circuit is established, it remains connected for the duration of the session.
Disadvantages of Circuit Switching

 Less suited to data and other non-voice transmissions.


 A circuit switched link creates the equivalent of a single cable between two
devices and thereby assumes a single data rate for both devices. This
assumption limits the flexibility and usefulness of a circuit switched
connection.
 Once a circuit has been established, that circuit is the path taken by all parts
of the transmission, whether or not it remains the most efficient or available.
 Circuit switching sees all transmissions as equal. Any request is granted to
whatever link is available. But often with data transmission, we want to be
able to prioritise.

Packet Switching
To overcome the disadvantages of circuit switching, the packet switching concept came into the picture.
In a packet switched network, data are transmitted in discrete units of potentially
variable length blocks called packets. Each packet contains not only data but also a
header with control information (such as priority codes and source and destination
address). The packets are sent over the network node to node. At each node, the
packet is stored briefly, then routed according to the information in its header.
There are two popular approaches to packet switching.

1. Datagram
2. Virtual circuit

Datagram Approach: Each packet is treated independently from all others. Even
when one packet represents just a piece of a multi packet transmission, the network
(and network layer functions) treats it as though it existed alone.
Virtual Circuit Approach: The relationship between all packets belonging to a
message or session is preserved. A single route is chosen between sender and
receiver at the beginning of the session. When the data are sent, all packets of the
transmission travel one after another along that route.
We can implement it in two formats:

 Switched Virtual Circuit (SVC)
 Permanent Virtual Circuit (PVC)

SVC (Switched Virtual Circuit)
The SVC format is comparable conceptually to dial-up lines in circuit switching. In this method, a virtual circuit is created whenever it is needed and exists only for the duration of the specific exchange.
PVC (Permanent Virtual Circuit)
The PVC format is comparable to leased lines in circuit switching. In this method,
the same virtual circuit is provided between two users on a continuous basis. The
circuit is dedicated to the specific users. No one else can use it and because it is
always in place, it can be used without connection establishment and connection
termination.
Message Switching
It is also known as store and forward. In this mechanism, a node receives a message, stores it until the appropriate route is free, and then sends it along.
Store and forward is considered a switching technique because there is no direct link between the sender and receiver of a transmission. A message is delivered to the node along one path, then rerouted along another to its destination.
In message switching, the messages are stored and relayed from secondary storage (disk), while in packet switching the packets are stored and forwarded from primary storage (RAM).
Internet Protocol: It is a set of technical rules that defines how computers
communicate over a network.
IPv4: It is the first version of the Internet Protocol to be widely used, and accounts for most of today's Internet traffic.

 Address Size: 32 bits
 Address Format: Dotted decimal notation, e.g., 192.149.252.76
 Number of Addresses: 2^32 = 4,294,967,296 approximately
 The IPv4 header has 20 bytes.
 The IPv4 header has many fields (13 fields).
 The address space is subdivided into classes (A-E).
 An address uses a subnet mask.
 IPv4 lacks built-in security.

IPv6: It is a newer numbering system that provides a much larger address pool than IPv4. (A short sketch comparing the two formats follows this list.)

 Address Size: 128 bits
 Address Format: Hexadecimal notation, e.g., 3FFE:F200:0234:AB00:0123:4567:8901:ABCD
 Number of Addresses: 2^128
 The IPv6 header is double the size; it has 40 bytes.
 The IPv6 header has fewer fields; it has 8 fields.
 It is classless.
 It uses a prefix and an interface identifier (ID) instead of a network ID and host ID.
 It uses a prefix length.
 It has built-in strong security (encryption and authentication).
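For a quick comparison of the two formats, Python's standard ipaddress module can parse both (the sample addresses are the ones quoted above):

    import ipaddress

    v4 = ipaddress.ip_address("192.149.252.76")
    v6 = ipaddress.ip_address("3ffe:f200:0234:ab00:0123:4567:8901:abcd")

    print(v4.version, v4.packed.hex())   # 4, a 32-bit (4-byte) address
    print(v6.version, v6.packed.hex())   # 6, a 128-bit (16-byte) address
    print(2**32, 2**128)                 # the sizes of the two address pools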

Classes and Subnetting

There are currently five different field-length patterns in use, each defining a class of address.
An IP address is 32 bits long. One portion of the address indicates the network (Net ID) and the other portion indicates the host (or router) on the network (the Host ID).
To reach a host on the Internet, we must first reach the network, using the first portion of the address (Net ID). Then, we must reach the host itself, using the second portion (Host ID).
The further division of a network into smaller networks is called subnetting; the smaller networks are called subnetworks.

For Class A: The first bit of the Net ID should be 0, as in the following pattern:
01111011 . 10001111 . 11111100 . 11001111

For Class B: The first 2 bits of the Net ID should be 1 and 0 respectively, as in the pattern:
10011101 . 10001111 . 11111100 . 11001111

For Class C: The first 3 bits of the Net ID should be 1, 1 and 0 respectively, as follows:
11011101 . 10001111 . 11111100 . 11001111

For Class D: The first 4 bits should be 1110, as in the pattern:
11101011 . 10001111 . 11111100 . 11001111

For Class E: The first 4 bits should be 1111, like:
11110101 . 10001111 . 11111100 . 11001111
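A small helper (entirely illustrative, not part of the notes) that classifies an address by inspecting the leading bits of its first byte; the sample addresses are the dotted-decimal forms of the bit patterns above:

    def ipv4_class(address):
        first = int(address.split(".")[0])
        if first >> 7 == 0b0:        # 0xxxxxxx
            return "A"
        if first >> 6 == 0b10:       # 10xxxxxx
            return "B"
        if first >> 5 == 0b110:      # 110xxxxx
            return "C"
        if first >> 4 == 0b1110:     # 1110xxxx
            return "D"
        return "E"                   # 1111xxxx

    for ip in ["123.143.252.207", "157.143.252.207", "221.143.252.207",
               "235.143.252.207", "245.143.252.207"]:
        print(ip, "-> Class", ipv4_class(ip))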

Class Ranges of Internet Addresses in Dotted Decimal Format

 Class A: 0.0.0.0 to 127.255.255.255
 Class B: 128.0.0.0 to 191.255.255.255
 Class C: 192.0.0.0 to 223.255.255.255
 Class D: 224.0.0.0 to 239.255.255.255
 Class E: 240.0.0.0 to 255.255.255.255

Three Levels of Hierarchy: Adding subnetworks creates an intermediate level of hierarchy in the IP addressing system. Now, we have three levels: net ID, subnet ID and host ID.
Masking
Masking is the process that extracts the address of the physical network from an IP address. Masking can be done whether we have subnetting or not. If we have not subnetted the network, masking extracts the network address from the IP address. If we have subnetted, masking extracts the subnetwork address from the IP address.

Masks without Subnetting: To be compatible, routers use a mask even if there is no subnetting. For unsubnetted networks, the default masks are 255.0.0.0 (Class A), 255.255.0.0 (Class B) and 255.255.255.0 (Class C).

Masks with Subnetting: When there is subnetting, the masks can vary, depending on how many bits of the host ID are borrowed for the subnet ID.

Types of Masking
There are two types of masking as given below

Boundary Level Masking: If the masking is at the boundary level (the mask
numbers are either 255 or 0), finding the subnetwork address is very easy. Follow
these 2 rules

 The bytes in IP address that correspond to 255 in the mask will be repeated in
the subnetwork address.
 The bytes in IP address that correspond to 0 in the mask will change to 0 in
the subnetwork address.

Non-boundary Level Masking: If the masking is not at the boundary level (the mask numbers are not just 255 or 0), finding the subnetwork address involves using the bit-wise AND operator. Follow these 3 rules:

 The bytes in IP address that correspond to 255 in the mask will be repeated in
the subnetwork address.
 The bytes in the IP address that correspond to 0 in the mask will be change to
0 in the subnetwork address.
 For other bytes, use the bit-wise AND operator

As we can see, three of the bytes are easy to determine. However, the fourth byte needs the bit-wise AND operation, as the sketch below shows.
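A short sketch of the bit-wise AND step (the addresses and mask below are illustrative examples of my own):

    def apply_mask(ip, mask):
        # the subnetwork address is the bit-wise AND of the IP address and the mask
        return ".".join(str(int(i) & int(m)) for i, m in zip(ip.split("."), mask.split(".")))

    print(apply_mask("45.23.21.8", "255.255.0.0"))          # boundary level:  45.23.0.0
    print(apply_mask("213.23.47.37", "255.255.255.240"))    # non-boundary:    213.23.47.32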
Router: A router is a hardware component used to interconnect networks. Routers
are devices whose primary purpose is to connect two or more networks and to filter
network signals so that only desired information travels between them. Routers are
much more powerful than bridges.

 A router has interfaces on multiple networks


 Networks can use different technologies
 Router forwards packets between networks
 Transforms packets as necessary to meet standards for each network
 Routers are distinguished by the functions they perform:
o Internal routers: Only route packets within one area.
o Area border routers: Connect two or more areas together.
o Backbone routers: Reside only in the backbone area.
o AS boundary routers: Routers that connect to a router outside the AS.

Routers can filter traffic so that only authorized personnel can enter restricted areas.
They can permit or deny network communications with a particular Web site. They
can recommend the best route for information to travel. As network traffic changes
during the day, routers can redirect information to take less congested routes.

 Routers operate primarily by examining incoming data for its network routing
and transport information.
 Based on complex, internal tables of network information that it compiles, a
router then determines whether or not it knows how to forward the data
packet towards its destination.
 Routers can be programmed to prevent information from being sent to or
received from certain networks or computers based on all or part of their
network routing addresses.
 Routers also determine some possible routes to the destination network and
then choose the one that promises to be the fastest.

Two key functions of a router:

 Running routing algorithms/protocols (RIP, OSPF, BGP).
 Forwarding datagrams from an incoming to an outgoing link.

Address Resolution Protocol (ARP)

ARP is used to find the physical address of a node when its Internet address is known. Anytime a host or a router needs to find the physical address of another host on its network, it formats an ARP query packet that includes that IP address and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognises its Internet address and sends back its physical address.
Reverse Address Resolution Protocol (RARP)
This protocol allows a host to discover its Internet address when it knows only its physical address. RARP works much like ARP. The host wishing to retrieve its Internet address broadcasts an RARP query packet that contains its physical address to every host on its physical network. A server on the network recognises the RARP packet and returns the host's Internet address.
Internet Control Message Protocol (ICMP)
ICMP is a mechanism used by hosts and routers to send notifications of datagram problems back to the sender. IP is essentially an unreliable and connectionless protocol. ICMP allows IP (Internet Protocol) to inform a sender if a datagram is undeliverable.
ICMP uses echo test/reply to check whether a destination is reachable and responding. It also handles both control and error messages, but its sole function is to report problems, not correct them.
Internet Group Management Protocol (IGMP)
IP can be involved in two types of communication: unicasting and multicasting. The IGMP protocol has been designed to help a multicast router identify the hosts in a LAN that are members of a multicast group.
Addressing at Network Layer
In addition to the physical addresses that identify individual devices, the Internet requires an additional addressing convention: an address that identifies the connection of a host to its network. Every host and router on the Internet has an IP address which encodes its network number and host number. The combination is unique; in principle, no two machines on the Internet have the same IP address.
Firewall
A firewall is a device that prevents unauthorized electronic access to your entire
network.
The term firewall is generic, and includes many different kinds of protective hardware and software devices. Routers comprise one kind of firewall.
Most firewalls operate by examining incoming or outgoing packets for information at OSI level 3, the network addressing level.
Firewalls can be divided into three general categories: packet-screening firewalls, proxy servers (or application-level gateways), and stateful inspection proxies.

 Packet-screening firewalls examine incoming and outgoing packets for their network address information. You can use packet-screening firewalls to restrict access to specific Web sites, or to permit access to your network only from specific Internet sites.
 Proxy servers (also called application-level gateways) operate by examining incoming or outgoing packets not only for their source or destination addresses but also for information carried within the data area (as opposed to the address area) of each network packet. The data area contains information written by the application program that created the packet, for example your Web browser, FTP, or TELNET program. Because the proxy server knows how to examine this application-specific portion of the packet, you can permit or restrict the behavior of individual programs.
 Stateful inspection proxies monitor network signals to ensure that they are part of a legitimate ongoing conversation (rather than malicious insertions).
Transport Layer Protocols: There are two transport layer protocols as given
below.
UDP (User Datagram Protocol)
UDP is a connectionless protocol. UDP provides a way for applications to send encapsulated IP datagrams without having to establish a connection.

 Datagram oriented
 Unreliable, connectionless
 Simple
 Unicast and multicast
 Useful only for a few applications, e.g., multimedia applications
 Used a lot for services: network management (SNMP), routing (RIP), naming (DNS), etc.

UDP transmits segments consisting of an 8-byte header followed by the payload. The two port fields serve to identify the end points within the source and destination machines. When a UDP packet arrives, its payload is handed to the process attached to the destination port.

The UDP header contains four fields of 16 bits each (a small packing sketch follows):

 Source Port Address (16 bits)
 Destination Port Address (16 bits)
 Total Length of the User Datagram (16 bits)
 Checksum, used for error detection (16 bits)
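As an illustration (the field values below are invented), the four header fields can be packed with Python's struct module:

    import struct

    src_port, dst_port, payload = 5000, 53, b"hello"
    length = 8 + len(payload)              # 8-byte header plus the payload
    checksum = 0                           # 0 means "checksum not used" in IPv4 UDP

    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    datagram = header + payload
    print(len(header), datagram.hex())     # 8, header bytes followed by the payload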


TCP (Transmission Control Protocol)
TCP provides full transport layer services to applications. TCP is a reliable stream transport, port-to-port protocol. The term stream, in this context, means connection-oriented: a connection must be established between both ends of a transmission before either may transmit data. By creating this connection, TCP generates a virtual circuit between sender and receiver that is active for the duration of the transmission.
TCP is a reliable, point-to-point, connection-oriented, full-duplex protocol.
Flag bits (a short decoding sketch follows this list):

 URG: The urgent pointer is valid. If the bit is set, the following bytes contain an urgent message in the sequence number range SeqNo <= urgent message <= SeqNo + urgent pointer.
 ACK: The segment carries a valid acknowledgement.
 PSH: Push flag. Notification from the sender to the receiver that the receiver should pass all the data that it has to the application. Normally set by the sender when the sender's buffer is empty.
 RST: Reset the connection. The flag causes the receiver to reset the connection. The receiver of a RST terminates the connection and indicates the reset to the higher layer application.
 SYN: Synchronize sequence numbers. Sent in the first packet when initiating a connection.
 FIN: The sender is finished with sending. Used for closing a connection; both sides of a connection must send a FIN.
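A small decoding sketch (the bit masks are the standard flag positions; the sample byte values are mine):

    FLAGS = {0x20: "URG", 0x10: "ACK", 0x08: "PSH", 0x04: "RST", 0x02: "SYN", 0x01: "FIN"}

    def decode_flags(flag_byte):
        # return the names of the flag bits set in the TCP header's flags field
        return [name for bit, name in FLAGS.items() if flag_byte & bit]

    print(decode_flags(0x12))   # ['ACK', 'SYN'] - second packet of the three-way handshake
    print(decode_flags(0x11))   # ['ACK', 'FIN'] - closing one side of the connection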

TCP segment format

Each machine supporting TCP has a TCP transport entity, either a library procedure, a user process or part of the kernel. In all cases, it manages TCP streams and interfaces to the IP layer. A TCP entity accepts user data streams from local processes, breaks them up into pieces not exceeding 64 KB and sends each piece as a separate IP datagram.
Sockets
A socket is one end of an inter-process communication channel. The two processes
each establish their own socket. The system calls for establishing a connection are
somewhat different for the client and the server, but both involve the basic construct
of a socket.
The steps involved in establishing a socket on the client side are as follows:

1. Create a socket with the socket() system call.
2. Connect the socket to the address of the server using the connect() system call.
3. Send and receive data. There are a number of ways to do this, but the simplest is to use the read() and write() system calls.

The steps involved in establishing a socket on the server side are as follows:

1. Create a socket with the socket() system call.
2. Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine.
3. Listen for connections with the listen() system call.
4. Accept a connection with the accept() system call. This call typically blocks until a client connects with the server.
5. Send and receive data.

When a socket is created, the program has to specify the address domain and the socket type. A minimal sketch of both sides is given below.
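A minimal sketch of both sides using Python's socket module, which mirrors these system calls (the host and port numbers are placeholders, and error handling is omitted):

    import socket

    # --- server side: create, bind, listen, accept, then exchange data ---
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("", 8080))            # bind to a port on this host
    server.listen(1)                   # listen for connections
    conn, addr = server.accept()       # blocks until a client connects
    request = conn.recv(1024)          # receive data
    conn.sendall(b"hello client")      # send data
    conn.close(); server.close()

    # --- client side (run in another process): create, connect, exchange data ---
    client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    client.connect(("127.0.0.1", 8080))
    client.sendall(b"hello server")
    print(client.recv(1024))
    client.close()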

Socket Types
There are two widely used socket types, stream sockets and datagram sockets. Stream sockets treat communications as a continuous stream of characters, while datagram sockets have to read entire messages at once. Each uses its own communications protocol. Stream sockets use TCP (Transmission Control Protocol), which is a reliable, stream oriented protocol, and datagram sockets use UDP (User Datagram Protocol), which is unreliable and message oriented. A second type of connection is a datagram socket. You might want to use a datagram socket in cases where there is only one message being sent from the client to the server, and only one message being sent back. There are several differences between a datagram socket and a stream socket.

1. Datagrams are unreliable, which means that if a packet of information gets lost somewhere in the Internet, the sender is not told (and of course the receiver does not know about the existence of the message). In contrast, with a stream socket the underlying TCP protocol will detect that a message was lost because it was not acknowledged, and it will be retransmitted without the process at either end knowing about this.
2. Message boundaries are preserved in datagram sockets. If the sender sends a datagram of 100 bytes, the receiver must read all 100 bytes at once. This can be contrasted with a stream socket, where if the sender wrote a 100 byte message, the receiver could read it in two chunks of 50 bytes or 100 chunks of one byte.
3. The communication is done using the special system calls sendto() and recvfrom() rather than the more generic read() and write() (see the sketch after this list).
4. There is a lot less overhead associated with a datagram socket because connections do not need to be established and broken down, and packets do not need to be acknowledged. This is why datagram sockets are often used when the service to be provided is short, such as a time-of-day service.
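A minimal datagram sketch with sendto()/recvfrom() via Python's socket module (the addresses are placeholders, and the two halves are meant to run as separate processes):

    import socket

    # --- server process: no listen()/accept(); each datagram arrives whole ---
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 9999))
    message, client_addr = server.recvfrom(1024)    # reads the entire datagram at once
    server.sendto(b"time of day: 12:00", client_addr)

    # --- client process: one message out, one message back ---
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"what time is it?", ("127.0.0.1", 9999))
    reply, _ = client.recvfrom(1024)
    print(reply)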

Application Layer Protocols (DNS, SMTP, POP, FTP, HTTP)


There are various application layer protocols as given below

 SMTP (Simple Mail Transfer Protocol): One of the most popular network services is electronic mail (e-mail). The TCP/IP protocol that supports electronic mail on the Internet is called the Simple Mail Transfer Protocol (SMTP). SMTP is a system for sending messages to other computers based on e-mail addresses. SMTP provides services for mail exchange between users on the same or different computers.
 TELNET (Terminal Network): TELNET is a client-server application that allows a user to log onto a remote machine and lets the user access any application program on the remote computer. TELNET uses the NVT (Network Virtual Terminal) system to encode characters on the local system. On the server (remote) machine, NVT decodes the characters to a form acceptable to the remote machine.
 FTP (File Transfer Protocol): FTP is the standard mechanism provided by TCP/IP for copying a file from one host to another. FTP differs from other client-server applications because it establishes two connections between the hosts: one connection is used for data transfer, the other for control information (commands and responses).
 Multipurpose Internet Mail Extensions (MIME): It is an extension of SMTP
that allows the transfer of multimedia messages.
 POP (Post Office Protocol): This is a protocol used by a mail server in conjunction with SMTP to receive and hold mail for hosts.
 HTTP (Hypertext Transfer Protocol): This is a protocol used mainly to access data on the World Wide Web (WWW), a repository of information spread all over the world and linked together. The HTTP protocol transfers data in the form of plain text, hypertext, audio, video and so on.
 Domain Name System (DNS): To identify an entity, the TCP/IP protocol suite uses the IP address, which uniquely identifies the connection of a host to the Internet. However, people prefer to use names instead of addresses. Therefore, we need a system that can map a name to an address and, conversely, an address to a name. In TCP/IP, this is the Domain Name System.

DNS in the Internet

DNS is a protocol that can be used on different platforms. The domain name space is divided into three categories.
 Generic Domain: The generic domain defines registered hosts according to their generic behaviour. Each node in the tree defines a domain, which is an index into the domain name space database.

 Country Domain: The country domain section follows the same format as the generic domain but uses 2-character country abbreviations (e.g., US for United States) in place of 3-character ones.
 Inverse Domain: The inverse domain is used to map an address to a name.

Overview of Services
Network Security: As millions of ordinary citizens are using networks for banking, shopping and filing their tax returns, network security is looming on the horizon as a potentially massive problem.
Network security problems can be divided roughly into four intertwined areas:

1. Secrecy: Keep information out of the hands of unauthorized users.
2. Authentication: Determine whom you are talking to before revealing sensitive information or entering into a business deal.
3. Nonrepudiation: Deal with signatures.
4. Integrity control: How can you be sure that a message you received was really the one sent and not something that a malicious adversary modified in transit or concocted?

There is no one single place -- every layer has something to contribute:

 In the physical layer, wiretapping can be foiled by enclosing transmission lines


in sealed tubes containing argon gas at high pressure. Any attempt to drill into
a tube will release some gas, reducing the pressure and triggering an alarm
(used in some military systems).
 In the data link layer, packets on a point-to-point line can be encoded.
 In the network layer, firewalls can be installed to keep packets in/out.
 In the transport layer, entire connection can be encrypted.

Model for Network Security

Network security starts with authentication, commonly with a username and password. Since this requires just one detail authenticating the username (i.e., the password), this is sometimes termed one-factor authentication.

Using this model requires us to:

 Design a suitable algorithm for the security transformation.
 Generate the secret information (keys) used by the algorithm.
 Develop methods to distribute and share the secret information.
 Specify a protocol enabling the principals to use the transformation and secret information for a security service.
Cryptography
It is a science of converting a stream of text into coded form in such a way that only
the sender and receiver of the coded text can decode the text. Nowadays, computer
use requires automated tools to protect files and other stored information. Uses of
network and communication links require measures to protect data during
transmission.
Symmetric / Private Key Cryptography (Conventional / Private key / Single
key)
Symmetric key algorithms are a class of cryptographic algorithms that use the same cryptographic key for both encryption of plaintext and decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys.
In symmetric (private key) cryptography, the following key features are involved:

 The sender and recipient share a common key.
 It was the only kind of cryptography prior to the invention of public key cryptography in the 1970s.
 If this shared key is disclosed to an opponent, communications are compromised.
 Hence, it does not protect the sender from the receiver forging a message and claiming it was sent by the sender.

Advantage of Secret Key Algorithms: Secret key algorithms are efficient: it takes less time to encrypt a message, because the key is usually smaller. So they are used to encrypt or decrypt long messages.
Disadvantages of Secret Key Algorithms: Each pair of users must have a secret key. If N people in the world want to use this method, there need to be N(N-1)/2 secret keys. For one million people to communicate, about 500 billion secret keys are needed (a quick check of this arithmetic follows). The distribution of the keys between two parties can be difficult.
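A quick check of that arithmetic (N users, one shared key per pair):

    N = 1_000_000
    pairwise_keys = N * (N - 1) // 2
    print(pairwise_keys)     # 499999500000 - roughly 500 billion secret keys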
Asymmetric / Public Key Cryptography
Public key cryptography refers to a cryptographic system requiring two separate keys, one of which is secret/private and one of which is public. Although different, the two parts of the key pair are mathematically linked.

 Public Key: A public key, which may be known by anybody and can be used to encrypt messages and verify signatures.
 Private Key: A private key, known only to the recipient, used to decrypt messages and sign (create) signatures. The scheme is asymmetric because those who encrypt messages or verify signatures cannot decrypt messages or create signatures. It is computationally infeasible to find the decryption key knowing only the algorithm and the encryption key. Either of the two related keys can be used for encryption, with the other used for decryption (in some schemes).

In the public key cryptography model described above:

 Bob encrypts a plaintext message using Alice's public key and the encryption algorithm and sends it over the communication channel. (A toy numeric sketch follows.)
 On the receiving side, only Alice can decrypt this text, as only she holds Alice's private key.
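A toy numeric sketch of this idea using textbook-sized RSA numbers (deliberately tiny and insecure, for illustration only):

    p, q = 61, 53
    n = p * q                          # 3233: part of both keys
    e = 17                             # Alice's public exponent
    d = 2753                           # Alice's private exponent (e*d = 1 mod lcm(p-1, q-1))

    message = 65
    ciphertext = pow(message, e, n)    # Bob encrypts with Alice's PUBLIC key (e, n)
    recovered = pow(ciphertext, d, n)  # Alice decrypts with her PRIVATE key (d, n)
    print(ciphertext, recovered)       # 2790 65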

Advantages of Public Key Algorithms:

1. They remove the restriction of a shared secret key between two entities. Here each entity can create a pair of keys, keep the private one, and publicly distribute the other one.
2. The number of keys needed is reduced tremendously. For one million users to communicate, only two million keys are needed.

Disadvantage of Public Key Algorithms: The method is effective only if large numbers are used. Calculating the ciphertext using long keys takes a lot of time, so it is not recommended for large amounts of text.
Message Authentication Codes (MAC)
In cryptography, a Message Authentication Code (MAC) is a short piece of information used to authenticate a message and to provide integrity and authenticity assurance for the message. Integrity assurance detects accidental and intentional message changes, while authenticity assurance affirms the message's origin.
A MAC is a keyed function of a message: the sender of a message m computes MAC(m) and appends it to the message.
Verification: The receiver also computes MAC(m) and compares it to the received value.
Security of a MAC: An attacker should not be able to generate a valid pair (m, MAC(m)), even after seeing many valid message-MAC pairs, possibly of his choice.
MAC from a Block Cipher
A MAC from a block cipher can be obtained by using the following steps (a toy sketch follows this list):

 Divide the message into blocks.
 Compute a checksum by adding (or XORing) them.
 Encrypt the checksum.
 MAC keys are symmetric. Hence, a MAC does not provide non-repudiation (unlike digital signatures).
 The MAC function does not need to be invertible.
 A MACed message is not necessarily encrypted.
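A toy sketch of these steps (the function is my own; Python's standard library has no block cipher, so the final "encrypt the checksum" step is stood in by a keyed SHA-256 hash rather than a real block cipher such as AES):

    import hashlib

    def toy_mac(message, key, block_size=8):
        # 1. divide the message into blocks (pad the last block with zero bytes)
        blocks = [message[i:i + block_size].ljust(block_size, b"\x00")
                  for i in range(0, len(message), block_size)]
        # 2. compute a checksum by XORing the blocks together
        checksum = bytes(block_size)
        for b in blocks:
            checksum = bytes(x ^ y for x, y in zip(checksum, b))
        # 3. "encrypt" the checksum under the shared symmetric key (stand-in step)
        return hashlib.sha256(key + checksum).hexdigest()

    tag = toy_mac(b"transfer 100 to account 42", b"shared-secret-key")
    print(tag)   # the receiver recomputes toy_mac() with the same key and compares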

DES (Data Encryption Standard)

 The Data Encryption Standard was developed at IBM.
 DES is a symmetric key cryptosystem.
 It has a 56-bit key.
 It is a block cipher: it encrypts 64-bit plaintext to 64-bit ciphertext.
 Symmetric cipher: it uses the same key for encryption and decryption.
 It uses 16 rounds which all perform the identical operation.
 A different subkey is used in each round, derived from the main key.
 It depends on 4 functions: expansion E, XOR with the round key, S-box substitution, and permutation.
 DES results in a permutation among the 2^64 possible arrangements of 64 bits, each of which may be either 0 or 1. Each block of 64 bits is divided into two blocks of 32 bits each, a left half block L and a right half R. (This division is only used in certain operations.)

Authentication Protocols
Authentication: It is the technique by which a process verifies that its
communication partner is who it is supposed to be and not an imposter. Verifying
the identity of a remote process in the face of a malicious, active intruder is
surprisingly difficult and requires complex protocols based on cryptography.
The general model that all authentication protocols use is the following:

 An initiating user A (for Alice) wants to establish a secure connection with a second user B (for Bob). Both A and B are sometimes called principals.
 A starts out by sending a message either to B, or to a trusted Key Distribution Center (KDC), which is always honest. Several other message exchanges follow in various directions.
 As these messages are being sent, a nasty intruder, T (for Trudy), may intercept, modify, or replay them in order to trick A and B. When the protocol has been completed, A is sure she is talking to B and B is sure he is talking to A. Furthermore, in most cases, the two of them will also have established a secret session key for use in the upcoming conversation.

In practice, for performance reasons, all data traffic is encrypted using secret-key
cryptography, although public-key cryptography is widely used for the authentication
protocols themselves and for establishing the (secret) session key.
Authentication based on a Shared Secret Key
Assumption: A and B share a secret key, agreed upon in person or by phone.
This protocol is based on a principle found in many challenge-response authentication protocols: one party sends a random number to the other, who then transforms it in a special way and returns the result.
Three general rules that often help are as follows:

1. Have the initiator prove who she is before the responder has to.
2. Have the initiator and responder use different keys for proof, even if this means having two shared keys.
3. Have the initiator and responder draw their challenges from different sets.

Authentication using Public-key Cryptography

Assume that A and B already know each other's public keys (a nontrivial issue).
Digital Signatures: For computerized message systems to replace the physical
transport of paper and documents, a way must be found to send a “signed”
message in such a way that

1. The receiver can verify the claimed identity of the sender.


2. The sender cannot later repudiate the message.
3. The receiver cannot possibly have concocted the message himself.

Secret-key Signatures: Assume there is a central authority, Big Brother (BB), that knows everything and whom everyone trusts.
If A later denies sending the message, how could B prove that A indeed sent it?

 B first points out that he will not accept a message from A unless it has come through BB, i.e., unless it is encrypted with BB's key.
 B then produces the message signed by BB, which proves that A sent it to B.
 BB is asked to decrypt it, and testifies that B is telling the truth.

What happens if Trudy replays either message?

 B can check all recent messages to see whether the same random number was used in any of them (in the past hour).
 A timestamp is used throughout, so that very old messages will be rejected based on the timestamp.

Public-key Signatures: It would be nice if signing documents did not require a trusted authority (e.g., governments, banks, or lawyers, which do not inspire total confidence in all citizens).
Under this condition:

 A sends a signed message to B by first encrypting it with her own private key and then with B's public key, and transmitting the result.
 When B receives the message, he applies his private key to recover the signed message, saves it in a safe place, and then applies A's public key to recover the plaintext.
 How can it be verified that A indeed sent the message to B?
 B produces both the plaintext and the copy signed with A's private key. A judge can easily verify that B has a valid message signed by A, simply by applying A's public key to it. Since A's private key is known only to A, the only way B could have acquired a message signed with it is if A did indeed send it.

Another standard is the Digital Signature Standard (DSS), based on the El Gamal public-key algorithm, which gets its security from the difficulty of computing discrete logarithms rather than from factoring large numbers.
Message Digest

 It is easy to compute.
 No one can feasibly generate two messages that have the same message digest.
 To sign a plaintext, the sender first computes its message digest, signs the digest with her private key, and then sends both the plaintext and the signed digest to the receiver.
 When everything arrives, the receiver applies the sender's public key to the signature part to recover the digest, applies the well-known digest function to the plaintext, and checks that the two values agree (in order to reject forged messages).
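A short sketch of the digest step using Python's standard hashlib module (the message is arbitrary):

    import hashlib

    plaintext = b"Meet me at 6 pm"
    digest = hashlib.sha256(plaintext).hexdigest()   # easy to compute; collisions are infeasible to find
    print(digest)
    # Signing this fixed-size digest (e.g., with the sender's private key) is much cheaper than
    # signing the whole message; the receiver recomputes the digest and compares the two values.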
Study Notes on Basics of Wi-Fi

Basics of Wi-Fi
WiFi stands for Wireless Fidelity. WiFi is the marketing name for IEEE standard
802.11. It is a standard for both Level 1 (physical) and Level 2 (data link) of a
wireless data transmission protocol.
It is primarily a local area networking (LAN) technology designed to provide in-
building broadband coverage.
Current WiFi systems support a peak physical-layer data rate of 54 Mbps and
typically provide indoor coverage over a distance of 100 feet.
WiFi offers remarkably higher peak data rates than do 3G systems, primarily since
it operates over a larger 20 MHz bandwidth, but WiFi systems are not designed to
support high-speed mobility.
WiFi interfaces are now also being built into a variety of devices, including
personal data assistants (PDAs), cordless phones, cellular phones, cameras,
and media players.
WiFi is Half Duplex:
All WiFi networks are contention-based TDD systems, where the access point and
the mobile stations all vie for use of the same channel. Because of the shared
media operation, all Wi-Fi networks are half duplex.
There are equipment vendors who market Wi-Fi mesh configurations, but those
implementations incorporate technologies that are not defined in the standards.
Channel Bandwidth:
The Wi-Fi standards define a fixed channel bandwidth of 25 MHz for 802.11b and
20 MHz for either 802.11a or g networks.
Wi-Fi - IEEE Standards:
The 802.11 standard is defined through several specifications of WLANs. It defines
an over-the-air interface between a wireless client and a base station or between
two wireless clients.
Specifications:
802.11 − This pertains to wireless LANs and provides 1 - or 2-Mbps transmission in
the 2.4-GHz band using either frequency-hopping spread spectrum (FHSS) or
direct-sequence spread spectrum (DSSS).
802.11a − This is an extension to 802.11 that pertains to wireless LANs and goes
as fast as 54 Mbps in the 5-GHz band. 802.11a employs the orthogonal frequency
division multiplexing (OFDM) encoding scheme as opposed to either FHSS or
DSSS.
802.11b − The 802.11 high rate WiFi is an extension to 802.11 that pertains to
wireless LANs and yields a connection as fast as 11 Mbps transmission (with a
fallback to 5.5, 2, and 1 Mbps depending on strength of signal) in the 2.4-GHz band.
The 802.11b specification uses only DSSS.
Note that 802.11b was actually an amendment to the original 802.11 standard
added in 1999 to permit wireless functionality to be analogous to hard-wired
Ethernet connections.
802.11g − This pertains to wireless LANs and provides up to 54 Mbps in the 2.4-GHz
band.
Wi-Fi Concepts:
There are two general types of Wi-Fi transmission:
DCF (Distributed Coordination Function) and PCF (Point Coordination Function).
DCF is Ethernet in the air. It employs a very similar packet structure and many of
the same concepts. There are two problems that make wireless different from wired:

 The hidden station problem
 High error rate

These problems demand that a DCF Wi-Fi network be a CSMA/CA (Collision Avoidance)
network rather than a CSMA/CD (Collision Detection) network.
The results are the following protocol elements (a simplified sketch follows this list).

 Positive Acknowledgement: Every packet sent is positively acknowledged by the receiver. The next packet is not sent until a positive acknowledgement has been received for the previous packet.
 Channel clearing: A transmission begins with an RTS (Request to Send), and the destination or receiver responds with a CTS (Clear to Send). Then the data packets flow; the channel is cleared by these two messages.
 Channel reservation: Each packet has a NAV (Network Allocation Vector) containing a number X. The channel is reserved for the correspondents (the sender and receiver of this packet) for an additional X microseconds after this packet. Once you have the channel, you can hold it with the NAV. The last ACK contains a NAV of zero, to immediately release the channel.
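Below is a small, illustrative Python sketch of the DCF idea (carrier sense, a NAV-style reservation, and retransmission until a positive acknowledgement). The Channel class, the timings, and the 20% loss probability are invented for the example and are not taken from the 802.11 frame formats:

    import random

    class Channel:
        def __init__(self):
            self.busy_until = 0            # NAV-style reservation: medium reserved until this time

        def idle(self, now):
            return now >= self.busy_until

        def reserve(self, now, duration):
            self.busy_until = now + duration

    def send_frame(channel, now, data_time, max_retries=3):
        for attempt in range(1, max_retries + 1):
            # Carrier sense: defer while the medium is reserved (collision avoidance).
            while not channel.idle(now):
                now = channel.busy_until
            # RTS/CTS would clear the channel here and set the NAV for other stations.
            channel.reserve(now, data_time + 2)       # reserve for the data plus the ACK
            now += data_time
            # Positive acknowledgement: retransmit unless the ACK comes back.
            if random.random() > 0.2:                 # assume a 20% chance of frame loss
                return now, attempt
        return now, max_retries

    ch, t = Channel(), 0
    for _ in range(3):
        t, tries = send_frame(ch, t, data_time=5)
        print("frame finished at t=%d after %d attempt(s)" % (t, tries))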

Thanks..!!
An operating system acts as an intermediary between the user of a computer and
the computer hardware. An Operating System (OS) is a software that manages the
computer hardware.

 Hardware: It provides the basic computing resources for the system. It consists of the CPU, memory and the input/output (I/O) devices.
 Application Programs: Define the ways in which these resources are used
to solve user's computing problems. e.g., word processors, spreadsheets,
compilers and web browsers.

Components of a Computer System

 Process Management: The operating system manages many kinds of activities, ranging from user programs to system programs like the printer spooler, name servers, file server, etc.
 Main-Memory Management: Primary memory or main memory is a large array of words or bytes. Each word or byte has its own address. Main memory provides storage that can be accessed directly by the CPU. That is to say, for a program to be executed, it must be in main memory.
 File Management: A file is a collection of related information defined by its creator. Computers can store files on disk (secondary storage), which provides long-term storage. Some examples of storage media are magnetic tape, magnetic disk and optical disk. Each of these media has its own properties like speed, capacity, data transfer rate and access methods.
 I/O System Management: The I/O subsystem hides the peculiarities of specific hardware devices from the user. Only the device driver knows the peculiarities of the specific device to which it is assigned.
 Secondary-Storage Management: Secondary storage consists of tapes, disks, and other media designed to hold information that will eventually be accessed in primary storage. Storage (primary, secondary, cache) is ordinarily divided into bytes or words consisting of a fixed number of bytes. Each location in storage has an address; the set of all addresses available to a program is called an address space.
 Protection System: Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system.
 Networking: generalizes network access
 Command-Interpreter System: interface between the user and the OS.

Functions of Operating System


• Memory Management
• Processor Management
• Device Management
• Storage Management
• Application Interface
• User Interface
• Security
Operating System Services
Many services are provided by the OS to the user's programs.

 Program Execution: The operating system helps to load a program into memory and run it.
 I/O Operations: Each running program may request I/O operations, and for efficiency and protection users cannot control I/O devices directly. Thus, the operating system must provide some means to perform I/O operations.
 File System Manipulation: Programs need to read and write files, and files may also be created and deleted by name or by the programs. The operating system is responsible for this file management.
 Communications: Many times, one process needs to exchange information with another process. This exchange of information can take place between processes executing on the same computer, or between processes executing on different computer systems tied together by a computer network. All of this is taken care of by the operating system.
 Error Detection: It is necessary that the operating system must be aware of
possible errors and should take the appropriate action to ensure correct and
consistent computing.

Some important tasks that Operating System handles are:


An operating system can perform a single operation or multiple operations at a
time, so there are many types of operating systems, organized by their working
techniques.

1. Serial Processing: Serial processing operating systems perform all instructions in sequence, i.e., the instructions given by the user are executed in FIFO (First In First Out) manner. Punch cards were mainly used for this: all the jobs were first prepared and stored on cards, the cards were then entered into the system, and the instructions were executed one by one. The main problem is that the user does not interact with the system while it is working, i.e., the user cannot enter data during execution.
2. Batch Processing: Batch processing is similar to serial processing, but in batch processing jobs of a similar type are first prepared, stored on cards, and the batch of cards is submitted to the system for processing. The main problem is that the jobs prepared for execution must be of the same type, and if a job requires any input during execution this is not possible for the user. The batch contains the jobs, and all those jobs are executed without user intervention.

3. Multi-Programming: Multiple programs are executed on the system at a time, and the CPU never gets idle, because with the help of multi-programming we can keep many programs in the system. While we are working with one program we can also submit a second or another program for running, and the CPU will execute the second program after the completion of the first program. In this scheme we can also specify our input, i.e., a user can interact with the system.
4. Real Time System: In this, the response time is already fixed, i.e., the time to display the results after processing is fixed by the processor or CPU. Real time systems are used at places which require a high and timely response.
 Hard Real Time System: In a hard real time system, the processing time is strictly fixed and cannot be changed; the CPU must process the data as soon as it is entered.
 Soft Real Time System: In a soft real time system, some timings can change; after giving the command to the CPU, the CPU may perform the operation after a short delay.

5. Distributed Operating System: Distributed means that data is stored and processed at multiple locations. The data is stored on multiple computers placed at different locations, and in the network these computers are connected with each other. If we want to take some data from another computer, we use the distributed processing system, and we can also move data from one location to another. In this system data is shared between many users, and all the input and output devices can also be accessed by multiple users.
6. Multiprocessing: In multiprocessing there are two or more CPUs in a single operating system; if one CPU fails, another CPU provides backup for it. With the help of multiprocessing we can execute many jobs at a time, as the operations are divided among the CPUs. If the first CPU completes its work before the second, the remaining work of the second CPU is divided between the two.
7. Parallel operating systems: These are used to interface multiple networked
computers to complete tasks in parallel. Parallel operating systems are able to use
software to manage all of the different resources of the computers running in
parallel, such as memory, caches, storage space, and processing power. A parallel
operating system works by dividing sets of calculations into smaller parts and
distributing them between the machines on a network.
Process:
A process can be defined in any of the following ways

 A process is a program in execution.


 It is an asynchronous activity.
 It is the entity to which processors are assigned.
 It is the dispatchable unit.
 It is the unit of work in a system.

A process is more than the program code. It also includes the current activity
as represented by following:

 Current value of Program Counter (PC)


 Contents of the processors registers
 Value of the variables
 The process stack which contains temporary data such as subroutine
parameter, return address, and temporary variables.
 A data section that contains global variables.

Process in Memory:
Each process is represented in the operating system by a Process Control Block (PCB), also called
a task control block.
PCB: A process in an operating system is represented by a data structure known as
a process control block (PCB) or process descriptor.

The PCB contains important information about the specific process including

 The current state of the process i.e., whether it is ready, running, waiting, or
whatever.
 Unique identification of the process in order to track "which is which"
information.
 A pointer to parent process.
 Similarly, a pointer to child process (if it exists).
 The priority of process (a part of CPU scheduling information).
 Pointers to locate memory of processes.
 A register save area.
 The processor it is running on.
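As a rough sketch, a PCB can be pictured as a simple record. The field names below are illustrative only and are not taken from any particular operating system:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PCB:
        pid: int                                         # unique identification of the process
        state: str = "new"                               # ready, running, waiting, terminated ...
        priority: int = 0                                # CPU scheduling information
        program_counter: int = 0                         # where to resume execution
        registers: dict = field(default_factory=dict)    # register save area
        parent_pid: Optional[int] = None                 # pointer to the parent process
        children: list = field(default_factory=list)     # pointers to child processes
        memory_base: int = 0                             # pointers to locate the process's memory
        memory_limit: int = 0

    pcb = PCB(pid=101, priority=5)
    print(pcb.state)   # "new"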
Process State Model

Process state: The process state consists of everything necessary to resume the
process's execution if it is somehow put aside temporarily.
The process state consists of at least following:

 Code for the program.


 Program's static data.
 Program's dynamic data.
 Program's procedure call stack.
 Contents of general purpose registers.
 Contents of program counter (PC)
 Contents of program status word (PSW).
 Operating Systems resource in use.

A process goes through a series of discrete process states.

 New State: The process is being created.
 Running State: A process is said to be running if it has the CPU, that is, the process is actually using the CPU at that particular instant.
 Blocked (or waiting) State: A process is said to be blocked if it is waiting for some event to happen, such as an I/O completion, before it can proceed. Note that a blocked process is unable to run until some external event happens.
 Ready State: A process is said to be ready if it could use the CPU if one were available. A ready-state process is runnable but is temporarily stopped to let another process run.
 Terminated state: The process has finished execution.
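A tiny sketch of these transitions, assuming the usual five-state model (the transition table below is illustrative):

    # Allowed transitions in the classic five-state process model (illustrative).
    TRANSITIONS = {
        "new":        {"ready"},
        "ready":      {"running"},
        "running":    {"ready", "blocked", "terminated"},
        "blocked":    {"ready"},
        "terminated": set(),
    }

    def move(state, new_state):
        if new_state not in TRANSITIONS[state]:
            raise ValueError("illegal transition %s -> %s" % (state, new_state))
        return new_state

    s = "new"
    for nxt in ("ready", "running", "blocked", "ready", "running", "terminated"):
        s = move(s, nxt)
    print(s)   # terminated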

Dispatcher:
 It is the module that gives control of the CPU to the process selected by the
short term scheduler.
 Functions of Dispatcher: Switching context, Switching to user mode,
and Jumping to the proper location in the user program to restart that
program.

Thread:
A thread is a single sequential stream of execution within a process. Because threads have
some of the properties of processes, they are sometimes called lightweight
processes. In an operating system that has a thread facility, the basic unit of CPU
utilization is a thread.

 A thread can be in any of several states (Running, Blocked, Ready or Terminated).
 Each thread has its own stack.
 A thread consists of a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with the other threads of their task their code section, data section and OS resources, such as open files and signals.
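A minimal sketch with Python's threading module, showing several threads of one process sharing the same data section (the shared list and thread names are invented for illustration):

    import threading

    shared_log = []                       # data shared by all threads of this process

    def worker(name):
        # Each thread has its own stack and program counter,
        # but every thread sees the same shared_log object.
        shared_log.append("hello from " + name)

    threads = [threading.Thread(target=worker, args=("thread-%d" % i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(shared_log)                     # three entries written by three threads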

Multi threading:
An application typically is implemented as a separate process with several threads
of control.
There are two types of threads.

1. User threads: They are implemented above the kernel and are managed without
kernel support. User-level threads are implemented in user-level libraries, rather
than via system calls, so thread switching does not need to call the operating
system or cause an interrupt to the kernel. In fact, the kernel knows nothing
about user-level threads and manages them as if they were single-threaded
processes.
2. Kernel threads: Kernel threads are supported and managed directly by the
operating system. Instead of thread table in each process, the kernel has a
thread table that keeps track of all threads in the system.

Advantages of Thread
• Thread minimizes context switching time.
• Use of threads provides concurrency within a process.
• Efficient communication.
• Economy- It is more economical to create and context switch threads.
• Utilization of multiprocessor architectures to a greater scale and efficiency.
Difference between Process and Thread:
Inter-Process Communication:

 Processes executing concurrently in the operating system may be either independent or cooperating processes.
 A process is independent if it can't affect or be affected by the other processes executing in the system.
 Any process that shares data with other processes is a cooperating process.

There are two fundamental models of IPC:

 Shared memory: In the shared-memory model, a region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
 Message passing: In the message passing model, communication takes place
by means of messages exchanged between the cooperating processes.
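A small sketch of the message-passing model using Python's multiprocessing.Queue; the producer and consumer functions are invented for illustration (a shared-memory variant could instead use multiprocessing.Value or shared_memory):

    from multiprocessing import Process, Queue

    def producer(q):
        q.put("data from the producer process")    # send a message

    def consumer(q):
        print(q.get())                             # receive the message

    if __name__ == "__main__":
        q = Queue()                                # channel between the two processes
        p1 = Process(target=producer, args=(q,))
        p2 = Process(target=consumer, args=(q,))
        p1.start(); p2.start()
        p1.join(); p2.join()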
CPU Scheduling:
CPU Scheduling is the process by which an Operating System decides which
programs get to use the CPU. CPU scheduling is the basis of
MULTIPROGRAMMED operating systems. By switching the CPU among
processes, the operating system can make the computer more productive.
CPU Schedulers: Schedulers are special system software which handles process
scheduling in various ways. Their main task is to select the jobs to be submitted into
the system and to decide which process to run.
CPU Scheduling algorithms:
1. First Come First Serve (FCFS)
• Jobs are executed on first come, first serve basis.
• Easy to understand and implement.
• Poor in performance as average wait time is high.
2. Shortest Job First (SJF)
• Best approach to minimize waiting time.
• Difficult to implement in practice, because the processor should know in advance
how much time the process will take.
3. Priority Based Scheduling
• Each process is assigned a priority. Process with highest priority is to be
executed first and so on.
• Processes with same priority are executed on first come first serve basis.
• Priority can be decided based on memory requirements, time requirements or
any other resource requirement.
4. Round Robin Scheduling
• Each process is provided a fixed time to execute, called a quantum.
• Once a process has executed for the given time period, it is preempted and
another process executes for its time period.
• Context switching is used to save states of preempted processes.
5. Multi Queue Scheduling
• Multiple queues are maintained for processes.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
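To make the waiting-time idea concrete, here is a small sketch that computes the average waiting time for FCFS and for non-preemptive SJF on an invented set of CPU bursts (all assumed to arrive at time 0):

    def avg_waiting_time(burst_times):
        # Waiting time of each job = total burst time of all jobs that ran before it.
        waiting, elapsed = [], 0
        for burst in burst_times:
            waiting.append(elapsed)
            elapsed += burst
        return sum(waiting) / len(waiting)

    bursts = [24, 3, 3]                         # illustrative CPU bursts
    print(avg_waiting_time(bursts))             # FCFS order: (0 + 24 + 27) / 3 = 17.0
    print(avg_waiting_time(sorted(bursts)))     # SJF order:  (0 + 3 + 6) / 3  = 3.0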
Synchronization:

 Concurrency arises in three different contexts:

o Multiple applications – Multiple programs are allowed to dynamically share processing time.
o Structured applications – Some applications can be effectively
programmed as a set of concurrent processes.
o Operating system structure – The OS themselves are implemented as
set of processes.
 Concurrent processes (or threads) often need access to shared data and
shared resources.
o Processes use and update shared data such as shared variables, files,
and data bases.
 Writing must be mutually exclusive to prevent a condition leading to
inconsistent data views.
 Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes.

Race Condition

 The race condition is a situation where several processes access (read/write) shared data concurrently and the final value of the shared data depends upon which process finishes last.
o The actions performed by concurrent processes will then depend on the
order in which their execution is interleaved.
 To prevent race conditions, concurrent processes must be coordinated or
synchronized.
o It means that neither process will proceed beyond a certain point in the
computation until both have reached their respective synchronization
point.

Critical Section/Region

1. Consider a system consisting of n processes, all competing to use some shared data.
2. Each process has a code segment, called the critical section, in which the shared data is accessed.

The Critical-Section Problem

1. The critical-section problem is to design a protocol that the processes can use to cooperate. The protocol must ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
2. Equivalently, the protocol must ensure that the processes' actions do not depend on the order in which their execution is interleaved (possibly on many processors).
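A minimal sketch of protecting a critical section with a mutual-exclusion lock in Python; the shared counter is invented for illustration. Without the lock, the two threads could interleave their read-modify-write steps and leave the shared data inconsistent:

    import threading

    balance = 0
    mutex = threading.Lock()

    def deposit(times):
        global balance
        for _ in range(times):
            with mutex:            # entry section: acquire the lock
                balance += 1       # critical section: shared data is accessed
            # exit section: the lock is released automatically here

    t1 = threading.Thread(target=deposit, args=(100000,))
    t2 = threading.Thread(target=deposit, args=(100000,))
    t1.start(); t2.start()
    t1.join(); t2.join()
    print(balance)                 # always 200000 with the lock in place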

Deadlock:
A deadlock situation can arise if the following four conditions hold simultaneously in
a system.

 Mutual Exclusion: Resources must be allocated to processes at any time in an exclusive manner, and not on a shared basis, for a deadlock to be possible. If another process requests that resource, the requesting process must be delayed until the resource has been released.
 Hold and Wait Condition: Even if a process holds certain resources at any
moment, it should be possible for it to request for new ones. It should not give
up (release) the already held resources to be able to request for new ones. If
it is not true, a deadlock can never take place.
 No Preemption Condition: Resources can't be preempted. A resource can be
released only voluntarily by the process holding it, after that process has
completed its task.
 Circular Wait Condition: There must exist a set {P0, P1, P2, ..., Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, ..., Pn-1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Resource Allocation Graph: The resource allocation graph consists of a set of
vertices V and a set of edges E. Set of vertices V is partitioned into two types

1. P = {P1, P2, ..., Pn}, the set consisting of all the processes in the system.
2. R = {R1, R2, ..., Rm}, the set consisting of all resource types in the system.

 A directed edge Pi → Rj is known as a request edge.
 A directed edge Rj → Pi is known as an assignment edge.

Resource Instance

 One instance of resource type R1.


 Two instances of resource type R2.
 One instance of resource type R3.
 Three instances of resource type R4
Process States

 Process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
 Process P2 is holding an instance of R1 and an instance of R2, and is waiting for an instance of resource type R3.
 Process P3 is holding an instance of R3.
 Basic facts related to resource allocation graphs are given below

Note: If the graph contains no cycle, there is no deadlock in the system.

If the graph contains a cycle:

1. If there is only one instance per resource type, then there is a deadlock.
2. If there are several instances per resource type, then there may or may not be a deadlock.
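A small illustrative sketch of detecting a cycle in a resource allocation graph stored as adjacency lists; the example graph is invented, and with single-instance resources a cycle means deadlock:

    # Request edges P -> R and assignment edges R -> P, as adjacency lists.
    graph = {
        "P1": ["R1"],     # P1 requests R1
        "R1": ["P2"],     # R1 is assigned to P2
        "P2": ["R2"],     # P2 requests R2
        "R2": ["P1"],     # R2 is assigned to P1  -> cycle P1 -> R1 -> P2 -> R2 -> P1
        "P3": [],
    }

    def has_cycle(g):
        visited, on_stack = set(), set()

        def dfs(node):
            visited.add(node)
            on_stack.add(node)
            for nxt in g.get(node, []):
                if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                    return True
            on_stack.discard(node)
            return False

        return any(dfs(n) for n in g if n not in visited)

    print(has_cycle(graph))   # True: with single-instance resources this is a deadlock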

Deadlock Handling Strategies


In general, there are four strategies of dealing with deadlock problem:

1. The Ostrich Approach: Just ignore the deadlock problem altogether.


2. Deadlock Detection and Recovery: Detect deadlock and, when it occurs, take
steps to recover.
3. Deadlock Avoidance: Avoid deadlock by careful resource scheduling.
4. Deadlock Prevention: Prevent deadlock by resource scheduling so as to
negate at least one of the four conditions.

Deadlock Prevention
Deadlock prevention is a set of methods for ensuring that at least one of the
necessary conditions can't hold.

 Elimination of “Mutual Exclusion” Condition


 Elimination of “Hold and Wait” Condition
 Elimination of “No-preemption” Condition
 Elimination of “Circular Wait” Condition

Deadlock Avoidance
This approach to the deadlock problem anticipates deadlock before it actually
occurs.
A deadlock avoidance algorithm dynamically examines the resource allocation state
to ensure that a circular wait condition can never exist. The resource allocation state
is defined by the number of available and allocated resources and the maximum
demands of the processes.
Safe State: A state is safe if the system can allocate resources to each process
and still avoid a deadlock.

A system is in a safe state if there exists a safe sequence of all processes. A deadlock state is an unsafe state, but not all unsafe states cause deadlocks. It is important to note that an unsafe state does not imply the existence, or even the eventual existence, of a deadlock; it implies only that some unfortunate sequence of events might lead to a deadlock.
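The safety check used in deadlock avoidance can be sketched in the style of the Banker's algorithm. The Available, Max and Allocation values below are invented for illustration; the system is safe if some order exists in which every process can obtain its remaining need and finish:

    def is_safe(available, max_need, allocation):
        need = [[m - a for m, a in zip(mrow, arow)]
                for mrow, arow in zip(max_need, allocation)]
        work, finished = list(available), [False] * len(max_need)
        while True:
            progressed = False
            for i, done in enumerate(finished):
                if not done and all(n <= w for n, w in zip(need[i], work)):
                    # Process i can run to completion and release its allocation.
                    work = [w + a for w, a in zip(work, allocation[i])]
                    finished[i] = True
                    progressed = True
            if not progressed:
                return all(finished)

    # One resource type, three processes: safe, e.g. in the order P1, P0, P2.
    print(is_safe(available=[3],
                  max_need=[[7], [4], [9]],
                  allocation=[[2], [2], [2]]))   # True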
Address Binding: Binding of instructions and data to memory addresses.

1. Compile time: If the process location is known, then absolute code can be generated.
2. Load time: Compiler generates relocatable code which is bound at load time.
3. Execution time: If a process can be moved from one memory segment to
another then binding must be delayed until run time.

Dynamic Loading:

 Routine is not loaded until it is called.


 Better memory-space utilization;
 Unused routine is never loaded.
 Useful when large amounts of code are needed to handle infrequently
occurring cases.
 No special support from the operating system is required; implemented
through program design.

Dynamic Linking:

 Linking postponed until execution time.


 Small piece of code, stub, used to locate the appropriate memory-resident
library routine.
 Stub replaces itself with the address of the routine, and executes the routine.
 The operating system is needed to check whether the routine is already in the process's memory address space.
Overlays: This technique allows keeping in memory only those instructions and
data which are required at a given time. The other instructions and data are loaded into
the memory space occupied by the previous ones when they are needed.
Swapping: Consider an environment which supports multiprogramming using say
Round Robin (RR) CPU scheduling algorithm. Then, when one process has finished
executing for one time quantum, it is swapped out of memory to a backing store.
The memory manager then picks up another process from the backing store and
loads it into the memory occupied by the previous process. Then, the scheduler
picks up another process and allocates the CPU to it.
Memory Management Techniques
Memory management is the functionality of an operating system which handles or
manages primary memory. Memory management keeps track of each and every
memory location, whether it is allocated to some process or free.
There are two ways for memory allocation as given below
Single Partition Allocation: The memory is divided into two parts, one to be used
by the OS and the other for user programs. The OS code and data are protected
from being modified by user programs using a base register.
Multiple Partition Allocation: The multiple partition allocation may be further
classified as
Fixed Partition Scheme: Memory is divided into a number of fixed size partitions.
Then, each partition holds one process. This scheme supports multiprogramming as
a number of processes may be brought into memory and the CPU can be switched
from one process to another.
When a process arrives for execution, it is put into the input queue of the smallest
partition, which is large enough to hold it.
Variable Partition Scheme: A block of available memory is designated as a hole at
any time, a set of holes exists, which consists of holes of various sizes scattered
throughout the memory.
When a process arrives and needs memory, this set of holes is searched for a hole
which is large enough to hold the process. If the hole is too large, it is split into two
parts. The unused part is added to the set of holes. All holes which are adjacent to
each other are merged.
There are different ways of implementing allocation of partitions from a list of free
holes, such as:

 first-fit: allocate the first hole that is big enough
 best-fit: allocate the smallest hole that is big enough; the entire list of holes must be searched, unless it is ordered by size
 next-fit: scan holes from the location of the last allocation and choose the
next available block that is large enough (can be implemented using a circular
linked list)
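An illustrative sketch of first-fit and best-fit over a list of free holes; the hole sizes and the request are invented:

    holes = [120, 300, 60, 500, 200]       # free hole sizes in KB (illustrative)

    def first_fit(holes, request):
        for i, size in enumerate(holes):
            if size >= request:
                return i                    # index of the first hole that is big enough
        return None

    def best_fit(holes, request):
        fits = [(size, i) for i, size in enumerate(holes) if size >= request]
        return min(fits)[1] if fits else None   # smallest hole that is still big enough

    print(first_fit(holes, 150))   # 1  (the 300 KB hole)
    print(best_fit(holes, 150))    # 4  (the 200 KB hole)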

Binding of instructions and data to memory addresses can be done in the following ways:


• Compile time: When it is known at compile time where the process will reside,
compile time binding is used to generate the absolute code.
• Load time: When it is not known at compile time where the process will
reside in memory, then the compiler generates re-locatable code.
• Execution time: If the process can be moved during its execution from one
memory segment to another, then binding must
be delayed to be done at run time

Paging

It is a memory management technique which allows memory to be allocated to a
process wherever it is available. Physical memory is divided into fixed-size blocks
called frames. Logical memory is broken into blocks of the same size called pages.
The backing store is also divided into blocks of the same size.
When a process is to be executed, its pages are loaded into any available frames;
each frame holds exactly one page. Every logical address generated by the
CPU is divided into two parts: the page number (P) and the page offset (d). The
page number is used as an index into a page table.
Each entry in the page table contains the base address of the page in physical
memory (f). The base address of the Pth entry is then combined with the offset (d)
to give the actual address in memory.
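A small illustrative sketch of this translation in Python, assuming a 1 KB page size and an invented page table:

    PAGE_SIZE = 1024                       # bytes per page (assumed for illustration)

    # Hypothetical page table: index = page number P, value = frame number f.
    page_table = [5, 2, 7, 0]

    def translate(logical_address):
        # Split the logical address into page number and offset.
        p, d = divmod(logical_address, PAGE_SIZE)
        f = page_table[p]                  # look up the frame holding this page
        return f * PAGE_SIZE + d           # physical address = frame base + offset

    print(translate(2 * PAGE_SIZE + 100))  # page 2, offset 100 -> frame 7 -> 7268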
Virtual Memory
Separation of user logical memory from physical memory. It is a technique to run a
process whose size is larger than main memory. Virtual memory is a memory management
scheme which allows the execution of a partially loaded process.
Advantages of Virtual Memory

 The advantages of virtual memory can be given as


 Logical address space can therefore be much larger than physical address
space.
 Allows address spaces to be shared by several processes.
 Less I/O is required to load or swap a process in memory, so each user can
run faster.

Segmentation

 Logical address space is divided into blocks called segments, i.e., the logical address space is a collection of segments. Each segment has a name and a length.
 A logical address consists of two things: <segment number, offset>.
 Segmentation is a memory-management scheme that supports the user's view of memory. All the locations within a segment are placed in contiguous locations in primary storage.

The file system consists of two parts:

1. A collection of files
2. A directory structure

The file management system can be implemented as one or more layers of the
operating system.
The common responsibilities of the file management system include the following

 Mapping of access requests from logical to physical file address space.


 Transmission of file elements between main and secondary storage.
 Management of secondary storage, such as keeping track of the status, allocation and deallocation of space.
 Support for protection and sharing of files and the recovery and possible
restoration of the files after system crashes.

File Attributes
Each file is referred to by its name. The file is named for the convenience of the
users and when a file is named, it becomes independent of the user and the
process. Below are file attributes

 Name
 Type
 Location
 Size
 Protection
 Time and date

Disk Scheduling
One of the responsibilities of the OS is to use the hardware efficiently. For the disk
drives, meeting this responsibility entails having fast access time and large disk
bandwidth.
Access time has two major components

 Seek time is the time for the disk arm to move the heads to the cylinder
containing the desired sector.
 The rotational latency is the additional time for the disk to rotate the desired
sector to the disk head. It is not fixed, so we can take average value.

Disk bandwidth is the total number of bytes transferred, divided by the total time
between the first request for service and the completion of the last transfer.
FCFS Scheduling: This is also known as First In First Out (FIFO); it simply services
requests in the order in which they arrive in the queue.
FIFO scheduling has the following features:

 First come, first served scheduling.
 Requests are serviced sequentially.
 It is fair to all processes, but it generally does not provide the fastest service.
 For example, consider a disk queue with requests for I/O to blocks on widely scattered cylinders: the head simply visits them in arrival order.

Shortest Seek Time First (SSTF) Scheduling: It selects the request with the
minimum seek time from the current head position. SSTF scheduling is a form of
SJF scheduling and may cause starvation of some requests. It is not an optimal
algorithm, but it is an improvement over FCFS.
SCAN Scheduling: In the SCAN algorithm, the disk arm starts at one end of the
disk and moves toward the other end, servicing requests as it reaches each cylinder
until it gets to the other end of the disk. At the other end, the direction of head
movement is reversed and servicing continues. The head continuously scans back
and forth across the disk. The SCAN algorithm is sometimes called the elevator
algorithm, since the disk arm behaves just like an elevator in a building, first
servicing all the request going up and then reversing to service requests the other
way.
C-SCAN Scheduling: Circular SCAN is a variant of SCAN, which is designed to
provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one
end of the disk to the other, servicing requests along the way. When the head
reaches the other end, however it immediately returns to the beginning of the disk
without servicing any requests on the return trip. The C-SCAN scheduling algorithm
essentially treats the cylinders as a circular list that wraps around from the final
cylinder to the first one.
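To see how the algorithms differ, here is a short sketch comparing total head movement under FCFS and SSTF for an invented request queue (the cylinder numbers and starting position are illustrative):

    requests = [98, 183, 37, 122, 14, 124, 65, 67]   # cylinders, in arrival order
    start = 53                                        # initial head position

    def fcfs_movement(reqs, head):
        total = 0
        for r in reqs:
            total += abs(r - head)
            head = r
        return total

    def sstf_movement(reqs, head):
        pending, total = list(reqs), 0
        while pending:
            nearest = min(pending, key=lambda r: abs(r - head))   # minimum seek first
            total += abs(nearest - head)
            head = nearest
            pending.remove(nearest)
        return total

    print(fcfs_movement(requests, start))   # 640 cylinders of head movement
    print(sstf_movement(requests, start))   # 236 cylinders of head movement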
"Life is not about finding yourself. Life is about creating yourself"
Thanks !!!
