
MODULE: Distributed Operating Systems

CODE: COM 3121


ASSESSMENT: Quiz

Surname And Initials Student Number


Group Leader
Chauke K 22008395
Makhado RE 21000909
Mambani K 21004053
Mkhabela SR 21006083
Sabela P 21011545
Lebopa B 21014142
Makananise N 21014094
Manobi XG 21010605
Singo AP 22011530
Ralufhe M 22009778

DUE DATE: 04 March 2024


EXERCISE PART A

1. The World Wide Web is an evolving system for publishing and accessing resources
and services across the Internet through web browsers. The WWW allows users to
navigate and interact with content such as text, images, and multimedia linked together
through hyperlinks. The web facilitates the sharing and retrieval of information across
the globe.
2. A web browser is a software application that enables users to access, communicate
and share resources on the World Wide Web. An executing web browser is an
example of a client. The web browser communicates with a web server, to request
web pages from it.
Examples of Web browsers include Microsoft Edge, Mozilla Firefox, and Google
Chrome.
3. A web server is a computer program or device that stores and delivers web pages and
other content to users who request them. Web servers enable users to access websites
using a web browser, and can also be used to manage web services.
Examples of web servers include Apache, Nginx, and Microsoft IIS; large commercial
sites such as Amazon and Yahoo run their own web server software.
4. A web service provides a service interface enabling clients to interact with servers in a
more general way than web browsers do. Clients access the operations in the interface
of a web service by means of requests and replies formatted in XML and usually
transmitted over HTTP.
Example: associates.amazon.com

EXERCISE PART B

1. - Computational services
Refers to services that provide computational resources, tools, and capabilities over
the internet to support various computing tasks. These services are often delivered
through cloud computing platforms and can range from basic processing and storage
to more advanced data analysis and machine learning capabilities.
- Application services
Refers to a broad range of services and functionalities that are provided by cloud
service providers to support the development, deployment, and management of cloud-
based applications. These services can help simplify and accelerate the development
process, enhance the performance and scalability of applications, and improve overall
functionality.
- Storage services
In cloud computing refer to the provision of scalable, flexible, and reliable storage
solutions by cloud service providers to store and manage data and files. Cloud storage
services allow users to store data in the cloud, access it from anywhere, and ensure
high availability, durability, and security of their stored information.
2. (i) Peer-to-Peer computing
Is a decentralized model of computer networking in which individual computers,
referred to as peers, communicate and collaborate with each other directly without the
need for centralized servers. In a P2P network, each peer can act as both a client and a
server, sharing resources, files, or services with other peers in the network.
Advantages include reducing the load on and cost of servers while increasing the
availability of shared resources such as music, video, and other digital files.
(ii) Cluster Computing
Is a set of interconnected computers that cooperate closely to provide a single,
integrated high-performance computing capability. The cluster is connected to the
Internet via a virtual private network (VPN) gateway, and the gateway's IP address
locates the cluster.
(iii) Utility Computing
A service provisioning model that offers computing resources to clients as and when
they require them on an on-demand basis. Utility computing is a subset of cloud
computing, allowing users to scale up and down based on their needs. Physical
resources such as storage and processing can be made available to networked
computers, removing the need to own such resources on their own. Software services
can also be made available across the global Internet using this approach.
(iv) Cloud Computing
A cloud is defined as a set of Internet-based application, storage, and computing
services sufficient to support most users’ needs, thus enabling them to dispense with
local data storage and application software largely or totally. Clouds may be built
from physical or virtualized resources. Utility computing is a precursor to cloud
computing.
(v) Grid Computing
Is a distributed architecture that uses a group of computers to combine resources for
problem solving in dynamic, multi-institutional virtual organizations. A
computational grid is focused on setting aside resources specifically for computing
power.
(vi) Ubiquitous Computing
Is the harnessing of many small, cheap computational devices that are present in
users’ physical environments, including the home, office and even natural settings.
The term ‘ubiquitous’ is intended to suggest that small computing devices will
eventually become so pervasive in everyday objects that they are scarcely noticed.
That is, their computational behaviour will be transparently and intimately tied up
with their physical function.
(vii) Client Server
Is a distributed architecture in which a server receives and responds to requests from
different clients for services and data. Clients are active (they send requests); servers
are passive (they wake up only when they receive a request). Servers run continuously,
while clients last only as long as the applications of which they form a part. Control is
centralized: a single main computer in the network acts as the server and the rest of
the computers act as clients. All communication must go through the server.
(viii) Edge Computing
Is a distributed computing paradigm that brings computation and data storage closer
to the location where it is needed, to improve response times and save bandwidth. It
enables real-time data processing without the need for constant communication with a
central data centre, which is essential for many modern applications. Examples
include autonomous vehicles and remote monitoring systems.
3. When peer computers act autonomously in a peer-to-peer (P2P) network, each
computer operates independently, making decisions without needing centralized
control. Each peer has its own rules, protocols, and decision-making abilities. This
autonomy enables decentralized control, with peers communicating and collaborating
directly to accomplish tasks, share resources, and exchange information, rather than
relying on a central authority for management and coordination.
4.
a. Transparency: In a distributed system, transparency is the act of hiding from the
user and application programmer the fact that different components are separated,
making the system appear as a single unit instead of as a group of separate parts.
The two most significant transparencies are location and access transparency, as
the use of distributed resources is most significantly impacted by their presence
or absence.

b. Scalability: The ability of a system to remain effective when there is a significant
increase in the number of resources and the number of users. It includes the
ability to manage increasing amounts of data or traffic without sacrificing
performance.

c. Reliability: The extent to which a system consistently performs its intended
functions accurately and predictably, without failure or errors. A reliable system
should be able to perform its intended functions without failure over a specified
period.

d. Openness: A computer system's ability to be expanded and implemented in
different ways is determined by its level of openness. The openness of distributed
systems is determined primarily by the degree to which new resource-sharing services
can be added and be made available for use by a variety of client programs.

e. Load balancing: The process of distributing incoming network traffic or workload
across multiple servers or resources to optimize performance and prevent overloading
of any single component.
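A common load-balancing policy is round robin, where requests are assigned to servers in rotation. A minimal sketch in Python (the server names are hypothetical placeholders):

```python
from itertools import cycle

# Hypothetical pool of backend servers.
servers = ["server-a", "server-b", "server-c"]
rotation = cycle(servers)

def dispatch(request):
    """Assign each incoming request to the next server in round-robin order,
    so no single server receives all of the traffic."""
    return next(rotation)
```

For example, four successive requests would be dispatched to server-a, server-b, server-c, and then back to server-a.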

f. Webcasting: Refers to the broadcasting of audio or video content over the internet.
It involves streaming media, where the content is delivered in real-time to viewers or
listeners.
g. Proxy server: Proxy servers in distributed systems act as intermediary servers that
sit between clients and servers to facilitate communication. Web proxy servers
provide a shared cache of web resources for the client machines at a site or across
several sites. The purpose of proxy servers is to increase the availability and
performance of the service by reducing the load on the wide area network and web
servers.

h. Virtualization: Is the process of creating a virtual representation of physical
computing resources, such as servers, storage devices, or networks, to optimize
resource utilization.

i. Fault tolerance: Fault tolerance refers to a system's ability to continue functioning
properly in the presence of faults or errors. In distributed systems, fault tolerance is
crucial to ensure that the system remains operational even if some components fail or
communication between nodes is disrupted. This can be achieved through techniques
such as replication, redundancy, and error recovery mechanisms.

j. Dependability: The overall quality of a system in terms of its reliability,
availability, safety, and security, ensuring that it can be trusted to perform consistently
and predictably. Ensuring dependability involves robust error handling, fault
detection, and recovery mechanisms.

k. Portability: Portability in distributed systems refers to the ease with which the
system can be moved or adapted to different environments or platforms without
significant modifications.

l. Interoperability: The ability of different software applications or systems to
communicate, exchange data, and operate cohesively with one another.
Interoperability ensures that different parts of the distributed system, such as servers,
databases, and applications, can work together effectively without compatibility
issues or communication barriers.

m. Heterogeneity: Variety and difference among the networks, computer hardware,
operating systems, programming languages, and developer implementations that make
up a system. The Internet enables users to access services and run applications over a
heterogeneous collection of computers and networks.

n. Hash function: A hash function is a mathematical function that maps input data of
arbitrary size to an output value with a fixed number of characters.
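The fixed-size property can be seen with Python's standard hashlib module, using SHA-256 as one common hash function:

```python
import hashlib

# SHA-256 always produces a 256-bit digest, i.e. 64 hexadecimal characters,
# no matter how large or small the input is.
digest = hashlib.sha256(b"hello").hexdigest()
big_digest = hashlib.sha256(b"x" * 1_000_000).hexdigest()
```

Both digests are exactly 64 hex characters long, even though one input is five bytes and the other is a million.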

o. Blockchain: A decentralized, distributed ledger technology that records
transactions across multiple computers; the linked records are called blocks. In terms
of a distributed system, a blockchain functions as a distributed database that stores a
continuously growing list of records (blocks) in a linear, chronological order.
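A toy sketch of the chaining idea (greatly simplified, with no consensus or networking): each block stores the hash of its predecessor, so tampering with an earlier block invalidates every later link.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    """Append a block that records the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev": prev})

chain = []
add_block(chain, "genesis")
add_block(chain, "payment: A -> B")
```

Because block 1 stores the hash of block 0, any later change to block 0's data makes its recomputed hash disagree with the stored link.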
p. Digital signature: Is an electronic, encrypted, stamp of authentication on digital
information such as email messages, macros, or electronic documents.

q. Replication: The process of creating and maintaining redundant copies of data or
resources across multiple locations or systems to improve their overall accessibility
across a network. It may be used to improve the performance of resources that are
very heavily used.
Full replication: In this approach, all data is copied to every node in the system. This
ensures high availability and fault tolerance but can be resource intensive.
Partial replication: Only a subset of the data is replicated across nodes, based on
certain criteria. This approach strikes a balance between performance and resource
utilization.
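The contrast between full and partial replication can be sketched with in-memory dictionaries standing in for storage nodes (the placement rule here is a made-up illustration, not a real protocol):

```python
# Three hypothetical storage nodes, each holding its own copy of the data.
nodes = [{} for _ in range(3)]

def write_full(key, value):
    """Full replication: every node receives a copy of the data item."""
    for node in nodes:
        node[key] = value

def write_partial(key, value, replicas=2):
    """Partial replication: only `replicas` nodes receive a copy, chosen
    here by a simple deterministic placement rule for illustration."""
    start = sum(key.encode()) % len(nodes)
    for i in range(replicas):
        nodes[(start + i) % len(nodes)][key] = value
```

After `write_full`, every node can serve the item (high availability, triple the storage cost); after `write_partial`, only two of the three nodes hold it.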

r. Caching: Is the process of storing copies of data in a cache, or temporary storage
location, so that they can be accessed more quickly. A cache is a store of recently
used data objects that is closer to one client or a particular set of clients than the
objects themselves. When an object is needed by a client process, the caching service
first checks the cache and supplies the object from there if an up-to-date copy is
available. If not, an up-to-date copy is fetched.
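The check-cache-first behaviour described above is a minimal cache-aside pattern; a sketch in Python, where `load_from_server` is a hypothetical stand-in for the expensive remote fetch:

```python
cache = {}
server_fetches = []  # records which keys actually reached the server

def load_from_server(key):
    """Stand-in for an expensive fetch from a remote server (hypothetical)."""
    server_fetches.append(key)
    return f"object-{key}"

def fetch(key):
    """Supply the object from the cache if a copy is held;
    otherwise fetch it from the server and cache it."""
    if key not in cache:
        cache[key] = load_from_server(key)
    return cache[key]
```

Fetching the same object twice only contacts the server once; the second request is served from the cache. (A real cache would also check that the copy is up to date, which this sketch omits.)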

s. RMI: A mechanism that allows a calling object to invoke methods on remote
objects or systems over a network. RMI implementations may support object identity
and the ability to pass object identifiers as parameters in remote calls.

t. RPC: A mechanism that enables procedures in processes on remote computers to
be called as if they were procedures in the local address space. RPC systems hide
important aspects of distribution, including the encoding and decoding of parameters
and results, the passing of messages, and the preservation of the required semantics
for the procedure call. RPC systems offer at least access and location transparency.
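The encoding/decoding (marshalling) that RPC systems hide can be illustrated with a simplified in-process simulation; real RPC would transmit the marshalled message over a network, which this sketch deliberately skips:

```python
import json

def add(a, b):
    """The "remote" procedure being exposed."""
    return a + b

PROCEDURES = {"add": add}

def rpc_call(name, *args):
    # Client stub: marshal the procedure name and parameters into a message.
    request = json.dumps({"proc": name, "args": list(args)})
    # ... in a real system the message is transmitted to the server here ...
    # Server stub: unmarshal the message and dispatch to the local procedure.
    msg = json.loads(request)
    result = PROCEDURES[msg["proc"]](*msg["args"])
    # Marshal the result into a reply message, then unmarshal it at the client.
    reply = json.dumps({"result": result})
    return json.loads(reply)["result"]
```

To the caller, `rpc_call("add", 2, 3)` looks like an ordinary local call returning 5; all of the marshalling and message passing is hidden inside the stub, which is exactly the transparency the definition describes.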

u. Middleware: A layer of software whose purpose is to mask heterogeneity and
provide a convenient programming model to application programmers. Middleware
is represented by processes or objects in a set of computers that interact with each
other to implement communication and resource-sharing support for distributed
applications.

v. Pervasive system: A system where interconnected computing devices and sensors
are integrated into everyday objects and environments, enabling ubiquitous access to
information and services.

w. IoT: Refers to the networked interconnection of everyday objects, tools, devices,
or computers. It can also be viewed as a wireless network of sensors that interconnect
all things in our daily life. These things can be large or small and they vary with
respect to time and place. The idea is to tag every object using RFID or a related
sensor or electronic technology such as GPS.
EXERCISE PART C

1. One advantage of tightly coupled systems is that they are easier to program, since
they share the same clock and usually shared memory.

Two advantages of loosely coupled systems over tightly coupled systems:
- Loosely coupled systems are more scalable.
- Loosely coupled systems are generally more reliable/fault-tolerant, since the processors do
not share memory and each has its own local memory.

2. a) Transparency: Is defined as the concealment from the user and the application
programmer of the separation of components in a distributed system, so that the
system is perceived as a whole rather than as a collection of independent components.
b) Access transparency: Enables local and remote resources to be accessed using identical
operations.
c) Location transparency: Enables resources to be accessed without knowledge of their
physical or network location.
d) Replication transparency: Enables multiple instances of resources to be used to increase
reliability and performance without knowledge of the replicas by users or application
programmers.
e) Failure transparency: Enables the concealment of faults, allowing users and application
programs to complete their tasks despite the failure of hardware or software components.
f) Migration transparency: Allows the movement of resources and clients within a system
without affecting the operation of users or programs.
g) Concurrency transparency: Enables several processes to operate concurrently using
shared resources without interference between them.
EXERCISE PART D

1. URL – UNIFORM RESOURCE LOCATOR

Three different sorts of web resources that can be named by URLs:
• Photos
• Files
• Web pages

2. Example of an HTTP URL

• http://www.google.com/search?q=Obama
Main components of an HTTP URL, stating how their boundaries are denoted and illustrating
each one from the example.
Components
• Scheme: http
• Server DNS name: www.google.com
• Path name: /search
• Query: q=Obama
• Fragment: none
Boundaries
• The boundary between the scheme and the rest of the URL is denoted by ://
• The boundary between the server DNS name and the path name/query/fragment
is denoted by the first / after the server DNS name.
• The boundary between the path name and the query is denoted by ?
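These components and boundaries are exactly what Python's standard `urllib.parse` module extracts; a quick check against the example URL:

```python
from urllib.parse import urlparse, parse_qs

# Split the example URL at the boundaries described above.
parts = urlparse("http://www.google.com/search?q=Obama")

scheme = parts.scheme      # text before "://"
host = parts.netloc        # server DNS name, up to the first "/"
path = parts.path          # from the first "/" to the "?"
query = parse_qs(parts.query)  # key/value pairs after the "?"
```

Here `scheme` is 'http', `host` is 'www.google.com', `path` is '/search', and the query parses to {'q': ['Obama']}.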
To what extent is an HTTP URL location-transparent?
- The server DNS name www.google.com is location independent, so we have location
transparency in that the address of a particular computer is not included. Therefore, the
organisation may move the web service to another computer. But if the responsibility for
providing the www-based information service moves to another organisation, the URL
would need to be changed.
3. Middleware in a distributed system acts as a bridge between different software
components, providing services such as communication, synchronization, and transaction
management. It abstracts the complexities of distributed computing, enabling
interoperability between heterogeneous systems and facilitating seamless integration of
applications across the network.
