
Difference between authentication and authorization

Authentication:
Identity Verification: Authentication is the process of verifying the identity of a user, device, or
system component.
Credentials: It involves the use of credentials such as usernames and passwords, biometrics, or
security tokens to confirm the user's identity.
Access Grant: Once authenticated, the system grants the user access based on the verified
identity.
Authorization:
Permission Levels: Authorization, on the other hand, is the process of determining what actions
or resources a user is allowed to access after being authenticated.
Access Control: It involves defining and enforcing access control policies, specifying what
specific operations or data the authenticated user can or cannot access.
Granularity: Authorization can be more granular, allowing administrators to set specific
permissions for different users or groups based on their roles or responsibilities.
Post-Authentication: Authorization occurs after authentication, ensuring that even authenticated
users only have access to the resources or actions they are explicitly permitted to use.
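The distinction can be sketched in a few lines of Python. Everything here (the user store, the role names, the permission sets) is hypothetical, chosen only to show authentication and authorization as two separate checks:

```python
import hashlib

# Hypothetical user store: username -> (sha256 of password, role)
USERS = {
    "alice": (hashlib.sha256(b"secret").hexdigest(), "admin"),
    "bob":   (hashlib.sha256(b"hunter2").hexdigest(), "viewer"),
}

# Hypothetical authorization policy: role -> allowed actions
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "viewer": {"read"},
}

def authenticate(username, password):
    """Authentication: verify identity from credentials."""
    record = USERS.get(username)
    if record is None:
        return None
    stored_hash, role = record
    if hashlib.sha256(password.encode()).hexdigest() == stored_hash:
        return role          # identity verified
    return None

def authorize(role, action):
    """Authorization: decide what a verified identity may do."""
    return action in ROLE_PERMISSIONS.get(role, set())

role = authenticate("bob", "hunter2")   # identity check succeeds
print(authorize(role, "read"))          # True: viewers can read
print(authorize(role, "delete"))        # False: only admins can delete
```

Note that `authorize` runs only after `authenticate` has succeeded, mirroring the post-authentication ordering described above.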

Round Robin Scheduling:


Time Slicing: Round Robin is a preemptive scheduling algorithm where each process is
assigned a fixed time unit or time slice, commonly known as a quantum.
Fairness: It ensures fairness by giving each process an equal opportunity to execute within its
allocated time slice.
Circular Queue: Processes are arranged in a circular queue, and the scheduler selects the next
process in line for execution when its time slice expires.
Low Latency for Short Jobs: Round Robin works well for time-sharing systems and workloads with a mix of short and long jobs, as it keeps response times low for short jobs.
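The circular-queue behavior can be sketched as follows. The process set and quantum are illustrative, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin; returns completion time per process.

    burst_times: {pid: cpu_burst}; all processes arrive at t=0.
    """
    queue = deque(burst_times)               # circular ready queue
    remaining = dict(burst_times)
    clock = 0
    completion = {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])   # run for one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock          # process finished
        else:
            queue.append(pid)                # back of the circular queue
    return completion

# Three processes, quantum of 2 time units
print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
```

The short job P3 finishes after only one slice, illustrating why Round Robin gives short jobs good response times.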
Least Recently Used (LRU):
LRU is a page replacement algorithm used in the context of virtual memory systems to manage
page faults efficiently.
It prioritizes keeping in memory the pages that have been most recently used.
Each page in memory has a timestamp or a usage counter indicating when it was last accessed.
When a page fault occurs and there is no free frame in memory, LRU replaces the page with the oldest timestamp, i.e., the least recently used page.
Implementing an ideal LRU algorithm can be challenging due to the need for accurate time-
stamping or tracking of page usage.
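A common workaround is to track recency order rather than exact timestamps. The sketch below uses Python's `OrderedDict` for this; the frame count and reference string are illustrative:

```python
from collections import OrderedDict

class LRUPageTable:
    """LRU page replacement sketch using recency order.

    The OrderedDict keeps pages ordered by last access; the front
    entry is the least recently used page and is evicted on a page
    fault when no free frame remains.
    """
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.frames = OrderedDict()   # page -> placeholder value
        self.faults = 0

    def access(self, page):
        if page in self.frames:
            self.frames.move_to_end(page)        # mark most recently used
        else:
            self.faults += 1                     # page fault
            if len(self.frames) >= self.num_frames:
                self.frames.popitem(last=False)  # evict the LRU page
            self.frames[page] = True

table = LRUPageTable(num_frames=3)
for page in [1, 2, 3, 1, 4, 5]:
    table.access(page)
print(table.faults)   # page faults for this reference string
```

Accessing page 1 a second time moves it to the back of the order, so page 2 (not 1) is evicted when page 4 arrives.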
Operating System Overview:
Definition: An operating system (OS) is system software that acts as an intermediary between
computer hardware and user applications. It provides a set of services to manage hardware
resources and facilitate efficient execution of programs.
Resource Management:
Processor Management: Allocates CPU time to processes, schedules tasks, and manages
process execution.
Memory Management: Controls and organizes system memory, allocating space to programs
and data.
File System Management: Manages file creation, deletion, and organization, ensuring data
persistence.
Device Management:
I/O Management: Controls input and output operations, managing devices like keyboards,
printers, and storage.
Device Drivers: Interface between hardware devices and the operating system, facilitating
communication.
User Interface:
Command Line Interface (CLI): Allows users to interact with the system through text commands.
Graphical User Interface (GUI): Provides a visual environment with icons and windows for user
interaction.
Security and Protection:
User Authentication: Verifies user identities to control access.
Access Control: Defines and enforces permissions for resource access, ensuring data security.
Communication and Networking:
Network Protocols: Facilitates communication between devices on a network.
Interprocess Communication: Allows processes to share data and communicate with each
other.
Examples of Operating Systems:
Windows, Linux, macOS
User View vs Kernel View in Operating Systems:
User View:
Users interact with the operating system through the user view, which includes the user
interface, such as command-line or graphical interfaces.
Users launch and execute applications within the user view without direct involvement in the
management of hardware resources.
The user view abstracts away the complexities of hardware management, allowing users to
focus on application functionality and tasks.
Users operate within a controlled environment and have limited access to system resources to
ensure system stability and security.
The user view emphasizes tasks related to applications and user interactions, providing a
simplified and task-oriented perspective.
Kernel View:
The kernel view involves the core of the operating system, known as the kernel, which directly
manages hardware resources such as CPU, memory, and devices.
Applications in the user view communicate with the kernel through system calls, requesting
services such as file operations, memory allocation, or I/O.
The kernel view is responsible for allocating and deallocating resources efficiently, ensuring fair
access among competing processes.
Kernel operations involve low-level functions and direct interaction with hardware components,
optimizing resource utilization and system performance.
The kernel enforces security policies, controls access to system resources, and ensures the
isolation of processes to prevent unauthorized interference.
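The boundary between the two views can be seen in ordinary file I/O. In the sketch below, each `os.*` call is a thin wrapper over a POSIX system call, the gateway from the user view into the kernel view (the file path is created on the fly so the example is self-contained):

```python
import os
import tempfile

# Create a scratch file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("hello kernel")

# Each call below crosses from user mode into the kernel:
fd = os.open(path, os.O_RDONLY)   # open(2): kernel allocates a file descriptor
data = os.read(fd, 64)            # read(2): kernel copies bytes to user space
os.close(fd)                      # close(2): kernel releases the descriptor
print(data.decode())
```

The user-view code never touches the disk controller directly; the kernel performs the hardware access and hands back the result.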

Multilevel Queue Scheduling:


Processes are divided into multiple queues based on priority or other characteristics, creating a
multilevel structure.
Each queue may have its scheduling algorithm and time quantum, allowing for flexibility in
managing different types of processes.
Queues are often organized with different priority levels, where processes in higher-priority
queues are scheduled before those in lower-priority queues.
Processes may move between queues based on dynamic priorities, aging, or other criteria,
adapting to changing workload characteristics.
Each queue may use a different scheduling algorithm, such as Round Robin for time-sensitive
tasks in one queue and First Come First Serve for another.
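A two-level version of this structure can be sketched as follows, with Round Robin in the high-priority queue and FCFS in the low-priority one. The strict-priority rule, process sets, and quantum are illustrative, not a full scheduler:

```python
from collections import deque

def multilevel_queue(high, low, quantum):
    """Two-level queue sketch: high-priority queue uses Round Robin,
    low-priority queue uses FCFS, and the low queue runs only when
    the high queue is empty. Returns (pid, completion_time) in order."""
    high = deque(high.items())   # (pid, remaining) pairs, RR with quantum
    low = deque(low.items())     # FCFS: each process runs to completion
    clock, order = 0, []
    while high or low:
        if high:                              # strict priority: high first
            pid, rem = high.popleft()
            run = min(quantum, rem)
            clock += run
            if rem - run > 0:
                high.append((pid, rem - run))
            else:
                order.append((pid, clock))
        else:
            pid, rem = low.popleft()
            clock += rem                      # FCFS runs to completion
            order.append((pid, clock))
    return order

print(multilevel_queue({"H1": 4, "H2": 2}, {"L1": 3}, quantum=2))
```

With strict priority, L1 cannot run until both high-priority processes finish, which is exactly why real systems add aging or queue migration to avoid starvation.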
Explain any two CPU scheduling algorithms
1. First Come First Serve (FCFS):
FCFS is a non-preemptive scheduling algorithm where processes are executed in the order they
arrive in the ready queue.
The process that arrives first is the first to be executed, and subsequent processes are
scheduled in the order of their arrival.
Advantages:
Simple and easy to understand.
No starvation; every process eventually gets a chance to execute.
Disadvantages:
Poor performance in terms of turnaround time, especially if a long process arrives first (convoy
effect).
Not suitable for time-sensitive tasks.
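A minimal FCFS sketch, using illustrative (pid, arrival, burst) tuples, shows the convoy effect directly:

```python
def fcfs(arrivals):
    """FCFS sketch: run processes in arrival order, non-preemptively.
    arrivals: list of (pid, arrival_time, burst_time).
    Returns waiting time per process."""
    clock = 0
    waiting = {}
    for pid, arrival, burst in sorted(arrivals, key=lambda p: p[1]):
        clock = max(clock, arrival)      # CPU may sit idle until arrival
        waiting[pid] = clock - arrival   # time spent in the ready queue
        clock += burst                   # run to completion
    return waiting

# Convoy effect: a long job arriving first delays every job behind it.
print(fcfs([("P1", 0, 10), ("P2", 1, 2), ("P3", 2, 2)]))
```

The two short jobs wait 9 and 10 time units respectively behind the long first arrival.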
2. Shortest Job Next (SJN) or Shortest Job First (SJF):
SJF is a non-preemptive scheduling algorithm: whenever the CPU becomes free, the scheduler picks the ready process with the shortest total burst time.
Its preemptive variant, Shortest Remaining Time First (SRTF), compares a newly arrived process's burst time with the remaining time of the currently executing process and preempts the running process if the new arrival is shorter.
Advantages:
Minimizes waiting time and improves turnaround time.
Efficient for minimizing the total time processes spend in the system.
Disadvantages:
Requires knowledge of burst times, which may not be known in advance.
Can lead to starvation for longer processes if consistently shorter processes arrive.
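The non-preemptive variant can be sketched as below, again with illustrative (pid, arrival, burst) tuples and the assumption that burst times are known in advance:

```python
def sjf(arrivals):
    """Non-preemptive SJF sketch: whenever the CPU is free, pick the
    ready process with the shortest burst time.
    arrivals: list of (pid, arrival_time, burst_time).
    Returns waiting time per process."""
    pending = sorted(arrivals, key=lambda p: p[1])   # order by arrival
    clock, waiting = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                     # CPU idle: jump to next arrival
            clock = pending[0][1]
            ready = [p for p in pending if p[1] <= clock]
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # shortest burst
        pending.remove((pid, arrival, burst))
        waiting[pid] = clock - arrival
        clock += burst                    # run to completion
    return waiting

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
```

Note that P3, though it arrives last, runs before P2 because its burst is shortest; a steady stream of such short arrivals is what can starve a long job.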

CPU-I/O Burst Cycle:


CPU Burst:
The cycle begins with the CPU burst, during which the process performs computations or
executes instructions. This phase involves the CPU actively processing and running instructions
from the program.
I/O Burst:
Following the CPU burst, the process may enter the I/O burst phase. In this stage, the process
performs I/O operations, such as reading from or writing to external devices like disks,
keyboards, or network interfaces.
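The alternation can be sketched by a toy process that loops between computation and blocking. Here `time.sleep` stands in for a blocking I/O operation, and the burst sizes are illustrative:

```python
import time

def process(bursts):
    """Simulate a process alternating CPU bursts and I/O bursts.
    bursts: list of (cpu_work, io_wait_seconds) pairs."""
    log = []
    for i, (cpu_work, io_wait) in enumerate(bursts):
        total = sum(range(cpu_work))   # CPU burst: pure computation
        log.append(("cpu", i, total))
        time.sleep(io_wait)            # I/O burst: process is blocked
        log.append(("io", i))
    return log

log = process([(1000, 0.01), (500, 0.01)])
print([entry[0] for entry in log])   # alternating cpu / io phases
```

While the process is blocked in its I/O burst, the scheduler can give the CPU to another process, which is the whole motivation for multiprogramming.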
Process Control Block (PCB) in Operating Systems:
A Process Control Block (PCB), also known as a Task Control Block, is a data structure used by
the operating system to manage and store information about a process. It contains various
fields that provide details about the current state, execution context, and resources associated
with a specific process. Here are the key fields typically found in a PCB:
Process ID (PID):
A unique identifier assigned to each process. It distinguishes one process from another in the
system.
Program Counter (PC):
Indicates the address of the next instruction to be executed by the process.
Registers:
Contains the values of various CPU registers at the time of process execution, including the
accumulator, index registers, and general-purpose registers.
CPU Scheduling Information:
Includes details about the process's priority, scheduling state (e.g., ready, running, blocked), and
other scheduling-related parameters.
Memory Management Information:
Base and limit registers or page tables that define the process's memory boundaries in the
address space.
File Descriptors:
A list or table of open files associated with the process, including details like file pointer, access
mode, and file status.
Pointer to Parent Process:
A reference to the parent process, indicating which process spawned the current one.
Process State Information:
Represents the current state of the process (e.g., new, ready, running, waiting, terminated).
Priority:
Priority level assigned to the process for scheduling purposes, indicating its importance relative
to other processes.
Accounting Information:
Collects statistics such as CPU usage, execution time, and other performance-related metrics.
I/O Status Information:
Includes details about the I/O devices the process is using or waiting for, as well as the status
of pending I/O operations.
Signals and Signal Handlers:
Lists the signals the process is registered to handle, along with the corresponding signal
handler routines.
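The PCB fields above can be sketched as a data structure. Real kernels implement this as a C struct (for example, `task_struct` in Linux) with many more fields; the layout below is a simplified illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PCB:
    """Simplified Process Control Block mirroring the fields above."""
    pid: int                          # Process ID
    program_counter: int = 0          # address of next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    state: str = "new"                # new/ready/running/waiting/terminated
    priority: int = 0                 # scheduling priority
    parent_pid: Optional[int] = None  # pointer to parent process
    open_files: list = field(default_factory=list)  # file descriptors
    cpu_time_used: float = 0.0        # accounting information

pcb = PCB(pid=42, parent_pid=1)
pcb.state = "ready"                   # updated by the scheduler
print(pcb.pid, pcb.state, pcb.parent_pid)
```

On a context switch the OS saves the running process's registers and program counter into its PCB, then restores those of the next process from its PCB.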
