BIT 316 Database Administration - MMUST


Course Code and Title: BIT 316 – Database Administration

Purpose: This unit provides the knowledge and skills required to install, configure, administer
and troubleshoot the client-server based database management system.
Learning outcomes
i. Describe database server architecture
ii. Plan for a database server installation and then install an instance of the database server
iii. Manage files and databases including determining resource requirements
iv. Choose a login security plan and implement database permissions and describe how to
help protect the database server in an enterprise network
Course content
Week #1:
Database Server Fundamentals
 Architecture
 Integration
 Files and Databases
 Security

Week #2:
Database server installation
 Hardware and Software requirements
 Methods of installation
 Configuring the database server
 Troubleshooting
 Assignment 1
Week #3:
Managing database files
 Data structures
 Creating and managing databases
 Optimizing the database
 Performance considerations

Week #4:
Managing security
 Authentication
 Users and roles
 Permissions
 Application and enterprise security
 CAT 1

Week #5:
Administrative tasks
 Configuring tasks
 Scheduling routine
 Maintenance tasks

 Assignment 2

Week #6:
Backing up databases
 Preventing data loss
 Database recovery model
 Backup and methods of backup
 Backup strategy and performance issues

Week #7:
Restore a database
 Recovery process
 Restoring a database from different backups
 Database server performance monitoring and tuning
 Transferring data
 Assignment 3 (Practical)

Week #8:
Maintaining High Availability
 Failover clustering
 Standby servers and log shipping
 CAT 2

Week #9:
Distributed Data
 Replication
 Replication agents and types
 Replication Models
 Fragmentation

Week #10:
 Revision of CATs and Assignments

Mode of Delivery
 Lectures
 Tutorials
 Directed reading
 Discussions
 Practical/laboratory sessions
 Assignments/projects

Assessment
 Assignments and CATs: 30%
 Exam: 70%

References:
1. Stallings, W. Cryptography and Network Security. 4th ed. Prentice Hall.
2. Denning, D. E. Cryptography and Data Security. Addison-Wesley. ISBN 0201101505.
3. Amoroso, E. Fundamentals of Computer Security Technology. Prentice Hall. ISBN 0131089293.
4. Schou, C. and Shoemaker, D. Information Assurance for the Enterprise: A Roadmap to Information
Security (McGraw-Hill Information Assurance & Security). McGraw-Hill.
5. Boyce, J. and Jennings, D. Information Assurance: Managing Organizational IT Security Risks.

Pre-requisites:
 Database Systems

1. DATABASE SERVER FUNDAMENTALS


Database Administration – the whole set of activities undertaken by a DBA to ensure that the
database is always available as needed. Such activities include database security, database
monitoring, troubleshooting and planning for future growth.
Database administrator (DBA) – the person who manages, backs up and ensures the
availability of the data produced and consumed by an organization through its IT systems.

Introduction: A database server is both hardware and software. As software, a database server is
the back-end portion (also known as an instance) of a database application following the client-
server model.
As hardware, it is the physical computer that hosts the database, i.e. a dedicated high-end
machine on which the database resides.

A database server is independent of the database architecture: relational databases, flat files and
non-relational databases can all be accommodated on database servers.

Most DBMSs provide database-server functionality; some, such as MySQL, rely exclusively on the
client-server model for database access, while others use an embedded database.
Normally, users access the database server either through a front-end running on the users'
terminals, which displays the requested data, or through a back-end running on the server, which
handles specific tasks such as data analysis and storage.
Examples of proprietary database systems are Oracle, MS SQL Server, DB2 and Informix.
Open-source databases include PostgreSQL and MySQL, among others.

Database server Architecture

Client – Server Architecture


[Diagram: client computers connected over a network to a central database server]

The database application is separated into two parts: the front-end running on the client computers
and the back-end running on the server. The client executes the database application that accesses
the database information and interacts with the user through the keyboard, screen, mouse, etc.
The server executes the database software and handles the functions required for concurrent,
shared access to the database. However, both client and server portions can still be executed
on the same computer.
[Diagram: a database server with its clients]

In distributed databases, one server may need to access a database on another server. In such a
case the requesting server is the client.

NOTE
Besides the two-tier architecture of the client-server model, it is possible to implement a three-tier
architecture, i.e. to add another layer between the two so that the client does not communicate
directly with the database server. Instead, clients interact with an application server, which in turn
communicates with the database system, where query processing and transaction management
take place. This three-tier architecture is very common in large web applications.

Task:
I. Students to find out the advantages and disadvantages of the two-tier and three-tier
architectures.
II. Explain the P2P (peer-to-peer) model (decentralized networking)

Integration
Database integration simply means that multiple applications have their data stored in a single
shared database, i.e. the integration database, so that data is available across all of these different
applications.
Principally, this is necessary to allow data to be shared throughout an organization without the
need for another set of integration services on each application.

Files and Databases


A data file is a collection of related records stored on a storage medium, e.g. a CD or hard disk.
A database, on the other hand, is a collection of data organized in a manner that allows access,
retrieval and use of that data as desired.

Security
Database security protects and secures the database against intentional or accidental threats.
Because of the range of possible dangers, it covers hardware, software, human resources and the
data itself. This mitigates theft and fraud, loss of confidentiality and secrecy, loss of data privacy,
loss of data integrity and loss of availability of data.

It is the collective measures used to protect and secure a database or database management
software from illegitimate use and malicious threats and attacks. It includes a multitude of
processes, tools and methodologies that ensure security within a database.

Assignment: Roles of a DBA

2. DATABASE SERVER INSTALLATION

Hardware and Software requirements (issue them out depending on the version)
Key things to consider are:
 SQL Server requires the NTFS or ReFS file system because these are more secure than the
FAT32 file system
 SQL Server setup blocks installations on read-only, mapped or compressed drives
 Installation from a Remote Desktop session will fail unless the installation media is shared on a
network or mapped drive, or presented as an ISO to a virtual machine
 SQL Server Management Studio requires .NET 4.6.1 as a prerequisite; this is usually
included in the package once selected
 Other requirements depend on the operating system in place

Methods of installation

Configuring the database server

Troubleshooting

Assignment 1

3. MANAGING DATABASE FILES


 Data structures
 Creating and managing databases
 Optimizing the database
 Performance considerations

Assignment 1.
1. Describe the principal functions of a database administrator.
2. Discuss how the role of the database administrator might be partitioned among a group
of people in a larger organisation.
3. Discuss the issues in tuning database systems, and describe examples of typical tools
used in the process of database administration.

Functions of a Database Administrator


1. Management of data activity (the way data is accessed and used within an
organization) – formal policies are required to define which users should be authorized
to access which data, and what levels of access they should be given; for example, query,
update, copy, etc. Processes also need to be defined to enable users to request access to
data which they would not normally be able to use.
2. Management of database structure – the definition of the structure of the databases
within the organisation. The extent to which this is required will vary greatly depending
on how mature the organisation is in terms of existing database technology. If the
organisation is in the process of acquiring and setting up a database or databases for the
first time, many fundamental decisions need to be taken regarding how best this should
be done. Decisions include which database best suits the organization, how many databases are
required, whether various environments (e.g. development and production) are needed, and the
interfaces between different database systems.
3. Supporting application developers – The DBA plays an important role in assisting
those involved with the development and/or acquisition of software for the organisation.
E.g. Advice on the details of existing tables and tablespaces in the databases of the
organisation. Advice on the use of indexes, different index types available and indexes
currently set up, details of security standards and procedures etc
4. Information dissemination – When new releases of DBMSs are introduced within the
organisation, or new applications or upgrades come into use, it is important that the DBA
is sensitive to the needs of the user population as a whole, and produces usable
documentation to describe the changes that are taking place, including the possible side
effects on users and developers
5. Designing for the future – An important overall principle to be applied when trying to
estimate the requirements for storage and performance tuning is to design for the future.
It will be part of the DBA's role to collect information from those responsible for the
introduction of new database applications, about the volumes of data and processing
involved.
6. Data fragmentation - Over a period of time during which a database table is being used,
it is likely to experience a significant number of inserts, updates and deletions of data.
Because it is not always possible to fit new data into the gaps left by previously removed
data, the net effect of these changes is that the storage area used to contain the table is left
rather fragmented, containing a number of small areas into which it is hard to insert new data.
Some DBMSs provide a defragmentation utility to reclaim this space; where no such utility is
available, it may be necessary to export the table to an operating system file and re-import it into
the database, so that the pockets of unusable free space are removed.
7. Tables and tablespaces – A 'tablespace' is a unit of disk space which contains a number
of database tables. Usually each tablespace is allocated to data of a particular type; for
example, there may be a tablespace established to contain user data, and another one to
contain temporary data, i.e. data that is only in existence for a short time and is usually
required in order to enable specific processes such as data sorting to take place.
8. Use of the data dictionary - Database administrators should develop a good knowledge
of the most commonly used tables in the dictionary, and reference it regularly to retrieve
information about the current state of the system. The way in which dictionaries are
organised varies greatly between different DBMSs, and may change significantly with
different releases of the same DBMS.
9. Security - the DBA is the foremost person with responsibility for ensuring the day-to-day
security of the stored data. This process begins with the DBA working in conjunction
with managers, key users and owners of the organisation's data, to establish appropriate
security mechanisms and procedures.

General duties of a DBA are:


1. Database design – Plays a key role in both logical and physical design phase
2. Implementation and operation of the database – Guides database usage on a daily basis,
adds and deletes data etc
3. Coordination with users – receives user requests and resolves all conflicting requests
4. Backup and recovery – Plans for backup periodically and establishes procedure for
recovering from failure
5. Monitoring performance – Uses specialized software to calculate and record operating
statistics with the aim of improving performance
6. System security – responsible for allocating user rights and privileges

Roles of a DBA to the organization


1. Data policy – establishing policy for data use
2. Marketing
3. Data standards enforcement
4. Return on organizational data investment
5. Forum for data conflict resolution
6. Data availability
Qualities and roles of the DBA function
 Trustworthy - clearly a major part of the security of the organisation is in the hands of the
DBA.
 Cool under pressure. Should disasters arise, for example at the database application or
whole-DBMS level, it is likely that the DBA will be involved in trying to resolve the
situation, with minimum disruption to the users and clients of the organisation.
 Flexible. It is possible that years of hard-won knowledge relating to a specific DBMS
may from time to time become obsolete, either because that system is replaced by a
substantial new release, or because for business reasons, the organisation decides to
migrate to a totally different DBMS.
 Respected by application development staff and management.
 Good communication – Communication with high-level management is required, in
order that the DBA function can be sufficiently informed about strategic directions, so
that the database strategy for the organisation can be closely aligned with the business
strategy.
 Technical knowledge. In addition to the need to have a sound knowledge of the various
utilities and database languages being used to administer user privileges and monitor
database activity, a detailed understanding of the interfaces to other database systems is
often required.
 Good understanding of the organisation and its priorities; ability to liaise with
management.
 Good arbitrator, in situations where it is necessary to make decisions regarding disputes
over access to data or processes.

Security
Security is a major issue in database systems. It is the process of determining how to make the most
appropriate use, within the organisation, of the security mechanisms provided by the DBMS and
other software in use, together with a clear definition and documentation of supporting policies and
processes to ensure that data and programs are properly protected. Typical issues that should be
addressed include:
 Procedures for the allocation of passwords. Many database systems provide considerable
flexibility in the use of passwords, enabling them to be set from the database level right
down to the individual attribute level.
 Procedures for the administration of database privileges, such as the granting and
revocation of access to tables, query and update transactions.
 The use of encryption techniques for encoding data while it is being transmitted over
networks, including any intranet and the Internet.
 The establishment of procedures for transaction logging and recovery from a range of
different failures.
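
As an illustration of privilege administration, the statements below are a minimal sketch using
Oracle-style GRANT/REVOKE syntax; the role, user and table names (sales_clerk, jmwangi, orders)
are hypothetical.

CREATE ROLE sales_clerk;                               -- group privileges under a role
GRANT SELECT, INSERT, UPDATE ON orders TO sales_clerk; -- privileges on an individual table
GRANT sales_clerk TO jmwangi;                          -- assign the role to a user
REVOKE UPDATE ON orders FROM sales_clerk;              -- later withdraw a privilege from the role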

DATABASE SECURITY
 It deals with the various aspects of protecting the database content, its owners, and its
users. It ranges from protection from intentional unauthorized database uses to
unintentional database accesses by unauthorized entities (e.g., a person or a computer
program).
 Database access control deals with controlling who (a person or a certain computer
program) is allowed to access what information in the database. The information may
comprise specific database objects (e.g., record types, specific records, data structures),
certain computations over certain objects (e.g., query types, or specific queries), or using
specific access paths to the former (e.g., using specific indexes or other data structures to
access information).
 Database access controls are set by special personnel authorized by the database owner,
who use dedicated, protected security DBMS interfaces.

Database server installation


 The job of the DBA is to decide what database system and architecture is suitable for the
company.
 The DBA must understand the business strategies and objectives, and how the database
architecture impacts the development and priorities of the organization's information
systems.
 He/she is the person to establish policies for maintaining and dealing with database
systems in the organisation.
 He/she is also responsible for ensuring that the system operates with adequate
performance to meet the demands of the organisation.
 Faced with such great responsibility, the DBA needs to know the various issues of client-
server architecture and what impact it has on the organisation.

NOTE: The client-server architecture is preferred because of its potential for portability,
scalability and interconnection of clients and servers using various network configurations.
In addition, when evaluating an RDBMS on client-server systems, the DBA must consider many
factors besides the architectural model: transaction control, performance, security, integrity,
procedural logic and other issues also figure prominently.

Tools used in database administration


The DBA uses the following tools to support the client-server system:
1. Data dictionary – a set of tables and views holding information that describes the contents,
format and structure of a database and the relationships between its elements, used to
control access to and manipulation of the database. When responding to a client request,
the server can find any information it needs about the data in the data dictionary, using the
same mechanisms it employs on behalf of clients to read its own data. These are commonly
known as recursive requests because the server generates them automatically (a short sketch
of data dictionary queries and a stored procedure follows this list).
2. Stored procedures - are groups of SQL statements which are stored in the server. They
can be run like procedures written in standard programming languages, and allow
portions of application code (normally commonly executed tasks or transactions) to move
from client to server.
3. Data buffering - During execution of statements that query or update the database,
certain data (sent from or requested by the client) is placed in memory. The server tries to
keep the data in memory to save disk input/output (I/O) should the next request require
data that's still in the buffer. The buffer size should be configured so that the DBA can
optimize the memory-versus-speed trade-off.
4. Asynchronous data writing - This feature is used to try to smooth out peaks in I/O that
may arise during database processing. If I/O slows down, the server starts writing data
blocks from the buffers to disk. Since this write activity is scheduled during periods of
otherwise low I/O, there is no degradation in performance, even if the data in the buffer is
changed later. If an urgent need for buffer space develops, the disk write is already done;
the new request can be serviced without a time-consuming disk wait.
5. Data integrity enforcement on a server - With the client-server architecture, all
database processing can be consolidated on a server machine. Such consolidation
provides an opportunity to achieve a high degree of data integrity. Since every database
request is processed by a server, if database constraints are defined at the server level,
they can be consistently applied. Server-based enforcement of data integrity guarantees
data correctness and integrity by having the server enforce constraints on any changes or
updates to the database. As such constraints are held centrally, they cannot be bypassed
as the data can only be accessed through the server software.
6. Concurrency control features – these allow multiple clients to access the server while still
preserving the integrity of shared data. Updates by users are controlled and isolated to
prevent one user's changes from disrupting or overwriting another's. One of the challenges
in the development of client-server applications is to gain the maximum degree of
parallelism on the client computers while providing protection against problems such as
lost updates and inconsistent reads.
7. Communications and connectivity - A characteristic of client-server architecture is that
a client application and server software are on different computers. The protocol used to
pass messages, SQL and data between them is therefore of crucial importance. As there is
no single standard protocol for computer networks, the server has to offer tools to handle
the complexities of multivendor networks
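
To make the first two tools concrete, here is a minimal sketch: the data dictionary queries use
Oracle's USER_TABLES and USER_INDEXES views, and the stored procedure uses SQL Server
T-SQL; the table, column and procedure names are hypothetical.

-- 1. Interrogating the data dictionary (Oracle): tables and indexes owned by the current user
SELECT table_name, tablespace_name FROM user_tables;
SELECT index_name, table_name, uniqueness FROM user_indexes;

-- 2. A stored procedure (SQL Server T-SQL): a commonly executed task moved from client to server
CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @CustomerID INT
AS
BEGIN
    SELECT order_id, order_date, total_amount
    FROM dbo.orders
    WHERE customer_id = @CustomerID
    ORDER BY order_date DESC;
END;

-- The client then issues a single call instead of sending the full query text over the network
EXEC dbo.usp_GetCustomerOrders @CustomerID = 1042;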

CLIENT-SERVER SECURITY
Server security
The built-in security mechanism of the database server provides central data access control,
thereby reducing the need for security measures in the client applications. The server normally
provides three basic levels of security i.e.
1. User enrolment. This involves granting a user access to the database server itself.
2. Access privileges. This grants users access and privileges to individual database objects.
3. Resource allocation. This controls the amount of disk space allocated on the server to
each user's database objects.
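
A minimal Oracle-style sketch of the three levels (the user name, password, table and quota
figures are assumptions for illustration):

-- 1. User enrolment: grant access to the database server itself
CREATE USER jmwangi IDENTIFIED BY Str0ngPassw0rd DEFAULT TABLESPACE users;
GRANT CREATE SESSION TO jmwangi;

-- 2. Access privileges: rights on individual database objects
GRANT SELECT, INSERT ON hr.employees TO jmwangi;

-- 3. Resource allocation: limit the disk space the user's objects may consume
ALTER USER jmwangi QUOTA 100M ON users;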

Client security
As the general administration of security in a client-server system is handled by the DBA at the
server end, the client needs only to be concerned with errors returned from the server when an
unauthorized operation is detected.

Network security
The distribution of a system, whether as a client-server or a truly distributed database system,
raises additional issues associated with protecting the data as it is transmitted across the
network. This is most often dealt with by encryption algorithms, which encode the
data, rendering it useless if it is intercepted during transmission. Following reception of
encrypted data, the receiver of the data will run the decryption algorithm to re-establish the
original data values.

Checking of security violations


Journal logging and other facilities are used to locate security breaches or violations in the
server. The reason for a user's failed action should also be logged so that the DBA can distinguish
between simple human error and attempted security violations.

DATABASE TUNING
This is the process performed by database administrators of optimizing performance of
a database. In the enterprise, this usually means the maintenance of a large database management
system (DBMS) such as Oracle or MySQL. Some of the major considerations in database tuning
are:
1. Tuning SQL – aims at making the SQL more efficient. The DBA activity in tuning specific
transactions should be focused on those which:
• Are run sufficiently often to have a noticeable impact on performance;
• Access sufficient numbers of records (including intermediate results obtained during the
evaluation of the query) to provide scope for transaction tuning.
Guidelines in tuning SQL transactions are:
 Use indexes on primary and foreign keys
 Unique indexes are faster than non-unique indexes
 Use of short table aliases in queries can improve performance
TASK: Students to discuss the guidelines applied when tuning SQL transactions
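
As a brief illustration of the first two guidelines, the statements below index a hypothetical orders
table (standard SQL; the table and column names are assumptions for illustration only):

-- Index the foreign key used when joining orders to customers
CREATE INDEX ix_orders_customer_id ON orders (customer_id);

-- A unique index on a candidate key is faster to search than a non-unique one
CREATE UNIQUE INDEX ux_orders_order_no ON orders (order_no);

-- Short table aliases keep frequently run join queries compact
SELECT o.order_no, c.name
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id;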
2. Tuning disk I/O - disks are slow when compared with solid-state devices. For this reason,
most data structures and design options are geared around minimizing disk input and output.
Guidelines in tuning disk I/O
 Reduce disk contention. Contention occurs when several users try to access the
same disk, at the same time. If contention is noticeable on a particular disk and
queues are visible, then distribute the I/O by moving heavily accessed files onto a
less active disk. Distribute tables and indexes on different disks if resources are
available.
 Allocate free space in data blocks (i.e. space in a block is used by INSERTS and
also UPDATES which increase the size of a row).
 Seek to minimize dynamic expansion. For example, with Oracle, set up storage
parameters in the CREATE table and CREATE tablespace statements so that Oracle
will allocate enough space for the maximum size of the object. Hence space will not
need to be extended later.
 Tune the database writer DBWR (an Oracle-specific process which writes out data
from the buffer cache to the database files).
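
An Oracle-style sketch of pre-allocating space and reserving free space in data blocks is shown
below; the file path, sizes and object names are illustrative assumptions only.

-- Place the tablespace on its own disk and allocate enough space up front
CREATE TABLESPACE sales_data
    DATAFILE '/disk2/oradata/sales_data01.dbf' SIZE 2G
    AUTOEXTEND ON NEXT 100M MAXSIZE 4G;

-- Reserve 20% of each block for row growth caused by UPDATEs, reducing row migration
CREATE TABLE orders (
    order_id    NUMBER PRIMARY KEY,
    customer_id NUMBER,
    status      VARCHAR2(20)
) PCTFREE 20 TABLESPACE sales_data;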
3. Tuning memory – accessing disk is very expensive in terms of performance, whereas
access to memory is much faster. Hence, we want the majority of accesses to be to
memory rather than to disk.

NOTE: Memory is also required for operating system use, hence other factors need to be taken
into account, such as memory allocation for paging and virtual memory. For example, if the
system is multi-user, then an increase in the number of users currently online could alter the
performance on the machine quite dramatically.

4. Tuning contention – The term 'contention' refers to a problem which can arise in most areas
of computing. It occurs when several processes make an attempt to gain access to the same
resource at the same time. This will obviously result in a performance degradation, as one or
more processes will need to wait until the resource is available. There are three main areas
concerning memory contention in Oracle:
 Data blocks. Usually occurs when users attempt to update the same block.
 Rollback segments. All transactions use the rollback segments, so if there is only a
small number of segments, contention is quite likely. A guideline given by Oracle is
to use one rollback segment per five active users.
 Redo log buffers. Any block modification will write data to this buffer. The 'redo
space waits' statistic can be used to provide information on contention for this
buffer.
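
One hedged way of checking for rollback segment contention in Oracle is to compare waits against
gets in the V$ROLLSTAT, V$ROLLNAME and V$WAITSTAT dynamic views; the query below is only
an illustrative starting point.

-- Waits versus gets per rollback segment: a consistently high ratio suggests adding segments
SELECT n.name,
       s.gets,
       s.waits,
       ROUND(100 * s.waits / NULLIF(s.gets, 0), 2) AS wait_pct
FROM   v$rollstat s
JOIN   v$rollname n ON n.usn = s.usn;

-- Block-class waits, including the undo (rollback) block classes
SELECT class, count FROM v$waitstat WHERE count > 0;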

Tools to assist performance tuning


Depending on the type of database, there will be a selection of tools available to monitor the
system, allowing the DBA (or similar) to see the effects of tweaking the system. Monitors can be
broken down into two types i.e. Software monitors and Hardware monitors.
Software monitors – These are programs which can be called up when necessary to provide
statistics on the state of the system. These tools are flexible and may even be specially written
by the DBA, although most vendors now supply them. Unfortunately, as these tools run on the
system itself, they add their own performance overhead, requiring CPU time, etc., in order to
execute.
Hardware monitors – These consist of electronic devices which record data collected by probes,
where each probe is connected to circuitry in the machine and/or its peripherals. A major
advantage of these is that they do not interfere with the operation of the system.
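
In SQL Server, for example, the dynamic management views act as a built-in software monitor.
The sketch below lists the most CPU-hungry cached statements; the views named are real, but the
query itself is only an illustrative starting point.

-- Top 10 cached statements by total CPU time consumed
SELECT TOP (10)
       qs.execution_count,
       qs.total_worker_time AS total_cpu_time,
       SUBSTRING(st.text, 1, 200) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;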

Assignment: Students to find out other performance tools


Administrative tasks
 Configuring tasks
 Scheduling routine
 Maintenance tasks
 Assignment 2

Week #6:
Backing up databases
 Preventing data loss

 Database recovery model


 Backup and methods of backup
 Backup strategy and performance issues

Backing up databases
Database backup is the process of backing up the operational state, architecture and stored data
of the database software. It enables the creation of a duplicate instance or copy of a database in
case the primary database crashes, is corrupted or is lost.
Preventing data loss

Database recovery model

A recovery model is a database configuration option that determines the types of backup that
can be performed, and provides the ability to restore the data or recover it from a failure.
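
In SQL Server, for instance, the recovery model is set per database; a minimal sketch with a
hypothetical database name:

-- Inspect the current recovery model of every database
SELECT name, recovery_model_desc FROM sys.databases;

-- Switch to the FULL recovery model so that log backups (and point-in-time restore) are possible
ALTER DATABASE SalesDB SET RECOVERY FULL;
-- Alternatives: SIMPLE (no log backups) or BULK_LOGGED (minimally logged bulk operations)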

Backup and methods of backup


Methods of data backup
1. Full backups – (also called image backups) all data is backed up to a target drive or disk with
each backup. All documents and files are stored in one file, making working with the
backups simple.
Pros – one, restoring such backups is quicker than restoring incremental or differential
backups. Two, managing them is easier as only one file needs to be restored.
Cons – regular full backups call for more space than differential or incremental backups.
2. Incremental backups – back up new or changed data, basing the changes on the previous
incremental backup rather than on the initial full backup.
Pros – Requires much less space than differential or full backups
Cons – slower to restore than the full and differential backups; also managing the
backups is more complex as the files from a backup “chain” are required for restoration.
3. Differential backups – only the changed or new data since the last full backup is backed
up. When restoring, the base backup and the differential backup files need to be restored.
Pros – calls for much less space than full backup
Cons – slower in restoring such a backup than a full backup; also managing such a
backup is harder as two files are required
4. Virtual full backups – these use a database to track and manage backup data, which helps
avoid some of the pitfalls of other backup methods. A full copy or replica is taken only
once and does not need to be taken again as long as the storage medium (typically a
network-attached storage location) remains unchanged. The virtual full backup
periodically synchronizes backup data to the database, and virtual full backups are generally
performed automatically by the backup software. Just as with a full backup, to restore such a
backup one chooses the preferred recovery point and the file(s) to recover.
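
The first three methods map naturally onto SQL Server backup statements; the sketch below uses
hypothetical database and file names, and the transaction log backup stands in for the
incremental "chain" between full backups.

-- Full backup: the complete database in one file
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_full.bak' WITH INIT;

-- Differential backup: only the pages changed since the last full backup
BACKUP DATABASE SalesDB TO DISK = 'E:\Backups\SalesDB_diff.bak' WITH DIFFERENTIAL;

-- Transaction log backup: the basis of an incremental-style chain between full backups
BACKUP LOG SalesDB TO DISK = 'E:\Backups\SalesDB_log.trn';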

Backup strategy and performance issues


To prepare for a disaster, an organization needs to have a solid backup strategy. Regular backups
are vital insurance against data-loss catastrophe.
1. Plan your backup strategy – develop a written backup plan that tells you what is being
backed up, where it is being backed up, how often backup will occur, who is in charge of
backing up and who is in charge of monitoring the success of backups.
2. Think beyond just your office and its computers – e.g. get to back up data in mobile
devices, emails and hard copies that may require scanning and storing in electronic form
or safe cabinets.
3. Give highest priority to crucial data – e.g. database and accounting files are the most critical
data assets and can be backed up daily; emails are important but can be backed up weekly;
data that can simply be reloaded from media such as a CD, e.g. music files, can be ignored.
4. Storing and protecting your backups – ensure your local and remote backup solutions
won't be hit by the same disaster at the same time. Replicate copies on separate machines
in two different locations.

5. Think about how you will access critical data and files – consider what data would be
most essential to have available in an unexpected scenario, e.g. during a loss of internet
connectivity, so that work is not blocked while you wait for access to be restored.
6. Test your backups before you need them – ensure your backup has full read-back
verification. Design a recovery plan, and try restoring a few files to a different computer
at a different location so you can test your plan before you actually need it.

Maintaining High Availability


High availability is largely determined by how quickly any problems that occur can be fixed.
Minimizing system downtime calls for planning, documentation, scripting, testing and drills.
To achieve high availability, one needs to:
i. Be clear on likely points of failure, the patterns and volumes of use, business
requirements, strengths and weaknesses of the system architecture
ii. Be methodical in reducing risks and both scripting and rehearsing disaster recovery
iii. Build resilience and 'pain reporting' into both the software and hardware
iv. Be able to fix problems rapidly.

Minimizing unplanned downtime


 Keep an up-to-date list, with contact numbers, of the people who will ensure the server is
available.
 Keep information on who is offsite or onsite up to date.
 Keep passwords and software keys up to date.
 Do occasional disaster drills.
 Do a range of backup types to cope with eventualities
 Speed up the time it takes to have a replacement server ready to switch in by providing
standby servers. Ensure they are pre-configured with at least the correct OS and hotfixes,
and are sufficiently sized to hold the production systems.
Cold standby – a spare server, of the same specification as the production server, which is
configured and ready to receive a copy of the database taken from the backups of the production
server.
Warm standby – a redundant server with a mirrored, or log-shipped, copy of the database that
requires only manual intervention to fail over and be promoted to the production server.
Hot standby – a server kept in sync with the production server, able to detect failure of the
production server and to fail over automatically without the need for manual intervention. The
ideal is a geographically dispersed failover cluster.
Server synchronization for warm or hot standby can be achieved by:
a) Log shipping – the process of automating the backup of transaction log files on a
primary (production) database server and then restoring them onto a standby server. This
technique is supported by Microsoft SQL Server, MySQL, PostgreSQL and 4D Server. It is
simple, cheap and dependable. It can maintain a warm standby (but not a hot standby) at a
distance, and you will need to script the role-switching, login synchronization and client
redirects in order to minimize downtime (a short sketch follows this list).
b) Mirroring – the creation and maintenance of redundant copies of a database. It is used
in SQL Server, where two copies of a single database reside on different computers called
server instances. The principal server instance serves the database to clients; the mirror
(secondary) server instance acts as a standby that can take over in case of a problem with
the principal server instance.
c) Failover clustering – a group of servers that work together to maintain high availability
of applications and services. If one node/server fails, another node in the cluster can
take over its workload without downtime.
d) Synchronization – effective in small databases.
e) Replication – an old technology but still very relevant for maintaining high
availability. It is the frequent copying of data from a database on one computer or server
to a database on another, so that all users share the same information. The result is a
distributed database in which users can quickly access data relevant to their tasks without
interfering with the work of others. Replication uses a number of standalone programs
called agents to carry out the tasks associated with tracking changes and distributing data. By
default, replication agents run as jobs scheduled under SQL Server Agent; they can also be run
from the command line and by applications that use Replication Management Objects (RMO).
f) Standby servers – servers that can be brought online when the primary server goes
offline.
g) Fragmentation – a database feature that allows the DBA to control where data is
stored at the table level. It is the task of dividing a table into a set of smaller tables, i.e.
subsets (fragments) of the table.
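
As referenced under log shipping above, the following is a minimal T-SQL sketch of the manual
steps that log shipping automates; the server, database and file names are hypothetical, and in
practice the built-in log shipping jobs schedule the backup, copy and restore steps automatically.

-- On the primary server: back up the transaction log periodically
BACKUP LOG SalesDB TO DISK = '\\standby\logship\SalesDB_1200.trn';

-- On the standby server: restore each log backup, keeping the database ready to accept further logs
RESTORE LOG SalesDB
    FROM DISK = '\\standby\logship\SalesDB_1200.trn'
    WITH STANDBY = 'E:\LogShip\SalesDB_undo.dat';   -- read-only warm standby between restores

-- At failover time: recover the standby copy and bring it online as the new production database
RESTORE DATABASE SalesDB WITH RECOVERY;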
 Replication Models

Task: students to:


I. find out more on agents like Snapshot Agent, SQL server Agent, Log reader Agent,
Distribution Agent, Merge Agent, Queue Reader Agent
II. Give the advantages/disadvantages of the above methods of maintaining high availability
III. Discuss the two replication models in distributed systems, i.e. the passive and active
replication models
