
CMT 109 Database Systems

Chapter Two: Database Environment

2.0 Chapter objectives
By the end of this chapter, the student should:
 Exhibit thorough understanding of the ANSI-SPARC (three-schema) architecture of database design
 Demonstrate thorough understanding of data independence
 Exhibit understanding of the role played by the different database system languages
 Exhibit a flawless understanding of data models and their classification in the database context
 Exhibit proper understanding of the concepts of schema and database mapping

2.1 Introduction
In this chapter, we aim at exploring the ANSI-SPARC architecture of database design. We shall further look at the different types of database languages that are used by the DBMS. In addition, we shall introduce the concept of data models, which we shall examine in depth in subsequent chapters. Last but not least, we shall examine the different multi-user architectures for DBMSs.

2.2 The Three-Schema Architecture (ANSI-SPARC)
The major aim of a database system is to abstract the users' view of the data from the way the database is actually implemented and manipulated. The ANSI-SPARC architecture (named for the American National Standards Institute Standards Planning and Requirements Committee) helps us to achieve this abstraction by defining three levels at which a database design is described. These levels are the external, conceptual and internal levels, as depicted in fig 2.0 below.

The overall intention of the architecture is to separate the user's view of the database from the way the database is physically designed. There are a number of rationales behind this separation, such as:
1. Each user should be able to access the same data but have a customized view of that data, i.e. a user can change the way he/she views the data without affecting other users.
2. The database administrator (DBA) should be able to change the conceptual schema, e.g. by adding a new entity or attribute, without affecting all users.
3. The DBA should also be able to change the storage structures without affecting the users' views.
4. Users' interactions with the database should be independent of storage considerations.
5. The internal structure of the database should be unaffected by changes to the physical aspects of storage, such as migration to a new storage device; e.g. moving the database from one server to another with a different storage device should not affect the database's internal structure.

Fig 2.0: ANSI-SPARC architecture of database design (users 1..n with views 1..n at the external level, the conceptual schema at the conceptual level, the internal schema at the internal level, and the physical data organization of the database)
2.2.0 External level (user view)
This level deals with the users' view of the database; it describes the part of the database that is relevant to each user. It consists of a number of different views of the database, each representing some part of the real world. A view includes only the attributes, relationships and entities the user is interested in; it excludes data that is irrelevant to the user, as well as data the user is not allowed to access.
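For illustration, the external level is essentially what a SQL view provides. A minimal sketch is shown below, assuming the Student table introduced later in this chapter; the view name and the choice of columns are our own, made up for the example.

-- An external view for a user who may see only registration
-- numbers and names, not gender or address:
CREATE VIEW Student_Public AS
SELECT Regno, Name
FROM Student;

Each user or group of users can be given a different view of the same underlying data, which is exactly the customization this level provides.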

2.2.1 Conceptual level
This level describes what data is stored in the database as a whole and how the data is inter-related. The level is a complete view of the data requirements of the organization. It contains the logical structure of the entire database as seen by the Database Administrator (DBA). The data at this level is hardware, software and storage independent.

Ideally, the level contains the logical structure of the entire database as seen by the DBA, including:
 All the entities, their attributes and relationships
 All the constraints on the data
 Security and integrity information
 Semantic information about the data
The level supports each external view, i.e. all data available to a user must be contained in, or derivable from, this level.

2.2.2 Internal level
The last level of abstraction covers the physical implementation of the database, so as to achieve optimal runtime performance as well as storage space utilization. Basically, the level is concerned with:
 How data is stored in the database
 Storage space allocation for data and indexes
 Record descriptions for storage
 Record placement
 Interfaces with the operating system access methods (file management techniques for storing and retrieving data)
 Data compression and data encryption techniques

2.3 Database Schema/Intension
A database schema refers to the overall description of the database, containing a collection of named objects, i.e. tables, views, aliases, indexes, triggers, the database structure, the data types and the constraints on the data.

The schema is usually specified during database design and is not expected to change frequently. Each component of the schema is referred to as a schema construct; e.g. Student and Course are schema constructs in a university context, as in the example below:
Course (Course_Code, Course_Name, Credit_Hrs)
Student (Name, Regno, Gender, CAddress)
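As a sketch, the two schema constructs above could be declared in SQL DDL as follows; the data types and lengths are assumptions, since the text specifies only the attribute names.

-- The schema (intension): a description of the database
CREATE TABLE Student (
    Regno    VARCHAR(10) PRIMARY KEY,
    Name     VARCHAR(50) NOT NULL,
    Gender   VARCHAR(10),
    CAddress VARCHAR(100)
);

CREATE TABLE Course (
    Course_Code VARCHAR(10) PRIMARY KEY,
    Course_Name VARCHAR(50),
    Credit_Hrs  INTEGER
);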
the entities, attributes and relationships together with
2.4 Database State/Instance/Occurrence/Snapshot/Extension
A database extension refers to the data/content stored in a database at a particular moment in time. Unlike the schema, the extension is not fixed: it keeps changing over time because of the insert and delete operations performed on the database.

Example of a database state:

Student
Regno    Name       Gender
001      James      Male
002      Cynthia    Female
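The sketch below, reusing the assumed Student table, shows how ordinary operations move the database from one state to the next while the schema itself stays unchanged; the inserted row is invented for illustration.

-- The state (extension) changes with every insert and delete:
INSERT INTO Student (Regno, Name, Gender)
VALUES ('003', 'Peter', 'Male');

DELETE FROM Student
WHERE Regno = '001';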
2.5 Levels of schema
The ANSI-SPARC architecture defines three levels of schemas, each corresponding to a level of the architecture:
 External schemas: found at the external level of the abstraction. We can have multiple external schemas, also known as subschemas, each corresponding to a particular user's view of the data.
 Conceptual schema: found at the second level of abstraction. The conceptual schema describes all the entities, attributes and relationships, together with the integrity constraints.
 Internal schema: found at the lowest level of abstraction, i.e. the internal level. It is a complete description of the internal model, containing the definitions of stored records, the methods of representation, the data fields, and the indexes and storage structures used.
It is worth noting that a database has only one conceptual schema and one internal schema, but may have multiple external schemas.

2.6 Data Independence
Data independence means that the upper levels are immune to changes done at the lower levels. There are two types of data independence: logical and physical data independence.

2.6.1 Logical data independence
This type of independence means the immunity of the external schemas to any changes done in the conceptual schema. That means the DBA should be able to change the conceptual schema, such as adding or removing entities,
attributes and relationships, without having to change the existing external schemas or having to rewrite the application programs. Obviously, the users affected by the changes need to be made aware of them, but what is vital is that the other users are not affected.
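A small SQL sketch of logical data independence, reusing the assumed Student table and the illustrative Student_Public view from earlier:

-- A change to the conceptual schema: add a new attribute.
ALTER TABLE Student ADD COLUMN Email VARCHAR(50);

-- An existing external schema such as the view
--   CREATE VIEW Student_Public AS SELECT Regno, Name FROM Student;
-- continues to work without any modification.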
2.6.2 Physical data independence
This type of independence means the immunity of the conceptual schema to changes done in the internal schema. Changes such as using different file organizations or storage structures, using different storage devices, or modifying indexes or hashing algorithms should be possible without having to change the conceptual or external schemas.

Ideally, from the user's point of view, the only noticeable change should be a change in performance. In most cases, deterioration in performance is the most common reason why the DBA engineers internal schema changes.
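A corresponding sketch of physical data independence; the index name is invented, and CREATE INDEX, although supported by most SQL DBMSs, is a product-level statement rather than part of the SQL standard:

-- A change to the internal schema: add an index to speed up
-- lookups by name. The conceptual and external schemas, and
-- every existing query, remain untouched.
CREATE INDEX idx_student_name ON Student (Name);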

Fig 2.2 below shows the two-stage mapping achieved by the ANSI-SPARC architecture. Although it promotes a great deal of data independence, the two-stage mapping may be inefficient. For more efficient mapping, ANSI-SPARC also allows the direct mapping of external schemas onto the internal schema, bypassing the conceptual schema. This, however, reduces data independence, since every time the internal schema changes, the external schemas and any dependent application programs may also need to change.

Fig 2.2: Data independence and the ANSI-SPARC architecture (the external/conceptual mappings provide logical data independence; the conceptual/internal mapping provides physical data independence)

2.7 Database Languages
A data sublanguage consists of two parts: a Data Definition Language (DDL) and a Data Manipulation Language (DML). The reason these languages are referred to as
sublanguages is that they do not have constructs for all computing needs, such as the conditional or iterative statements provided by high-level programming languages. Many DBMSs have a facility for embedding the sublanguages in a High Level Language (HLL) such as Java, C++ or Visual Basic; in this scenario, we refer to the HLL as the host language.

2.7.1 Data Definition Language (DDL)
The DDL is usually used by the DBA or database users to specify the database schema, i.e. to define a schema or modify an existing one. DDL can be used to describe and name the entities, attributes and relationships required for the application, together with any associated integrity and security constraints.

After compiling DDL statements, we always end up with a set of tables stored in special files collectively called the system catalog. Though at a theoretical level we could identify a different DDL for each schema in the three levels of the ANSI-SPARC architecture, i.e. a DDL for the external schemas, the conceptual schema and the internal schema, in actual practice there is only one comprehensive DDL that allows the specification of at least the external and conceptual schemas.
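As a sketch of DDL expressing integrity and security constraints on the assumed Course table; the CHECK rule and the role name clerk are invented for the example:

-- An integrity constraint: credit hours must be positive
ALTER TABLE Course
ADD CONSTRAINT chk_credits CHECK (Credit_Hrs > 0);

-- A security constraint: the clerk role may only read course data
GRANT SELECT ON Course TO clerk;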

2.7.2 Data Manipulation Language (DML)
The DML provides a set of operations to support basic manipulations of the data held in the database, i.e.:
 Insertion of new data into the database
 Modification of existing data in the database
 Retrieval of data contained in the database
 Deletion of data from the database
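A minimal SQL sketch of the four operations on the assumed Student table (the values are invented):

INSERT INTO Student (Regno, Name, Gender)
VALUES ('004', 'Alice', 'Female');               -- insertion

UPDATE Student
SET CAddress = 'P.O. Box 123'
WHERE Regno = '002';                             -- modification

SELECT Name, Gender
FROM Student;                                    -- retrieval

DELETE FROM Student
WHERE Regno = '004';                             -- deletion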
The part of the DML that involves data retrieval is called the query language, a high-level language used to satisfy diverse requests for the retrieval of data from the database.

Types of DMLs
DMLs can be distinguished by their underlying retrieval constructs. Two types of DML exist, i.e. procedural and non-procedural:
 Procedural DML specifies both what data is needed and how to obtain it. This means the user must express all the data access operations to be used, calling the appropriate procedures to obtain the required information. With this type of DML, you can only retrieve a single record, process it, retrieve another record to be processed similarly, and so on. Procedural DMLs are usually embedded in an HLL that contains constructs for iteration.
 Non-procedural DML: also known as declarative DML, it allows users to state what data is needed rather than how it is to be obtained. Relational DBMSs (RDBMSs) include some form of non-procedural DML, normally SQL (Structured Query Language) or Query By Example (QBE).
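The contrast can be sketched in SQL terms; the query below is declarative, and the comments describe what a record-at-a-time procedural DML would require instead (the Student table is the assumed example):

-- Non-procedural (declarative): state only WHAT is wanted.
SELECT Name
FROM Student
WHERE Gender = 'Female';

-- A procedural DML would instead force the program to fetch
-- one record at a time in a host-language loop, test its
-- Gender field, and collect the matching names itself.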
2.8 Classification of DBMS
There are a couple of ways in which we can classify DBMSs; some of these are discussed below.

2.8.1 Data model
The DBMS itself is based on a data model, and we have a number of data models. Classical examples include traditional models such as the relational, network and hierarchical data models. Emerging data models include the object-oriented and object-relational models.

2.8.2 Number of users supported by the system
Under this classification we have two categories, i.e. a single-user system, which supports only one user at a time (e.g. on a PC), or a multi-user system, which supports multiple users concurrently (e.g. on servers, mainframes and supercomputers).

2.8.3 Number of sites over which the database is distributed
Under this classification, we can have a centralized DBMS, whereby the data resides on a single machine but can be accessed by multiple users via terminals, or we can have a Distributed DBMS (DDBMS), whereby copies of the database and the DBMS software are distributed over many sites and linked via a computer network.

General purpose vs. special purpose
By a general purpose DBMS, we mean a DBMS designed to perform a variety of tasks, while a special purpose DBMS is one that performs a specific task.

2.9 Fourth Generation Languages (4GLs)
There is no clear consensus on what constitutes a 4GL. A number of programmers refer to them as shorthand programming languages, i.e. you require fewer lines of code to achieve the equivalent of a 3GL program. Good examples of 4GLs include SQL and QBE.
A 4GL is non-procedural, i.e. the user just defines what is to be done, without worrying about how it will be done. Fourth generation languages encompass the following:
 Presentation languages, such as query languages and report generators
 Speciality languages, such as spreadsheets and database languages
 Application generators, which spell out the insert, delete and retrieval operations on the data in the database on behalf of the application
 Very high-level languages used to generate application code

2.9.1 4GL tools
Form generators
An interactive facility for rapidly creating data input and display layouts for screen forms. The tool allows users to define what the screen will look like, what information shall be displayed, and where on the screen it will be displayed. A good form generator also allows users to create derived attributes, most likely using arithmetic operators or aggregates, and to specify validation checks on data input.

Report generators
A facility that enables users to create reports from data stored in the database. It is similar to a query language, in that it allows users to ask questions of the database and retrieve the results into a report. There are two types of report generators:
 Language-oriented, which allow users to supply commands in a sublanguage defining what data is to be included in the report and how the report should look.
 Visually oriented, which allow users to use a facility close to a form generator to define how the report shall look.

Graphics generators
A 4GL tool used to retrieve data from the database and display it in the form of a graph, e.g. pie charts, line graphs or scatter diagrams. From the graphs, we can predict trends and relationships in the data.
Application generators
A 4GL tool used to produce programs that interface with the database. Application generators help expedite program development, since they contain pre-written modules, in high-level languages, that provide the fundamental functions most programs use. Users only need to specify what the program is to do, and it is up to the application generator to determine how to perform the tasks.

2.10 Data Models
A database schema is usually written using a DDL, which is too low-level a language to describe the data requirements of an organization in a way that is readily understandable to a diversity of users. We therefore require a higher-level description of the schema: hence the data model.

We can define a data model as "an integrated collection of concepts for describing and manipulating data, relationships among these data and the constraints on the data in an organization" (Connolly & Begg, 2005).

A classical data model is conceived as containing the following three major components:
 Structural part: consists of a set of rules according to which the database can be constructed.
 Manipulative part: defines the types of operations that are allowed on the data (the operations for updating or retrieving data from the database, as well as for changing the database structure).
 A set of integrity constraints: a set of rules that ensures the data is accurate.
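As an illustrative aside, the three components map onto familiar SQL constructs; the Enrolment table below is invented for the purpose, with Student and Course being the assumed tables from earlier:

-- Structural part: rules according to which the database is built
CREATE TABLE Enrolment (
    Regno       VARCHAR(10),
    Course_Code VARCHAR(10),
    -- Integrity constraints: rules that keep the data accurate
    PRIMARY KEY (Regno, Course_Code),
    FOREIGN KEY (Regno) REFERENCES Student,
    FOREIGN KEY (Course_Code) REFERENCES Course
);

-- Manipulative part: the operations allowed on the data
SELECT Course_Code FROM Enrolment WHERE Regno = '002';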
With respect to the aforementioned ANSI-SPARC architecture, we can identify three data models:
 An external data model, to represent each user's view of the organization.
 A conceptual data model, representing the logical view, which is DBMS-independent. It usually uses concepts such as entities, attributes and relationships.
 An internal data model, which represents the conceptual schema in such a way that it can be understood by the DBMS.
2.11 History and background of data models
In the 1960s and 1970s, there were two main approaches to constructing DBMSs: the hierarchical and network data models. The hierarchical data model came first, epitomized by IMS from IBM, developed in response to the enormous information storage requirements generated by the Apollo space program. The second to follow was based on the network data model, which tried to resolve the problems its predecessor faced, such as the inability to represent complex relationships effectively and efficiently. These two models represented the first generation of DBMSs. They had some noticeable disadvantages, as noted below:
 Complex programs had to be written to answer even simple queries, based on navigational, record-oriented access
 There was minimal data independence
 There was no widely accepted theoretical foundation

In 1970, a mathematician by the name of Codd produced a seminal paper on the relational data model, addressing the limitations of the former models. A number of experimental RDBMSs were implemented thereafter, with the first commercial products appearing in the late 1970s and early 1980s. Today, there are hundreds of RDBMSs on the market, for both mainframe and PC environments. RDBMSs are referred to as second generation DBMSs.

RDBMSs have their own shortcomings, notably limited modelling capabilities, though there has been research to address this problem. In 1976, Chen presented the Entity-Relationship model, which is now a widely accepted technique for database design. In 1979, Codd himself attempted to address some of the failings of his earlier work with an extended version of the relational model called RM/T (Codd, 1979) and thereafter RM/V2 (Codd, 1990). Attempts to provide a data model that represents the 'real world' more closely have been loosely classified as semantic data modelling. Some famous examples are:
 The Semantic Data Model (Hammer and McLeod, 1981)
 The Functional Data Model (Shipman, 1981)
 The Semantic Association Model (Su, 1983)
With the need for increasingly complex database applications, two new data models have emerged: the Object-Oriented Data Model (OODM) and the Object-Relational Data Model (ORDM), formerly known as the Extended Relational Data Model (ERDM). The composition of these data models is, however, not explicit. This evolution represents the third generation of DBMSs.

Currently, there is considerable debate between OODBMS proponents and relational supporters, which resembles the network/relational debate of the 1970s. Both sides agree that traditional RDBMSs are inadequate for certain types of applications. Nonetheless, the sides differ on the best solution. The OODBMS proponents claim that RDBMSs are satisfactory for standard business applications but lack the capability to support more complex applications. The relational supporters claim that relational technology is a necessary part of any real DBMS and that complex applications can be handled by extensions to the relational model.

At present, relational and object-relational DBMSs form the predominant systems, and OODBMSs have their own practical niche in the marketplace. If OODBMSs are to become dominant, they need to change their image from systems designed solely for complex applications to systems that can also accommodate standard business applications, with the same tools and the same ease of use as their relational counterparts. We shall devote our discussion to RDBMSs in the next chapter.
Fig 2.3: History of data models (in order: the hierarchical, network, relational, ER, semantic, and object-relational/object-oriented data models)

Categories of data models
There are three main categories: object-based, record-based and physical data models.

Object-based data models
They use concepts such as entities, attributes and relationships among objects (entities). Object-oriented data models extend the definition of an entity to include both state and behaviour.

Record-based data models
The database contains fixed-format records, possibly of different types. Each record type defines a fixed number of fields, each typically of fixed length. There are basically three types of record-based data models: the relational, network and hierarchical data models. The last two were developed almost a decade before the relational data model, and we term them legacy data models.

Physical data models
They describe how the data is stored in the computer, representing information such as record structures, record ordering and access paths. There are not as many physical data models as logical data models; the common ones are the unifying model and frame memory.
2.12 Components of a DBMS
The DBMS is a highly complex and sophisticated piece of software that aims to provide the services discussed in the previous section. It is impossible to generalize the structure of a DBMS, since it varies greatly from system to system. However, when trying to understand database systems, it is useful to view the components and the relationships between them. In this section, we present a possible architecture for a DBMS, as shown in figure 2.4.

DBMSs are usually divided into several software components (or modules), each with a specific function, with some components implemented by the underlying operating system. However, the operating system provides only basic services, and the DBMS must be built on top of it. Thus, the design of a DBMS must take into account the interface between the DBMS and the operating system.

Fig 2.4: Components of a DBMS (programmers, users and the DBA submit application programs, queries and schema definitions; these pass through the DML processor, query processor and DDL compiler to the database manager and dictionary manager, which use the access methods, file manager and system buffers to reach the database and system catalog)

 Query processor: a component that transforms queries into a series of low-level instructions directed to the database manager.
 Database manager (DM): interfaces with user-submitted application programs and queries. The DM accepts and examines the external and conceptual schemas to determine what conceptual records are required to satisfy the request, then places a call to the file manager to perform the request.
 DML pre-processor: converts DML statements embedded in an application program into standard function calls in the host language. The DML pre-processor must interact with the query processor to generate the appropriate code.
 DDL compiler: converts DDL statements into a set of tables containing metadata, which in turn are stored in the system catalog, while control information is stored in data file headers.
 Catalog manager: manages access to, and maintains, the system catalog. The system catalog is accessed by most DBMS components.
 File manager: manipulates the underlying storage files and manages the allocation of storage space on disk. It establishes and maintains the list of structures and indexes defined in the internal schema. If hashed files are used, it calls on the hashing functions to generate record addresses. However, the file manager does not directly manage the physical input and output of data; rather, it passes the requests on to the appropriate access methods, which either read data from, or write data into, the system buffer (or cache).
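As a brief, product-dependent illustration of the system catalog in use: many relational DBMSs expose the catalog's metadata through the SQL-standard INFORMATION_SCHEMA views, although the exact names and the schemas listed vary from product to product.

-- Ask the system catalog which tables exist
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public';   -- the schema name is product-specific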

2.13 Multi-User DBMS Architectures
There are a number of common architectures used to implement multi-user DBMSs: teleprocessing, file-server and client-server.

2.13.1 Teleprocessing
This is the traditional multi-user architecture, whereby there is one computer with a single central processing unit (CPU) and a number of terminals, and all the processing is done by that one computer. Users work at terminals that are incapable of functioning on their own and are cabled to the central computer.

The terminals send messages via the communications control subsystem of the operating system to the user's application program, which in turn uses the services of the DBMS. In the same way, messages are routed back to the user's terminal.

The main limitation of this architecture was that it placed a tremendous burden on the central computer. The central machine had to carry out a number of tasks on behalf of the client, such as running the application programs and the DBMS, as well as formatting data for display on the screen.

Fig 2.5: Teleprocessing architecture

2.13.2 File-Server Architecture
With this architecture, processing is distributed across the network, normally a local area network (LAN). The file-server holds the files required by the applications and the DBMS. Each workstation runs the applications and the DBMS, and requests files from the file-server when necessary. In this case, the file-server simply acts as a shared hard disk drive: the DBMS on each workstation sends requests to the file-server for all the data the DBMS requires that is stored on disk.

Fig 2.6: File-server architecture
Some of the limitations associated with this architecture include a large amount of network traffic and a full copy of the DBMS being required on each workstation. In addition, concurrency, recovery and integrity control are more complex, because multiple DBMSs can access the same files.

2.13.3 Traditional Two-Tier (Client-Server) Architecture
This was developed to overcome the limitations of the first two architectures. In this architecture we have a client, which requests some resource, and a server, which provides that resource. The two do not necessarily reside on the same machine: typically, the client runs on end-user desktops and interacts with a centralized database server over a network.

When dealing with data-intensive business applications, we have four major components: (1) the database, (2) the transaction logic, (3) the business and data application logic, and (4) the user interface. This architecture provides a very basic separation of these components.

The client (tier 1) is primarily responsible for the presentation of the data to the user, i.e. the user interface actions and the main business and data application logic. The server (tier 2) is primarily responsible for supplying data services to the client. The server also handles server-side validation, i.e. validation that the client is unable to carry out due to lack of information, and access to the requested data, independent of its location. The data can come from relational DBMSs, object-relational DBMSs, object-oriented DBMSs, legacy DBMSs or proprietary data access systems.

The flow of interaction between client and server is as follows:
 Step 1: The client takes the user's request, checks the syntax and generates database requests in SQL or another database language appropriate to the application logic.
 Step 2: The client then transmits the message to the server, waits for a response, and formats the response for the end-user.
 Step 3: The server accepts and processes the database requests, then transmits the results back to the client.

The processing involves checking authorization, ensuring integrity, maintaining the system catalog, and performing query and update processing. In addition, the server also provides concurrency and recovery control.

Fig 2.7: Two-tier client-server architecture

Some of the advantages of the architecture include:
 Increased performance: if the clients and the server reside on different computers, then different CPUs can process applications in parallel. It should also be easier to tune the server machine if its only task is to perform database processing.
 Reduced hardware costs: only the server requires storage and processing power sufficient to store and manage the database.
 Increased consistency: the server can handle integrity checks, so that constraints need be defined and validated in only one place, rather than having each application program perform its own checking.

Some database vendors have used this architecture to indicate distributed database capability, that is, a collection of multiple, logically inter-related databases distributed over a computer network. However, although the client-server architecture can be used to provide distributed DBMSs, by itself it does not constitute a distributed DBMS.

2.13.4 Three-Tier Client-Server Architecture
One major problem of the two-tier client-server model is the issue of handling enterprise scalability. As applications become more complex and potentially deployable to hundreds or thousands of end-users, the client side presents two problems that prevent true scalability:

 A 'fat' client, requiring considerable resources on the client's computer to run effectively, including disk space, RAM and CPU power.
 A significant client-side administration overhead.

The three-tier architecture deals with the issue of scalability by including a third tier, with each tier potentially running on a different platform:
 Tier 1: the user interface layer, running on the end-user's computer. The client handles simple processing, such as input validation.
 Tier 2: the business logic and data processing layer, running on a server known as the application server.
 Tier 3: the DBMS and the database, which stores the data required by the middle tier. This tier may run on a separate server called the database server.

The core business logic of the application resides in its own layer, physically connected to the client and the database server over a local area network (LAN) or wide area network (WAN). One application server can serve multiple clients.

There are many advantages associated with this architecture over traditional two-tier or single-tier designs, which include:
i. The need for less expensive hardware, because the client is 'thin'.
ii. Application maintenance is centralized, with the transfer of the business logic for many end-users into a single application server. This eliminates the concerns of software distribution that are problematic in the traditional two-tier client-server model.
iii. The added modularity makes it easier to modify or replace one tier without affecting the other tiers.
iv. Load balancing is easier with the separation of the core business logic from the database functions.
v. The architecture maps quite naturally onto the web, with the web browser acting as the 'thin' client and the web server acting as the application server.

Fig 2.8: Three-tier client-server architecture

The architecture can be extended to n tiers, with additional tiers added to provide more flexibility and scalability. For example, the middle tier of the three-tier architecture could be split into two, with one tier for the web server and another for the application server.

The three-tier architecture has proved more appropriate for some environments, such as the internet and corporate intranets, where a web browser can be used as a client. It is also an important architecture for transaction processing monitors, as we discuss next.

2.13.5 Transaction Processing Monitors (TPMs)
Complex applications are often built on top of several resource managers (such as DBMSs, operating systems, user interfaces and messaging software). A TPM is a middleware component that provides access to the services of a number of resource managers and provides a uniform interface for programmers who are developing transactional software. The TPM forms the middle tier of a three-tier architecture, as illustrated in fig 2.9 below.

Fig 2.9: Transaction Processing Monitors

TPMs are typically used in environments with a very high volume of transactions, where the TPM can be used to
offload processing from the DBMS server. Prominent examples of TP monitors include CICS and Encina from IBM (which are primarily used on IBM AIX or Windows NT and are now bundled in the IBM TXSeries) and Tuxedo from BEA Systems.

Advantages of TP monitors
i. Transaction routing: a TPM can increase scalability by directing transactions to specific DBMSs.
ii. Managing distributed transactions: a TPM can manage transactions that require access to data held in multiple, possibly heterogeneous, DBMSs. For example, a transaction may require updating data items held in an Oracle DBMS at site 1, an Informix DBMS at site 2, and an IMS DBMS at site 3.
iii. Load balancing: a TPM can balance client requests across multiple DBMSs on one or more computers by directing client service calls to the least loaded server.
iv. Funnelling: in environments with a large number of users, it may sometimes be difficult for all users to be logged on to the DBMS simultaneously. In many cases, users do not need continuous access to the DBMS, so instead of each user connecting directly, the TPM can establish connections with the DBMSs as and when required and funnel user requests through these connections. This allows a larger number of users to access the available DBMSs with a potentially much smaller number of connections, which in turn means less resource usage.
v. Increased reliability: the TPM acts as a transaction manager, i.e. it performs the necessary actions to maintain the consistency of the database, with the DBMS acting as a resource manager. If the DBMS fails, the TPM may be able to resubmit the transaction to another DBMS, or can hold the transaction until the DBMS becomes available again.

2.14 Chapter summary
2.15 Further reading suggestions
2.16 Chapter exercise
