Study Guide For Information Systems201
SAQA ID: 24419    Credits: 10
Study Guide
In terms of the Copyright Act, no 98 of 1978, no part of this manual may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including photocopying, recording or by any other
information storage and retrieval system without permission in writing from the proprietor.
OUTCOMES
Describe an information system in detail
Differentiate between the various information systems
Study and analyze a problem and create a problem statement
Analyze a company and create a summary of findings as well as create a data dictionary.
Perform logical modeling
Perform Data modeling
Differentiate between the logical and data model
Discuss the various aspects of prototyping, JAD and RAD.
Discuss object oriented programming and analysis
Define requirements and create a requirements specification using a specific standard.
Generate physical alternatives using system flowcharts
Estimate development costs
Perform cost benefit analysis
Create and design the user interface
Plan and execute the system test.
CHAPTER ONE:
1.1 SYSTEMS
For a system to be effective, all parts of the system must work together in an
efficient manner. If a car had no wheels, the remaining components of the car's
system could not function together in an effective and meaningful manner.
Information systems have been around ever since people began trading and
bartering; ledger books are an example of such a system.
A system begins with a user. The user needs information but often lacks
technical expertise. While programmers and technical experts know a great deal
about computers and technology, they may lack a clear understanding of the
user's needs. The result is that the user knows the problem but lacks the
skills to solve it, while the technical personnel could solve the problem, if only
they understood it.
The person who plans and designs a system is called a systems analyst.
The systems analyst defines the user's problem, deals with management to
obtain the necessary resources, translates the user's needs into technical terms
and then develops a plan for coordinating the efforts of the various technical
experts assigned to the job.
The analyst acts as an intermediary between the user, technical experts and
management.
1.4. METHODOLOGY
Due to the complexity of the analyst's job, it is easy to overlook something.
That is why most analysts use a specific methodology.
A Methodology is a set of tools used in the context of clearly defined steps that
end with specific, measurable exit criteria.
1.5. SYSTEMS DEVELOPMENT LIFE CYCLE
The System Development Life Cycle framework gives system designers and
developers a sequence of activities to follow. It consists of a set of steps or
phases in which each phase of the SDLC uses the results of the previous one.
The basis for most systems analysis and design methodologies is the Systems
Development Life Cycle (SDLC). It is sometimes called the waterfall method
because the model visually suggests work cascading from step to step like a
series of waterfalls.
Alternatives and extensions to the traditional SDLC include:
Software Prototyping
Joint Applications Design (JAD)
Rapid Application Development (RAD)
Extreme Programming (XP); an extension of earlier work in Prototyping and
RAD.
Open Source Development
End-user development
Object Oriented Programming
PHASES
1.PROBLEM DEFINITION
Identify the problem, determine the causes and outline a strategy for solving the
problem. A poor problem definition will guarantee that the system will fail to solve
the real problem.
Exit criteria: 1. Problem statement
2. Feasibility Study
2.ANALYSIS
Determine what must be done to solve the problem. During analysis the
analyst works with the user to develop a logical model that identifies essential
processes, data elements, objects and other key entities.
Exit criteria: 1. Logical Model
2. Requirements
3.DESIGN
Determine how the problem will be solved. Identify the primary physical
components and the interfaces that link them. Next the individual components
are defined at a black-box level.
You can provide inputs and predict and observe the resulting outputs, but you
cannot determine the contents of the black box except by deduction.
Then plan the contents of the black box by specifying how each component
works.
Exit criteria: 1. Physical Plan
4.DEVELOPMENT
Programs are coded, debugged, documented and tested.
New hardware is selected, ordered and installed.
Procedures are written.
End-user documentation is prepared and users are trained.
Exit criteria: 1. Code
2. Procedures
3. Manuals
5.TESTING
Testing begins with module tests, followed by component tests and a final
system test. A well-designed test plan ensures that the system meets the user's
needs.
Exit criteria: 1. System Test
6.IMPLEMENTATION
After the system test is completed and any problems are corrected, the system
is then released to the user. The user has to approve the system. Exit criteria:
1. User Sign-off
2. Review
7.MAINTENANCE
After the system is released to the user, maintenance begins. The object of
this phase is to keep the systems functioning at an acceptable level.
Exit criteria: 1. Ongoing
CAUTIONS/DISADVANTAGES
Some analysts focus excessively on preparing the exit criteria instead of
actually completing the work.
Irrespective of the methodology, you will eventually encounter a problem
for which the methodology is inappropriate and it is a mistake to force the
application to fit the tool.
A good methodology makes a competent analyst productive, but
no Methodology can convert an unskilled person into an analyst.
CASE tools are a class of software that automate many of the activities
involved in various life cycle phases.
For example, when establishing the functional requirements of a proposed
application, prototyping tools can be used to develop graphic models of
application screens to assist end users to visualize how an application will
look after development.
1. Life-Cycle Support
2. Integration Dimension
3. Construction Dimension
4. Knowledge Based CASE dimension [6]
Let us take the meaning of these dimensions, along with examples,
one by one:
Life-cycle support dimension
This dimension classifies CASE tools on the basis of the activities they support
in the information systems life cycle. They can be classified as Upper or Lower
CASE tools.
Integration dimension
Three main CASE integration dimensions have been proposed: [7]
1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)
Workbenches
Workbenches integrate several CASE tools into one application to support
specific software-process activities. Hence they achieve:
a homogeneous and consistent interface (presentation integration).
easy invocation of tools and tool chains (control integration).
access to a common data set managed in a centralized way (data
integration).
Environments
An environment is a collection of CASE tools and workbenches that supports the
software process. CASE environments are classified based on the focus/basis
of integration: [4]
1. Toolkits
2. Language-centered
3. Integrated
4. Fourth generation
5. Process-centered
Toolkits
Toolkits are loosely integrated collections of products easily extended by
aggregating different tools and workbenches.
Typically, the support provided by a toolkit is limited to programming,
configuration management and project management.
The toolkit itself is an environment extended from a basic set of operating-system
tools.
In addition, a toolkit's loose integration requires users to activate tools by
explicit invocation or simple control mechanisms.
The resulting files are unstructured and may be in different formats, so
accessing a file from different tools may require explicit file-format
conversion.
However, since the only constraint on adding a new component is the
format of the files, toolkits can be easily and incrementally extended.
Language-centered
The environment itself is written in the programming language for which it was
developed, thus enabling users to reuse, customize and extend the environment.
Integration of code in different languages is a major issue for language-centered
environments. Lack of process and data integration is also a problem. The
strengths of these environments include good level of presentation and control
integration.
Integrated
These environments achieve presentation integration by providing uniform,
consistent, and coherent tool and workbench interfaces. Data integration is
achieved through the repository concept: they have a specialized database
managing all information produced and accessed in the environment.
Fourth-generation
Fourth-generation environments were the first integrated environments. They
are sets of tools and workbenches supporting the development of a specific class
of program: electronic data processing and business-oriented applications. In
general, they include programming tools, simple configuration management
tools, document handling facilities and, sometimes, a code generator to produce
code in lower level languages.
Process-centered
Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centered environment operates by
interpreting a process model created by specialized tools. They usually consist
of tools handling two functions:
Process-model execution
Process-model production
Task / Activities:
CHAPTER TWO:
A problem is the difference between the way things are and the way the
organizational goals say they should be.
DEFINING DESIRES
In an organization, before a problem can be solved, people must first agree on
the problem.
Before members of the organization can agree on the nature of the problem,
they must first share a sense of what the organization wants, i.e. explicitly
define the organization's desires.
This can be done by identifying strategic goals.
These goals are converted to measurable objectives and critical success factors.
The goals and objectives are the organization's official desires; they represent a
shared sense of how things should be.
If actual or expected performance is inconsistent with the goals and objectives,
then there is a problem.
Once a problem has been recognized then problem definition can begin.
Problem definition is the act of defining a problem's causes. You cannot solve a
problem unless you know what caused it.
For example, you may just have completed an investigation of all the
reasons recorded for goods being returned by customers and found that the
highest incidence relates to incorrect goods being sent. A CE Diagram can
be constructed to explore the possible causes for this.
DEFINING OBJECTIVES
After the problem has been defined, the analyst lists the causes in terms
of objectives that, if met, are likely to solve the problem.
Symptoms and causes are negative; they suggest what is wrong.
Objectives are positive; they reflect the causes and are measurable.
1. What is the problem? This should explain why the team is needed.
2. Who has the problem or who is the client/customer? This should explain
who needs the solution and who will decide the problem has been solved.
3. What form can the resolution take? What are the scope and limitations (in
time, money, resources, technologies) that can be used to solve the
problem? Does the client want a white paper? A web tool? A new feature
for a product? A brainstorming session on a topic?
VERIFICATION
To minimize errors, always verify your work:
Go through each symptom and identify the objective/s that solve each
problem.
If you find a symptom not addressed by the objectives then you have
overlooked something.
Go through the same procedure for the objectives.
Ask the user if solving the problem is worth the cost.
Economic feasibility
Economic analysis is the most frequently used method for evaluating the
effectiveness of a new system. More commonly known as cost/benefit analysis,
the procedure is to determine the benefits and savings that are expected from a
candidate system and compare them with costs. If benefits outweigh costs, then
the decision is made to design and implement the system. An entrepreneur must
accurately weigh the cost versus benefits before taking an action.
Cost-based study: It is important to identify cost and benefit factors, which can
be categorized as follows: 1. development costs; and 2. operating costs. This
is an analysis of the costs to be incurred in the system and the benefits derivable
from the system.
Time-based study: This is an analysis of the time required to achieve a return
on investments. The future value of a project is also a factor.
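The arithmetic behind a cost-based study and the payback period can be sketched in a few lines of Python. The figures and names below are hypothetical, purely for illustration:

```python
# Minimal cost/benefit sketch: compare benefits with costs and compute
# the payback period (years to recover the development cost).

def net_benefit(total_benefits, operating_costs):
    """Annual benefit remaining after recurring operating costs."""
    return total_benefits - operating_costs

def payback_period(development_cost, annual_net_benefit):
    """Years needed for cumulative net benefits to recover the initial cost."""
    return development_cost / annual_net_benefit

dev_cost = 120_000          # one-off development cost (hypothetical)
yearly_benefits = 60_000    # expected savings per year (hypothetical)
yearly_operating = 20_000   # recurring operating cost (hypothetical)

annual_net = net_benefit(yearly_benefits, yearly_operating)   # 40000
years = payback_period(dev_cost, annual_net)                  # 3.0
print(f"Payback period: {years:.1f} years")
```

If benefits outweigh costs over the system's expected life, the analysis supports a decision to design and implement the system.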
Legal feasibility
Determines whether the proposed system conflicts with legal requirements, e.g.
a data processing system must comply with the local Data Protection Acts.
Operational feasibility
Operational feasibility is a measure of how well a proposed system solves the
problems, and takes advantage of the opportunities identified during scope
definition and how it satisfies the requirements identified in the requirements
analysis phase of system development.
Schedule feasibility
A project will fail if it takes too long to be completed before it is useful. Typically
this means estimating how long the system will take to develop, and if it can be
completed in a given time period using some methods like payback period.
Schedule feasibility is a measure of how reasonable the project timetable is.
Given our technical expertise, are the project deadlines reasonable? Some
projects are initiated with specific deadlines. You need to determine whether the
deadlines are mandatory or desirable.
Resource feasibility
This involves questions such as how much time is available to build the new
system, when it can be built, whether it interferes with normal business
operations, type and amount of resources required, dependencies,
Cultural feasibility
In this stage, the project's alternatives are evaluated for their impact on the local
and general culture. For example, environmental factors need to be
considered, and these factors must be well known. Further, an enterprise's own
culture can clash with the results of the project.
Output
The feasibility study outputs the feasibility study report, a report detailing the
evaluation criteria, the study findings, and the recommendations. [2]
During the feasibility study the analyst considers several alternative solutions to
the problem. The analyst prepares the feasibility study report outlining several
alternative solutions. Some may be more feasible than others. The feasibility
study report is the basis for the GO/NO GO decision.
The user makes this decision, but in larger organizations the task is assigned
to a steering committee made up of a representative of each department that will
use the system. Because there are not enough resources to solve all problems,
the steering committee evaluates pending projects, rejects some and prioritizes
the others. As resources become available, the Management Information
Systems manager assigns people to projects in priority order.
Task / Activities:
CHAPTER THREE:
INFORMATION GATHERING
Information gathering is both an art and a science. It is an art because the person
who collects the information needs to be sensitive, must understand what to
collect and what to focus on, and must know the channels through which
information can be gathered. It is a science because it requires proper
methodology and the use of specific tools in order to be effective. Nonetheless,
there is always a chance that one can find oneself drowned in an ocean of
information, not knowing which specific information to collect, where to collect it
and how to collect it.
Organizational Information
The kinds of information one could collect here, depending upon the need,
are:
Policies of the organization - Policies are guidelines that define the code
of business. Policies are then translated into rules and procedures for
achieving goals.
Goals of the organization - Goals describe management's commitment
to the objectives. Objectives are milestones of accomplishment toward
achieving goals.
Organization Structure - The organization structure helps in understanding
the hierarchy levels in an organization and the modes of communication.
User Information
User information relates to the individuals who are using the present system.
When collecting user information one should focus on job function, information
requirements and interpersonal relationships within the organization.
Work Information
Work information relates to the work itself. When collecting Work information
one should focus on work flows, how the data flows between various
systems, work schedules and methods and procedures.
The word “analyze” means to study something by breaking it down into its
constituent parts.
The analyst starts with the existing physical system, constructs a logical model,
and then manipulates the model to create a new, improved logical model. The
new model, in turn, is the basis for designing a new and improved physical
system.
The purpose of analysing a current system is to find out the requirements for
any proposed new system: what data needs to be handled, how it should be
handled and who will be using the system.
The old physical system is the starting point for analysis.
The old physical system must have been performing a necessary function or
management would not have authorized you to fix it.
The problem with the old system is not what it does but how it works.
The first step in analysis is to separate what the existing system does from how
it works.
Since you are dealing with a lot of detail, the solution is to partition the
problem into subproblems or mini-problems by focusing on such logical
building blocks as data elements, processes, boundaries and objects.
Start by conducting interviews with key personnel and try to identify input and
output documents.
Find out who prepares the input and who uses the output, and interview those
people.
Talk to people who maintain the system and ask them to explain existing
documentation.
Summarize all interviews and extract a list of the system's basic logical elements.
IDENTIFYING PROCESSES
A process is an activity that transforms data in some way. Within a given
process data may be collected, recorded, moved, manipulated, sorted etc.
Read through the summaries and identify the verbs (processes)
IDENTIFYING FIELDS
After listing the processes, go through the documentation for references to
data.
The best way to identify fields is to list the fields that appear on the present
system's documents, files and other sources of data entries.
IDENTIFYING BOUNDARIES
A system's boundaries can be defined by identifying those people,
organizations and other systems that lie just outside the target system.
IDENTIFYING OBJECTS
The basic building block for an object-oriented system is called an object.
An object incorporates both processes and data.
Objects communicate with other objects via signals.
KEY TERMS
An Entity is an object about which data is stored (person, group, thing
or activity).
An Occurrence is a single instance of an entity.
An Attribute is a property of an entity.
A Data Element is an attribute that cannot be logically decomposed.
A set of related data elements forms a data structure or composite
data element.
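These key terms can be illustrated with a small Python sketch. The STUDENT entity, its attributes and all values are hypothetical examples, not taken from the guide:

```python
# Mapping the key terms onto code: Student is an entity, each instance
# is an occurrence, each field is an attribute, and Name is a data
# structure composed of data elements.
from dataclasses import dataclass

@dataclass
class Name:              # composite data structure
    first: str           # data element (cannot be decomposed further)
    last: str            # data element

@dataclass
class Student:           # entity
    student_id: str      # attribute that is also a data element
    name: Name           # composite attribute

occurrence = Student("S001", Name("Thandi", "Ngcobo"))  # one occurrence
print(occurrence.name.last)
```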
Task / Activities:
CHAPTER FOUR:
LOGICAL MODELING
DFD Principles
The general principle in Data Flow Diagramming is that a system can be
decomposed into subsystems, and subsystems can be decomposed into
lower level subsystems, and so on.
Each subsystem represents a process or activity in which data is
processed. At the lowest level, processes can no longer be decomposed.
Each 'process' (and from now on, by 'process' we mean subsystem and
activity) in a DFD has the characteristics of a system.
Just as a system must have input and output (if it is not dead), so a
process must have input and output.
Data enters the system from the environment; data flows between
processes within the system; and data is produced as output from the
system
The data store symbol represents a data storage such as a hard disk
The data flow symbol shows which way data moves between
processes, entities and data stores.
Here the entity Employee provides the hours they have worked. This,
together with the hourly pay rate from the employee data store, is processed
to calculate how much they are to be paid. A pay check is then given to the
employee.
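The "calculate pay" process in this example can be sketched in Python. The data store contents, rate and function name are hypothetical illustrations:

```python
# Sketch of a DFD process: an input flow (hours worked) and a read from
# a data store (hourly rate) are transformed into an output flow (pay).
employee_store = {"E001": {"name": "A. Worker", "hourly_rate": 150.0}}

def calculate_pay(employee_id, hours_worked):
    rate = employee_store[employee_id]["hourly_rate"]  # read from data store
    return hours_worked * rate                          # output data flow

print(calculate_pay("E001", 40))  # 6000.0
```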
It is common practice to draw the context-level data flow diagram first, which
shows the interaction between the system and external agents which act as data
sources and data sinks. On the context diagram the system's interactions with
the outside world are modelled purely in terms of data flows across the system
boundary. The context diagram shows the entire system as a single process,
and gives no clues as to its internal organization.
This context-level DFD is next "exploded", to produce a Level 0 DFD that shows
some of the detail of the system being modeled. The Level 0 DFD shows how
the system is divided into sub-systems (processes), each of which deals with
one or more of the data flows to or from an external agent, and which together
provide all of the functionality of the system as a whole. It also identifies internal
data stores that must be present in order for the system to do its job, and shows
the flow of data between the various parts of the system.
Data flow diagrams were proposed by Larry Constantine, the original developer
of structured design, based on Martin and Estrin's "data flow graph" model of
computation.
Data flow diagrams are one of the three essential perspectives of the structured
systems analysis and design method (SSADM). The sponsor of a project and the
end users will need to be briefed and consulted throughout all stages of a
system's evolution. With a data flow diagram, users are able to visualize how the
system will operate, what the system will accomplish, and how the system will
be implemented. The old system's data flow diagrams can be drawn up and
compared with the new system's data flow diagrams in order to implement a
more efficient system. Data flow diagrams can be used to give the end user a
physical idea of where the data they input ultimately affects the structure of the
whole system, from order to dispatch to report. How any system is developed
can be determined through a data flow diagram model.
Data flow diagrams can be used in both the analysis and design phases of the SDLC.
CHAPTER FIVE:
DATA MODELING
Entities
Refers to the entity set and not to a single entity occurrence
Corresponds to a table and not to a row in the relational environment
In both the Chen and Crow’s Foot models, an entity is represented by a
rectangle containing the entity’s name
Entity name, a noun, is usually written in capital letters
Attributes
Characteristics of entities
The domain of an attribute is the set of its possible values
Primary keys are underlined
Simple
Cannot be subdivided
Age, sex, GPA
Composite
Can be subdivided
Address: street, city, state, zip
Single-valued
Has only a single value
Social security number
Multi-valued
Can have many values
Person may have several college degrees
Derived
Can be calculated from other information
Age can be derived from D.O.B.
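The derived-attribute idea can be sketched in Python: age is computed on demand from a stored date-of-birth attribute rather than stored itself. The dates below are hypothetical:

```python
# A derived attribute is calculated from stored data, not stored itself.
from datetime import date

def derive_age(dob, today):
    """Derive age in whole years from a stored date of birth."""
    years = today.year - dob.year
    # subtract one if the birthday has not yet occurred this year
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

print(derive_age(date(1990, 6, 15), date(2024, 3, 1)))  # 33
```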
Multivalued Attributes
Although the conceptual model can handle multivalued attributes, you should
not implement them in the relational DBMS. There are two options:
o Within the original entity, create several new attributes, one for each of the
original multivalued attribute's components. This can lead to major
structural problems in the table.
o Create a new entity composed of the original multivalued attribute's
components.
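The second option, creating a new entity from the multivalued attribute's components, can be sketched in Python. The person record, the DEGREE attribute and all values are hypothetical:

```python
# A PERSON occurrence with a multivalued "degrees" attribute.
person = {"person_id": 1, "name": "T. Dlamini",
          "degrees": ["BSc", "BCom", "MBA"]}   # multivalued attribute

# New entity: one occurrence per degree value, keyed back to the person,
# so each row is single-valued as the relational model requires.
degree_rows = [{"person_id": person["person_id"], "degree": d}
               for d in person["degrees"]]
print(degree_rows)
```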
Creating New Attributes
Relationships
Associations between entities
Established by Business Rules
Connected entities termed participants
Connectivity describes the relationship classification: 1:1, 1:M, M:N
Cardinality
o Number of entity occurrences associated with one occurrence of the related
entity
Relationship Strength
Existence Dependent
o An entity's existence depends on the existence of another related entity
o Existence-independent entities can exist apart from related entities
o Example: Employee claims Child; the Child entity is dependent on
Employee
Weak (non-identifying)
o One entity is existence-independent of another
o PK of the dependent entity does not contain a PK component of the parent
entity
Strong (identifying)
o One entity is existence-dependent on another
o PK of related entity contains PK component of parent entity
Relationship Participation
Optional
o Entity occurrence does not require a corresponding occurrence in the related
entity
o Shown by drawing a small circle on the side of the optional entity on the ERD
Mandatory
o Entity occurrence requires a corresponding occurrence in the related entity
o If no optionality symbol is shown on the ERD, the relationship is mandatory
Degree of Relationship
Generalization Hierarchy
CHAPTER SIX:
PROTOTYPING
What is a Prototype?
A prototype is a model for an intended system.
Prototyping is, therefore, the development and improvement of a prototype.
By using prototyping there is the potential for early amendment of
weaknesses in the designed system and, in extreme circumstances, its
abandonment.
These possibilities exist because prototyping involves the close participation
of prospective users and consequently their reactions are readily perceived.
Prototyping is particularly beneficial in situations where the application is not
clearly defined. On the other hand, for a well understood, fully definable
application, prototyping is probably not worthwhile.
An example of the former situation could be a firm of estate agents setting
up a system to hold and interrogate a database concerning the properties on
their books. Although the estate agents may previously have done exactly
the same thing using, for example, a card-filing system, they may still find it
difficult to envisage a computerised system. A hands-on demonstration of
such a system through prototyping can be reassuring for such users.
An instance of a well defined application could very well be sales invoicing.
The work is well understood, as it has been regularly carried out for a long
period of time and there is little to be gained from changes. Thus the new
system will replicate the previous system from input/output aspects so
prototyping is unnecessary.
a) Screen Generators
A screen generator is a tool for creating displays on V.D.U. screens quickly and
easily. The principle is that the analyst 'draws' or 'paints' the required layout on
the screen by means of a mouse, pointer, palette and keyboard. Also available
are skeletons of menus and forms for the subsequent entry of data and icons for
helping with the choice of requirements.
b) Report Generators
A report generator is software for quickly creating a report from data in a
database. The content and format of the report are specified through the use of
a non-procedural language or, alternatively, by filling in screen forms.
The report generator retrieves, sorts and summarises the appropriate records
and is also capable of performing rudimentary processing e.g. calculating
percentages.
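What a report generator does internally, retrieving, sorting, summarising and calculating percentages, can be sketched in Python. The sales records and column layout are hypothetical:

```python
# Retrieve records, summarise by region, sort, and compute percentages,
# mimicking the rudimentary processing of a report generator.
sales = [{"region": "North", "amount": 300.0},
         {"region": "South", "amount": 100.0},
         {"region": "North", "amount": 200.0}]

total = sum(r["amount"] for r in sales)

summary = {}
for r in sales:                       # summarise amounts per region
    summary[r["region"]] = summary.get(r["region"], 0.0) + r["amount"]

for region in sorted(summary):        # sorted report lines with percentages
    pct = 100.0 * summary[region] / total
    print(f"{region:<6} {summary[region]:>8.2f} {pct:5.1f}%")
```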
Tool Selection
As the technical environment may not be known at this point, the target
implementation environment is obviously not a candidate for use in the
prototyping session. Many tools exist on the market, sharing several essential
features: screen painters, data dictionary etc.
Input to the prototyping activities include some SSADM products such as the
LDM for the required system. If a CASE tool is being used for project
development, that tool may be used for prototyping.
Whatever tool is used, it should be chosen and procured early in the project life.
If the IS strategy dictates the technical environment before TSO, then the
prototyping tool should be chosen to simulate that environment as closely as
possible. If no indication has been made to environment, then that obviously
cannot be done.
This activity should be carried out at the start of the whole project. The first thing
to do is to identify any need to perform prototyping. If a project has one of the
following characteristics, then prototyping will probably not be appropriate:
Screen prototyping
What is the likely level of data manipulation on screen? If for one function the
activity is large, prototyping will assist in validating and collecting the User's
needs.
If the on-line interaction is poorly thought out, will that be detrimental to the
business, or will it just be inconvenient at a local and trivial level? If the former
is the case, then prototyping will be valuable.
If an output has to meet certain statutory requirements, tax return forms for
example, then prototyping can help validate its content and format.
If the requirements for the report have been described in vague terms,
prototyping will help define levels of accuracy, optimum format etc., as the User
sees the possible versions of the report and tries to use it.
The team to carry out the prototyping should be defined well before the activity
is due to start, so that management structures can be put in place in good time.
The team should comprise a team leader and two other analysts, who between
them will serve the roles of implementing the prototyped model and
demonstrating it to the User. There is no absolute need for two analysts; one
would be sufficient if the project were fairly small. A second team member does,
however, provide an objective view of a prototype designed by someone else.
The effect should be that the analyst is more sensitive to the User's
requirements and less defensive about the product.
The team leader's responsibility, apart from standard supervisory duties, should
be the following:
Management will normally have specified the areas, specific dialogues and
output reports to be prototyped.
The specific dialogues should be studied to determine if they are appropriate for
prototyping. These should be confirmed with the Users and they should be
consulted about further dialogues they want to see prototyped. The only
constraint on the team's agreement should be budget and timescales.
Berea College of Technology
Information Systems 201
During the prototyping sessions, the output data items should be validated,
usually against the LDM and data item descriptions. Some data items will be
derived from, for example, calculations: the formulae should be recorded with
the prototyping documentation and also on Elementary Process Descriptions.
Types of Prototypes
a) Non-working Prototypes
With this approach the prototype is a dummy and is usable only for the
purpose of demonstrating input procedures and/or output formats. The
prototype is incapable of actually processing data but merely reproduces
results that have been pre-determined and then incorporated into the
program. Accordingly the results are unreal, i.e. false so a non-working
prototype is unsuitable for realistic user interfacing. This is a valid approach
as long as the user is aware of the truth of the matter. The main purpose is to
illustrate the layout of screens, documents and reports irrespective of their
contents. A non-working prototype is scrapped at the end of its usefulness.
c) Pilot Prototypes
A pilot prototype is applicable to situations involving a number of installations
doing the same work e.g. a point-of-sale system in a chain of supermarkets.
The principle is for the prototype to be introduced into one or a few of these
installations so weaknesses can be detected. These are removed before the
prototype is installed in further locations. Generally, an application that is
suitable for pilot prototyping is fairly straightforward technically; the problems
tend to arise from the human aspects. A pilot prototype is likely to encompass
the full range of activities of the application. This means that most of the
systems design and programming will need to have been done before
initiating the prototype. It is also likely that the complete database is needed
e.g. with a point-of-sale system all the prices, descriptions etc. must be
available at the outset.
d) Staged Prototypes
Staged or incremental prototyping implies that a start is made with only certain
features of the full system. Further features are added stage by stage, and the
prototype is checked at each stage. A Stock Control application, for instance,
could start simply with the updating of the stocks in
41
e) Evolutionary Prototypes
An evolutionary prototype is in some ways similar to a staged approach. But,
whereas staged prototyping entails adding a succession of separate but
closely associated stages, evolutionary prototyping allows the one integral
application to evolve through a succession of increasingly refined phases. It
is not always practical to use evolutionary prototyping because most
applications do not lend themselves to this approach. Let's consider an
example where evolutionary prototyping is practical: a hotel reservation
system, for which a file exists holding details of the rooms and their current
bookings. The first phase is for the user merely to make enquiries regarding
room availability. The next phase could be to add a booking procedure.
Prototype Demonstrations:
1. Presentation Format - screen and report layouts are presented by the I.T.
staff to the users in a lecture style presentation.
2. Demonstration - the I.T. staff actually use the prototype to demonstrate its
features to the user staff.
3. Hands-On Session - the user staff are allowed to use the prototype under
the guidance of the I.T. staff.
RAPID APPLICATION DEVELOPMENT (RAD)
The goal of rapid development of applications has been around for some time
and with good reason, as the objective of speeding up the development process
is something that has been on the agenda of both general management and
information systems management for a long time.
The need to develop information systems more quickly has been driven by rapidly
changing business needs.
The general environment of business is seen as increasingly competitive, more
customer-focused and operating in a more international context.
Such a business environment is characterised by continuous change, and
the information systems in an organisation need to be created and amended
speedily to support this change.
Unfortunately, information systems development in most organisations is
unable to react quickly enough, and the business and systems development
cycles are substantially out of step. In such a situation, the notion of rapid
application development (RAD) is obviously attractive.
The exact definition or nature of the term is not clear, and authors and vendors
use the term in a variety of different ways, with different emphases and
meanings.
RAD is actually a combination of techniques and tools that are fairly well known.
Getting the right people to the meeting, i.e. everyone with a stake in the
system, including those who can make binding decisions reduces the time
taken to achieve consensus.
JAD engenders commitment. Traditional methods encourage decisions to be
taken off the cuff in small groups; here, all decisions are made in the open.
The presence of a senior executive sponsor can encourage fast development
by cutting through bureaucracy and politics.
The facilitator is crucial to the effort. A facilitator is able to avoid and smooth
many of the hierarchical and political issues that frequently cause problems,
and will be free from organisational history and past battles.
Task / Activities:
CHAPTER SEVEN:
Use a Gantt chart to plan how long a project should take. A Gantt chart lays
out the order in which the tasks need to be carried out. Early Gantt charts did
not show dependencies between tasks, but modern Gantt chart software
provides this capability.
A Gantt chart lets you see immediately what should have been achieved at any
point in time, and lets you see how remedial action may bring the project back
on course. Most Gantt charts include "milestones", which are not strictly part
of the Gantt chart notation; however, for representing deadlines and other
significant events, it is very useful to include this feature on a Gantt chart.
Henry Laurence Gantt, an American mechanical engineer, is credited with the invention of the
Gantt chart.
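The dependency capability described above can be sketched as a small scheduling routine: given each task's duration and the tasks it depends on, compute the earliest start and finish days that a Gantt chart would plot as bars. The task names, durations and dependencies below are illustrative, not taken from this guide.

```python
# Earliest start/finish calculation for Gantt-style task bars.
# The tasks, durations (in days) and dependencies are illustrative examples.
tasks = {
    "analysis":  {"duration": 5,  "depends_on": []},
    "design":    {"duration": 7,  "depends_on": ["analysis"]},
    "build":     {"duration": 10, "depends_on": ["design"]},
    "test_plan": {"duration": 3,  "depends_on": ["analysis"]},
    "testing":   {"duration": 4,  "depends_on": ["build", "test_plan"]},
}

def schedule(tasks):
    """Return {task: (earliest_start, earliest_finish)} in working days."""
    result = {}
    def finish(name):
        if name not in result:
            t = tasks[name]
            # A task can start only when all of its predecessors have finished.
            start = max((finish(d) for d in t["depends_on"]), default=0)
            result[name] = (start, start + t["duration"])
        return result[name][1]
    for name in tasks:
        finish(name)
    return result

for name, (s, f) in schedule(tasks).items():
    print(f"{name:<10} day {s:>2} .. day {f:>2}")
```

Plotting each task as a bar from its start day to its finish day reproduces the Gantt chart; the `depends_on` links are the dependency arrows that modern Gantt software draws.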
SAMPLE
CHAPTER EIGHT:
DEFINING REQUIREMENTS
The requirements specification is a document that clearly defines the
customer's logical requirements.
It builds on the logical model. Where the logical model is ambiguous, the
requirements specification gives the correct interpretation. Where the logical
model is imprecise, the requirements specification adds the necessary details.
Where the model is silent, the document supplements it.
The requirements specification states the client’s needs in such a way that it
is possible to test the finished system to verify that those needs have been
met.
The objective of the requirements specification is to ensure that the
customer's needs are correctly defined before time, money and resources
are wasted working on the wrong solution.
Standards
There are several widely used standards for writing requirements. DOD-STD-
2167A defines procedures for defense system software development. MIL-STD-
490 and MIL-STD-499 must be followed on most military contracts. Other
standards, such as IEEE Std 729 (a glossary) and IEEE Std 830, are defined
by civilian organizations (in this case, by the Institute of Electrical and
Electronics Engineers), and many companies have their own internal standards.
The process begins during the problem definition and information gathering
stage of the system development life cycle (Part II). Based on a preliminary
analysis of the problem, user experts who work for the government agency or
the customer organization that is sponsoring the project define a set of needs
and write the system/segment specifications (A-specs), which are then released
for bids.
On a major system, several firms might be awarded contracts and charged with
preparing competitive system/segment design documents (B-specs). The
completed SSDDs are submitted to the customer and evaluated. The best set is
then selected and once again released for bids. Sometimes, the firm that
prepared the system/segment design documents is prohibited from participating
in the next round.
Based on the competitive bids, a contract to generate a physical design and prepare
a set of specifications based on the system/segment design documents is
subsequently awarded to one or (perhaps) two companies. One PIDS (hardware)
or SRS (software) is prepared for each SSDD. (In other words, one physical design
specification is created for each configuration item.)
At the end of this phase, the PIDS and SRS documents are reviewed and
approved. The best design specifications are then released for a final
round of competitive procurement, with the winning firm getting a contract
to build the system. Clearly, the organization that created the final
specifications has an advantage, but there are no guarantees. Sometimes
a backup supplier is awarded a portion of the contract.
8.3 REQUIREMENTS
Modifiable – The analysis process might take months or even years, so it is
unreasonable to assume that the customer's needs will not change; the
requirements specification must therefore be modifiable.
Traceable – For any given requirement, you must be able to trace it back to its
high-level parent requirement.
Within the requirements specification, the flow-down principle requires that each
low-level requirement be linked to a single high-level parent.
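The flow-down principle can be checked mechanically: every low-level requirement must name exactly one high-level parent, and that parent must exist. A minimal sketch of such a check follows; the requirement identifiers and texts are hypothetical.

```python
# Flow-down check: every low-level requirement must link to one existing
# high-level parent. The requirement IDs and texts are hypothetical examples.
high_level = {
    "HL-1": "The system shall capture learner payments",
    "HL-2": "The system shall print receipts",
}
low_level = {
    "LL-1": {"text": "Validate deposit slip fields", "parent": "HL-1"},
    "LL-2": {"text": "Update the learner's record",  "parent": "HL-1"},
    "LL-3": {"text": "Stamp and sign the receipt",   "parent": "HL-2"},
}

def check_flow_down(high_level, low_level):
    """Return a list of traceability errors; an empty list means every
    low-level requirement traces to a known high-level parent."""
    errors = []
    for rid, req in low_level.items():
        if req["parent"] not in high_level:
            errors.append(f"{rid}: unknown parent {req['parent']}")
    return errors

print(check_flow_down(high_level, low_level))  # [] when fully traceable
```

Because each low-level requirement stores a single `parent` field, the "single high-level parent" rule is enforced by the data structure itself; the check only has to confirm the parent exists.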
CHAPTER NINE:
The logical requirements tell you what the system must do. The next step is to
determine exactly how the requirements must be met.
During the transitional period between the end of analysis and the
beginning of design, the analyst generates the information management
needs by:
1. Identifying several alternative high-level physical designs for
meeting the requirements.
2. Documenting each alternative.
3. Estimating the costs and benefits of each alternative.
4. Performing a cost/benefit analysis for each alternative.
5. Recommending the preferred alternative, and
6. Preparing a development plan for the preferred alternative.
9.2 ALTERNATIVES
Generating alternatives is easy; the problem is generating realistic alternatives.
The best strategy is to give the user three alternatives and recommend the
alternative that best solves the problem as you understand it. A high-cost option
shows the user the additional benefits that might be realized by spending a bit
more money.
Adding a low-cost alternative gives the user a sense of the benefits that might
be lost if not enough is invested.
SYSTEM FLOWCHARTS
A system flowchart is another tool for documenting a physical system.
SYMBOLS
EXAMPLE
COMPONENT         IMPLEMENTATION
PAPER FORM        DEPOSIT SLIP SUPPLIED BY LEARNERS
INPUT/OUTPUT      DEPOSIT SLIP INFORMATION IS CAPTURED
PROCESS           ACCOUNTS PACKAGE USED TO CAPTURE DEPOSIT SLIP DATA
                  AND UPDATE LEARNERS' RECORDS
PAPER FORM        RECEIPT IS PRINTED AFTER PAYMENT IS PROCESSED
MANUAL PROCESS    RECEIPT HAS TO BE STAMPED, SIGNED AND TORN IN HALF
                  BEFORE LEARNER RECEIVES IT
ON-LINE STORAGE   LEARNERS' FEE RECORDS
DATABASE          LEARNERS' DETAILS STORED IN DATABASE FORMAT (RELATIONAL)
CHAPTER TEN:
EVALUATING ALTERNATIVES
Most organizations have a limited supply of capital and resources, and a cost/
benefit analysis is the basis for allocating these resources. Each project is
treated as a potential investment, and only those projects that promise a high
return on investment are selected.
Development costs are one-time costs that occur before the system is released
to the user. They include the labor and hardware associated with problem
definition, the feasibility study, analysis, design, development and testing.
Operating costs are those costs that begin after the system is released and last
for the lifetime of the system. They include personnel, maintenance, utilities,
insurance and similar costs.
New systems are developed to obtain benefits. Tangible benefits are measured
in financial terms and usually imply reduced operating costs, enhanced revenue
or both.
SELECTING A PLATFORM
If the list includes a computer, the next step is to select a platform. A platform is
defined by a specific set of hardware and an operating system. Once you choose a
platform, your choice of software and hardware peripherals is constrained by
the platform.
PURCHASED SOFTWARE
The cost of software can be obtained from vendors, newspapers etc. The
problem is not finding appropriate software but choosing from the host of
alternatives available.
SOFTWARE DEVELOPMENT
This cost includes the programmers' time to design, write and test the system.
Software development costs can also be estimated using the COCOMO formula.
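Basic COCOMO estimates effort (in person-months) from the size of the delivered code in thousands of lines (KLOC) as E = a·KLOC^b, and elapsed schedule as T = c·E^d. The constants below are the standard Basic COCOMO values for an "organic" (small, familiar, in-house) project; the 32 KLOC figure is a hypothetical example, and the result is a rough sketch of the technique rather than a tuned estimate.

```python
# Basic COCOMO estimate for an "organic" project.
# Standard Basic COCOMO constants for organic mode: a=2.4, b=1.05, c=2.5, d=0.38.
def cocomo_organic(kloc):
    effort = 2.4 * kloc ** 1.05        # effort in person-months
    schedule = 2.5 * effort ** 0.38    # elapsed time in months
    return effort, schedule

effort, months = cocomo_organic(32)    # a hypothetical 32 KLOC system
print(f"effort   ~ {effort:.1f} person-months")
print(f"schedule ~ {months:.1f} months")
```

Dividing effort by schedule gives the average staffing level, which is one common sanity check on a development cost estimate.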
VERIFICATION
One way of verifying costs is to use the Top-down estimating technique by
comparing the proposed estimates to a typical project.
CONTINGENCIES
Contingencies are a cost factor added to the cost estimate to cover the risks
associated with the project.
Operating costs associated with a new system are used to estimate the benefits
of the system. The key to estimating operating costs is to identify them. Go through
a checklist and identify any cost factor that is likely to change as a result of
the new system. Then estimate the costs for that factor in both the old and new
systems and compute the difference.
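The "compute the difference" step can be sketched as follows: list the cost factors, estimate each under the old and new systems, and sum the differences; dividing the one-time development cost by the annual saving then gives a simple payback period. All the figures below are invented for illustration.

```python
# Incremental operating-cost analysis and simple payback period.
# All cost figures are hypothetical.
annual_costs = {                 # factor: (old system, new system), per year
    "personnel":   (120_000, 90_000),
    "maintenance": (10_000, 18_000),   # a factor can also increase
    "utilities":   (8_000, 9_000),
}
development_cost = 75_000        # one-time cost, incurred before release

# Annual tangible benefit = sum of (old cost - new cost) over all factors.
annual_saving = sum(old - new for old, new in annual_costs.values())
payback_years = development_cost / annual_saving

print(f"annual saving:  {annual_saving}")
print(f"payback period: {payback_years:.1f} years")
```

Note that maintenance and utilities rise under the new system; the checklist approach catches these increases, which a benefits-only estimate would miss.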
EXAMPLE:
Given a set of alternatives, their costs, benefits and risks, the analyst selects the
best option, explains why the others were rejected, and prepares an estimate
of the resources needed to design and develop the recommended alternative.
The complete package is presented to a formal technical and management review.
During the review the project can be killed, postponed, or carried forward.
CHAPTER 11:
A user interface is the point in the system where a human being interacts with a
computer. The interface can incorporate hardware, software, procedures and
data. The interaction can be direct; for example, a user might access a computer
through a screen and a keyboard. Printed reports, and forms designed to capture
data for input, are indirect user interfaces.
The first step in user interface design is to design the processes, procedures,
and other tasks the user must perform. Given a set of tasks, you can design the
necessary screens, reports and forms. Next, the dialogues that control the
exchange of information must be designed. Finally a user manual is written to
document the various procedures, screens, reports, forms and dialogues.
IDENTIFYING PROCESSES
Much of the information needed to design the user processes is collected during
analysis.
DOCUMENTING PROCESSES
As you near the end of the SDLC and start detailed design, your focus is now
on how each process should be performed.
DESIGNING REPORTS
Reports present the results of a query – output.
DESIGNING FORMS
Forms are typically used to capture data. A form image can be displayed on a
screen and used as a template for data entry.
DESIGNING SCREENS
DIALOGUES
A dialogue is the exchange of information between the computer and the user.
A dialogue defines a set of screens, menus, reports and forms and the order in
which they are accessed. The dialogue is a merger of processes and the screens
that support the processes. Dialogues must be designed for efficiency, and
efficiency is a function of response time. Response time is the time between the
issuing of a command and the appearance of the results on the screen.
Response time includes the following elements:
1. System response time is the traditional definition.
2. The display rate is a measure of how quickly the complete screen appears.
3. User scan/read time is a measure of how long it takes the user to read
and understand the screen.
4. User think time includes the time taken for the user to evaluate the
screen and the time during which the user decides what to do.
5. User response time is the time during which the user performs a physical
action (presses a key, etc.) and the time during which the user waits for a
response.
6. Error time is the time spent making and recovering from errors.
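The total time for one interaction cycle is simply the sum of these six elements, which makes it easy to see which element dominates. A minimal sketch, with invented timings in seconds:

```python
# Perceived time for one interaction cycle as the sum of its elements.
# The timings (in seconds) are invented for illustration.
response_elements = {
    "system_response": 0.8,   # 1. traditional response time
    "display_rate":    0.4,   # 2. time to paint the complete screen
    "scan_read":       3.0,   # 3. reading and understanding the screen
    "think":           2.5,   # 4. evaluating and deciding what to do
    "user_response":   1.5,   # 5. physical action plus waiting
    "error":           0.6,   # 6. making and recovering from errors
}
total = sum(response_elements.values())
print(f"total cycle time: {total:.1f} s")
```

With these figures the human elements (reading, thinking, responding) dwarf the system elements, which is why screen layout often matters more to dialogue efficiency than raw processing speed.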
The last step in the user interface design process is to design the user manual
to document the procedures, the reports, the forms, the screens and the
dialogues. The user manual should outline the procedures and explain how to
perform the necessary tasks.
ARTICLE 1
by Talin
One way around this problem is to create user models. [TOG91] has an excellent
chapter on brainstorming towards creating "profiles" of possible users. The result
of this process is a detailed description of one or more "average" users, with
specific details such as:
Armed with this information, we can then proceed to answer the question: How
do we leverage the user's strengths and create an interface that helps them
achieve their goals?
Another way of answering this question is to talk to some real users. Direct
contact between end-users and developers has often radically transformed the
development process.
Frequently a complex software system can be understood more easily if the user
interface is depicted in a way that resembles some commonplace system. The
ubiquitous "Desktop metaphor" is an overused and trite example. Another is the
tape deck metaphor seen on many audio and video player programs. In addition
to the standard transport controls (play, rewind, etc.), the tape deck metaphor
can be extended in ways that are quite natural, with functions such as time-
counters and cueing buttons. This concept of "extendibility" is what distinguishes
a powerful metaphor from a weak one.
Software developers tend to have little difficulty keeping large, complex mental
models in their heads. But not everyone prefers to "live in their heads" -- many
people prefer to concentrate on analyzing the sensory details of the environment
rather than spending large amounts of time refining and perfecting abstract
models. Both types of personality (labeled "Intuitive" and "Sensable" in the Myers-
Briggs personality classification) can be equally intelligent, but focus on different
aspects of life. According to some psychological studies, "Sensables" outnumber
"Intuitives" in the general population by about three to one.
Intuitives prefer user interfaces that utilize the power of abstract models --
command lines, scripts, plug-ins, macros, etc. Sensables prefer user interfaces
that utilize their perceptual abilities -- in other words, they like interfaces where
the features are "up front" and "in their face". Toolbars and dialog boxes are an
example of interfaces that are pleasing to this personality type.
Of course, there may be cases where you don't wish to expose a feature right
away, because you don't want to overwhelm the beginning user with too much
detail. In this case, it is best to structure the application like the layers of an
onion, where peeling away each layer of skin reveals a layer beneath. There are
various levels of "hiding": Here's a partial list of them in order from most exposed
to least exposed:
Internal consistency means that the program's behaviors make "sense" with
respect to other parts of the program. For example, if one attribute of an object
(e.g. color) is modifiable using a pop-up menu, then it is to be expected that
other attributes of the object would also be editable in a similar fashion. One
should strive towards the principle of "least surprise".
External consistency means that the program is consistent with the environment
in which it runs. This includes consistency with both the operating system and
the typical suite of applications that run within that operating system. One of the
most widely recognized forms of external coherence is compliance with user-
interface standards. There are many others, however, such as the use of
standardized scripting languages, plug-in architectures or configuration
methods.
One of the most important kinds of state is the current selection, in other words
the object or set of objects that will be affected by the next command. It is
important that this internal state be visualized in a way that is consistent, clear,
and unambiguous. For example, one common mistake seen in a number of
multi-document applications is to forget to "dim" the selection when the window
goes out of focus. The result of this is that a user, looking at several windows at
once, each with a similar-looking selection, may be confused as to exactly which
selection will be affected when they hit the "delete" key. This is especially true if
the user has been focusing on the selection highlight, and not on the window
frame, and consequently has failed to notice which window is the active one.
(Selection rules are one of those areas that are covered poorly.)
Once a user has become experienced with an application, she will start to build
a mental model of that application. She will be able to predict with high accuracy
what the results of any particular user gesture will be in any given context. At
this point, the program's attempts to make things "easy" by breaking up complex
actions into simple steps may seem cumbersome. Additionally, as this mental
model grows, there will be less and less need to look at the "in your face"
exposure of the application's feature set. Instead, pre-memorized "shortcuts"
should be available to allow rapid access to more powerful functions.
There are various levels of shortcuts, each one more abstract than its
predecessor. For example, in the emacs editor commands can be invoked
directly by name, by menu bar, by a modified keystroke combination, or by a
single keystroke. Each of these is more "accelerated" than its predecessor.
There can also be alternate methods of invoking commands that are designed
to increase power rather than to accelerate speed. A "recordable macro" facility
is one of these, as is a regular-expression search and replace. The important
thing about these more powerful (and more abstract) methods is that they should
not be the most exposed methods of accomplishing the task. This is why emacs
has the non-regexp version of search assigned to the easy-to-remember "C-s"
key.
The human eye is a highly non-linear device. For example, it possesses edge-
detection hardware, which is why we see Mach bands whenever two closely
matched areas of color come into contact. It also has motion-detection hardware.
As a consequence, our eyes are drawn to animated areas of the display more
readily than static areas. Changes to these areas will be noticed readily.
The mouse cursor is probably the most intensely observed object on the screen
-- it's not only a moving object, but mouse users quickly acquire the habit of
tracking it with their eyes in order to navigate. This is why global state
changes are often signaled by changes to the appearance of the cursor, such
as the well-known "hourglass cursor". It's nearly impossible to miss.
Many of the operations within a user interface require both a subject (an object
to be operated upon), and a verb (an operation to perform on the object). This
naturally suggests that actions in the user interface form a kind of grammar. The
grammatical metaphor can be extended quite a bit, and there are elements of
some programs that can be clearly identified as adverbs, adjectives and such.
The two most common grammars are known as "Action->Object" and "Object-
>Action". In Action->Object, the operation (or tool) is selected first. When a
subsequent object is chosen, the tool immediately operates upon the object. The
selection of the tool persists from one operation to the next, so that many objects
can be operated on one by one without having to re-select the tool. Action-
>Object is also known as "modality", because the tool selection is a "mode"
which changes the operation of the program. An example of this style is a paint
program -- a tool such as a paintbrush or eraser is selected, which can then
make many brush strokes before a new tool is selected.
In the Object->Action case, the object is selected first and persists from one
operation to the next. Individual actions are then chosen which operate on the
currently selected object or objects. This is the method seen in most word
processors -- first a range of text is selected, and then a text style such as bold,
italic, or a font change can be selected. Object->Action has been called "non-
modal" because all behaviors that can be applied to the object are always
available. One powerful type of Object->Action is called "direct manipulation",
where the object itself is a kind of tool -- an example is dragging the object to a
new position or resizing it.
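The two grammars can be sketched as a pair of tiny interaction loops: in Action->Object the selected tool persists across clicks, while in Object->Action the selection persists across commands. The class and method names below are invented for illustration.

```python
# Two UI grammars: persistent tool (Action->Object) vs persistent selection
# (Object->Action). The class and method names are invented examples.

class PaintCanvas:
    """Action->Object: the chosen tool is a mode that persists across clicks."""
    def __init__(self):
        self.tool = None
        self.log = []
    def select_tool(self, tool):
        self.tool = tool                       # persists until changed
    def click(self, obj):
        self.log.append(f"{self.tool} applied to {obj}")

class TextEditor:
    """Object->Action: the selection persists across commands."""
    def __init__(self):
        self.selection = None
        self.log = []
    def select(self, text_range):
        self.selection = text_range            # persists until changed
    def apply(self, style):
        self.log.append(f"{style} applied to {self.selection}")

paint = PaintCanvas()
paint.select_tool("eraser")
paint.click("circle"); paint.click("square")   # one tool, many objects

editor = TextEditor()
editor.select("chars 10-20")
editor.apply("bold"); editor.apply("italic")   # one selection, many actions
print(paint.log)
print(editor.log)
```

The asymmetry is visible in which field persists: `tool` in the modal paint canvas, `selection` in the non-modal editor.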
one nail, put down the hammer, pick up the measuring tape, mark the position
of the next nail, pick up the drill, etc.
The essay goes on to describe in detail the different strategies for answering
these questions, and shows how each of these questions requires a different
sort of help interface in order for the user to be able to adequately phrase the
question to the application.
Ted Nelson once said "Using DOS is like juggling with straight razors. Using a
Mac is like shaving with a bowling pin."
Each human mind has an "envelope of risk", that is to say a minimum and
maximum range of risk-levels which they find comfortable. A person who finds
herself in a situation that is too risky for her comfort will generally take steps to
reduce that risk. Conversely, when a person's life becomes too safe -- in other
words, when the risk level drops below the minimum threshold of the risk
envelope -- she will often engage in actions that increase her level of risk.
This comfort envelope varies for different people and in different situations. In
the case of computer interfaces, a level of risk that is comfortable for a novice
user might make a "power-user" feel uncomfortably swaddled in safety.
It's important for new users that they feel safe. They don't trust themselves or
their skills to do the right thing. Many novice users think poorly not only of their
technical skills, but of their intellectual capabilities in general (witness the
popularity of the "...for Dummies" series of tutorial books.) In many cases these
fears are groundless, but they need to be addressed. Novice users need to be
assured that they will be protected from their own lack of skill. A program with
no safety net will make this type of user feel uncomfortable or frustrated to the
point that they may cease using the program. The "Are you sure?" dialog box
and multi-level undo features are vital for this type of user.
At the same time, an expert user must be able to use the program as a virtuoso.
She must not be hampered by guard rails or helmet laws. However, expert users
are also smart enough to turn off the safety checks -- if the application allows it.
This is why "safety level" is one of the more important application configuration
options.
Each user action takes place within a given context -- the current document, the
current selection, the current dialog box. A set of operations that is valid in one
context may not be valid in another. Even within a single document, there may
be multiple levels -- for example, in a structured drawing application, selecting a
text object (which can be moved or resized) is generally considered a different
state from selecting an individual character within that text object.
It's usually a good idea to avoid mixing these levels. For example, imagine an
application that allows users to select a range of text characters within a
document, and also allows them to select one or more whole documents (the
latter being a distinct concept from selecting all of the characters in a document).
In such a case, it's probably best if the program disallows selecting both
characters and documents in the same selection. One unobtrusive way to do
this is to "dim" the selection that is not applicable in the current context. In the
example above, if the user had a range of text selected, and then selected a
document, the range of selected characters could become dim, indicating that
the selection was not currently pertinent. The exact solution chosen will of course
depend on the nature of the application and the relationship between the
contexts.
It's not necessary that each program be a visual work of art. But it's important
that it not be ugly. There are a number of simple principles of graphical design
that can easily be learned, the most basic of which was coined by artist and
science fiction writer William Rotsler: "Never do anything that looks to someone
else like a mistake." The specific example Rotsler used was a painting of a
Conan-esque barbarian warrior swinging a mighty broadsword. In this picture,
the tip of the broadsword was just off the edge of the picture. "What that looks
like", said Rotsler, "is a picture that's been badly cropped. They should have had
the tip of the sword either clearly within the frame or clearly out of it."
In many cases a good software designer can spot fundamental defects in a user
interface. However, there are many kinds of defects which are not so easy to
spot, and in fact an experienced software designer is often less capable of
spotting them than the average person. In other cases, a bug can only be
detected while watching someone else use the program.
User-interface testing, that is, the testing of user-interfaces using actual end-
users, has been shown to be an extraordinarily effective technique for
discovering design defects. However, there are specific techniques that can be
used to maximize the effectiveness of end-user testing. These are outlined in
both [TOG91] and [LAUR91] and can be summarized in the following steps:
Set up the observation. Design realistic tasks for the users, then recruit
end-users who have the same experience level as users of your
product (avoid recruiting users who are already familiar with your
product, however).
Describe to the user the purpose of the observation. Let them know that
you're testing the product, not them, and that they can quit at any time.
Make sure that they understand if anything bad happens, it's not their
fault, and that it's helping you to find problems.
Talk about and demonstrate the equipment in the room.
Explain how to "think aloud". Ask them to verbalize what they are thinking
about as they use the product, and let them know you'll remind them to
do so if they forget.
Some of the most valuable insights can be gained by simply watching other
people attempt to use your program. Others can come from listening to their
opinions about the product. Of course, you don't have to do exactly everything
they say. It's important to realize that each of you, user and developer, has only
part of the picture. The ideal is to take a lot of user opinions, plus your insights
as a developer and reduce them into an elegant and seamless whole -
- a design which, though it may not satisfy everyone, will satisfy the greatest
needs of the greatest number of people.
One must be true to one's vision. A product built entirely from customer feedback
is doomed to mediocrity, because what users want most are the features that
they cannot anticipate.
But a single designer's intuition about what is good and bad in an application is
insufficient. Program creators are a small, and not terribly representative, subset
of the general computing population.
Most people have a biased idea as to what the "average" person is
like. This is because most of our interpersonal relationships are in some
way self-selected. It's a rare person whose daily life brings them into
contact with other people from a full range of personality types and
backgrounds. As a result, we tend to think that others think "mostly like
we do." Designers are no exception.
Most people have some sort of core competency, and can be expected
to perform well within that domain.
The skill of using a computer (also known as "computer literacy") is
actually much harder than it appears.
The lack of "computer literacy" is not an indication of a lack of basic
intelligence. While native intelligence does contribute to one's ability to
use a computer effectively, there are other factors which seem to be just
as important.
Bibliography
[TOG91] Tog On Interface, Bruce Tognazzini, Addison-Wesley, 1991, ISBN 0-
201-60842-1
[LAUR91] The Art of Human Computer Interface Design, Brenda Laurel,
Addison-Wesley, 1991, ISBN 0-201-51797-3
The Psychology of Everyday Things, Don Norman, Harper-Collins 1988, ISBN
0-465-06709-3
The Macintosh Human Interface Guidelines, Apple Computer Staff, Addison-
Wesley 1993, ISBN 0-201-62216-5
The Amiga User Interface Style Guide, Commodore-Amiga, Addison-Wesley
1991, ISBN 0-201-57757-7
Principles of user interface design are intended to improve the quality of user
interface design. According to Larry Constantine and Lucy Lockwood in their
usage-centered design, these principles include:
The tolerance principle: The design should be flexible and tolerant, reducing
the cost of mistakes and misuse by allowing undoing and redoing, while also
preventing errors wherever possible by tolerating varied inputs and
sequences and by interpreting all reasonable actions.
The reuse principle: The design should reuse internal and external
components and behaviors, maintaining consistency with purpose rather than
merely arbitrary consistency, thus reducing the need for users to rethink and
remember.
Design a VB form that allows learners' details to be captured. Use your own
fields when designing this form. You need buttons to save the form as well as
buttons to perform navigation.
CHAPTER 12: SYSTEM TESTING
The following examples are different types of testing that should be considered
during System testing:
Scalability testing
Sanity testing
Smoke testing
Exploratory testing
Ad hoc testing
Regression testing
Installation testing
Maintenance testing
Recovery testing and failover testing.
Although different testing organizations may prescribe different tests as part of
System testing, this list serves as a general framework or foundation to begin
with.
THE TEST PLAN
A test plan is a document detailing a systematic approach to testing a system
such as a machine or software. The plan typically contains a detailed
understanding of what the eventual workflow will be.
The purpose of the test plan is to ensure that the system does what it was
designed to do. The heart of a test plan is the test data.
TESTING LEVELS
Berea College of Technology
Information Systems 201
INTERFACE TEST
Once the system test plan has been developed, the analyst turns to networks,
files, databases, system software, courier services, and other interfaces that
link two or more components.
TESTING HARDWARE
Many large companies hire specialists to test, install and maintain their software.
components come with
Basic electronic functions are tested first. Many hardware
their own diagnostic routines which should be tested.
Many hardware tests include a burn-n period because of startup failures.
TESTING SOFTWARE
Programs and program modules are subjected to a black box test.
Given a set of input data, the expected output is predicted.
The data is then input and the results are captured.
If the actual results match the predicted results, then the program has passed
the test.
Each module is tested individually and then the entire program is tested.
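The black box procedure described above can be sketched as a small test driver. The `compute_discount` function is a hypothetical stand-in for the module under test:

```python
# Black box test sketch: the module's internals are hidden, so we only
# compare actual outputs against outputs predicted before the run.

def compute_discount(amount):
    # Hypothetical stand-in for the real module being tested.
    return amount * 0.9 if amount >= 100 else amount

# (input, predicted output) pairs prepared before the test run.
test_cases = [(50, 50), (100, 90.0), (200, 180.0)]

def run_black_box_test(module, cases):
    failures = []
    for input_value, predicted in cases:
        actual = module(input_value)
        if actual != predicted:
            failures.append((input_value, predicted, actual))
    return failures  # an empty list means the module passed

print(run_black_box_test(compute_discount, test_cases))  # []
```

Each module would be driven through such a harness individually before the program as a whole is tested.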
A key objective of testing is to try to break the system. Good test data anticipates
everything that could possibly go wrong and provides values that potentially
cause every conceivable error. Forcing errors during the testing process
increases the likelihood that they will be discovered and corrected before the
system is released to the user.
The test data should include historical data, hypothetical data, and real data.
Historical data (previously processed data) are necessary to check old system
and new system compatibility. Hypothetical data, or simulated data, are created
specifically for testing purposes. Real data are provided by the user and reflect
the system’s actual operating environment. Some of the test data should
represent normal or typical conditions. Other data should focus on the extremes
and incorporate both legal and illegal values.
Value analysis
Value analysis generates test data based on the data values. Range constraint
analysis, or boundary analysis, suggests test data to represent such extreme
values as upper bounds, lower bounds, and other exceptional values (e.g., a
negative number or zero). Typically, both in-range and out-of-range values are
included. Format constraint analysis focuses on data type; for example, a zero
or a numeric digit might be placed in an alphabetic field, non-digits might be
inserted into a numeric field, or a value other than F or M might be recorded in a
single-character sex or gender field. Length constraint analysis generates test
data with too many or too few characters or digits; this technique is useful for
testing such fixed length fields as a social security number or a telephone
number.
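The three constraint analyses above can be sketched as small test-data generators. The numeric "quantity" field and its valid range of 1 to 999 are illustrative assumptions:

```python
# Value analysis sketch: deriving test data from a field's constraints.
# The "quantity" field and its 1-to-999 range are illustrative assumptions.

def boundary_values(low, high):
    # Range constraint analysis: bounds, just-outside values, and
    # exceptional values such as zero and a negative number.
    return [low - 1, low, low + 1, high - 1, high, high + 1, 0, -1]

def format_values():
    # Format constraint analysis: wrong data types for a numeric field.
    return ["A", "", "12x", None]

def length_values(width):
    # Length constraint analysis: too few, exactly enough, and too many
    # digits for a fixed-length field.
    return ["9" * (width - 1), "9" * width, "9" * (width + 1)]

print(boundary_values(1, 999))  # [0, 1, 2, 998, 999, 1000, 0, -1]
print(length_values(3))         # ['99', '999', '9999']
```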
Value analysis should be performed on the algorithms as well as the input data.
For example, a set of test data values might include two out of range parameters
(one high and one low) that, taken together, produce an in-range answer. Other
algorithm-based test data might reflect all possible extreme (but legal) value
combinations; for example, all parameters at the upper limit, all parameters at
the lower limit, one high and all others low, and so on.
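Enumerating every combination of extreme parameter values, as described above, can be sketched with a Cartesian product. The parameter names and limits below are illustrative assumptions:

```python
# Algorithm-based value analysis sketch: generate one test case for
# every combination of each parameter at its lower or upper limit.
# The parameters and their limits are illustrative assumptions.
from itertools import product

limits = {"rate": (0.0, 0.25), "hours": (0, 80), "bonus": (0, 5000)}

# 2 choices (low/high) for each of 3 parameters gives 2**3 = 8 cases.
cases = [dict(zip(limits, combo)) for combo in product(*limits.values())]

print(len(cases))  # 8
print(cases[0])    # {'rate': 0.0, 'hours': 0, 'bonus': 0}
print(cases[-1])   # {'rate': 0.25, 'hours': 80, 'bonus': 5000}
```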
Data analysis
A variation called structured data analysis testing focuses on data structures, the
relationships among the data, and such unique data as record keys. For
example, structured data analysis testing can be used to evaluate the order of
data within a file, a table, or a relation by creating test data with all possible
primary and secondary key combinations.
Volume analysis
A third variation is called data volume analysis. Testing such parameters as
response time under peak load conditions calls for large volumes of test data.
Data volume can be achieved by replicating and reprocessing test data or by
using historical data.
Compatibility analysis
Some applications are designed to access data from multiple versions of a file
or a database. For example, imagine a set of old data files developed using the
COBOL delimited file format and a new database designed for SQL access.
Occasionally, the system might be asked to convert the old data file structure to
support a query or to generate a report, and some new transactions might trigger
updates to the original file. Test data are needed to force the program to obtain
input from and send output to both files.
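The compatibility scenario above can be sketched as a loader that accepts both the old and the new layout, so test data can force the program to read from each. The field names and comma-delimited legacy layout are illustrative assumptions:

```python
# Compatibility test sketch: the same logical record in an old
# comma-delimited layout and a new dict-based (database-style) layout.
# Field names and layouts are illustrative assumptions.

def parse_legacy(line):
    # Old delimited file format: account,name,balance
    account, name, balance = line.split(",")
    return {"account": account, "name": name, "balance": float(balance)}

def load_record(source):
    # Accept input from either the old file format or the new structure,
    # so one code path can be exercised with both kinds of test data.
    if isinstance(source, str):
        return parse_legacy(source)
    return dict(source)

old = load_record("1001,Dlamini,250.00")
new = load_record({"account": "1001", "name": "Dlamini", "balance": 250.0})
print(old == new)  # True
```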
Partition analysis
Different types of systems call for special test data to test system-specific
parameters. For example, symbolic data are essential for testing expert systems,
real-time systems require time-varying and environment-dependent data, data
communication systems require data to test transmission errors, and so on.
REVIEW
Review is "a process or meeting during which artifacts of a software product
are examined by project stakeholders, user representatives, or other
interested parties for feedback or approval".
A software review can cover technical specifications, designs, source code,
user documentation, support and maintenance documentation, test plans,
test specifications, standards, and any other type of work product, and it
can be conducted at any stage of the software development life cycle.
The purpose of conducting a review is to minimize the defect ratio as early as
possible in the software development life cycle.
As a general principle, the earlier a document is reviewed, the smaller the
impact its defects will have on downstream activities and their work
products. The cost of fixing a defect rises sharply once the product has been
released.
A review can be formal or informal. Informal reviews are referred to as
walkthroughs and formal reviews as inspections.
WALKTHROUGH
Walkthrough: A walkthrough is a method of conducting an informal group or
individual review, in which a designer or programmer leads members of
the development team and other interested parties through a software
product. The participants ask questions and make comments about
possible errors, violations of development standards, and other problems,
or may suggest improvements to the article. A walkthrough can be
pre-planned or conducted as needed, and generally the people working on
the work product are involved in the walkthrough process.
The Purpose of walkthrough is to:
· Find problems
· Discuss alternative solutions
· Demonstrate how the work product meets all requirements.
Recorder: who notes all anomalies (potential defects), decisions, and action
items identified during the walkthrough meeting, and normally generates the
minutes of the meeting at the end of the walkthrough session.
Author: who presents the software product in step-by-step manner at the walk-
through meeting, and is probably responsible for completing most action items.
WALKTHROUGH PROCESS
The Author describes the artifact to be reviewed to reviewers during the
meeting.
Reviewers present comments, possible defects, and improvement
suggestions to the author. The recorder records all defects and suggestions
raised during the walkthrough meeting.
Based on reviewer comments, the author performs any necessary rework of
the work product.
The recorder prepares the minutes of the meeting and sends them to the
relevant stakeholders. The leader normally monitors the overall walkthrough
activities as per the defined company process or responsibilities for
conducting reviews, including following up on commitments against action
items.
INSPECTION
An inspection is a formal, rigorous, in-depth group review designed to
identify problems as close to their point of origin as possible.
Inspection is a recognized industry best practice for improving the quality of
a product and for improving productivity. An inspection is a formal review,
and the need for it is generally predefined at the start of product planning.
The objectives of the inspection process are to:
1) Find problems at the earliest possible point in the software development
process
2) Verify that the work product meets its requirements
3) Ensure that the work product has been presented according to
predefined standards
4) Provide data on product quality and process effectiveness
5) Build technical knowledge and skill among team members by reviewing
the output of other people
6) Increase the effectiveness of software testing.
There are four roles in an Inspection:
Inspection Leader: The inspection leader shall be responsible for
administrative tasks pertaining to the inspection, shall be responsible for
planning and preparation, shall ensure that the inspection is conducted
in an orderly manner and meets its objectives, and should be responsible
for collecting inspection data.
Recorder: The recorder should record inspection data required for process
analysis. The inspection leader may be the recorder.
Reader: The reader shall lead the inspection team through the software
product in a comprehensive and logical fashion, interpreting sections of the
work product and highlighting important aspects
Author: The author shall be responsible for the software product meeting its
inspection entry criteria, for contributing to the inspection based on special
understanding of the software product, and for performing any rework
required to make the software product meet its inspection exit criteria.
Inspection Process:
· Planning
· Overview
· Preparation
· Examination meeting
Planning:
The inspection leader performs the following tasks in the planning phase:
· Determine which work products need to be inspected
· Determine if a work product that needs to be inspected is ready to
be inspected
· Identify the inspection team
· Determine if an overview meeting is needed.
The moderator ensures that all inspection team members have had
inspection process training. The moderator obtains a commitment from
each team member to participate. This commitment means the person
agrees to spend the time required to perform his or her assigned role on
the team. The moderator also identifies the review materials required for the
inspection and distributes them to the relevant stakeholders.
POTENTIAL PROBLEMS
The error report compiled during the inspection process can be a point of
concern: an analyst who fears criticism or misuse of the report may postpone
the inspection process.
VOCABULARY LIST
The person who plans and designs a system is called a systems analyst.
A Methodology is a set of tools used in the context of clearly defined steps that end
with specific, measurable exit criteria.
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle
in systems engineering, information systems and software engineering, is the process
of creating or altering systems, and the models and methodologies that people use to
develop these systems.
CASE tools are a class of software that automate many of the activities involved in
various life cycle phases.
A Cause-and-Effect Diagram, also known as a fishbone (or Ishikawa) diagram
after its originator, is used to document possible causes and secondary
symptoms.
A process is an activity that transforms data in some way. Within a given process
data may be collected, recorded, moved, manipulated, sorted etc.
A set of related data elements forms a data structure, or composite data element.
The 'Context Diagram ' is an overall, simplified, view of the target system, which
contains only one process box, and the primary inputs and outputs.
Processes are 'black boxes' - we don't know what is in them until they are
decomposed. Processes transform or manipulate input data to produce output data.
A Data Store is a location where data is held temporarily or permanently.
CHAPTER 6: PROTOTYPING
A prototype is a model for an intended system.
Black box — A routine, module, or component whose inputs and outputs are known,
but whose contents are hidden.
Configuration item — A composite entity that decomposes into specific hardware and
software components; in a data flow diagram, a functional primitive that appears at the
lowest level of decomposition.
Configuration item level — An imaginary line that links the system’s configuration
items; a system’s physical components lie just below the configuration item level.
Operating costs are those costs that begin after the system is released and last
for the lifetime of the system.
Tangible benefits are measured in financial terms and usually imply reduced
operating costs, enhanced revenue or both.
Contingencies are a cost factor added to the cost estimate to cover risks
associated with the project.
A dialogue is the exchange of information between the computer and the user.
Response time is the time between the issuing of a command and when
the results appear on screen.
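Response time as defined above can be measured directly in a test harness. The `process_command` function below is a trivial stand-in for the system under test:

```python
# Measuring response time: the elapsed wall-clock time between issuing
# a command and receiving its result. process_command is a stand-in
# for the system under test, assumed for illustration.
import time

def process_command(command):
    return command.upper()  # trivial stand-in for real processing

start = time.perf_counter()
result = process_command("list accounts")
response_time = time.perf_counter() - start

print(result)  # LIST ACCOUNTS
print(f"response time: {response_time:.6f} s")
```

In a volume test, the same measurement would be repeated under peak load to check that response time stays within its required limit.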
The system test is planned first and executed last; its purpose is to ensure
that the system meets its requirements.
BIBLIOGRAPHY
Davis, W. Business Systems Analysis and Design. Prentice Hall, 1995.
Stair, R., Reynolds, G.W. Principles of Information Systems, 6th Edition. Thomson Course
Technology.
Bentley, L., Dittman, K., Whitten, J. Systems Analysis and Design Methods, 6th Edition.
McGraw-Hill.