
Information Systems 201

SAQA ID: 24419
CREDITS: 10

INFORMATION SYSTEMS 201


(SEMESTER 1)

Study Guide

In terms of the Copyright Act, no 98 of 1978, no part of this manual may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including photocopying, recording or by any other
information storage and retrieval system without permission in writing from the proprietor.

Berea College of Technology



OUTCOMES

At the end of this module, the learner should be able to:

 Describe an information system in detail
 Differentiate between the various information systems
 Study and analyze a problem and create a problem statement
 Analyze a company and create a summary of findings, as well as create a data dictionary
 Perform logical modeling
 Perform data modeling
 Differentiate between the logical and the data model
 Discuss the various aspects of prototyping, JAD and RAD
 Discuss object-oriented programming and analysis
 Define requirements and create a requirements specification using a specific standard
 Generate physical alternatives using system flowcharts
 Estimate development costs
 Perform cost-benefit analysis
 Create and design the user interface
 Plan and execute the system test


TABLE OF CONTENTS

CHAPTER 1: INFORMATION SYSTEMS ANALYSIS

CHAPTER 2: RECOGNIZING AND DEFINING THE PROBLEM

CHAPTER 3: INFORMATION GATHERING

CHAPTER 4: LOGICAL MODELING

CHAPTER 5: DATA MODELING

CHAPTER 6: PROTOTYPING

CHAPTER 7: PROJECT PREPARATION

CHAPTER 8: DEFINING REQUIREMENTS

CHAPTER 9: GENERATING PHYSICAL ALTERNATIVES

CHAPTER 10: EVALUATING ALTERNATIVES

CHAPTER 11: THE USER INTERFACE

CHAPTER 12: DEVELOPING THE TEST PLAN

VOCABULARY LIST

BIBLIOGRAPHY


CHAPTER ONE:

INFORMATION SYSTEMS ANALYSIS

1.1 SYSTEMS

A system is a set of components that function together in a meaningful and effective manner. There are many examples of systems around us, e.g. the human body or a car.

For a system to be effective, all parts of the system must work together efficiently. If a car had no wheels, the components of the car's system could not function together in an effective and meaningful manner.

The emphasis of this chapter is on information systems. An information system is a set of hardware, software, data, procedural, and human components that work together to generate, collect, store, retrieve, process, analyze and/or distribute information. The purpose of an information system is to get the right information to the right people at the right time.

Information systems have been around ever since people began trading and bartering; ledger books are an example of such systems.

SYSTEM TYPES AND CHARACTERISTICS

1. Electronic Data Processing Systems
- Stand-alone, serial-batch applications such as payroll, accounts receivable, etc.
- These systems included manual procedures for recording, verifying and distributing information.

2. Management Information Systems
- Had multiple users sharing a central database.

3. Decision Support Systems
- Computer-based information systems that added remote intelligence to management information systems.

4. Executive Information Systems
- These information systems added such technical innovations as enterprise modeling, parallel processing, virtual reality and multimedia to DSS.

1.2 SYSTEMS ANALYSIS

Systems analysis is the study of a business problem domain to recommend improvements and specify the business requirements for a solution.

A system begins with a user. The user needs information but often lacks technical expertise. While programmers and technical experts know a great deal about computers and technology, they may lack a clear understanding of the user's needs. The result is that the user knows the problem but lacks the skills to solve it, while the technical personnel could solve the problem if only they understood it.

USER: A person who affects or is affected by the system. Can best be described as an application expert.
END-USER: A person who utilizes all or part of the system.

Successful systems require careful planning.

1.3 THE SYSTEMS ANALYST

The person who plans and designs a system is called a systems analyst.

The systems analyst defines the user's problem, deals with management to obtain the necessary resources, translates the user's needs into technical terms and then develops a plan for coordinating the efforts of the various technical experts assigned to the job. The analyst acts as an intermediary between the user, the technical experts and management.

THE ANALYST SHOULD HAVE THE FOLLOWING QUALITIES:

1. General knowledge of business
2. Excellent problem-solving skills
3. Excellent communication skills
4. The ability to work as both a team leader and a team member
5. Flexibility and adaptability
6. Self-discipline and self-motivation
7. Excellent motivating and managing skills

1.4 METHODOLOGY

Due to the complexity of the analyst's job, it is easy to overlook something. That is why most analysts use a specific methodology.

A methodology is a set of tools used in the context of clearly defined steps that end with specific, measurable exit criteria.

1.5 SYSTEMS DEVELOPMENT LIFE CYCLE

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in systems engineering, information systems and software engineering, is the process of creating or altering systems, and the models and methodologies that people use to develop these systems. The concept generally refers to computer or information systems.


In software engineering the SDLC concept underpins many kinds of software development methodologies. These methodologies form the framework for planning and controlling the creation of an information system [1]: the software development process.

The Systems Development Life Cycle framework provides system designers and developers with a sequence of activities to follow. It consists of a set of steps or phases in which each phase of the SDLC uses the results of the previous one.

A Systems Development Life Cycle (SDLC) adheres to important phases that are essential for developers, such as planning, analysis, design, and implementation, and these are explained in the section below. A number of SDLC models have been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and synchronize and stabilize. The oldest of these, and the best known, is the waterfall model: a sequence of stages in which the output of each stage becomes the input for the next.

The basis for most systems analysis and design methodologies is the Systems
Development Life Cycle (SDLC). It is sometimes called the waterfall method
because the model visually suggests work cascading from step to step like a
series of waterfalls.


Complementary software development methods to the Systems Development Life Cycle (SDLC) are:

Software Prototyping
Joint Applications Design (JAD)
Rapid Application Development (RAD)
Extreme Programming (XP), an extension of earlier work in Prototyping and RAD
Open Source Development
End-user development
Object Oriented Programming
PHASES

1. PROBLEM DEFINITION
Identify the problem, determine the causes and outline a strategy for solving the problem. A poor problem definition will guarantee that the system will fail to solve the real problem.
Exit criteria: 1. Problem statement
2. Feasibility study

2. ANALYSIS
Determine what must be done to solve the problem. During analysis the analyst works with the user to develop a logical model that identifies essential processes, data elements, objects and other key entities.
Exit criteria: 1. Logical model requirements

3. DESIGN
Determine how the problem will be solved. Identify the primary physical components and the interfaces that link them. Next, the individual components are defined at a black-box level: you can provide inputs and predict and observe the resulting outputs, but you cannot determine the contents of the black box except by deduction. Then plan the contents of the black box by specifying how each component works.
Exit criteria: 1. Physical plan

4. DEVELOPMENT
Programs are coded, debugged, documented and tested. New hardware is selected, ordered and installed. Procedures are written. End-user documentation is prepared and users are trained.
Exit criteria: 1. Code
2. Procedures
3. Manuals






5. TESTING
Testing begins with module tests, followed by component tests and a final system test. A well-designed test plan ensures that the system meets the user's needs.
Exit criteria: 1. System test

6. IMPLEMENTATION
After the system test is completed and any problems are corrected, the system is released to the user. The user has to approve the system.
Exit criteria: 1. User sign-off
2. Review

7. MAINTENANCE
After the system is released to the user, maintenance begins. The object of this phase is to keep the system functioning at an acceptable level.
Exit criteria: 1. Ongoing
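The seven phases and their exit criteria can be summarized in a simple data structure. The sketch below is an illustration only (not part of the guide's formal material): it records each phase with its exit criteria and reports which phase the project is in, on the principle that a phase may only begin once the previous phase's exit criteria have all been produced.

```python
# Illustrative sketch: the SDLC phases from this chapter, each with its
# exit criteria, in order.
SDLC_PHASES = [
    ("Problem definition", ["Problem statement", "Feasibility study"]),
    ("Analysis", ["Logical model requirements"]),
    ("Design", ["Physical plan"]),
    ("Development", ["Code", "Procedures", "Manuals"]),
    ("Testing", ["System test"]),
    ("Implementation", ["User sign-off", "Review"]),
    ("Maintenance", ["Ongoing"]),
]

def next_phase(completed_criteria):
    """Return the first phase whose exit criteria are not yet all met."""
    done = set(completed_criteria)
    for phase, criteria in SDLC_PHASES:
        if not set(criteria) <= done:
            return phase
    return None  # every exit criterion has been met

print(next_phase([]))  # Problem definition
print(next_phase(["Problem statement", "Feasibility study"]))  # Analysis
```

This also shows why the exit criteria matter as checkpoints: without the previous phase's deliverables, the next phase simply cannot start.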

ADVANTAGES OF THE SDLC METHODOLOGY

 Acts as a memory aid.
 Communication is enhanced because this methodology imposes a consistent set of documentation standards.
 Management control: the steps and the exit criteria act as checkpoints; this helps with the development of schedules and budgets.
 Incorporates tools which make it easier for the analyst to solve the problem.
 Increases the likelihood that significant errors are detected early.

CAUTIONS/DISADVANTAGES

 Some analysts focus excessively on preparing the exit criteria instead of actually completing the work.
 Irrespective of the methodology, you will eventually encounter a problem for which the methodology is inappropriate, and it is a mistake to force the application to fit the tool.
 A good methodology makes a competent analyst productive, but no methodology can convert an unskilled person into an analyst.

1.6 COMPUTER-AIDED SOFTWARE ENGINEERING (CASE)

CASE tools are a class of software that automates many of the activities involved in various life cycle phases. For example, when establishing the functional requirements of a proposed application, prototyping tools can be used to develop graphic models of application screens to help end users visualize how an application will look after development.






 Subsequently, system designers can use automated design tools to transform the prototyped functional requirements into detailed design documents.
 Programmers can then use automated code generators to convert the design documents into code.
 Automated tools can be used collectively, as mentioned, or individually. For example, prototyping tools could be used to define application requirements that get passed to design technicians who convert the requirements into detailed designs in a traditional manner, using flowcharts and narrative documents, without the assistance of automated design software.

Existing CASE tools can be classified along four different dimensions:

1. Life-Cycle Support
2. Integration Dimension
3. Construction Dimension
4. Knowledge-Based CASE Dimension [6]

Let us take the meaning of these dimensions, along with their examples, one by one:

Life-Cycle Based CASE Tools

This dimension classifies CASE tools on the basis of the activities they support in the information systems life cycle. They can be classified as Upper or Lower CASE tools.

 Upper CASE Tools: support strategic planning and construction of the conceptual-level product and ignore the design aspect. They support traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc.
 Lower CASE Tools: concentrate on the back-end activities of the software life cycle and hence support activities like physical design, debugging, construction, testing, integration of software components, maintenance, reengineering and reverse engineering.

Integration dimension

Three main CASE integration dimensions have been proposed: [7]

1. CASE Framework
2. ICASE Tools
3. Integrated Project Support Environment (IPSE)


Workbenches

Workbenches integrate several CASE tools into one application to support specific software-process activities. Hence they achieve:

 a homogeneous and consistent interface (presentation integration);
 easy invocation of tools and tool chains (control integration);
 access to a common data set managed in a centralized way (data integration).

CASE workbenches can be further classified into the following eight classes: [4]

1. Business planning and modeling
2. Analysis and design
3. User-interface development
4. Programming
5. Verification and validation
6. Maintenance and reverse engineering
7. Configuration management
8. Project management

Environments

An environment is a collection of CASE tools and workbenches that supports the software process. CASE environments are classified based on the focus/basis of integration: [4]

1. Toolkits
2. Language-centered
3. Integrated
4. Fourth generation
5. Process-centered

Toolkits

 Toolkits are loosely integrated collections of products easily extended by aggregating different tools and workbenches.
 Typically, the support provided by a toolkit is limited to programming, configuration management and project management.
 The toolkit itself is an environment extended from a basic set of operating system tools.
 In addition, toolkits' loose integration requires the user to activate tools by explicit invocation or simple control mechanisms.
 The resulting files are unstructured and could be in different formats; therefore access to a file from different tools may require explicit file format conversion.
 However, since the only constraint for adding a new component is the format of the files, toolkits can be easily and incrementally extended.







Language-centered

The environment itself is written in the programming language for which it was developed, thus enabling users to reuse, customize and extend the environment. Integration of code in different languages is a major issue for language-centered environments. Lack of process and data integration is also a problem. The strengths of these environments include a good level of presentation and control integration.

Integrated
These environments achieve presentation integration by providing uniform,
consistent, and coherent tool and workbench interfaces. Data integration is
achieved through the repository concept: they have a specialized database
managing all information produced and accessed in the environment.

Fourth-generation
Fourth-generation environments were the first integrated environments. They
are sets of tools and workbenches supporting the development of a specific class
of program: electronic data processing and business-oriented applications. In
general, they include programming tools, simple configuration management
tools, document handling facilities and, sometimes, a code generator to produce
code in lower level languages.

Process-centered
Environments in this category focus on process integration with other integration
dimensions as starting points. A process-centered environment operates by
interpreting a process model created by specialized tools. They usually consist
of tools handling two functions:
 Process-model execution
 Process-model production


Task / Activities:

1. Explain what a system is.
2. How is a system different from an information system?
3. Explain systems analysis.
4. List the seven qualities that a systems analyst should possess.
5. Explain the Systems Development Life Cycle.
6. List the seven phases in the SDLC.
7. List three advantages of the SDLC.
8. List three disadvantages of the SDLC.


CHAPTER TWO:

RECOGNISING AND DEFINING THE PROBLEM

2.1 PROBLEM RECOGNITION

Problem recognition is the act of identifying a problem.

A problem is a difference between things as desired and things as perceived. A problem exists if there is a difference between what is actually happening and what you want to happen.

A problem is the difference between the way things are and the way the organizational goals say they should be.

DEFINING DESIRES

 In an organization, before a problem can be solved, people must first agree on the problem.
 Before members of the organization can agree on the nature of the problem, they must first share a sense of what the organization wants, i.e. explicitly define the organization's desires.
 This can be done by identifying strategic goals.
 These goals are converted to measurable objectives and critical success factors.
 The goals and objectives are the organization's official desires; they represent a shared sense of how things should be.
 If actual or expected performance is inconsistent with the goals and objectives, then there is a problem.
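Since a problem is defined here as a gap between measurable objectives and actual performance, the comparison itself is trivial to state. In the sketch below, the objective names and numbers are invented for illustration; a problem is flagged wherever actual performance falls short of the stated objective.

```python
# Hypothetical measurable objectives vs. actual performance.
objectives = {"orders shipped on time (%)": 95, "returns processed per day": 40}
actual     = {"orders shipped on time (%)": 88, "returns processed per day": 42}

# A problem exists wherever actual performance is below the objective.
problems = {name: (target, actual[name])
            for name, target in objectives.items()
            if actual[name] < target}

print(problems)  # only the on-time shipping objective is unmet
```

The point is not the code but the precondition: the comparison is only possible because the desires were first stated as measurable objectives.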

2.2 PROBLEM DEFINITION

Once a problem has been recognized, problem definition can begin. Problem definition is the act of defining a problem's causes. You cannot solve a problem unless you know what caused it.

FINDING THE CAUSE

 The first step in discovering what caused a problem is to state the symptoms in a measurable form.
 Without knowing the magnitude of the problem you cannot decide if the problem is worth solving.


 A cause-and-effect diagram, also known as the fishbone (or Ishikawa) diagram after its originator, is used to document possible causes and secondary symptoms.

The major purpose of the CE diagram is to act as a first step in problem solving by generating a comprehensive list of possible causes. It can lead to immediate identification of major causes and point to the potential remedial actions or, failing this, it may indicate the best potential areas for further exploration and analysis. At a minimum, preparing a CE diagram will lead to greater understanding of the problem.

The CE diagram was invented by Professor Kaoru Ishikawa of Tokyo University, a highly regarded Japanese expert in quality management. It is called the fishbone diagram because its shape resembles the skeleton of a fish.

Use it when you start investigating a problem.

Construct a CE diagram whenever you need to investigate the causes or contributing factors for an effect (be it a quality characteristic or other outcome) which is of concern to you. This will most likely be after you have conducted a general investigation of problems for a particular function, product, or service, and ranked them using a Pareto chart.

For example, you may just have completed an investigation of all the reasons recorded for goods being returned by customers and found that the highest incidence relates to incorrect goods being sent. A CE diagram can be constructed to explore the possible causes for this.

Developing a CE diagram in a team meeting is a very effective technique for:

 concentrating team members' attention on a specific problem
 pooling, and reflecting back, team thinking
 constructing a picture of the problem at hand without resorting to the tight discipline of a flowchart
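Structurally, a CE diagram is a shallow tree: the effect at the head, major cause categories as the main bones, and contributing causes branching off each. A minimal sketch of that structure follows, using the goods-returned example above; the category names and the individual causes are hypothetical, chosen only to illustrate the shape.

```python
# Illustrative fishbone (cause-and-effect) structure for the
# "incorrect goods sent" example. Categories and causes are hypothetical.
fishbone = {
    "effect": "Incorrect goods sent to customers",
    "causes": {
        "People":    ["Pickers not trained on new catalogue"],
        "Methods":   ["Order form ambiguous", "No double-check before dispatch"],
        "Machines":  ["Barcode scanner misreads similar codes"],
        "Materials": ["Similar products stored in adjacent bins"],
    },
}

# Flatten the diagram into the comprehensive list of possible causes,
# which is the CE diagram's main output.
all_causes = [c for causes in fishbone["causes"].values() for c in causes]
print(len(all_causes))  # 5 candidate causes to investigate
```

Each flattened cause then becomes a candidate for further exploration and analysis, as described above.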
DEFINING OBJECTIVES

 After the problem has been defined, the analyst lists the causes in terms of objectives which, if met, are likely to solve the problem.
 Symptoms and causes are negative; they suggest what is wrong.
 Objectives are positive; they reflect the causes and are measurable.


2.3 THE PROBLEM STATEMENT

A problem statement is a concise description of the issues that need to be addressed by a problem-solving team and should be presented to them (or created by them) before they try to solve the problem. When bringing together a team to achieve a particular purpose, provide them with a problem statement.

A good problem statement should answer these questions:

1. What is the problem? This should explain why the team is needed.
2. Who has the problem, or who is the client/customer? This should explain who needs the solution and who will decide that the problem has been solved.
3. What form can the resolution take? What are the scope and limitations (in time, money, resources, technologies) that can be used to solve the problem? Does the client want a white paper? A web tool? A new feature for a product? A brainstorming session on a topic?

The primary purpose of a problem statement is to focus the attention of the problem-solving team. However, if the focus of the problem is too narrow or the scope of the solution too limited, the creativity and innovation of the solution can be stifled.

In project management, the problem statement is part of the project charter. It lists what is essential about the project and enables the project manager to identify the project scope as well as the project stakeholders.

A problem statement lists the symptoms of the problem, suggests the problem's likely causes and estimates the resources needed to solve the problem. The objective of the problem statement is to communicate to the user and management, in a written format, the analyst's hypothesis and initial sense of the problem's resource implications.

VERIFICATION

 To minimize errors, always verify your work.
 Go through each symptom and identify the objective(s) that address it.
 If you find a symptom not addressed by the objectives, then you have overlooked something.
 Go through the same procedure for the objectives.
 Ask the user if solving the problem is worth the cost.
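The verification step, checking that every symptom is addressed by at least one objective, is a simple cross-check. The sketch below illustrates it with hypothetical symptom and objective lists: any symptom left uncovered means something was overlooked.

```python
# Illustrative verification: map each objective to the symptoms it addresses,
# then look for symptoms left uncovered. All names are hypothetical.
symptoms = {"late deliveries", "duplicate invoices", "stock-outs"}
objectives = {
    "reduce order-processing time": {"late deliveries"},
    "validate invoices before posting": {"duplicate invoices"},
}

# Union of all symptoms covered by some objective.
covered = set().union(*objectives.values())
unaddressed = symptoms - covered
print(unaddressed)  # {'stock-outs'} -> something was overlooked
```

The same check run in the other direction (objectives that address no listed symptom) completes the verification described above.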


2.4 USER SIGN-OFF

 The last step in the problem definition process is the user sign-off.
 This usually takes the form of a letter, or a notation on the problem statement by the user, indicating that the user has read it, understands it, agrees with it and authorizes work to begin.
 The user is the one experiencing the problem; therefore the user is the expert, and solving the problem without the user's input is a mistake. A system is more likely to fail if the user does not accept it.
 The problem statement is written for the user; if the user does not understand the problem statement, then the analyst is at fault.

2.5 THE FEASIBILITY STUDY

The problem definition is followed by a feasibility study aimed at determining, quickly and at a reasonable cost, whether the problem can be solved and whether it is worth solving.

A feasibility study is an evaluation of a proposal designed to determine the difficulty of carrying out a designated task. Generally, a feasibility study precedes technical development and project implementation. In other words, a feasibility study is an evaluation or analysis of the potential impact of a proposed project.

There are five types of feasibility: TECHNICAL, ECONOMIC, LEGAL, OPERATIONAL, and SCHEDULE.

Five common factors (TELOS)

Technology and system feasibility

The assessment is based on an outline design of system requirements in terms of input, processes, output, fields, programs, and procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc. in order to estimate whether the new system will perform adequately or not. Technological feasibility is carried out to determine whether the company has the capability, in terms of software, hardware, personnel and expertise, to handle the completion of the project.

Economic feasibility
Economic analysis is the most frequently used method for evaluating the
effectiveness of a new system. More commonly known as cost/benefit analysis,
the procedure is to determine the benefits and savings that are expected from a
candidate system and compare them with costs. If benefits outweigh costs, then
the decision is made to design and implement the system. An entrepreneur must
accurately weigh the cost versus benefits before taking an action.


Cost-based study: It is important to identify cost and benefit factors, which can
be categorized as follows: 1. Development costs; and 2. Operating costs. This
is an analysis of the costs to be incurred in the system and the benefits derivable
out of the system.
Time-based study: This is an analysis of the time required to achieve a return
on investments. The future value of a project is also a factor.
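The cost-based and time-based studies amount to straightforward arithmetic. The sketch below uses invented figures: the net benefit compares expected benefits against development plus operating costs over a planning horizon, and the payback period counts the years until cumulative net benefits cover the development investment.

```python
# Hypothetical figures for a candidate system (amounts per year,
# except the one-off development cost).
development_cost = 120_000
operating_cost_per_year = 10_000
benefit_per_year = 50_000

# Cost-based study: do benefits outweigh costs over a planning horizon?
years = 5
total_cost = development_cost + operating_cost_per_year * years
total_benefit = benefit_per_year * years
net_benefit = total_benefit - total_cost
print(net_benefit)  # 80000 -> benefits outweigh costs over 5 years

# Time-based study: payback period in whole years.
cumulative = 0
payback_years = None
for year in range(1, years + 1):
    cumulative += benefit_per_year - operating_cost_per_year
    if cumulative >= development_cost and payback_years is None:
        payback_years = year
print(payback_years)  # year 3: 3 * 40000 = 120000 covers the investment
```

A fuller time-based study would also discount future amounts to present value, which is what the note about the future value of the project refers to.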

Legal feasibility
Determines whether the proposed system conflicts with legal requirements, e.g.
a data processing system must comply with the local Data Protection Acts.

Operational feasibility
Operational feasibility is a measure of how well a proposed system solves the
problems, and takes advantage of the opportunities identified during scope
definition and how it satisfies the requirements identified in the requirements
analysis phase of system development.

Schedule feasibility

A project will fail if it takes too long to complete, that is, if it cannot be finished before it is needed. Typically this means estimating how long the system will take to develop and whether it can be completed in a given time period, using methods like the payback period. Schedule feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines. You need to determine whether the deadlines are mandatory or desirable.

Other feasibility factors

Market and real estate feasibility


Market Feasibility Study typically involves testing geographic locations for a real
estate development project, and usually involves parcels of real estate land.
Developers often conduct market studies to determine the best location within a
jurisdiction, and to test alternative land uses for given parcels. Jurisdictions often
require developers to complete feasibility studies before they will approve a
permit application for retail, commercial, industrial, manufacturing, housing,
office or mixed-use project. Market Feasibility takes into account the importance
of the business in the selected area.

Resource feasibility

This involves questions such as how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, and the type and amount of resources required, and their dependencies.

Cultural feasibility

In this stage, the project's alternatives are evaluated for their impact on the local and general culture. For example, environmental factors need to be considered, and these factors must be well known. Further, an enterprise's own culture can clash with the results of the project.

Output

The feasibility study outputs the feasibility study report, a report detailing the evaluation criteria, the study findings, and the recommendations. [2]

During the feasibility study the analyst considers several alternative solutions to
the problem. The analyst prepares the feasibility study report outlining several
alternative solutions. Some may be more feasible than others. The feasibility
study is the basis for the go/no go decision.

The objective/purpose of the feasibility study is to determine, at a reasonable cost, whether the problem is worth solving, and to help management make an informed decision.

2.6 THE GO/NO GO DECISION

The basis for the go/no go decision is the feasibility study report.

The user makes this decision, but in larger organizations this task is assigned to a steering committee made up of a representative of each department that will use the system. Because there are not enough resources to solve all problems, the steering committee evaluates pending projects, rejects some and prioritizes the others. As resources become available, the Management Information Systems manager assigns people to projects in priority order.

Task / Activities:

1. Explain what problem definition is.
2. Explain what problem recognition is.
3. Explain what a problem statement is.
4. What does a feasibility study mean?
5. Name the five factors that affect a feasibility study.


CHAPTER THREE:

INFORMATION GATHERING

Information gathering is a key part of feasibility analysis. It is the process of gathering information about the present system. We must know what information to gather, where to find it, how to collect it, and ultimately how to process the collected information.

Information gathering is both an art and a science. It is an art because the person who collects the information needs to be sensitive, and must understand what to collect, what to focus on, and the channels through which the information can be gathered. It is a science because it requires proper methodology and the use of specific tools in order to be effective. Nonetheless, there is always a chance that one can find oneself drowned in an ocean of information, not knowing which specific information to collect, where to collect it and how to collect it.

The information that can be collected is broadly classified into the following categories:

Organizational Information

The kind of information that one could collect here, depending upon the need, includes:

 Policies of the organization - Policies are guidelines that define the code of business. Policies are translated into rules and procedures for achieving goals.
 Goals of the organization - Goals describe the management's commitment to the objectives. Objectives are milestones of accomplishment toward achieving goals.
 Organization structure - The organization structure helps one understand the hierarchy levels in an organization and the mode of communication.

User Information
User information relates to the individuals who are using the present system.
When collecting user information one should focus on job function, information
requirements and interpersonal relationships within the organization.


Work Information
Work information relates to the work itself. When collecting Work information
one should focus on work flows, how the data flows between various
systems, work schedules and methods and procedures.

3.1 DEFINING A SYSTEM’S LOGICAL ELEMENTS

The objective of analysis is to determine what the system must do, i.e. to rigorously define and verify a system's requirements.

The word "analyze" means to study something by breaking it down into its constituent parts.

The analyst starts with the existing physical system, constructs a logical model, and then manipulates the model to create a new, improved logical model. The new model, in turn, is the basis for designing a new and improved physical system.

STUDYING THE PRESENT SYSTEM

The purpose of analysing a current system is to find out the requirements for any proposed new system: what data needs to be handled, how it should be handled and who will be using the system.


 The old physical system is the starting point for analysis.
 The old physical system must have been performing a necessary function, or
management would not have authorized you to fix it.
 The problem with the old system is not what it does but how it works.
 The first step in analysis is to separate what the existing system does from
how it works.
 Since you are dealing with a lot of detail, the solution is to partition the
problem into subproblems, or mini-problems, by focusing on such logical
building blocks as data elements, processes, boundaries and objectives.
 Start by conducting interviews with key personnel and try to identify input
and output documents.
 Find out who prepares the input and who uses the output, and interview
those people.
 Talk to the people who maintain the system and ask them to explain the
existing documentation.
 Summarize all interviews and extract a list of the system's basic logical
elements.

FACT FINDING TECHNIQUES

To find out information about a system a number of methods are available:


 Surveys and questionnaires can be used to get the opinions of many
people with relative ease. They are cheap but limit the amount of
feedback a user can give.
 Observations involve spending time watching the current system in use.
This allows you to pick up on things that others may miss and to spot
ways to improve the system.
 Interviews are the most useful way of fact finding. They are cheap and easy
to conduct, and give the interviewee a chance to respond to your replies to
their feedback. However, they require more time and planning: a good
interviewer needs to know what to ask, whom to interview, and where and
when.
 Examination of paperwork allows you to see what information, types and
amounts, the current system handles and how much the new system must
be expected to handle.

IDENTIFYING PROCESSES
 A process is an activity that transforms data in some way. Within a given
 process data may be collected, recorded, moved, manipulated, sorted etc.
 Read through the summaries and identify the verbs (processes)

IDENTIFYING FIELDS
 After listing the processes go through the documentation for reference to
 data.
 The best way to identify fields is to list the fields that appear on the present
systems documents, files and other sources of data entries.


IDENTIFYING BOUNDARIES
 A system's boundaries can be defined by identifying those people,
organizations and other systems that lie just outside the target system.

RELATING PROCESSES, DATA AND BOUNDARIES


 Data implies processes and processes imply data.
 If a report exists, then the data on it must have been generated.
 If a form exists, then the data on the form must have been processed.
 If a process exists, then it must have inputs and outputs.
 New boundaries imply new data, and new data implies new boundaries.

IDENTIFYING OBJECTS
 The basic building block of an object oriented system is called an object.
 An object incorporates both processes and data.
 Objects communicate with other objects via signals.

3.2 THE DATA DICTIONARY


 The data dictionary is a collection of data about data.
 Its purpose is to define each and every data element, data structure and
data transform.
 The contents of a data dictionary are called metadata.
 It is used to support database management systems.
 The data dictionary defines each data element, specifies both its logical and
physical characteristics, and records information on how it is used.


KEY TERMS
 An Entity is an object about which data is stored (a person, group, thing
or activity).
 An Occurrence is a single instance of an entity.
 An Attribute is a property of an entity.
 A Data Element is an attribute that cannot be logically decomposed.
 A set of related data elements forms a data structure, or composite
data element.

FORMAT OF AN ABRIDGED DATA DICTIONARY

DATA ELEMENT NAME | DEFINITION | ALIAS/SYNONYMS | DATA TYPE | LENGTH | UNIT | PICTURE
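To make the format concrete, a single entry might be recorded as follows. This is an illustrative sketch in Python; the element name, alias list, and picture value are invented for the example and are not taken from any particular system.

```python
# One data dictionary entry, following the abridged format above.
# All values are hypothetical, for illustration only.
customer_number = {
    "data_element_name": "CUSTOMER-NUMBER",
    "definition": "Unique identifier assigned to each customer",
    "alias_synonyms": ["CUST-NO", "ACCOUNT-NO"],
    "data_type": "numeric",
    "length": 6,
    "unit": None,          # not a measured quantity
    "picture": "9(6)",     # COBOL-style picture clause
}

def describe(entry):
    """Return a one-line summary of a data dictionary entry."""
    return f"{entry['data_element_name']}: {entry['data_type']}({entry['length']})"
```

In a real project such entries would be held in a CASE tool or database rather than in program code; the dictionary structure, not the storage, is the point.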

Task / Activities:

1. Explain what information gathering is.
2. Explain the purpose of a data dictionary.
3. Name 3 categories into which information can be classified.
4. List 4 fact-finding techniques.
5. What is the purpose of studying the present system?


CHAPTER FOUR:

LOGICAL MODELING

DFD Principles
 The general principle in Data Flow Diagramming is that a system can be
decomposed into subsystems, and subsystems can be decomposed into
 lower level subsystems, and so on.
 Each subsystem represents a process or activity in which data is
 processed. At the lowest level, processes can no longer be decomposed.
 Each 'process' (and from now on, by 'process' we mean a subsystem or
activity) in a DFD has the characteristics of a system.
 Just as a system must have input and output (if it is not dead), so a
 process must have input and output.
 Data enters the system from the environment; data flows between
processes within the system; and data is produced as output from the
system

DATA FLOW DIAGRAMS SHOW:


 where the data in the system originates
 what processing is performed
 who uses the data
 where the data is stored
 what is output

DATA FLOW DIAGRAMS COMPRISE 4 MAIN SYMBOLS:

The entity symbol represents a source or destination of data (e.g. a
customer who receives an invoice).

The data store symbol represents data storage, such as a hard disk.


The process symbol represents some sort of processing which is performed
on the data. Information on the process (e.g. validate inputted address) is put
in the middle of the box.

The data flow symbol shows which way data moves between
processes, entities and data stores.

Here is a simple example:

Here the entity Employee provides the hours they have worked. This,
together with their hourly pay rate from the employee data store, is processed
to calculate how much they are to be paid. A pay check is then given to the
employee.
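The same example can be sketched in code. This is a minimal illustration, not part of the original text: the employee identifier, name and hourly rate below are invented, and each comment marks which DFD element the line corresponds to.

```python
# The employee data store from the DFD, modelled as a dictionary.
# Record contents are hypothetical.
employee_store = {"E001": {"name": "A. Worker", "hourly_rate": 15.0}}

def calculate_pay(employee_id, hours_worked):
    """The 'calculate pay' process: combine the hours-worked input flow
    with the hourly rate read from the data store, and produce the
    pay-check output flow."""
    rate = employee_store[employee_id]["hourly_rate"]
    return round(hours_worked * rate, 2)
```

Note how the code mirrors the diagram: one input flow (hours), one read from a data store (rate), and one output flow (the pay amount).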


It is common practice to draw the context-level data flow diagram first, which
shows the interaction between the system and external agents which act as data
sources and data sinks. On the context diagram the system's interactions with
the outside world are modelled purely in terms of data flows across the system
boundary. The context diagram shows the entire system as a single process,
and gives no clues as to its internal organization.

This context-level DFD is next "exploded", to produce a Level 0 DFD that shows
some of the detail of the system being modeled. The Level 0 DFD shows how
the system is divided into sub-systems (processes), each of which deals with
one or more of the data flows to or from an external agent, and which together
provide all of the functionality of the system as a whole. It also identifies internal
data stores that must be present in order for the system to do its job, and shows
the flow of data between the various parts of the system.

Data flow diagrams were proposed by Larry Constantine, the original developer
of structured design, based on Martin and Estrin's "data flow graph" model of
computation.

Data flow diagrams are one of the three essential perspectives of the
structured systems analysis and design method (SSADM). The sponsor of a project and the
end users will need to be briefed and consulted throughout all stages of a
system's evolution. With a data flow diagram, users are able to visualize how the
system will operate, what the system will accomplish, and how the system will
be implemented. The old system's dataflow diagrams can be drawn up and
compared with the new system's data flow diagrams to draw comparisons to
implement a more efficient system. Data flow diagrams can be used to provide
the end user with a physical idea of where the data they input ultimately has an
effect upon the structure of the whole system from order to dispatch to report.
How any system is developed can be determined through a data flow diagram
model.

In the course of developing a set of levelled data flow diagrams, the
analyst/designer is forced to address how the system may be decomposed into
component sub-systems, and to identify the transaction data in the data model.

Data flow diagrams can be used in both the Analysis and Design phases of the SDLC.


CHAPTER FIVE:

DATA MODELING

Entity Relationship Model and Diagram


 The ER model forms the basis of an ER diagram.
 An ERD represents the conceptual database as viewed by the end user.
 ERDs depict the ER model's three main components:
o Entities
o Attributes
o Relationships
 Several different diagramming conventions exist.

Entities
 An entity refers to the entity set, and not to a single entity occurrence.
 It corresponds to a table, and not to a row, in the relational environment.
 In both the Chen and Crow's Foot models, an entity is represented by a
rectangle containing the entity's name.
 The entity name, a noun, is usually written in capital letters.

Attributes
 Attributes are characteristics of entities.
 The domain is the set of possible values an attribute can take.
 Primary keys are underlined.

Simple - cannot be subdivided (e.g. age, sex, GPA).
Composite - can be subdivided (e.g. address: street, city, state, zip).
Single-valued - has only a single value (e.g. social security number).
Multi-valued - can have many values (e.g. a person may have several college degrees).
Derived - can be calculated from other information (e.g. age can be derived from D.O.B.).
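As a small illustration of a derived attribute, age need not be stored at all; it can be computed from the stored date of birth. This sketch is illustrative and not part of the original text.

```python
from datetime import date

def derive_age(date_of_birth, on_date):
    """Derived attribute: age is calculated from the stored date of
    birth rather than stored itself, so it can never become stale."""
    had_birthday = (on_date.month, on_date.day) >= (date_of_birth.month,
                                                    date_of_birth.day)
    return on_date.year - date_of_birth.year - (0 if had_birthday else 1)
```

Storing only the date of birth and deriving the age on demand avoids the update problem of a stored age attribute.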


Multivalued Attributes

Resolving Multivalued Attribute Problems

Although the conceptual model can handle multivalued attributes, you should
not implement them in the relational DBMS.

 Within the original entity, create several new attributes, one for each of the
original multivalued attribute's components.
o This can lead to major structural problems in the table.
 Create a new entity composed of the original multivalued attribute's
components.
Creating New Attributes

Creating New Entity Set
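The second option, creating a new entity set, can be sketched as follows. The person record and field names are invented for illustration; in practice this transformation would be expressed as tables in the relational schema.

```python
# A row whose "degrees" attribute is multivalued (hypothetical data).
person = {"person_id": 1, "name": "J. Smith", "degrees": ["BSc", "MSc"]}

def split_multivalued(row, multivalued_key, value_name):
    """Resolve a multivalued attribute by creating a new entity set:
    one new row per value, linked back to the parent by its key."""
    return [{"person_id": row["person_id"], value_name: value}
            for value in row[multivalued_key]]

# The new DEGREE entity set: each row holds one value plus the FK.
degree_rows = split_multivalued(person, "degrees", "degree")
```

Each resulting row is single-valued, which is what the relational model requires.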

Relationships
 Relationships are associations between entities.
 They are established by business rules.
 The connected entities are termed participants.
 Connectivity describes the relationship classification:
o 1:1, 1:M, M:N


 Cardinality
o The number of entity occurrences associated with one occurrence of the
related entity

Connectivity and Cardinality in an ERD

Relationship Strength
 Existence Dependent
o An entity's existence depends on the existence of another, related entity
o Existence-independent entities can exist apart from related entities
o Example: an Employee claims a Child; the Child is dependent on
the Employee


 Weak (non-identifying)
o One entity is existence-independent of another
o The PK of the dependent entity does not contain a PK component of the
parent entity
 Strong (identifying)
o One entity is existence-dependent on another
o The PK of the related entity contains a PK component of the parent entity

Relationship Participation
 Optional
o An entity occurrence does not require a corresponding occurrence in the
related entity
o Shown by drawing a small circle on the side of the optional entity on the
ERD
 Mandatory
o An entity occurrence requires a corresponding occurrence in the related
entity
o If no optionality symbol is shown on the ERD, the participation is mandatory



Mandatory Class Course relationship


Optional Class Entity in Professor Teaches Class

Degree of Relationship

A relationship's degree indicates the number of associated entities.



Generalization Hierarchy

 Depicts relationships between higher-level supertype and lower-level
subtype entities
 The supertype has the shared attributes
 The subtypes have unique attributes
 Disjoint relationships
o Unique subtypes
o Non-overlapping
o Indicated with a 'G'
 Overlapping subtypes are indicated with a 'Gs' symbol

Generalization Hierarchy: Disjoint

Generalization Hierarchy: Overlapping and Disjoint

Supertype/Subtype relationship in an ERD


Comparison of ER Modeling Symbols


CHAPTER SIX:

PROTOTYPING

What is a Prototype?
 A prototype is a model of an intended system.
 Prototyping is, therefore, the development and improvement of a prototype.
 By using prototyping there is the potential for early amendment of
weaknesses in the designed system and, in extreme circumstances, its
abandonment.
 These possibilities exist because prototyping involves the close participation
of prospective users, and consequently their reactions are readily perceived.
 Prototyping is particularly beneficial in situations where the application is not
clearly defined. On the other hand, for a well understood, fully definable
application, prototyping is probably not worthwhile.


An example of the former situation could be a firm of estate agents setting
up a system to hold and interrogate a database concerning the properties on
their books. Although the estate agents may previously have done exactly
the same thing using, for example, a card-filing system, they may still find it
difficult to envisage a computerised system. A hands-on demonstration of
such a system through prototyping can be reassuring for such users.


An instance of a well defined application could very well be sales invoicing.
The work is well understood, as it has been regularly carried out for a long
period of time, and there is little to be gained from changes. Thus the new
system will replicate the previous system from the input/output aspect, so
prototyping is unnecessary.


Let's consider the tools which can be used to develop a prototype:

a) Screen Generators
A screen generator is a tool for creating displays on V.D.U. screens quickly and
easily. The principle is that the analyst 'draws' or 'paints' the required layout on
the screen by means of a mouse, pointer, palette and keyboard. Also available
are skeletons of menus and forms for the subsequent entry of data and icons for
helping with the choice of requirements.

b) Report Generators
A report generator is software for quickly creating a report from data in a
database. The content and format of the report are specified through use of a
non-procedural language or alternatively by filling in screen forms.

The report generator retrieves, sorts and summarises the appropriate records
and is also capable of performing rudimentary processing e.g. calculating
percentages.

c) Fourth Generation Languages


A 4GL is a non-procedural programming language, i.e. the programmer states
what needs to be done rather than how it is done. This implies that the 4GL
software decides the procedure, whereas the programmer merely decides the
requirements.
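The contrast between stating what is wanted and how to get it can be sketched in Python. This is an illustration of the idea only, not an actual 4GL; the sales records and region name are invented for the example.

```python
# Hypothetical sales records for illustration.
sales = [{"region": "North", "amount": 120.0},
         {"region": "South", "amount": 80.0},
         {"region": "North", "amount": 50.0}]

# Procedural (3GL-like) style: the programmer spells out *how* --
# loop, test, accumulate.
total = 0.0
for record in sales:
    if record["region"] == "North":
        total += record["amount"]

# Declarative (4GL-like) style: the programmer states *what* is
# wanted and leaves the procedure to the language.
total_declarative = sum(r["amount"] for r in sales if r["region"] == "North")
```

Both produce the same result; the difference is in who decides the procedure, which is exactly the distinction drawn above.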

4GLs, in the broadest sense, consist of the following types:

1. Spreadsheet based e.g. Lotus 1-2-3

2. Database based e.g. Access, dBase IV and INGRESS

3. Application (Code) Generators, e.g. TELON. This type of 4GL
utilises on-screen form filling related to the system's design. The forms are
then compiled to generate 3GL (usually COBOL) code.

4. Information Centre based e.g. FOCUS, NOMAD. These are


capable of producing complex reports, handling sophisticated
queries and
generating intricate graphics.

Preparation for the prototyping session

Tool Selection

As the technical environment may not be known at this point, the target
implementation environment is obviously not a candidate for use in the


prototyping session. Many tools exist on the market, sharing several essential
features: screen painters, data dictionary etc.

Inputs to the prototyping activities include some SSADM products, such as the
LDM for the required system. If a CASE tool is being used for project
development, that tool may be used for prototyping.

Whatever tool is used, it should be chosen and procured early in the project life.
If the IS strategy dictates the technical environment before TSO, then the
prototyping tool should be chosen to simulate that environment as closely as
possible. If no indication of the environment has been made, then that obviously
cannot be done.

Scoping the prototyping activities

This activity should be carried out at the start of the whole project. The first thing
to do is to identify any need to perform prototyping. If a project has one of the
following characteristics, then prototyping will probably not be appropriate:

If the change is merely an upgrade of hardware/software, there will be no


need to trial the Requirements Specification this way.
If the nature or size of the project does not justify the cost of resources for
prototyping.

If prototyping is justified, is it screen prototyping, or report output prototyping?


Probably both will be required for the system, but the following should be
established before beginning:

Screen prototyping

What is the likely level of on-line activity? If there is a considerable interaction,


prototyping will help validate the specification requirements.

What is the likely level of data manipulation on screen? If for one function the
activity is large, prototyping will assist in validating and collecting the User's
needs.

If the on-line interaction is poorly thought out, will that be detrimental to the
business, or will it just be inconvenient at a local and trivial level? If the former
is the case, then prototyping will be valuable.

Report Output Prototyping

If an output from our target system is going to serve as input to another,


prototyping can help to ensure that the output meets the requirements.


If an output has to meet certain statutory requirements, tax return forms for
example, then prototyping can help validate its content and format.

If the requirements for the report have been described in vague terms,
prototyping will help define levels of accuracy, optimum format etc., as the User
sees the possible versions of the report and tries to use it.

Setting up the team

The team to carry out the prototyping should be defined well before the activity
is due to start, so that management structures can be put in place in good time.
The team should comprise a team leader and two other analysts, who between
them will serve the roles of implementing the prototyped model and
demonstrating it to the User. There is no absolute need for two analysts; one
would be sufficient if the project were fairly small. A second team member does,
however, provide an objective view of a prototype designed by someone else.
The effect should be that the analyst becomes more sensitive to the User's
requirements and less defensive about the product.

The team leader's responsibility, apart from standard supervisory duties, should
be the following:

Approving the choice of dialogues and reports to be prototyped.


Agreeing feedback from the demonstration sessions.
Deciding when to close a prototyping cycle for a product.
Notifying changes to SSADM documentation, as a result of a prototyping
cycle.
Making the final report to management, summarizing the outcome of the
prototyping cycles and reasons for decisions made.

Defining the scope of prototyping

Management will normally have specified the areas, specific dialogues and
output reports to be prototyped.

The specific dialogues should be studied to determine if they are appropriate for
prototyping. These should be confirmed with the Users and they should be
consulted about further dialogues they want to see prototyped. The only
constraint on the team's agreement should be budget and timescales.

Some reports should be prototyped, particularly those using formats dictated by


external bodies, such as the Inland Revenue for their PAYE forms, or the
bankers' automated clearing services (BACS) for their standard forms.

During the prototyping sessions, the output data items should be validated,
usually against the LDM and data item descriptions. Some data items will be


derived from, for example, calculations: the formulae should be recorded with
the prototyping documentation and also on Elementary Process Descriptions.

Types of Prototypes

a) Non-working Prototypes
With this approach the prototype is a dummy and is usable only for the
purpose of demonstrating input procedures and/or output formats. The
prototype is incapable of actually processing data but merely reproduces
results that have been pre-determined and then incorporated into the
program. Accordingly the results are unreal, i.e. false so a non-working
prototype is unsuitable for realistic user interfacing. This is a valid approach
as long as the user is aware of the truth of the matter. The main purpose is to
illustrate the layout of screens, documents and reports irrespective of their
contents. A non-working prototype is scrapped at the end of its usefulness.

b) Partially Working Prototypes


This method is also sometimes known as 'interfacing' prototyping because it
is intended to allow users to operate it as a system that gives responses to
certain input. Any input to which the prototype cannot give a correct response
is rejected so there is no chance of the User being misled. The prototype is
arranged to check input, giving appropriate responses and moving through a
succession of dialogue messages. As with a non-working prototype, a partially
working prototype is scrapped at the end of its usefulness.

c) Pilot Prototypes
A pilot prototype is applicable to situations involving a number of installations
doing the same work e.g. a point-of-sale system in a chain of supermarkets.
The principle is for the prototype to be introduced into one or a few of these
installations so weaknesses can be detected. These are removed before the
prototype is installed in further locations. Generally an application that is
suitable for pilot prototyping is fairly straightforward technically, the problems
tend to arise from the human aspect. A pilot prototype is likely to encompass
the full range of activities of the application. This means that most of the
systems design and programming will need to have been done before
initiating the prototype. It is also likely that the complete database is needed
e.g. with a point-of-sale system all the prices, descriptions etc. must be
available at the outset.

d) Staged Prototypes
Staged or incremental prototyping implies that a start is made with only certain
features of the full system. Further features are added stage by stage, and the
prototype is checked at each stage. A Stock Control application, for instance,
could start simply with the updating of the stocks in


hand, then move on with stock evaluation, usage analysis, demand
forecasting, automatic re-ordering and so on.

At each stage of the prototyping, user interaction is important so that


weaknesses and misunderstandings are not compounded in the next stage.
The final stage, when fully accepted, becomes the working system.

e) Evolutionary Prototypes
An evolutionary prototype is in some ways similar to a staged approach. But,
whereas staged prototyping entails adding a succession of separate but
closely associated stages, evolutionary prototyping allows the one integral
application to evolve through a succession of increasingly refined phases. It
is not always practical to use evolutionary prototyping because most
applications do not lend themselves to this approach. Let's consider an
example where evolutionary prototyping is practical: a hotel reservation
system, for which a file exists holding details of the rooms and their current
bookings. The first phase is for the user merely to make enquiries regarding
room availability. The next phase could be to add a booking procedure.

Prototype Demonstrations:

1. Presentation Format - screen and report layouts are presented by the I.T.
staff to the users in a lecture style presentation.
2. Demonstration - the I.T. staff actually use the prototype to demonstrate its
features to the user staff.
3. Hands-On Session - the user staff are allowed to use the prototype under
the guidance of the I.T. staff.

Rapid Application Development


The goal of rapid development of applications has been around for some time,
and with good reason, as the objective of speeding up the development process
is something that has been on the agenda of both general management and
information systems management for a long time.



The need to develop information systems more quickly has been driven by rapidly
 changing business needs.



The general environment of business is seen as increasingly competitive, more
 customer-focused and operating in a more international context.


Such a business environment is characterised by continuous change, and

the information systems in an organisation need to be created and amended
speedily to support this change.



Unfortunately, information systems development in most organisations is
unable to react quickly enough, and the business and systems development
cycles are substantially out of step. In such a situation, the notion of rapid
application development (RAD) is obviously attractive.


The exact definition or nature of the term is not clear, and authors and vendors
use the term in a variety of different ways with different emphases and
meanings.

 
RAD is actually a combination of techniques and tools that are fairly well known.

The following are the most important characteristics of RAD.


It is not based upon the traditional life-cycle but adopts an
evolutionary/prototyping approach.
It focuses upon identifying the important users and involving them via
workshops at early stages of development.
It focuses on obtaining commitment from the business users.
It requires a CASE tool with a sophisticated repository.

JAD – Joint Application Development

The typical characteristics of a JAD workshop are as follows:


An intensive meeting of business users (managers and end users) and
information systems people: There should be specific objectives and a
structured agenda, including rules of behaviour and protocols. The IS people
are usually there to assist on technical matters, i.e. implications, possibilities
and constraints, rather than decision-making in terms of requirements. One of
the most crucial participants is the executive owner of the system.
A defined length of meeting: This is typically one or two days, but can be up
to five. The location is usually away from the home base of the users and
away from interruptions. Telephones and e-mail are usually banned.
A structured meeting room: The layout of the room is regarded as crucial in
helping to meet objectives. Walls are usually covered with whiteboards etc.
CASE and other tools should be available in the room.
A facilitator: Who leads and manages the meeting. They are independent of
the participants and specialise in facilitation (i.e. experienced in group
dynamics, etc.). A facilitator is responsible for the process and outcomes in
terms of documentation and deliverables and will control the objectives,
agenda, process and discussion.
A scribe: Responsible for documenting the discussions and outcomes of
meetings.
Key Characteristics of JAD
Intensive meetings significantly reduce the elapsed time to achieve the design
goals.


Getting the right people to the meeting, i.e. everyone with a stake in the
system, including those who can make binding decisions reduces the time
taken to achieve consensus.
JAD engenders commitment. Traditional methods encourage decisions to be
taken off the cuff in small groups; here all decisions are in the open.
The presence of a senior executive sponsor can encourage fast development
by cutting through bureaucracy and politics.
The facilitator is crucial to the effort. A facilitator is able to avoid and smooth
many of the hierarchical and political issues that frequently cause problems
and will be free from organisational history and past battles.

Task / Activities:

1. Explain what prototyping is.
2. Name 4 types of prototypes.
3. List 3 types of prototype demonstrations.
4. Explain Rapid Application Development.
5. Give 3 key characteristics of JAD.


CHAPTER SEVEN:

2ND YEAR PROJECT PREPARATION

A Gantt chart is a graphical representation of the duration of


tasks against the progression of time.

A Gantt chart is a useful tool for planning and scheduling projects.

A Gantt chart is helpful when monitoring a project's progress.


Use a Gantt chart to plan how long a project should take.

A Gantt chart lays out the order in which the tasks need to be carried out.

Early Gantt charts did not show dependencies between tasks, but modern
Gantt chart software provides this capability.

A Gantt chart lets you see immediately what should have been achieved at
any point in time.

A Gantt chart lets you see how remedial action may bring the project back
on course.

Most Gantt charts include "milestones", which were not part of the original
Gantt chart. However, for representing deadlines and other significant
events, it is very useful to include this feature on a Gantt chart.

Henry Laurence Gantt, an American mechanical engineer, is credited with the invention of the
Gantt chart.
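The idea behind a Gantt chart can be sketched with a minimal text-mode rendering: each task is a horizontal bar positioned by its start time. The task names, start weeks and durations below are invented for illustration.

```python
# Each task: (name, start week, duration in weeks) -- invented values.
tasks = [("Analysis", 0, 3), ("Design", 2, 4), ("Build", 5, 6)]

def gantt_row(name, start, duration, width=12):
    """Render one task as a bar offset by its start on a fixed-width
    timeline: spaces for elapsed weeks, '#' for the task's duration."""
    bar = " " * start + "#" * duration
    return f"{name:<10}|{bar:<{width}}|"

def gantt_chart(task_list):
    """Stack the task rows so overlaps and ordering are visible."""
    return "\n".join(gantt_row(*task) for task in task_list)
```

Printing `gantt_chart(tasks)` shows Design overlapping Analysis by one week, which is exactly the kind of scheduling information the chart exists to make visible.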


SAMPLE


CHAPTER EIGHT:

DEFINING REQUIREMENTS

8.1 THE REQUIREMENTS SPECIFICATION


 The requirements specification is a document that clearly defines the
customer's logical requirements.

 It builds on the logical model. When the logical model is ambiguous, the
requirements specification gives the correct interpretation. Where the logical
model is imprecise, the requirements specification adds the necessary
details. Where the model is silent, the document supplements it.

 The requirements specification states the client's needs in such a way that it
is possible to test the finished system to verify that those needs have been
met.

 The objective of the requirements specification is to ensure that the
customer's needs are correctly defined before time, money and resources
are wasted working on the wrong solution.

8.2 COMPETITIVE PROCUREMENT

Competitive procurement is a set of procedures for subcontracting work through


a bidding process. The intent is to solicit fair, impartial, competitive bids.

The competitive procurement process was initially developed by the U.S.


Department of Defense as a means to minimize costs and ensure fair access on
major military-related projects. Today, the trend toward downsizing and
outplacement has led to an increase in the number of firms that subcontract or
outplace information system development projects. Many corporate competitive
procurement systems are modeled on the Department of Defense standard.

Strengths, weaknesses, and limitations

 Because the contract is typically awarded to the low bidder, the competitive
bidding process tends to minimize cost.
 Multiple bidders bring different viewpoints to the process, and often suggest
alternative solutions.
 The step-by-step nature of the process provides the sponsoring organization
with a useful structure for managing subcontracted or outplaced projects.
 Preparing specifications and bids is expensive.
 It is difficult (perhaps impossible) to prespecify every detail of a complex
system that will be developed over several years.
 The competitive bidding process tends to be rigid. Requirements change over
time, and procedures for dealing with changes are a major weak point.
 Preparing and evaluating the documents submitted by numerous bidders is
incredibly time-consuming. In today’s economy, firms that cannot react quickly
to changing conditions find it difficult to compete, and the delays caused by
frequent bidding cycles are intolerable. Consequently, many organizations that
subcontract or outsource information system development work use a streamlined
version of the competitive bidding process that sacrifices some control to
gain time.
 Low bids do not always imply high quality. In an effort to improve quality,
many organizations have significantly reduced the number of projects that go
through the competitive procurement process and have chosen instead to
establish long-term relationships with a limited number of subcontractors.

Standards

There are several widely used standards for writing requirements.
DoD-STD-2167A defines procedures for defense system software development.
DoD-STD-490 and DoD-STD-499 must be followed on most military contracts.
Other standards, such as IEEE STD-729 (a glossary) and IEEE STD-830, are
defined by civilian organizations (in this case, by the Institute of
Electrical and Electronics Engineers), and many companies have their own
internal standards.

Department of Defense Standard 2167A

The system/segment specifications

Sometimes called the project or mission requirements, the system/segment


specifications (SSS), or A-specs, identify major systems and subsystems at a
conceptual level. The system/segment specifications define the requirements
down to, but not including, the configuration item level. (The system’s physical
components begin to appear at the configuration item level.) A high-level mission
requirement might be subdivided into several segments, and those segments
might be further subdivided, so there can be several levels of system/segment
specifications.


The system/segment design documents

The system/segment design documents (SSDD), or B-specs, define, in black-


box form, the components that occupy the configuration item level. Like the
system/segment specifications, the system/segment design documents are
logical, not physical. Each one describes a discrete physical component, but
they specify what that component must do, not how it must work. For example,
an SSDD for a microcomputer might specify such things as weight and size
limitations, response time requirements, and the number of transactions that
must be processed per unit of time, but it will not specify a particular model
computer or distinguish between a Dell Dimension system and an Apple
Macintosh.

The prime item development specifications

Each subsystem that is to be implemented in hardware is called a hardware
configuration item (HWCI) and is documented in a prime item development
specification (PIDS). These documents specify the design requirements for the
hardware to be implemented.


The process begins during the problem definition and information gathering
stage of the system development life cycle (Part II). Based on a preliminary
analysis of the problem, user experts who work for the government agency or
the customer organization that is sponsoring the project define a set of needs
and write the system/segment specifications (A-specs), which are then released
for bids.

On a major system, several firms might be awarded contracts and charged with
preparing competitive system/segment design documents (B-specs). The
completed SSDDs are submitted to the customer and evaluated. The best set is
then selected and once again released for bids. Sometimes, the firm that
prepared the system/segment design documents is prohibited from participating
in the next round.

Based on the competitive bids, a contract to generate a physical design and prepare
a set of specifications based on the system/segment design documents is
subsequently awarded to one or (perhaps) two companies. One PIDS (hardware)
or SRS (software) is prepared for each SSDD. (In other words, one physical design
specification is created for each configuration item.)

At the end of this phase, the PIDS and SRS documents are reviewed and
approved. The best design specifications are then released for a final
round of competitive procurement, with the winning firm getting a contract
to build the system. Clearly, the organization that created the final
specifications has an advantage, but there are no guarantees. Sometimes
a backup supplier is awarded a portion of the contract.

8.3 REQUIREMENTS

8.3.1 CHARACTERISTICS OF A GOOD REQUIREMENT

A good requirement is unambiguous, testable, consistent, correct,
understandable, modifiable and traceable.

Unambiguous – A key objective of a requirements specification is to clearly
define a system’s requirements in enough detail to exclude multiple
interpretations.

Testable or Verifiable – When the system is completed, it must be possible to
demonstrate, for a reasonable cost, that each requirement has been met.

Consistent – Requirements cannot conflict with one another.

Correct – Every listed requirement must actually be a requirement.


Understandable – Requirements must be written in such a way that they are
understandable to all interested parties, including the customer, the user
and other non-specialists.

Modifiable – The analysis process might take months or even years, so it is
unreasonable to assume that the customer’s needs will not change; the
requirements specification must therefore be modifiable.

Traceable – For any given requirement, you must be able to trace it back to
its high-level parent requirement.

8.3.2 TYPES OF REQUIREMENTS

BEHAVIORAL REQUIREMENTS – Define something the system does.

 Functional Requirement – Identifies a task that the system or component
must perform.
 Interface Requirement – Identifies a link to another system.

NON-BEHAVIORAL REQUIREMENTS

Performance Requirements – Specify such characteristics as speed, accuracy,
frequency, response time, etc.

Design/Constraint Requirements – Are concerned with such constraints as
physical size, environmental factors, etc.

Quality Requirements – Often stated as an acceptable error rate or mean time
to repair, and are sometimes grouped with performance requirements.

Economic Requirements – Specify such things as performance penalties,
operating costs, limits on development costs, etc.

8.4 THE FLOW DOWN PRINCIPLE

Within the requirements specification, the flow down principle requires that
each low-level requirement be linked to a single high-level parent.

Parent requirements can be distributed downwards to several different
children, but each child requirement can have only one parent. Tracing
requirements is a form of verification.
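The flow down principle lends itself to a simple programmatic check. As a
minimal sketch (the requirement IDs and the parent table below are
hypothetical; only the single-parent rule comes from the text), each
requirement records exactly one parent, so any requirement can be traced back
to its top-level ancestor:

```python
# Sketch of the flow down principle: every child requirement has exactly
# one parent, so the trace back to the top-level requirement is unambiguous.
# The requirement IDs below are hypothetical.

parents = {
    "SSS-1":   None,      # top-level mission requirement (no parent)
    "SSDD-1":  "SSS-1",   # flowed down to a configuration item
    "SSDD-2":  "SSS-1",
    "SRS-1.1": "SSDD-1",  # flowed down again to a software requirement
}

def trace_to_root(req_id):
    """Walk the single-parent chain up to the top-level requirement."""
    chain = [req_id]
    while parents[req_id] is not None:
        req_id = parents[req_id]
        chain.append(req_id)
    return chain

print(trace_to_root("SRS-1.1"))  # ['SRS-1.1', 'SSDD-1', 'SSS-1']
```

Because each child has only one parent, there is only one possible path from
any requirement back to the top, which is what makes tracing a usable form of
verification.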


CHAPTER NINE:

GENERATING PHYSICAL ALTERNATIVES

9.1 THE TRANSITION FROM ANALYSIS TO DESIGN

The logical requirements tell you what the system must do. The next step is to
determine exactly how the requirements must be met.

During the transitional period between the end of analysis and the beginning
of design, the analyst generates the information management needs by:
1. Identifying several alternative high-level physical designs for meeting
the requirements
2. Documenting each alternative
3. Estimating the costs and benefits of each alternative
4. Performing a cost/benefit analysis for each alternative
5. Recommending the preferred alternative, and
6. Preparing a development plan for the preferred alternative

9.2 ALTERNATIVES
Generating alternatives is easy; the problem is generating realistic
alternatives. The best strategy is to give the user three alternatives and
recommend the alternative that best solves the problem as you understand it.
A high-cost option shows the user the additional benefits that might be
realized by spending a bit more money. Adding a low-cost alternative gives
the user a sense of the benefits that might be lost if not enough is
invested.

9.3 DEFINING ALTERNATIVES

9.3.3 DOCUMENTING ALTERNATIVES

The following are used to document alternatives:

PHYSICAL DATA FLOW DIAGRAMS


During analysis, the processes, data stores and data flows were treated as
logical entities, but there is no reason why they cannot represent programs,
manual procedures, files, databases, etc. Given appropriate documentation,
there is no reason why these diagrams cannot be used to document alternative
solutions.


SYSTEM FLOWCHARTS

A system flowchart is another tool for documenting a physical system.

SYMBOLS

SYMBOL               MEANING

DOCUMENT/PAPER FORM  IMPLIES A PRINTED DOCUMENT/A PRINTER

PROCESS              IMPLIES A PROGRAM AND THE COMPUTER THE PROGRAM RUNS ON

INPUT/OUTPUT         IMPLIES KEYBOARD INPUT/SCREEN OUTPUT AND DATA BEING
                     CAPTURED AND DISPLAYED ON SCREEN

ON-LINE STORAGE      IMPLIES SECONDARY STORAGE/DATA BEING STORED USING
                     SECONDARY STORAGE

MANUAL PROCESS       IMPLIES A PROCEDURE CARRIED OUT BY MANUAL MEANS

DATABASE             IMPLIES DATA STORED IN RELATIONAL FORMAT/DATABASE
                     STRUCTURE

FLOWLINE             IMPLIES A HARDWARE INTERFACE


EXAMPLE

COMPONENT        IMPLEMENTATION

PAPER FORM       DEPOSIT SLIP SUPPLIED BY LEARNERS

INPUT/OUTPUT     DEPOSIT SLIP INFORMATION IS CAPTURED

PROCESS          ACCOUNTS PACKAGE USED TO CAPTURE DEPOSIT SLIP DATA AND
                 UPDATE LEARNERS’ RECORDS

PAPER FORM       RECEIPT IS PRINTED AFTER PAYMENT IS PROCESSED

MANUAL PROCESS   RECEIPT HAS TO BE STAMPED, SIGNED AND TORN IN HALF BEFORE
                 THE LEARNER RECEIVES IT

ON-LINE STORAGE  LEARNERS’ FEE RECORDS

DATABASE         LEARNERS’ DETAILS STORED IN DATABASE FORMAT (RELATIONAL)


CHAPTER TEN:

EVALUATING ALTERNATIVES

10.1 COSTS AND BENEFITS

Most organizations have a limited supply of capital and resources, and a
cost/benefit analysis is the basis for allocating these resources. Each
project is treated as a potential investment, and only those projects that
promise a high return on investment are selected.

Development costs are one-time costs that occur before the system is released
to the user. They include the labor and hardware associated with problem
definition, the feasibility study, analysis, design, development and testing.
Operating costs are those costs that begin after the system is released and
last for the lifetime of the system. They include personnel, maintenance,
utilities, insurance and similar costs.

New systems are developed to obtain benefits. Tangible benefits are measured
in financial terms and usually imply reduced operating costs, enhanced
revenue, or both.

Intangible benefits, such as improved morale or employee satisfaction, are
more difficult to measure but are just as important.

10.2 ESTIMATING DEVELOPMENT COSTS

IDENTIFYING SYSTEM COMPONENTS

The first step is to compile a list of the system’s physical components and
then use the list to estimate each component’s cost.

SELECTING A PLATFORM

If the list includes a computer, the next step is to select a platform. A
platform is defined by a specific combination of hardware and operating
system. Once you choose a platform, your choice of software and hardware
peripherals is constrained by that platform.

ESTIMATING HARDWARE COSTS

Given a platform, you can find reasonably current cost data for commercially
available hardware in such sources as technical periodicals. One frequently
overlooked aspect of estimating hardware costs is site preparation.


PURCHASED SOFTWARE

The cost of software can be obtained from vendors, newspapers, etc. The
problem is not finding appropriate software but choosing from the host of
alternatives available.

SOFTWARE DEVELOPMENT

This cost includes the programmers’ time to design, write and test the
system. The COCOMO model can also be used to estimate software development
costs.
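The basic COCOMO model estimates effort from program size in thousands of
delivered lines of code (KLOC). As a minimal sketch, using the published
basic-model coefficients for an "organic" (small, familiar, in-house) project;
the 32 KLOC figure is a hypothetical example:

```python
# Basic COCOMO estimate: effort = a * (KLOC ** b) person-months,
# development time = 2.5 * (effort ** c) calendar months.
# a=2.4, b=1.05, c=0.38 are the published coefficients for an
# "organic" project (semi-detached and embedded projects use larger values).

def basic_cocomo(kloc, a=2.4, b=1.05, c=0.38):
    effort = a * (kloc ** b)        # person-months of effort
    duration = 2.5 * (effort ** c)  # calendar months of development time
    return effort, duration

effort, months = basic_cocomo(32)   # hypothetical 32 KLOC system
print(f"{effort:.1f} person-months over about {months:.1f} months")
```

Dividing effort by duration also gives a rough average staffing level, which
is useful when converting the estimate into a labor cost.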

OTHER COST FACTORS

It is always possible to overlook costs; therefore, checklists of all
activities (e.g., walkthroughs and reviews) must be kept.

VERIFICATION

One way of verifying costs is to use the top-down estimating technique,
comparing the proposed estimates to a typical project. Another technique is
to compare the project to previously completed, successful projects.

The bottom-up method is a precise methodology that focuses on individual
modules and components, but it can overlook controls, interfaces, etc.

CONTINGENCIES

Contingencies are a cost factor added to the cost estimate to allow for risks
associated with the project.

10.3 ESTIMATING OPERATING COSTS AND BENEFITS

The operating costs associated with a new system are used to estimate the
system’s benefits. The key to estimating operating costs is to identify them.
Go through a checklist and identify any cost factor that is likely to change
as a result of the new system. Then estimate the cost associated with that
factor in both the old and the new system and compute the difference.


EXAMPLE:
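As a minimal sketch of the checklist approach described above (all cost
figures below are hypothetical), each factor is costed under both the old and
the new system and the differences are summed:

```python
# Compare annual operating costs factor by factor between the old and the
# new system; the total difference is the estimated annual benefit.
# All figures are hypothetical.

old_system = {"personnel": 120_000, "supplies": 8_000, "maintenance": 2_000}
new_system = {"personnel": 90_000,  "supplies": 3_000, "maintenance": 9_000}

# Per-factor difference (positive = saving, negative = added cost).
savings = {factor: old_system[factor] - new_system[factor]
           for factor in old_system}
annual_benefit = sum(savings.values())

print(savings)          # per-factor differences
print(annual_benefit)   # 28000
```

Note that a factor can move in either direction: here maintenance costs rise
under the new system, but the overall difference is still a benefit.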

10.4 COST/BENEFIT ANALYSIS

Developing a new system is a form of investment. Funds must be committed
throughout the life cycle. In return, future benefits are expected. If the
benefits do not exceed the costs, then the system is not worth developing.
The purpose of the cost/benefit analysis is to give management a reasonable
picture of the costs, benefits and risks associated with a given system so
that they can compare one investment with another.
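One common way to give management that picture is net present value: discount
each year’s net benefit back to today and subtract the one-time development
cost. A minimal sketch (the figures and the 10% discount rate are
hypothetical):

```python
# Net present value of a proposed system: discount each year's net benefit
# (benefits minus operating costs) and subtract the one-time development
# cost. A positive NPV means the benefits exceed the costs.

def npv(development_cost, annual_net_benefits, rate=0.10):
    present_value = sum(benefit / (1 + rate) ** year
                        for year, benefit in
                        enumerate(annual_net_benefits, start=1))
    return present_value - development_cost

# Hypothetical project: $50,000 to develop, $20,000 net benefit per year
# for four years, discounted at 10%.
result = npv(50_000, [20_000, 20_000, 20_000, 20_000])
print(round(result, 2))  # positive, so the system is worth developing
```

Computing the NPV of each alternative at the same discount rate lets
management compare one investment with another on a common footing.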


10.5 THE MANAGEMENT REVIEW

Given a set of alternatives and their costs, benefits and risks, the analyst
selects the best option, explains why the others were rejected, and prepares
an estimate of the resources needed to design and develop the recommended
alternative. The complete package is presented to a formal management review.

During the review the project can be killed, postponed, or carried forward.


CHAPTER 11:

THE USER INTERFACE

11.1 DESIGNING THE USER INTERFACE

A user interface is the point in the system where a human being interacts
with a computer. The interface can incorporate hardware, software, procedures
and data. The interaction can be direct; e.g., a user might access a computer
through a screen and a keyboard. Printed reports and forms designed to
capture input data are indirect user interfaces.

The first step in user interface design is to design the processes,
procedures, and other tasks the user must perform. Given a set of tasks, you
can design the necessary screens, reports and forms. Next, the dialogues that
control the exchange of information must be designed. Finally, a user manual
is written to document the various procedures, screens, reports, forms and
dialogues.

11.2 DESIGNING USER PROCESSES

IDENTIFYING PROCESSES
Much of the information needed to design the user processes is collected during
analysis.

DOCUMENTING PROCESSES
As you near the end of the SDLC and start detailed design, your focus shifts
to how each process should be performed.


11.3 REPORTS, FORMS, SCREENS

DESIGNING REPORTS

Reports represent the results of a query – output.

DESIGNING FORMS

Forms are typically used to capture data. A form image can be displayed on a
screen and used as a template for data entry.

DESIGNING SCREENS

A display screen and a keyboard are a common human/computer interface. A
screen can be used to display a report or to simulate a paper form.

DIALOGUES

A dialogue is the exchange of information between the computer and the user.
A dialogue defines a set of screens, menus, reports and forms and the order
in which they are accessed. The dialogue is a merger of the processes and the
screens that support those processes. Dialogues must be designed efficiently,
and efficiency is a function of response time. Response time is the time
between the issuing of a command and the appearance of the results on screen.
Response time includes the following elements:
1. System response time is the traditional definition
2. The display rate is how quickly a complete screen is displayed
3. User scan/read time is a measure of how long it takes the user to read
and understand the screen
4. User think time includes the time taken for the user to evaluate the
screen and the time during which the user decides what to do
5. User response time is the time during which the user performs a physical
action (presses a key, etc.) and the time during which the user waits for a
response
6. Error time is the time spent making and recovering from errors
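The six elements above sum to the total time of one command/response cycle,
which is the figure the dialogue designer actually needs to budget. A minimal
sketch (the individual timings are hypothetical):

```python
# Total dialogue cycle time is the sum of its six elements (in seconds).
# The individual timings below are hypothetical.

elements = {
    "system_response": 0.8,  # traditional "response time"
    "display_rate":    0.4,  # time to paint the complete screen
    "scan_read":       3.0,  # user reads and understands the screen
    "think":           2.5,  # user evaluates and decides what to do
    "user_response":   1.2,  # user acts (keystrokes) and waits
    "error":           0.6,  # making and recovering from errors
}

total = sum(elements.values())
print(f"total cycle time: {total:.1f} s")  # total cycle time: 8.5 s
```

Breaking the cycle down this way shows where effort pays off: shaving system
response time helps little if most of the cycle is scan/read and think time.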

11.4 THE USER MANUAL

The last step in the user interface design process is to design the user manual
to document the procedures, the reports, the forms, the screens and the
dialogues. The user manual should outline the procedures and explain how to
perform the necessary tasks.


ARTICLE 1

A Summary of Principles for User-Interface Design.

by Talin

This document represents a compilation of fundamental principles for designing


user interfaces, which have been drawn from various books on interface design,
as well as my own experience. Most of these principles can be applied to either
command-line or graphical environments. I welcome suggestions for changes
and additions -- I would like this to be viewed as an "open-source" evolving
document.

1. The principle of user profiling

-- Know who your user is.

Before we can answer the question "How do we make our user-interfaces


better", we must first answer the question: Better for whom? A design that is
better for a technically skilled user might not be better for a non-technical
businessman or an artist.

One way around this problem is to create user models. [TOG91] has an excellent
chapter on brainstorming towards creating "profiles" of possible users. The result
of this process is a detailed description of one or more "average" users, with
specific details such as:

 What are the user's goals?


 What are the user's skills and experience?
 What are the user's needs?

Armed with this information, we can then proceed to answer the question: How
do we leverage the user's strengths and create an interface that helps them
achieve their goals?

In the case of a large general-purpose piece of software such as an operating


system, there may be many different kinds of potential users. In this case it may
be more useful to come up with a list of user dichotomies, such as "skilled vs.
unskilled", "young vs. old", etc., or some other means of specifying a continuum
or collection of user types.

Another way of answering this question is to talk to some real users. Direct
contact between end-users and developers has often radically transformed the
development process.


2. The principle of metaphor

-- Borrow behaviors from systems familiar to your users.

Frequently a complex software system can be understood more easily if the user
interface is depicted in a way that resembles some commonplace system. The
ubiquitous "Desktop metaphor" is an overused and trite example. Another is the
tape deck metaphor seen on many audio and video player programs. In addition
to the standard transport controls (play, rewind, etc.), the tape deck metaphor
can be extended in ways that are quite natural, with functions such as time-
counters and cueing buttons. This concept of "extendibility" is what distinguishes
a powerful metaphor from a weak one.

There are several factors to consider when using a metaphor:

 Once a metaphor is chosen, it should be spread widely throughout the
interface, rather than used once at a specific point. Even better would be
to use the same metaphor spread over several applications (the tape
transport controls described above are a good example.) Don't bother
thinking up a metaphor which is only going to apply to a single button.
 There's no reason why an application cannot incorporate several different
metaphors, as long as they don't clash. Music sequencers, for example,
often incorporate both "tape transport" and "sheet music" metaphors.
 Metaphor isn't always necessary. In many cases the natural function of
the software itself is easier to comprehend than any real-world analog of
it. Don't strain a metaphor in adapting it to the program's real function.
Nor should you strain the meaning of a particular program feature in order
to adapt it to a metaphor.
 Incorporating a metaphor is not without certain risks. In particular,
whenever physical objects are represented in a computer system, we inherit
not only the beneficial functions of those objects but also the detrimental
aspects.
 Be aware that some metaphors don't cross cultural boundaries well. For
example, Americans would instantly recognize the common U.S. mailbox (with
a rounded top, a flat bottom, and a little red flag on the side), but there
are no mailboxes of this style in Europe.


3. The principle of feature exposure

-- Let the user see clearly what functions are available

Software developers tend to have little difficulty keeping large, complex mental
models in their heads. But not everyone prefers to "live in their heads" -- instead,
they prefer to concentrate on analyzing the sensory details of the environment,
rather than spending large amounts of time refining and perfecting abstract
models. Both types of personality (labeled "Intuitive" and "Sensable" in the
Myers-Briggs personality classification) can be equally intelligent, but
focus on different
aspects of life. It is to be noted that according to some psychological studies
"Sensables" outnumber "Intuitives" in the general population by about three to
one.

Intuitives prefer user interfaces that utilize the power of abstract models --
command lines, scripts, plug-ins, macros, etc. Sensables prefer user interfaces
that utilize their perceptual abilities -- in other words, they like interfaces where
the features are "up front" and "in their face". Toolbars and dialog boxes are an
example of interfaces that are pleasing to this personality type.

Of course, there may be cases where you don't wish to expose a feature right
away, because you don't want to overwhelm the beginning user with too much
detail. In this case, it is best to structure the application like the layers of an
onion, where peeling away each layer of skin reveals a layer beneath. There
are various levels of "hiding". Here's a partial list of them, in order from
most exposed to least exposed:

 Toolbar (completely exposed)
 Menu item (exposed by trivial user gesture)
 Submenu item (exposed by somewhat more involved user gesture)
 Dialog box (exposed by explicit user command)
 Secondary dialog box (invoked by button in first dialog box)
 "Advanced user mode" controls -- exposed when user selects the
"advanced" option
 Scripted functions


4. The principle of coherence

-- The behavior of the program should be internally and externally


consistent

There's been some argument over whether interfaces should strive to be


"intuitive", or whether an intuitive interface is even possible. However, it is

64

Berea College of Technology


Information Systems 201

certainly arguable that an interface should be coherent -- in other words logical,


consistent, and easily followed. ("Coherent" literally means "stick together", and
that's exactly what the parts of an interface design should do.)

Internal consistency means that the program's behaviors make "sense" with
respect to other parts of the program. For example, if one attribute of an object
(e.g. color) is modifiable using a pop-up menu, then it is to be expected that
other attributes of the object would also be editable in a similar fashion. One
should strive towards the principle of "least surprise".

External consistency means that the program is consistent with the environment
in which it runs. This includes consistency with both the operating system and
the typical suite of applications that run within that operating system. One of the
most widely recognized forms of external coherence is compliance with user-
interface standards. There are many others, however, such as the use of
standardized scripting languages, plug-in architectures or configuration
methods.

5. The principle of state visualization

-- Changes in behavior should be reflected in the appearance of


the program

Each change in the behavior of the program should be accompanied by a


corresponding change in the appearance of the interface. One of the big
criticisms of "modes" in interfaces is that many of the classic "bad example"
programs have modes that are visually indistinguishable from one another.

Similarly, when a program changes its appearance, it should be in response to


a behavior change; a program that changes its appearance for no apparent
reason will quickly teach the user not to depend on appearances for clues as to
the program's state.

One of the most important kinds of state is the current selection, in other words
the object or set of objects that will be affected by the next command. It is
important that this internal state be visualized in a way that is consistent, clear,
and unambiguous. For example, one common mistake seen in a number of
multi-document applications is to forget to "dim" the selection when the window
goes out of focus. The result of this is that a user, looking at several windows at
once, each with a similar-looking selection, may be confused as to exactly which
selection will be affected when they hit the "delete" key. This is especially true if
the user has been focusing on the selection highlight, and not on the window
frame, and consequently has failed to notice which window is the active one.
(Selection rules are one of those areas that are covered poorly by most UI
style guidelines, which tend to concentrate on "widgets", although the Mac
and Amiga guidelines each have a chapter on this topic.)

6. The principle of shortcuts

-- Provide both concrete and abstract ways of getting a task done

Once a user has become experienced with an application, she will start to build
a mental model of that application. She will be able to predict with high accuracy
what the results of any particular user gesture will be in any given context. At
this point, the program's attempts to make things "easy" by breaking up complex
actions into simple steps may seem cumbersome. Additionally, as this mental
model grows, there will be less and less need to look at the "in your face"
exposure of the application's feature set. Instead, pre-memorized "shortcuts"
should be available to allow rapid access to more powerful functions.

There are various levels of shortcuts, each one more abstract than its
predecessor. For example, in the emacs editor commands can be invoked
directly by name, by menu bar, by a modified keystroke combination, or by a
single keystroke. Each of these is more "accelerated" than its predecessor.

There can also be alternate methods of invoking commands that are designed
to increase power rather than to accelerate speed. A "recordable macro" facility
is one of these, as is a regular-expression search and replace. The important
thing about these more powerful (and more abstract) methods is that they should
not be the most exposed methods of accomplishing the task. This is why emacs
has the non-regexp version of search assigned to the easy-to-remember "C-s"
key.

7. The principle of focus

-- Some aspects of the UI attract attention more than others do

The human eye is a highly non-linear device. For example, it possesses edge-
detection hardware, which is why we see Mach bands whenever two closely
matched areas of color come into contact. It also has motion-detection hardware.
As a consequence, our eyes are drawn to animated areas of the display more
readily than static areas. Changes to these areas will be noticed readily.

The mouse cursor is probably the most intensely observed object on the screen
-- it's not only a moving object, but mouse users quickly acquire the habit
of tracking it with their eyes in order to navigate. This is why global state
changes are often signaled by changes to the appearance of the cursor, such
as the well-known "hourglass cursor". It's nearly impossible to miss.

The text cursor is another example of a highly eye-attractive object. Changing


its appearance can signal a number of different and useful state changes.

8. The principle of grammar

-- A user interface is a kind of language -- know what the rules are

Many of the operations within a user interface require both a subject (an object
to be operated upon), and a verb (an operation to perform on the object). This
naturally suggests that actions in the user interface form a kind of grammar. The
grammatical metaphor can be extended quite a bit, and there are elements of
some programs that can be clearly identified as adverbs, adjectives and such.

The two most common grammars are known as "Action->Object" and "Object-
>Action". In Action->Object, the operation (or tool) is selected first. When a
subsequent object is chosen, the tool immediately operates upon the object. The
selection of the tool persists from one operation to the next, so that many objects
can be operated on one by one without having to re-select the tool. Action-
>Object is also known as "modality", because the tool selection is a "mode"
which changes the operation of the program. An example of this style is a paint
program -- a tool such as a paintbrush or eraser is selected, which can then
make many brush strokes before a new tool is selected.

In the Object->Action case, the object is selected first and persists from one
operation to the next. Individual actions are then chosen which operate on the
currently selected object or objects. This is the method seen in most word
processors -- first a range of text is selected, and then a text style such as bold,
italic, or a font change can be selected. Object->Action has been called "non-
modal" because all behaviors that can be applied to the object are always
available. One powerful type of Object->Action is called "direct manipulation",
where the object itself is a kind of tool -- an example is dragging the object to a
new position or resizing it.

Modality has been much criticized in user-interface literature because early


programs were highly modal and had hideous interfaces. However, while non-
modality is the clear winner in many situations, there are a large number of
situations in life that are clearly modal. For example, in carpentry, it's
generally more efficient to hammer in a whole bunch of nails at once than to
hammer in one nail, put down the hammer, pick up the measuring tape, mark the
position of the next nail, pick up the drill, etc.

9. The principle of help

-- Understand the different kinds of help a user needs

An essay in [LAUR91] states that there are five basic types of help,
corresponding to the five basic questions that users ask:

 1. Goal-oriented: "What kinds of things can I do with this program?"
 2. Descriptive: "What is this? What does this do?"
 3. Procedural: "How do I do this?"
 4. Interpretive: "Why did this happen?"
 5. Navigational: "Where am I?"

The essay goes on to describe in detail the different strategies for answering
these questions, and shows how each of these questions requires a different
sort of help interface in order for the user to be able to adequately phrase the
question to the application.

10. The principle of safety

-- Let the user develop confidence by providing a safety net

Ted Nelson once said "Using DOS is like juggling with straight razors. Using a
Mac is like shaving with a bowling pin."

Each human mind has an "envelope of risk", that is to say a minimum and
maximum range of risk-levels which they find comfortable. A person who finds
herself in a situation that is too risky for her comfort will generally take steps to
reduce that risk. Conversely, when a person's life becomes too safe -- in other
words, when the risk level drops below the minimum threshold of the risk
envelope -- she will often engage in actions that increase her level of risk.

This comfort envelope varies for different people and in different situations. In
the case of computer interfaces, a level of risk that is comfortable for a novice
user might make a "power-user" feel uncomfortably swaddled in safety.

It's important for new users that they feel safe. They don't trust themselves or
their skills to do the right thing. Many novice users think poorly not only of their
technical skills, but of their intellectual capabilities in general (witness the
popularity of the "...for Dummies" series of tutorial books.) In many cases these
fears are groundless, but they need to be addressed. Novice users need to be
assured that they will be protected from their own lack of skill. A program with
no safety net will make this type of user feel uncomfortable or frustrated to the
point that they may cease using the program. The "Are you sure?" dialog box
and multi-level undo features are vital for this type of user.

At the same time, an expert user must be able to use the program as a virtuoso.
She must not be hampered by guard rails or helmet laws. However, expert users
are also smart enough to turn off the safety checks -- if the application allows it.
This is why "safety level" is one of the more important application configuration
options.
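The multi-level undo feature mentioned above can be sketched as a simple stack of prior states. The Python fragment below is a minimal illustration — the Document class and its methods are invented for this example:

```python
# Minimal sketch of a multi-level undo "safety net": every change pushes
# the previous state onto a stack, and undo can be repeated level by level.
class Document:
    def __init__(self, text=""):
        self.text = text
        self._history = []        # stack of prior states

    def edit(self, new_text):
        self._history.append(self.text)
        self.text = new_text

    def undo(self):
        if self._history:         # quietly ignore undo past the start
            self.text = self._history.pop()

doc = Document("draft")
doc.edit("draft v2")
doc.edit("draft v3")
doc.undo()
doc.undo()
print(doc.text)                   # draft
```

A real application would also cap the history depth and pair each undo with a redo stack, but the safety principle is the same: the user can always step back from a mistake.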

11. The principle of context

-- Limit user activity to one well-defined context unless there's a good
reason not to

Each user action takes place within a given context -- the current document, the
current selection, the current dialog box. A set of operations that is valid in one
context may not be valid in another. Even within a single document, there may
be multiple levels -- for example, in a structured drawing application, selecting a
text object (which can be moved or resized) is generally considered a different
state from selecting an individual character within that text object.

It's usually a good idea to avoid mixing these levels. For example, imagine an
application that allows users to select a range of text characters within a
document, and also allows them to select one or more whole documents (the
latter being a distinct concept from selecting all of the characters in a document).
In such a case, it's probably best if the program disallows selecting both
characters and documents in the same selection. One unobtrusive way to do
this is to "dim" the selection that is not applicable in the current context. In the
example above, if the user had a range of text selected, and then selected a
document, the range of selected characters could become dim, indicating that
the selection was not currently pertinent. The exact solution chosen will of course
depend on the nature of the application and the relationship between the
contexts.

12. The principle of aesthetics

-- Create a program of beauty

It's not necessary that each program be a visual work of art. But it's important
that it not be ugly. There are a number of simple principles of graphical design
that can easily be learned, the most basic of which was coined by artist and
science fiction writer William Rotsler: "Never do anything that looks to someone
else like a mistake." The specific example Rotsler used was a painting of a
Conan-esque barbarian warrior swinging a mighty broadsword. In this picture,
the tip of the broadsword was just off the edge of the picture. "What that looks
like", said Rotsler, "is a picture that's been badly cropped. They should have had
the tip of the sword either clearly within the frame or clearly out of it."

An interface example can be seen in the placement of buttons -- imagine five
buttons, each with a different label, where the labels are almost the same size. Because
the buttons are packed using an automated-layout algorithm, each button is
almost but not exactly the same size. As a result, though the author has placed
much care into his layout, it looks carelessly done. A solution would be to have
the packing algorithm know that buttons that are almost the same size look better
if they are exactly the same size -- in other words, to encode some of the rules
of graphical design into the layout algorithm. Similar arguments hold for manual
widget layout.
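One way to encode such a graphical-design rule into a layout algorithm is sketched below. The function name and the pixel tolerance are invented for illustration: button widths that are "almost" equal are snapped to the widest, so the layout no longer looks like a mistake.

```python
# Sketch of encoding a design rule into a layout algorithm: widths within
# a small tolerance of the widest button are snapped to be exactly equal.
def equalize_widths(widths, tolerance=8):
    widest = max(widths)
    return [widest if widest - w <= tolerance else w for w in widths]

# Four buttons are nearly the widest and get snapped; the clearly
# narrower one keeps its own width.
print(equalize_widths([96, 100, 94, 60, 98]))   # [100, 100, 100, 60, 100]
```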

13. The principle of user testing

-- Recruit help in spotting the inevitable defects in your design

In many cases a good software designer can spot fundamental defects in a user
interface. However, there are many kinds of defects which are not so easy to
spot, and in fact an experienced software designer is often less capable of
spotting them than the average person. In other cases, a bug can only be
detected while watching someone else use the program.

User-interface testing, that is, the testing of user-interfaces using actual end-
users, has been shown to be an extraordinarily effective technique for
discovering design defects. However, there are specific techniques that can be
used to maximize the effectiveness of end-user testing. These are outlined in
both [TOG91] and [LAUR91] and can be summarized in the following steps:

 Set up the observation. Design realistic tasks for the users, and then
recruit end-users that have the same experience level as users of your
product (avoid recruiting users who are familiar with your product,
however).
 Describe to the user the purpose of the observation. Let them know that
you're testing the product, not them, and that they can quit at any time.
Make sure that they understand that if anything bad happens, it's not their
fault, and that it's helping you to find problems.
 Talk about and demonstrate the equipment in the room.
 Explain how to "think aloud". Ask them to verbalize what they are thinking
about as they use the product, and let them know you'll remind them to
do so if they forget.

 Explain that you will not provide help.
 Describe the tasks and introduce the product.
 Ask if there are any questions before you start; then begin the
observation.
 Conclude the observation. Tell them what you found out and answer any
of their questions.
 Use the results.




14. The principle of humility

-- Listen to what ordinary people have to say

Some of the most valuable insights can be gained by simply watching other
people attempt to use your program. Others can come from listening to their
opinions about the product. Of course, you don't have to do exactly everything
they say. It's important to realize that each of you, user and developer, has only
part of the picture. The ideal is to take a lot of user opinions, plus your insights
as a developer and reduce them into an elegant and seamless whole -
- a design which, though it may not satisfy everyone, will satisfy the greatest
needs of the greatest number of people.

One must be true to one's vision. A product built entirely from customer feedback
is doomed to mediocrity, because what users want most are the features that
they cannot anticipate.

But a single designer's intuition about what is good and bad in an application is
insufficient. Program creators are a small, and not terribly representative, subset
of the general computing population.

Some things designers should keep in mind about their users:

 Most people have a biased idea as to what the "average" person is
like. This is because most of our interpersonal relationships are in some
way self-selected. It's a rare person whose daily life brings them into
contact with other people from a full range of personality types and
backgrounds. As a result, we tend to think that others think "mostly like
we do." Designers are no exception.
 Most people have some sort of core competency, and can be expected
to perform well within that domain.
 The skill of using a computer (also known as "computer literacy") is
actually much harder than it appears.
 The lack of "computer literacy" is not an indication of a lack of basic
intelligence. While native intelligence does contribute to one's ability to
use a computer effectively, there are other factors which seem to be just

as significant, such as a love of exploring complex systems, and an
attitude of playful experimentation. Much of the fluency with computer
interfaces derives from play -- and those who have dedicated themselves
to "serious" tasks such as running a business, curing disease, or helping
victims of tragedy may lack the time or patience to be able to devote effort
to it.
 A high proportion of programmers are introverts, compared to the
general population.

Bibliography

[TOG91] Tog on Interface, Bruce Tognazzini, Addison-Wesley, 1991, ISBN
0-201-60842-1
[LAUR91] The Art of Human-Computer Interface Design, Brenda Laurel,
Addison-Wesley, 1991, ISBN 0-201-51797-3
The Psychology of Everyday Things, Don Norman, Harper-Collins, 1988, ISBN
0-465-06709-3
The Macintosh Human Interface Guidelines, Apple Computer Staff,
Addison-Wesley, 1993, ISBN 0-201-62216-5
The Amiga User Interface Style Guide, Commodore-Amiga, Addison-Wesley,
1991, ISBN 0-201-57757-7

Principles of User Interface Design are intended to improve the quality of user
interface design. According to Larry Constantine and Lucy Lockwood in their
usage-centered design, these principles are:

 The structure principle: Design should organize the user interface
purposefully, in meaningful and useful ways based on clear, consistent
models that are apparent and recognizable to users, putting related things
together and separating unrelated things, differentiating dissimilar things and
making similar things resemble one another. The structure principle is
concerned with overall user interface architecture.

 The simplicity principle: The design should make simple, common tasks easy,
communicating clearly and simply in the user's own language, and providing
good shortcuts that are meaningfully related to longer procedures.

 The visibility principle: The design should make all needed options and
materials for a given task visible without distracting the user with extraneous
or redundant information. Good designs don't overwhelm users with
alternatives or confuse with unneeded information.

 The feedback principle: The design should keep users informed of actions or
interpretations, changes of state or condition, and errors or exceptions that
are relevant and of interest to the user through clear, concise, and
unambiguous language familiar to users.

 The tolerance principle: The design should be flexible and tolerant, reducing
the cost of mistakes and misuse by allowing undoing and redoing, while also
preventing errors wherever possible by tolerating varied inputs and
sequences and by interpreting all reasonable actions.

 The reuse principle: The design should reuse internal and external
components and behaviors, maintaining consistency with purpose rather than
merely arbitrary consistency, thus reducing the need for users to rethink and
remember.

EXAMPLE OF A SAMPLE FORM:

Design a VB form that allows a learner's details to be captured. Use your own
fields when designing this form. You need buttons to save the form as well as
to perform navigation.

CHAPTER 12:

DEVELOPING A TEST PLAN

System testing of software or hardware is testing conducted on a complete,
integrated system to evaluate the system's compliance with its specified
requirements.

 System testing falls within the scope of black box testing, and as such,
should require no knowledge of the inner design of the code or logic.
 As a rule, system testing takes, as its input, all of the "integrated" software
components that have successfully passed integration testing and also the
software system itself integrated with any applicable hardware system(s).
 The purpose of integration testing is to detect any inconsistencies between
the software units that are integrated together (called assemblages) or
between any of the assemblages and the hardware.
 System testing is a more limited type of testing; it seeks to detect defects
both within the "inter-assemblages" and also within the system as a whole.

Testing the whole system

 System testing is performed on the entire system in the context of a
Functional Requirement Specification (FRS) and/or a System
Requirement Specification (SRS).
 System testing tests not only the design, but also the behaviour and even
the believed expectations of the customer.
 It is also intended to test up to and beyond the bounds defined in the
software/hardware requirements specification(s).

Types of tests to include in system testing

The following examples are different types of testing that should be considered
during System testing:

Graphical user interface testing
Usability testing
Software performance testing
Compatibility testing
Exception handling
Load testing
Volume testing
Stress testing
Security testing
Scalability testing
Sanity testing
Smoke testing
Exploratory testing
Ad hoc testing
Regression testing
Installation testing
Maintenance testing
Recovery testing and failover testing.

Although different testing organizations may prescribe different tests as part of
System testing, this list serves as a general framework or foundation to begin
with.


THE TEST PLAN

A test plan is a document detailing a systematic approach to testing a system
such as a machine or software. The plan typically contains a detailed
understanding of what the eventual workflow will be.

The purpose of the test plan is to ensure that the system does what it was
designed to do. The heart of a test plan is the test data.

TESTING LEVELS

THE SYSTEM TEST

 The system test is planned first and executed last, and its purpose is to
ensure that the system meets the requirements.
 Expert testing is performed by the people who designed the system.
 Another option is the worst-case test, where everything that could go
wrong is allowed to happen.
 Expert testing should be followed by the user test, where the user tests
the system.
 The final system test involves the testing of the entire system.
 The final system test might include a parallel run, which compares the
performance of the old system with the new system.


INTERFACE TEST

Once the system test plan has been developed, the analyst turns to networks,
files, databases, system software, courier services and other interfaces that
link two or more components.

TESTING HARDWARE

 Many large companies hire specialists to test, install and maintain their
hardware.
 Basic electronic functions are tested first. Many hardware components
come with their own diagnostic routines, which should be run.
 Many hardware tests include a burn-in period because of startup failures.

TESTING SOFTWARE

 Programs and program modules are subjected to a black box test.
 Given a set of input data, the expected output is predicted.
 The data are then input and the results are captured.
 If the actual results match the predicted results, then the program has
passed the test.
 Each module is tested individually and then the entire program is tested.
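The black box steps above can be sketched as follows. The `discount` function is a hypothetical module under test, invented for this example; the point is the procedure of predicting outputs before running the module and then comparing:

```python
# Black box test sketch: predict the output for known inputs, run the
# module, then compare actual results against the predictions.
def discount(amount):
    # hypothetical module under test: 10% off orders of 100 or more
    return amount * 0.9 if amount >= 100 else amount

# (input, predicted output) pairs prepared before the test run
cases = [(50, 50), (100, 90.0), (200, 180.0)]

for given, predicted in cases:
    actual = discount(given)
    assert actual == predicted, f"FAIL: {given} -> {actual}, expected {predicted}"
print("module passed the black box test")
```

Note that the test never inspects the body of `discount` — only its inputs and outputs, which is what makes this a black box test.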

TESTING THE PROCEDURES

The responsibility of testing procedures usually rests with the analyst. Initially
a draft procedure is tested in a lab, then controlled users are used, and finally
the procedures are tested by real-life users with real data.

GENERATING TEST DATA

A key objective of testing is to try to break the system. Good test data anticipates
everything that could possibly go wrong and provides values that potentially
cause every conceivable error. Forcing errors during the testing process
increases the likelihood that they will be discovered and corrected before the
system is released to the user.

The test data should include historical data, hypothetical data, and real data.
Historical data (previously processed data) are necessary to check old system
and new system compatibility. Hypothetical data, or simulated data, are created
specifically for testing purposes. Real data are provided by the user and reflect
the system’s actual operating environment. Some of the test data should
represent normal or typical conditions. Other data should focus on the extremes
and incorporate both legal and illegal values.

Listed below are several techniques for generating test data.

Value analysis

Value analysis generates test data based on the data values. Range constraint
analysis, or boundary analysis, suggests test data to represent such extreme
values as upper bounds, lower bounds, and other exceptional values (e.g., a
negative number or zero). Typically, both in-range and out-of-range values are
included. Format constraint analysis focuses on data type; for example, a zero
or a numeric digit might be placed in an alphabetic field, non-digits might be
inserted into a numeric field, or a value other than F or M might be recorded in a
single-character sex or gender field. Length constraint analysis generates test
data with too many or too few characters or digits; this technique is useful for
testing such fixed length fields as a social security number or a telephone
number.

Value analysis should be performed on the algorithms as well as the input data.
For example, a set of test data values might include two out of range parameters
(one high and one low) that, taken together, produce an in-range answer. Other
algorithm-based test data might reflect all possible extreme (but legal) value
combinations; for example, all parameters at the upper limit, all parameters at
the lower limit, one high and all others low, and so on.
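Range (boundary) constraint analysis can be mechanized. The sketch below generates in-range and out-of-range values around a field's legal bounds; the function name and the example field are invented for illustration:

```python
# Sketch of boundary (range constraint) analysis: given a field's legal
# range, generate test values at and just beyond each bound, plus the
# exceptional values zero and a negative number.
def boundary_values(low, high):
    return [low - 1, low, low + 1,        # around the lower bound
            high - 1, high, high + 1,     # around the upper bound
            0, -1]                        # zero and a negative value

# e.g. test data for a hypothetical day-of-month field (legal range 1..31)
print(boundary_values(1, 31))             # [0, 1, 2, 30, 31, 32, 0, -1]
```

Format and length constraint analysis would be sketched similarly, generating wrong-type values (letters in a numeric field) and strings that are too short or too long for a fixed-length field.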

Data analysis

Path testing, also called branch analysis or loop analysis, is used to check the
flow of logic through a program. The idea is
to trace the program listing, identify the branch points, and include test data to
force the program to follow each path. Generally, this technique relies on data
values near the branch values to verify the program logic.
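Path testing with data values near the branch value can be sketched as below. The grading routine is an invented example, not from the source; the test data deliberately sits just below, at, and just above the branch value so that both paths are exercised:

```python
# Path (branch) testing sketch: force the program down each branch using
# data values near the branch value.
def pass_fail(mark):
    if mark >= 50:                # branch point
        return "pass"
    return "fail"

# values just below, at, and just above the branch value of 50
for mark, expected in [(49, "fail"), (50, "pass"), (51, "pass")]:
    assert pass_fail(mark) == expected
print("both branches exercised")
```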

A variation called structured data analysis testing focuses on data structures, the
relationships among the data, and such unique data as record keys. For
example, structured data analysis testing can be used to evaluate the order of
data within a file, a table, or a relation by creating test data with all possible
primary and secondary key combinations. A third variation is called data volume
analysis. Testing such parameters as response time under peak load conditions
calls for large volumes of test data. Data volume can be achieved by replicating
and reprocessing test data or by using historical data.

Volume analysis

Volume analysis, or control analysis, is intended to check the system’s behavior.
For example, control totals might be checked by processing a set of test
data, generating the totals, and then shuffling the transaction order and
reprocessing the transactions to see if the same control total is generated.

Compatibility analysis

Some applications are designed to access data from multiple versions of a file
or a database. For example, imagine a set of old data files developed using the
COBOL delimited file format and a new database designed for SQL access.
Occasionally, the system might be asked to convert the old data file structure to
support a query or to generate a report, and some new transactions might trigger
updates to the original file. Test data are needed to force the program to obtain
input from and send output to both files.

Partition analysis

Partition analysis focuses on aggregate values. The reliability of a database is a
function of correctness and completeness. The correctness of each individual
transaction can be verified using data analysis techniques with discrete values.
Aggregate data are developed to test completeness. For
example, a type of aggregate value testing called existence testing might be
used to check a database record by simply checking its record number or
verifying that the record is referenced in the index.

System-dependent test data

Different types of systems call for special test data to test system-specific
parameters. For example, symbolic data are essential for testing expert systems,
real-time systems require time-varying and environment-dependent data, data
communication systems require data to test transmission errors, and so on.

INSPECTION AND WALK THROUGH

REVIEW
 A review is "a process or meeting during which artifacts of a software
product are examined by project stakeholders, user representatives, or
other interested parties for feedback or approval".
 A software review can cover technical specifications, designs, source code,
user documentation, support and maintenance documentation, test plans,
test specifications, standards, and any other type of work product, and it
can be conducted at any stage of the software development life cycle.
 The purpose of conducting a review is to minimize the defect ratio as early
as possible in the software development life cycle.
 As a general principle, the earlier a document is reviewed, the smaller the
impact of its defects on downstream activities and their work products. The
cost of fixing a defect after the release of the product is far greater.
 Reviews can be formal or informal. Informal reviews are referred to as
walkthroughs and formal reviews as inspections.

WALKTHROUGH
 Walkthrough: a method of conducting an informal group/individual review,
in which a designer or programmer leads members of the development
team and other interested parties through a software product. The
participants ask questions and make comments about possible errors,
violations of development standards, and other problems, or may suggest
improvements to the article. A walkthrough can be pre-planned or
conducted on an as-needed basis, and generally the people working on the
work product are involved in the walkthrough process.

 The purpose of a walkthrough is to:
· Find problems
· Discuss alternative solutions
· Focus on demonstrating how the work product meets all requirements.

 There are three specialist roles in a walkthrough:

Leader: who conducts the walkthrough, handles administrative tasks,
and ensures orderly conduct (and who is often the Author).

Recorder: who notes all anomalies (potential defects), decisions, and action
items identified during the walkthrough meeting, and normally generates the
minutes of the meeting at the end of the walkthrough session.

Author: who presents the software product step by step at the walkthrough
meeting, and is probably responsible for completing most action items.

WALKTHROUGH PROCESS
 The Author describes the artifact to be reviewed to the reviewers during
the meeting.
 Reviewers present comments, possible defects, and improvement
suggestions to the author. The Recorder records all defects and
suggestions during the walkthrough meeting.
 Based on reviewer comments, the author performs any necessary rework
of the work product if required.
 The Recorder prepares the minutes of the meeting and sends them to the
relevant stakeholders. The Leader monitors the overall walkthrough
activities as per the defined company process or responsibilities for
conducting reviews, and generally follows up on commitments against
action items.

INSPECTION
 An inspection is a formal, rigorous, in-depth group review designed to
identify problems as close to their point of origin as possible.
 Inspection is a recognized industry best practice used to improve the
quality of a product and to improve productivity. An inspection is a formal
review, and the need for it is generally predefined at the start of product
planning.
 The objectives of the inspection process are to: 1) find problems at the
earliest possible point in the software development process; 2) verify that
the work product meets its requirements; 3) ensure that the work product
has been presented according to predefined standards; 4) provide data on
product quality and process effectiveness; 5) build technical knowledge
and skill among team members by reviewing the output of other people;
and 6) increase the effectiveness of software testing.

 There are five roles in an Inspection:

Inspection Leader: The inspection leader shall be responsible for
administrative tasks pertaining to the inspection, shall be responsible for
planning and preparation, shall ensure that the inspection is conducted
in an orderly manner and meets its objectives, and should be responsible
for collecting inspection data.

Recorder: The recorder should record inspection data required for process
analysis. The inspection leader may be the recorder.

Reader: The reader shall lead the inspection team through the software
product in a comprehensive and logical fashion, interpreting sections of the
work product and highlighting important aspects.

Author: The author shall be responsible for the software product meeting its
inspection entry criteria, for contributing to the inspection based on special
understanding of the software product, and for performing any rework
required to make the software product meet its inspection exit criteria.

Inspector: Inspectors shall identify and describe anomalies in the software
product. Inspectors shall be chosen to represent different viewpoints at the
meeting (for example, sponsor, requirements, design, code, safety, test,
independent test, project management, quality management, and hardware
engineering). Only those viewpoints pertinent to the inspection of the product
should be present. Some inspectors should be assigned specific review
topics to ensure effective coverage. For example, one inspector may focus
on conformance with a specific standard or standards, another on syntax,
and another on overall coherence. These roles should be assigned by the
inspection leader when planning the inspection. All participants in the review
are inspectors. The author shall not act as inspection leader and should not
act as reader or recorder. Other roles may be shared among the team
members. Individual participants may act in more than one role. Individuals
holding management positions over any member of the inspection team shall
not participate in the inspection.

THE INSPECTION PROCESS

The inspection process consists of the following phases:
· Planning
· Overview
· Preparation
· Examination meeting
· Rework and follow-up

Planning:
The Inspection Leader performs the following tasks in the planning phase:
· Determine which work products need to be inspected
· Determine if a work product that needs to be inspected is ready to
be inspected
· Identify the inspection team
· Determine if an overview meeting is needed.

The moderator ensures that all inspection team members have had
inspection process training. The moderator obtains a commitment from
each team member to participate. This commitment means the person
agrees to spend the time required to perform his or her assigned role on
the team. The leader also identifies the review materials required for the
inspection and distributes them to the relevant stakeholders.

Overview: The purpose of the overview meeting is to educate the inspectors.
The meeting is led by the inspection leader and presented by the author, who
gives an overview of the area to be inspected. This meeting is normally
optional; its purpose is to bring all of the participants in sync on the area to
be inspected.

Preparation: The objective of the preparation phase is to prepare for the
inspection meeting by critically reviewing the review materials and the
work product. Participants drill down into the documents distributed by the
lead inspector and identify defects before the meeting.

Examination meeting: The objective of the inspection meeting is to identify
the final defect list in the work product being inspected, based on the initial
list of defects prepared by the inspectors at the preparation phase and any
new ones found during the inspection meeting. The inspection leader opens
the meeting, describes the review objectives and the area to be inspected,
and confirms that all participants are familiar with the content material. The
Reader reads through the meeting material while the inspectors point out
any inconsistencies, possible defects, and improvement suggestions to the
author. The Recorder records all of the discussion during the inspection
meeting and marks actions against the relevant stakeholders. The inspection
leader may decide whether a follow-up meeting is needed. The author
updates the relevant documents, if required, on the basis of the inspection
meeting discussion.

Rework and Follow-up: The objective is to ensure that corrective action
has been taken to correct problems found during an inspection.

THE INSPECTION REVIEW

Following the inspection, the inspection team signs a document that tells
management that the project has been reviewed and found technically
acceptable.

POTENTIAL PROBLEMS
The error report compiled during the inspection process represents a point of
concern: an analyst, fearing criticism or misuse of the report, may postpone
the inspection process.

VOCABULARY LIST

CHAPTER 1: INFORMATION SYSTEMS ANALYSIS


A System is a set of components that function together in a meaningful and effective
manner.

An Information System is a set of hardware, software, data, procedural, and human
components that work together to generate, collect, store, retrieve, process, analyze
and/or distribute information.

Systems Analysis is the study of a business problem domain to recommend
improvements and specify the business requirements for a solution.

The person who plans and designs a system is called a systems analyst.

A Methodology is a set of tools used in the context of clearly defined steps that end
with specific, measurable exit criteria.

The Systems Development Life Cycle (SDLC), or Software Development Life Cycle
in systems engineering, information systems and software engineering, is the process
of creating or altering systems, and the models and methodologies that people use to
develop these systems.

CASE tools are a class of software that automate many of the activities involved in
various life cycle phases.

CHAPTER 2: RECOGNIZING AND DEFINING THE PROBLEM


Problem recognition is the act of identifying a problem.

Problem Definition is the act of defining a problem's causes.

A Cause and Effect Diagram, also known as the fishbone (or Ishikawa) diagram after its originator, is used to document possible causes and secondary symptoms.

A problem statement is a concise description of the issues that need to be addressed by a problem-solving team and should be presented to them (or created by them) before they try to solve the problem.

A feasibility study is an evaluation of a proposal designed to determine the difficulty in carrying out a designated task.

CHAPTER 3: INFORMATION GATHERING


The word “analyze” means to study something by breaking it down into its constituent parts.

A process is an activity that transforms data in some way. Within a given process, data may be collected, recorded, moved, manipulated, sorted, etc.


A system's boundaries can be defined by identifying those people, organizations and other systems that lie just outside the target system.

The data dictionary is a collection of data about data.

An Entity is an object about which data is stored (a person, group, thing or activity).

An Occurrence is a single instance of an entity

An Attribute is a property of an entity

A data Element is an attribute that cannot be logically decomposed.

A set of related data elements forms a data structure, or composite data element.
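The relationships among entities, occurrences, attributes, data elements, and data structures can be sketched as a small data dictionary fragment. The following is an illustrative sketch only; the Customer entity and all attribute names are hypothetical:

```python
# A hypothetical data dictionary fragment for a "Customer" entity.
# Each attribute is either a data element (cannot be logically
# decomposed) or a data structure (a composite of related elements).
data_dictionary = {
    "Customer": {                      # the entity
        "customer_id": {"type": "data element"},
        "name": {                      # a data structure (composite)
            "type": "data structure",
            "elements": ["first_name", "last_name"],
        },
        "phone": {"type": "data element"},
    }
}

# An occurrence is a single instance of the entity.
occurrence = {"customer_id": 1001,
              "name": {"first_name": "Thandi", "last_name": "Ngcobo"},
              "phone": "031-555-0101"}

# List the composite attributes (data structures) of the entity.
composites = [attr for attr, meta in data_dictionary["Customer"].items()
              if meta["type"] == "data structure"]
print(composites)  # ['name']
```

Note that only "name" is a data structure here: it decomposes into the data elements first_name and last_name, which cannot be decomposed further.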

CHAPTER 4: LOGICAL MODELING

The 'Context Diagram' is an overall, simplified view of the target system, which contains only one process box and the primary inputs and outputs.

Processes are 'black boxes': we don't know what is in them until they are decomposed. Processes transform or manipulate input data to produce output data.

Data Flows depict data/information flowing to or from a process.

External Entities, also known as 'external sources/recipients', are things (e.g. people, machines, organisations etc.) which contribute data or information to the system or which receive data/information from it.

Data Stores are locations where data is held temporarily or permanently.
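The four data flow diagram components above can be mimicked in a few lines of code. This is only an illustrative sketch, with a hypothetical order-recording process; the names and figures are invented:

```python
# Sketch of DFD concepts: an external entity sends a data flow into a
# process, which transforms it and writes to a data store.

orders_store = []  # data store: holds data (here, simply in memory)

def record_order(order):
    """Process: transforms an incoming 'order' data flow and stores it."""
    order = {**order, "total": order["qty"] * order["unit_price"]}
    orders_store.append(order)   # data flow into the data store
    return order                 # data flow back to the external entity

# The external entity (a customer) contributes a data flow to the system.
result = record_order({"item": "widget", "qty": 3, "unit_price": 2.50})
print(result["total"])  # 7.5
```

From the outside, record_order behaves as a black box: we see its input and output data flows without needing to know how it transforms the data internally.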

CHAPTER 5: DATA MODELING


The ER model forms the basis of an ER diagram.

A single-valued attribute has only a single value.
A composite attribute can be subdivided.
A simple attribute cannot be subdivided.
A multi-valued attribute can have many values.
A derived attribute can be calculated from other information.
Relationships are associations between entities.

A Weak Entity is existence-dependent on another entity.
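These attribute types can be illustrated with a small sketch. The Employee entity below is hypothetical, and the age calculation is deliberately simplified (year subtraction only, against a fixed date so the example is repeatable):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Employee:
    employee_id: int          # simple, single-valued attribute
    first_name: str           # first_name and last_name are the parts
    last_name: str            # of a composite "name" attribute
    skills: list              # multi-valued attribute (many values)
    birth_date: date          # stored attribute

    @property
    def full_name(self):      # the composite attribute, reassembled
        return f"{self.first_name} {self.last_name}"

    @property
    def age(self):            # derived attribute: calculated, not stored
        today = date(2024, 1, 1)          # fixed for repeatability
        return today.year - self.birth_date.year

e = Employee(1, "Sipho", "Dlamini", ["SQL", "Python"], date(1990, 6, 15))
print(e.full_name, e.age)  # Sipho Dlamini 34
```

Nothing in the record stores full_name or age; both are computed on demand, which is exactly what distinguishes composite and derived attributes from stored data elements.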

CHAPTER 6: PROTOTYPING
A prototype is a model for an intended system.

A screen generator is a tool for creating displays on V.D.U. screens quickly and easily.
A report generator is software for quickly creating a report from data in a database.


A 4GL is a non-procedural programming language, i.e. the programmer states what needs to be done rather than how it is done.
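The "what versus how" distinction can be shown by solving the same task both ways. The staff data below is invented for illustration; SQL is used as a representative non-procedural language:

```python
import sqlite3

# Sample data (illustrative only).
rows = [("Ann", 52000), ("Ben", 61000), ("Cas", 47000)]

# Procedural (HOW): spell out the loop, the test, the accumulation.
high_earners = []
for name, salary in rows:
    if salary > 50000:
        high_earners.append(name)

# Non-procedural (WHAT): declare the desired result; the database
# engine decides how to produce it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT, salary INTEGER)")
con.executemany("INSERT INTO staff VALUES (?, ?)", rows)
declared = [r[0] for r in
            con.execute("SELECT name FROM staff WHERE salary > 50000")]

print(sorted(high_earners) == sorted(declared))  # True
```

Both approaches produce the same answer; the SQL version simply never mentions loops or comparisons step by step, which is the defining property of a 4GL.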

CHAPTER 7: PROJECT PREPARATION


A Gantt chart is a graphical representation of the duration of tasks against
the progression of time.
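The Gantt chart idea can be sketched in a few lines: each task becomes a bar whose position and length show its start and duration. The tasks and durations below are purely illustrative:

```python
# A rough text rendering of a Gantt chart: task bars against weeks.
tasks = [  # (name, start_week, duration_weeks)
    ("Analysis", 0, 3),
    ("Design",   3, 4),
    ("Coding",   5, 6),
]

for name, start, length in tasks:
    bar = " " * start + "#" * length   # offset, then the task's duration
    print(f"{name:<10}|{bar}")
```

Each printed row is one task; overlapping bars (Design and Coding here) show tasks running concurrently.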

CHAPTER 8: DEFINING REQUIREMENTS

Competitive procurement — A set of procedures for subcontracting work through a bidding process.

Behavioral requirement — A requirement that defines something the system does, such as an input, an output, or an algorithm.

Black box — A routine, module, or component whose inputs and outputs are known,
but whose contents are hidden.

Child — A related, lower-level requirement.

Configuration item — A composite entity that decomposes into specific hardware and
software components; in a data flow diagram, a functional primitive that appears at the
lowest level of decomposition.

Configuration item level — An imaginary line that links the system’s configuration
items; a system’s physical components lie just below the configuration item level.

Design or constraint requirement — A requirement that specifies such constraints as physical size and weight, environmental factors, ergonomic standards, and the like.

Economic requirement — A requirement that specifies such things as performance penalties, limits on development and operating costs, the implementation schedule, and resource restrictions.

Flowdown — A principle that requires each lower-level requirement to be linked to a single higher-level parent.

Functional primitive — A process (or transform) that requires no further decomposition.

Functional requirement — A requirement that identifies a task that the system or component must perform.

Interface requirement — A requirement that identifies a link to another system component.

Non-behavioral requirement — A requirement that defines an attribute of the system, such as speed, frequency, response time, accuracy, precision, portability, reliability, security, or maintainability.


Parent — A related, higher-level requirement.

Performance requirement — A requirement that specifies such characteristics as speed, frequency, response time, accuracy, precision, portability, reliability, security, and maintainability.

Prime item development specification — A set of high-level design requirements associated with each hardware component defined in (or implied by) a parent system/segment design document.

Quality requirement — A requirement that specifies a measure of quality, such as an acceptable error rate, the mean time between failures, or the mean time to repair.

Requirement — Something that must be present in the system; a user need.

Requirements specification — A document that clearly and precisely defines the customer’s logical requirements (or needs) in such a way that it is possible to test the finished system to verify that those needs have actually been met.

Software requirements specification — A set of high-level design requirements associated with each software component defined in (or implied by) a parent system/segment design document.

System/segment design document — A black-box specification defined for each physical component at (or directly below) the configuration item level.

System/segment specifications — A hierarchy of requirements specifications that logically defines the system from its high-level objectives down to the configuration item level.
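The flowdown principle (each lower-level requirement links to a single higher-level parent) lends itself to a simple automated check. The requirement IDs below are hypothetical:

```python
# Hypothetical requirements tree: each child names exactly one parent.
requirements = {
    "SYS-1":  None,      # top-level requirement has no parent
    "SW-1.1": "SYS-1",   # child of SYS-1 (flowdown link)
    "SW-1.2": "SYS-1",
    "HW-2.1": "SYS-1",
}

def flowdown_ok(reqs):
    """True if at least one top-level requirement exists and every
    child's single parent is a known requirement."""
    tops = [r for r, p in reqs.items() if p is None]
    return len(tops) >= 1 and all(
        p in reqs for p in reqs.values() if p is not None)

print(flowdown_ok(requirements))  # True
```

Because each entry maps to exactly one parent, the dictionary structure itself enforces the "single higher-level parent" rule; the check only verifies that every named parent actually exists.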

CHAPTER 9: GENERATING PHYSICAL ALTERNATIVES


A system flowchart is another tool for documenting a physical system.

CHAPTER 10: EVALUATING ALTERNATIVES


Development costs are one-time costs that occur before the system is released to
the user.

Operating costs are those costs that begin after the system is released and last for the lifetime of the system.

Intangible benefits such as improved morale or employee satisfaction are more difficult to measure but are just as important.

Tangible benefits are measured in financial terms and usually imply reduced
operating costs, enhanced revenue or both.

A platform is defined by a specific combination of hardware and operating system.

Contingencies are a cost factor added to the cost estimate to cover risks associated with the project.
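These cost and benefit concepts combine naturally into a simple payback calculation. All figures below are invented for illustration:

```python
# A simple cost-benefit sketch (all figures are illustrative).
development_cost = 120000          # one-time cost, incurred before release
contingency = 0.10                 # cost factor added to cover project risk
annual_operating_cost = 15000      # recurring cost after release
annual_tangible_benefit = 60000    # e.g. reduced operating costs or revenue

total_development = development_cost * (1 + contingency)
net_annual_benefit = annual_tangible_benefit - annual_operating_cost

# Payback period: years for net benefits to recover the development cost.
payback_years = total_development / net_annual_benefit
print(round(payback_years, 2))  # 2.93
```

Intangible benefits are deliberately absent from the arithmetic: they influence the decision but, by definition, resist being expressed in financial terms.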


CHAPTER 11: THE USER INTERFACE


A user interface is the point in the system where a human being interacts
with a computer.

Reports present the results of a query (output).

A form image can be displayed on a screen and used as a template for data entry.

A display screen and a keyboard are a common human/computer interface.

A dialogue is the exchange of information between the computer and the user.

Response time is the time between the issuing of a command and when
the results appear on screen.
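Response time can be measured directly by timing the gap between issuing a command and obtaining the result. The computation below is a stand-in for any real command:

```python
import time

# Measure response time: elapsed time between issuing a "command"
# and its result being available.
start = time.perf_counter()
result = sum(range(1_000_000))     # stand-in for processing a command
elapsed = time.perf_counter() - start

print(result)           # 499999500000
print(elapsed >= 0.0)   # True
```

In interface design the measured value matters less than its consistency: users tolerate slow responses better than unpredictable ones.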

CHAPTER 12: DEVELOPING THE TEST PLAN

The system test is planned first and executed last; its purpose is to ensure that the system meets the requirements.

The inspection team is headed by a moderator and includes other technical professionals.

The inspection review - following the inspection, the inspection team signs a document informing management that the project has been reviewed and found technically acceptable.


