
Unit I

1. Life cycles

The systems development life cycle (SDLC) is a conceptual model used in project management
that describes the stages involved in an information system development project, from an
initial feasibility study through maintenance of the completed application. SDLC can apply to
technical and non-technical systems. In most use cases, a system is an IT technology such as
hardware and software. Project and program managers typically take part in SDLC, along
with system and software engineers, development teams and end-users.

Every hardware or software system will go through a development process which can be
thought of as an iterative process with multiple steps. SDLC is used to give a rigid structure and
framework to define the phases and steps involved in the development of a system.

SDLC is also an abbreviation for Synchronous Data Link Control and software development
life cycle. Software development life cycle is a very similar process to systems development life
cycle, but it focuses exclusively on the development life cycle of software.

SDLC models

Various SDLC methodologies have been developed to guide the processes involved, including
the original SDLC method, the Waterfall model. Other SDLC models include rapid
application development (RAD), joint application development (JAD), the fountain model, the
spiral model, build and fix, and synchronize-and-stabilize. Another common model today is
called Agile software development.

Frequently, several models are combined into a hybrid methodology. Many of these models are
shared with the development of software, such as waterfall or agile. Numerous model
frameworks can be adapted to fit into the development of software.

In SDLC, documentation is crucial, regardless of the type of model chosen for any application,
and is usually done in parallel with the development process. Some methods work better for
specific kinds of projects, but in the final analysis, the most crucial factor for the success of a
project may be how closely the particular plan was followed.
SDLC can be made up of multiple steps.

Analysis: The existing system is evaluated. Deficiencies are identified. This can be done by interviewing users of the system and consulting with support personnel.

● Plan and requirements

● Design

● Development

● Testing

● Deployment

● Upkeep and maintenance


2. Life-cycle phases

7 Stages of the System Development Life Cycle


There are seven primary stages of the modern system development life cycle. Here’s a brief
breakdown:

● Planning Stage

● Feasibility or Requirements Analysis Stage

● Design and Prototyping Stage

● Software Development Stage

● Software Testing Stage

● Implementation and Integration

● Operations and Maintenance Stage

Planning Stage
The planning stage (also called the feasibility stage) is exactly what it sounds like: the phase in
which developers will plan for the upcoming project.

It helps to define the problem and scope of any existing systems, as well as determine the
objectives for their new systems.

By developing an effective outline for the upcoming development cycle, they'll theoretically catch problems before they affect development, and help to secure the funding and resources they need to make their plan happen.

Analysis Stage

The analysis stage includes gathering all the specific details required for a new system as well
as determining the first ideas for prototypes.

Developers may:

● Define any prototype system requirements

● Evaluate alternatives to existing prototypes

● Perform research and analysis to determine the needs of end-users


Design Stage
The design stage is a necessary precursor to the main development stage.

Developers will first outline the details for the overall application, alongside specific aspects,
such as its:

● User interfaces

● System interfaces

● Networks and network requirements

● Databases

Development Stage
The development stage is the part where developers actually write code and build the
application according to the earlier design documents and outlined specifications.

This is where Static Application Security Testing or SAST tools come into play.

Product program code is built per the design document specifications. In theory, all of the
prior planning and outlining should make the actual development phase relatively
straightforward.

Developers will follow any coding guidelines as defined by the organization and utilize
different tools such as compilers, debuggers, and interpreters.

Testing Stage
Building software is not the end.

Now it must be tested to make sure that there aren’t any bugs and that the end-user
experience will not be negatively affected at any point.

During the testing stage, developers will go over their software with a fine-tooth comb, noting
any bugs or defects that need to be tracked, fixed, and later retested.

It’s important that the software overall ends up meeting the quality standards that were
previously defined in the SRS document.

Implementation and Integration Stage


After testing, the overall design for the software will come together. Different modules or
designs will be integrated into the primary source code through developer efforts, usually by
leveraging training environments to detect further errors or defects.
The information system will be integrated into its environment and eventually installed. After
passing this stage, the software is theoretically ready for market and may be provided to any
end-users.

Maintenance Stage
The SDLC doesn’t end when software reaches the market. Developers must now move into a
maintenance mode and begin practicing any activities required to handle issues reported by
end-users.

3. logical steps of systems engineering


Step One: Requirements Analysis and Management
● The first step in any successful development project is the collection, refinement, and
organization of design inputs.
● Requirements Analysis is the art of making sure design inputs can be translated into
verifiable technical requirements.
● Organization of a hierarchical requirement structure supports the management of
complex products across distributed development teams.
● Having a systems engineer in charge of design inputs helps projects move smoothly
through each stage of development.

Step Two: Functional Analysis and Allocation


● The systems engineer leads the team in developing strategies to meet the requirements.
● Formulation of these strategies will be an iterative process that leverages trade-off
studies, risk analyses, and verification considerations, as process inputs.
● Risk management is one of the activities where the systems engineer can make the most
significant contributions by increasing patient and user safety.
● Users can have many different points of contact with a device, and without the holistic
approach of a systems engineer, it becomes difficult to mitigate risk through all
interfaces.

Step Three: Design Synthesis


● During Design Synthesis, the systems engineer leads the team through a systematic
process of quantitatively evaluating each of the proposed design solutions against a set of
prioritized metrics.
● This can help the team formulate questions or uncover problems that were not initially
obvious.
● When Design Synthesis is well-executed, it helps reduce the risk, cost, and time of
product development.

Step Four: Systems Analysis and Control


● Systems Analysis and Control activities enable the systems engineer to measure
progress, evaluate and select alternatives, and document decisions made during
development.
● Systems engineers help teams prioritize decisions by guiding them through trade-off
matrices, which rank many options against a range of pre-defined criteria.
● Systems engineers look at a wide range of metrics, such as cost, technical qualifications,
and interfacing parameters in order to help the team make decisions that will lead to
the most successful project.
● A Systems engineer can also provide assistance with modeling and simulation tasks.

Step Five: Verification


● Verification is the process of evaluating the finished design using traceable and
objective methods for design confirmation.
● The goal of verification is to make sure the design outputs satisfy the design inputs.
● The systems engineer coordinates the efforts of the verification team to ensure that
feedback from Quality Engineering gets incorporated into the final product.

4. Frameworks for systems engineering.

Unit II

1. Formulation of issues with a case study,


Benefits and Limitations

A case study can have both strengths and weaknesses. Researchers must consider these pros
and cons before deciding if this type of study is appropriate for their needs.

Pros

One of the greatest advantages of a case study is that it allows researchers to investigate things
that are often difficult or impossible to replicate in a lab. Some other benefits of a case study:

● Allows researchers to collect a great deal of information

● Gives researchers the chance to collect information on rare or unusual cases

● Permits researchers to develop hypotheses that can be explored in experimental research

Cons

On the negative side, a case study:

● Cannot necessarily be generalized to the larger population

● Cannot demonstrate cause and effect

● May not be scientifically rigorous

● Can lead to bias
Researchers may choose to perform a case study if they are interested in exploring a unique or
recently discovered phenomenon. The insights gained from such research can help the
researchers develop additional ideas and study questions that might be explored in future
studies.
However, it is important to remember that the insights gained from case studies cannot be used
to determine cause-and-effect relationships between variables. Case studies may, however, be
used to develop hypotheses that can then be addressed in experimental research.

2. Value system design,


3. Functional analysis,

Functional analysis is the next step in the Systems Engineering process after setting goals and
requirements. Functional analysis divides a system into smaller parts, called functional
elements, which describe what we want each part to do. We do not include the how of the
design or solution yet. At this point we don't want to limit the design choices, because it might
leave out the best answer. In later steps we will identify alternatives, optimize them, and select
the best ones to make up the complete system. The name Function comes from mathematical
functions, which act on an input value and produce a different output value. Similarly, in the
Systems Engineering method, functions transform a set of inputs to a set of outputs.

Functional Diagrams

Figure 4.1-1. A single function box with standard flow locations.

Figure 4.1-2. Top level functions for a MakerNet location.


There are different ways to record and display the functions which make up a system design. A
Functional Flow Block Diagram is a popular graphical method. It uses a rectangular box to
represent each function (Figure 4.1-1). Arrows represent flows or states of any type to and
from the function. The flows connect to other functions or to outside the system. By
convention, inputs are shown on the left, and outputs are shown on the right. The function box
itself transforms the inputs to the outputs. Mechanisms are the entities which perform the
function, but which are not themselves transformed. They are normally shown by arrows at
the bottom, and typically represent use of a device. Controls are inputs which command, limit,
or direct the operation of the function, and are normally shown at the top. Function names are
typically made up from an action verb and noun, like "chop wood", which summarizes the
task, and are shown inside the box. A function number is usually included to uniquely identify
it, since similar names may end up being used in different places in a project. The most
common numbering method takes the number of a parent function, and adds a digit to that
number for each sub-function. Thus 2.2 has sub-functions 2.2.1, 2.2.2, etc. This allows tracing
where a function is in the overall system simply by looking at its number.
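To make the numbering convention above concrete, here is a minimal Python sketch (the function names and numbers are made-up examples, not from the text) that derives a function's parent and its level in the hierarchy purely from its dotted number.

```python
# Minimal sketch: hierarchical function numbering (hypothetical example data).
# The parent of a function is found by dropping the last digit of its number,
# e.g. 2.2.1 -> parent 2.2, as described above.

functions = {
    "2":     "Produce Parts",
    "2.2":   "Machine Parts",
    "2.2.1": "Load Raw Casting",
    "2.2.2": "Mill Casting to Shape",
}

def parent(number):
    """Return the parent function number, or None for a top-level function."""
    parts = number.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else None

def level(number):
    """Depth in the hierarchy: 1 for top-level functions, 2 for sub-functions, etc."""
    return number.count(".") + 1

for number, name in functions.items():
    print(f"{number} {name}: level {level(number)}, parent {parent(number)}")
```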
The diagrams can be sequential, such as showing the steps of a production process; parallel,
where different activities can happen at the same time; or looping, where there is feedback or
iteration. More generally, the output of any function can lead to an input to any other function.

Top Level Diagram - In designing a factory, we are generally concerned with the production
function. The placement of the system boundary for design and analysis purposes, however,
can be different. Figure 4.1-2 shows a top-level block diagram for a complete location from the
MakerNet example. It includes a Production function, where most of the manufacturing takes
place, a Habitation function, which are the homes and other buildings the owners live and
work in, and a Transport function that moves physical items from place to place. In this case
the factory delivers products to a network of people who own and operate the equipment, and
their associated community. It may also take wastes from them to be reprocessed. Since the
owners of production are the same group as uses the habitation and transport, this is a
sufficiently integrated situation that it makes sense to place a system boundary around all of
the relevant functions. A factory which only produces products for sale does not have as strong
a link to the end users, so in that case you can exclude anything outside the production
function. The inputs and outputs from outside the selected system boundary are shown to the
left and right of the diagram, and are broken down by type. Logically this diagram is identical
to one at an even higher level, where all the inputs to the location are represented as a single
flow arrow, like in Figure 4.1-1, and the entire project is reduced to a single box with no
internal details. Showing all the details of a project at once in a single diagram makes it too
large and complicated to understand. Instead the complexity is reduced by presenting multiple
levels of detail, where a box at one level represents a more detailed diagram at the next level. In
this way, systems with any level of complexity can be described.
Flow Arrows - Flow arrows represent a state of an item or the movement of any kind of
resource between functions. A raw metal casting from a foundry process and a machined part
made from that casting are different states of a part being made. The casting would be an
input and the machined part would be an output from a "Machine Parts" function that
converts one to the other. Resources are things like energy and human labor which get used up
as inputs, or created as outputs from functions. Flows may divide and merge between
endpoints, but it should be understood that mathematically both sides of the junction are
equal. Otherwise you have spontaneous creation from nothing, or an unaccounted for source
or disposition of some item. When flows on lower level diagrams show more detailed
components, the combination should therefore be identical to a single flow that represents
them on a higher level diagram. Flows, like functions, are given names and unique numbers to
identify them. Both the flows and the function boxes will be further broken down into more
detail as the design progresses.

Representing Functional Relationships - Figure 4.1-2 does not show all the interior flows
connecting the function boxes for two reasons. The first is that we have not defined them at
this point of the design. The second is it would make the diagram unreadable because there
would be too many lines. Higher level diagrams tend to have more flows because they are
showing more of a system at once. One way to handle this complexity is to use large drawings
with multiple sheets. It is understood that each sheet only shows a subset of the flows to the
same functions, to make it more readable. Another way to deal with complexity is to track the
flows in spreadsheets and data tables, which can have as many entries as needed, one for each
flow. Whatever method is used to display and track the functions and flows, they represent the
same relationships between parts of a system. Block diagrams are just a convenient visual
starting point to understand and analyze a system.

Representing Time - Within a given diagram, a flow connecting two functions indicates the
second function logically follows the first one. As a minimum it means the second function
cannot start any earlier than the start of the first one. They can, however, start at the same
time. For example, a portable generator and whatever equipment it powers may be started
together, but the equipment can't be started before the generator, because one of its necessary
inputs, namely electricity, is not available until the generator is working. Sometimes a series of
function boxes represents a strict time order, where one has to be completed before the next
can start, but this is only necessary when flows include states of the same item. For example,
for an airplane, the function "Take Off from Runway" has to be completed before "Fly to
Destination" and "Land at Destination", because the inputs and outputs are the same
airplane, in the states of "Airplane at Departure Airport", "Airplane in Flight Leaving
Departure City", and so forth. When there is a loop in flow diagrams, such as a water
treatment unit that takes dirty water from homes and sends back clean water, the start-up
sequence should be accounted for in the design. In this case, since there is no dirty water to
process at first, a starting supply of clean water must be fed into the loop at some point.
A different representation of time is required when the system as a whole evolves from one
stage to another, and new functional elements and flows are added. One way to do this is to
prepare different versions of the same diagrams for each stage, and marking the new items
that were not present in the previous stage. This can be by highlighting or coloring the new
items. More complex system representations, like simulations, can have the new items appear
at the appointed time.

Beyond Flow Diagrams - A function or flow name used in a diagram is not enough to describe
exactly what the item is supposed to be. It is merely a label for a part of the design. As
additional details are worked out, they can be included as notes to the diagram, or in a variety
of separate documents or files. Any amount of additional data about the function can be linked
to it by using the same identifying name and function number. The unique tracking label helps
organize all the data for a project in a consistent system. Functional Descriptions expand on
the diagram labels, and are often included on the diagram sheet itself, or supplementary sheets
along with the diagrams. Descriptions can be as long as necessary, but are typically a few
sentences to a few paragraphs. Mathematical models, design drawings for different options, or
any other data that relate to the function are also marked with the same name and number to
identify them.

Analysis Process

The purpose of functional analysis is to divide a complex system into smaller and simpler
parts, so that eventually they can be individually designed. One way to divide into parts is
along the time axis. A large project can be divided into phases that are sequential in time,
where each phase adds new capabilities or hardware. A common example is developing real
estate, where each phase develops different parts of a property. Technical projects are often
divided into several design phases, production, testing, and delivery, where milestones for a
given phase must be completed before going on to the next phase. Within a part of a system,
some tasks must happen in a sequence or process in time order. So each step in the sequence
can be identified as a function. Another way to divide functions up is when they do different
things at the same time, but have inputs and outputs that connect them. So the control
electronics for a robot arm may send power to the motors, and read joint angles from a sensor.
The electronics is different enough from the motor and joint bearings that they can sensibly be
treated as different functions, with different design issues to solve.
There is no single answer for how best to divide up a system. Often different alternative
designs will require different functional breakdowns. The designer should use good judgement
to break up a system into elements that have internal relatedness, and that follow a logical flow
from one to the next. A function should contribute to meeting some part of the overall system
requirements, or else you can question the need for that function. In our MakerNet example,
the three top functions serve different purposes: changing things (Production), moving things
(Transport), and using things (Habitation). The owners/end users, who are the reason the
MakerNet is built, are included in the system design because they are strongly connected. They
are expected to be the human operators of the system, their waste products are intended to be
recycled, and the homes and commercial buildings may be physically connected to power
sources and other utilities from the Production function. An Industrial Production type factory
which is only designed to make items for sale would have less interaction between production
and the customers. In that case you can reasonably draw the system boundary around only the
production portion, and leave the end users and delivery as external flows.
The functional analysis, like all parts of the design process, is not done once and then finished. It
will evolve and be revised as the design progresses. Diagrams and their related records
represent the current state of the design at some point in time. Version numbers and dates help
identify the most current state, and change histories explain why it was changed. In a small
project or early stage of a larger one such formal tracking is not as necessary. In a large
project, where each person can only be familiar with the parts they are working on, formal
tracking helps keep the overall work coordinated, and is much more important.
Functions do not have a one-to-one relationship with hardware. A given hardware item may
perform multiple tasks, and a given function may need multiple hardware items in different
places to be completed. Rather, functions are mental abstractions. They represent pieces of the
intended purpose the system is built for. When carried to a sufficiently detailed level, the pieces
are then simple enough to design individually.
4. Business Process Reengineering,

Business process re-engineering is not just a change; it is a dramatic change aimed at dramatic
improvement. This is achieved only by overhauling organizational structures, job descriptions,
performance management, training and, most importantly, the use of IT
i.e. Information Technology.

Figure : Business Process Re-engineering


BPR projects have failed sometimes to meet high expectations. Many unsuccessful BPR
attempts are due to the confusion surrounding BPR and how it should be performed. It
becomes the process of trial and error.
Phases of BPR :
According to Peter F. Drucker, ” Re-engineering is new, and it has to be done.” There are 7
different phases for BPR. All the projects for BPR begin with the most critical requirement i.e.
communication throughout the organization.
1. Begin organizational change.
2. Build the re-engineering organization.
3. Identify BPR opportunities.
4. Understand the existing process.
5. Reengineer the process
6. Blueprint the new business system.
7. Perform the transformation.
Objectives of BPR :
Following are the objectives of the BPR :
1. To dramatically reduce cost.
2. To reduce time requirements.
3. To improve customer services dramatically.
4. To reinvent the basic rules of the business e.g. The airline industry.
5. Customer satisfaction.
6. Organizational learning.
Challenges faced by BPR process :
Not all BPR processes are as successful as described. The companies that have started the use
of BPR projects face many of the following challenges :
1. Resistance
2. Tradition
3. Time requirements
4. Cost
5. Job losses
Advantages of BPR :
Following are the advantages of BPR :
1. BPR offers tight integration among different modules.
2. It offers same views for the business i.e. same database, consistent reporting and
analysis.
3. It offers process orientation facility i.e. streamline processes.
4. It offers rich functionality like templates and reference models.
5. It is flexible.
6. It is scalable.
7. It is expandable.
Disadvantages of BPR :
Following are the Disadvantages of BPR :
1. It depends on various factors like size and availability of resources. So, it will not fit for
every business.
2. It is not capable of providing an immediate resolution.
5. Quality function deployment,

Quality Function Deployment (QFD) is a process or set of tools used to define the customer
requirements for a product and convert those requirements into engineering specifications
and plans such that the customer requirements for that product are satisfied.
● QFD was developed in the late 1960s by the Japanese planning specialist Yoji Akao.
● QFD aims at translating Voice of Customer into measurable and detailed design
targets and then drives them from the assembly level down through sub-assembly
level, component level, and production process levels.
● QFD helps to achieve structured planning of product by enabling development team
to clearly specify customer needs and expectations of product and then evaluate
each part of product systematically.
Key steps in QFD :
1. Product planning :
○ Translating what customer wants or needs into set of prioritized design
requirements.
○ Prioritized design requirements describe looks/design of product.
○ Involves benchmarking – comparing product’s performance with
competitor’s products.
○ Setting targets for improvements and for achieving competitive edge.
2. Part Planning :
○ Translating product requirement specifications into part characteristics.
○ For example, if requirement is that product should be portable, then
characteristics could be light-weight, small size, compact, etc.
3. Process Planning :
○ Translating part characteristics into an effective and efficient process.
○ The ability to deliver six sigma quality should be maximized.
4. Production Planning :
○ Translating process into manufacturing or service delivery methods.
○ In this step too, ability to deliver six sigma quality should be improved.
Benefits of QFD :
1. Customer-focused –
Very first step of QFD is marked by understanding and collecting all user
requirements and expectations of product. The company does not focus on what
they think customer wants, instead, they ask customers and focus on requirements
and expectations put forward by them.
2. Voice of Customer Competitor Analysis –
House of Quality is significant tool that is used to compare voice of customer with
design specifications.
3. Structure and Documentation –
Tools used in Quality Function Deployment are very well structured for capturing
decisions made and lessons learned during development of product. This
documentation can assist in development of future products.
4. Low Development Cost –
Since QFD focuses and pays close attention to customer requirements and
expectations in initial steps itself, so the chances of late design changes or
modifications are highly reduced, thereby resulting in low product development
cost.
5. Shorter Development Time –
QFD process prevents wastage of time and resources as enough emphasis is made on
customer needs and wants for the product. Since customer requirements are
understood and developed in right way, so any development of non-value-added
features or unnecessary functions is avoided, resulting in no time waste of product
development team.
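As an illustration of the product planning step described above, the following sketch shows how a simple House of Quality matrix can turn customer importance ratings into prioritized technical requirements. The needs, weights, requirements, and the 0/1/3/9 relationship scale are assumed example values, not taken from the text.

```python
# Minimal House of Quality sketch with hypothetical data.
# Rows: customer needs with importance weights (1-5).
# Columns: technical requirements.
# Relationship matrix uses a common 0/1/3/9 scale (0 = none, 9 = strong).

customer_needs = {"portable": 5, "long battery life": 4, "easy to read display": 3}
tech_requirements = ["weight (g)", "battery capacity (mAh)", "screen size (in)"]

relationships = {
    "portable":             [9, 3, 1],
    "long battery life":    [1, 9, 3],
    "easy to read display": [3, 0, 9],
}

# Technical priority = sum over needs of (importance x relationship strength).
scores = [0] * len(tech_requirements)
for need, weight in customer_needs.items():
    for j, strength in enumerate(relationships[need]):
        scores[j] += weight * strength

for req, score in sorted(zip(tech_requirements, scores), key=lambda pair: -pair[1]):
    print(f"{req}: priority {score}")
```

The ranked scores indicate which engineering characteristics deserve the most design attention, which is the essence of translating the Voice of the Customer into design targets.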
6. System synthesis,

System synthesis is an activity within the systems approach that is used to describe one or
more system solutions based upon a problem context for the life cycle of the system to:
● Define options for a SoI with the required properties and behavior for an identified
problem or opportunity context.
● Provide solution options relevant to the SoI in its intended environment, such that
the options can be assessed to be potentially realizable within a prescribed time
limit, cost, and risk described in the problem context.
● Assess the properties and behavior of each candidate solution in its wider system
context.
The iterative activity of system synthesis develops possible solutions and may make some
overall judgment regarding the feasibility of said solutions. The detailed judgment on whether
a solution is suitable for a given iteration of the systems approach is made using the Analysis
and Selection between Alternative Solutions activities.
Essential to synthesis is the concept of holism (Hitchins 2009), which states that a system must
be considered as a whole and not simply as a collection of its elements. The holism of any
potential solution system requires that the behavior of the whole be determined by addressing
a system within an intended environment and not simply the accumulation of the properties of
the elements. The latter process is known as reductionism and is the opposite of holism, which
Hitchins (2009, 60) describes as the notion that “the properties, capabilities, and behavior of a
system derive from its parts, from interactions between those parts, and from interactions with
other systems.”
When the system is considered as a whole, properties called emergent properties often appear
(see Emergence). These properties are often difficult to predict from the properties of the
elements alone. They must be evaluated within the systems approach to determine the
complete set of performance levels of the system. According to Jackson (2010), these properties
can be considered in the design of a system, but to do so, an iterative approach is required.
In complex systems, individual elements will adapt to the behavior of the other elements and to
the system as a whole. The entire collection of elements will behave as an organic whole.
Therefore, the entire synthesis activity, particularly in complex systems, must itself be
adaptive.
Hence, synthesis is often not a one-time process of solution design, but is used in combination
with problem understanding and solution analysis to progress towards a more complete
understanding of problems and solutions over time (see Applying the Systems Approach topic
for a more complete discussion of the dynamics of this aspect of the approach).

7. Approaches for generation of alternatives.

Prototyping, computer-aided software engineering (CASE) tools, joint application design (JAD), rapid application development (RAD), participatory design (PD), and the use of Agile Methodologies represent different approaches that streamline and improve the systems analysis and design process from different perspectives.

1.3.1 Prototyping

Designing and building a scaled-down but working version of a desired system is known as
prototyping. A prototype can be developed with a CASE tool, a software product that
automates steps in the systems development life cycle. CASE tools make prototyping easier and
more creative by supporting the design of screens and reports and other parts of a system
interface. CASE tools also support many of the diagramming techniques you will learn, such
as data-flow diagrams and entity-relationship diagrams.

The analyst works with users to determine the initial or basic requirements for the system. The
analyst then quickly builds a prototype.

When the prototype is completed, the users work with it and tell the analyst what they like and
do not like about it. The analyst uses this feedback to improve the prototype and takes the new
version back to the users. This iterative process continues until the users are relatively satisfied
with what they have seen. The key advantages of the prototyping technique are: (1) it involves
the user in analysis and design, and (2) it captures requirements in concrete, rather than
verbal or abstract, form. The following figure illustrates prototyping.
1.3.2 Computer-Aided Software Engineering (CASE) Tools

Computer-aided software engineering (CASE) refers to automated software tools used by systems analysts to develop information systems. These tools can be used to automate or
support activities throughout the systems development process with the objective of increasing
productivity and improving the overall quality of systems. CASE helps provide an
engineering-type discipline to software development and to the automation of the entire
software life-cycle process, sometimes with a single family of integrated software tools.

In general, CASE assists systems builders in managing the complexities of information system
projects and helps ensure that high-quality systems are constructed on time and within budget.

The general types of CASE tools include:

Diagramming tools that enable system process, data, and control structures to be represented
graphically.

Computer display and report generators that help prototype how systems "look and feel" to
users.

Display (or form) and report generators also make it easier for the systems analyst to identify
data requirements and relationships.
Analysis tools that automatically check for incomplete, inconsistent, or incorrect specifications
in diagrams, forms, and reports.

A central repository that enables the integrated storage of specifications, diagrams, reports,
and project management information.

Documentation generators that help produce both technical and user documentation in
standard formats.

Code generators that enable the automatic generation of program and database definition code
directly from the design documents, diagrams, forms, and reports.

1.3.3 Joint Application Design

In the late 1970s, systems development personnel at IBM developed a new process for
collecting information system requirements and reviewing system designs. The process is called
joint application design (JAD). The idea behind JAD is to structure the requirements
determination phase of analysis and the reviews that occur as part of the design. Users,
managers, and systems developers are brought together for a series of intensive structured
meetings run by a JAD session leader. By gathering the people directly affected by an IS in one
room at the same time to work together to agree on system requirements and design details,
time and organizational resources are better managed. Group members are more likely to
develop a shared understanding of what the IS is supposed to do.

1.3.4 Rapid Application Development

Prototyping, CASE, and JAD are key tools that support rapid application development (RAD).
The fundamental principle of any RAD methodology is to delay producing detailed system
design documents until after user requirements are clear. The prototype serves as the working
description of needs. RAD involves gaining user acceptance of the interface and developing key
system capabilities as quickly as possible. RAD is widely used by consulting firms. It is also
used as an in-house methodology by firms such as the Boeing Company. RAD sacrifices
computer efficiency for gains in human efficiency in rapidly building and rebuilding working
systems.

On the other hand, RAD methodologies can overlook important systems development
principles, which may result in problems with systems developed this way.

RAD grew out of the convergence of two trends: the increased speed and turbulence of doing
business in the late 1980s and early 1990s, and the ready availability of high-powered
computer-based tools to support systems development and easy maintenance. As the conditions
of doing business in a changing, competitive global environment became more turbulent,
management in many organizations began to question whether it made sense to wait two to
three years to develop systems that would be obsolete upon completion. On the other hand,
CASE tools and prototyping software were diffusing throughout organizations, making it
relatively easy for end users to see what their systems would look like before they were
completed. Why not use these tools to address the problems of developing systems more
productively in a rapidly changing business environment? So RAD was born. The same phases
followed in the SDLC are also followed in RAD, but the phases are combined to produce a
more streamlined development technique. Planning and design phases in RAD are shortened
by focusing work on system functional and user interface requirements at the expense of
detailed business analysis and concern for system performance issues. Also, usually RAD looks
at the system being developed in isolation from other systems, thus eliminating the
time-consuming activities of coordinating with existing standards and systems during design
and development. The emphasis in RAD is generally less on the sequence and structure of
processes in the life cycle and more on doing different tasks in parallel with each other and on
using prototyping extensively. Notice also, that the iteration in the RAD life cycle is limited to
the design and development phases, which is where the bulk of the work in a RAD approach
takes place.

1.3.5 Agile Methodologies

Many other approaches to systems analysis and design have been developed over the years.
These approaches include extreme Programming, the Crystal family of methodologies,
Adaptive Software Development, Scrum, and Feature Driven Development.

Agile Methodologies share three key principles: (1) a focus on adaptive rather than predictive
methodologies, (2) a focus on people rather than roles, and (3) a self-adaptive process.
Adopting an adaptive rather than predictive methodology refers to the observation that
engineering based methodologies work best when the process and product are predictive.

Software tends not to be as predictive as, say, a bridge, especially in today’s turbulent business
environment.

More adaptive methodologies are needed, then, and the Agile Methodologies are based on the
ability to adapt quickly. The focus on people rather than roles is also a criticism of
engineering-based techniques, where people became interchangeable.

An Agile approach views people as talented individuals, not people filling roles, each of whom
has unique talents to bring to a development project. Finally, Agile Methodologies promote a
self-adaptive software development process. As the methodologies are applied, they should also
be adapted by a particular development team working on a particular project in a particular
context. No single monolithic methodology effectively fits all developers on all projects at all
times.
Unit III

1. Cross-impact analysis

Cross-Impact Analysis
Cross-impact analysis is the general name given to a family of techniques designed to evaluate
changes in the probability of the occurrence of a given set of events consequent on the actual
occurrence of one of them. The cross impact model was introduced as a means of accounting
for the interactions between a set of forecasts, when those interactions may not have been
taken into consideration when individual forecasts were produced.
The origin of cross-impact analysis was the problem that Delphi panellists were sometimes
asked to make forecasts about individual events, when other events in the same Delphi could
affect these events. Thus, it was recognised that there was a need to take these cross impacts of
one event on another into account.
While cross-impact analysis was initially associated with the Delphi method, its use is not
restricted to Delphi forecasts. In fact, cross impact models can stand alone as a method of
futures research, or can be integrated with other method(s) to form powerful forecasting tools.
A typical output of cross-impact analysis is a list of possible future scenarios and their
interpretation.
Cross-impact analysis is mainly used in prospective and technological forecasting studies
rather than in Foresight exercises per se. In the past, this tool was used as a simulation method
and in combination with the Delphi method.
More recently, cross-impact analysis was used on a stand-alone basis or in combination with
other techniques to answer a number of research questions on different subjects such as the
future of a particular industrial sector, world geopolitical evolution, the future of corporate
activities and jobs. The target audience for this method typically comprises experts from
industry, academia, research and government.
Pros and cons
The main benefits are:
• It is relatively easy to implement a SMIC questionnaire
• Cross-impact methods force attention onto chains of causality: a affects b; b affects c.
• They estimate dependency and interdependency among events
• It can be used to clarify and increase knowledge on future developments
The following limitations need to be highlighted:
• It is very difficult to explore the future of a complex system with a limited number of
hypotheses. Interactions between pairs of events: does this reflect reality?
• Difficult to understand the consistency and validity of the technique.
• As with any other technique based on eliciting experts’ knowledge, the method relies on the level
of expertise of respondents.
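The sketch below gives a rough flavour of the basic idea (it is not the full SMIC procedure mentioned above): when one event is assumed to occur, the probabilities of the other events are revised using a hypothetical cross-impact matrix of multiplicative factors.

```python
# Rough sketch of a cross-impact adjustment with hypothetical events and factors.
# impact[a][b] is a multiplier applied to P(b) if event a occurs
# (>1 makes b more likely, <1 less likely). Simplified illustration only.

initial_p = {"A": 0.5, "B": 0.4, "C": 0.2}

impact = {
    "A": {"B": 1.5, "C": 0.5},
    "B": {"A": 1.2, "C": 2.0},
    "C": {"A": 0.8, "B": 1.0},
}

def given(occurred, probs):
    """Revise event probabilities assuming 'occurred' has happened."""
    revised = {}
    for event, p in probs.items():
        if event == occurred:
            revised[event] = 1.0
        else:
            # Clamp so the result stays a valid probability.
            revised[event] = min(1.0, p * impact[occurred].get(event, 1.0))
    return revised

print(given("A", initial_p))   # B rises to 0.6, C falls to 0.1
```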

2. Structural modeling tools


3. System Dynamics models

Systems dynamics models are continuous simulation models using hypothesized relations
across activities and processes. Systems dynamics was developed by Forrester in 1961 and was
initially applied to dealing with the complexity of industrial economic systems and world
environmental and population problems. These models are very closely related to the general
systems approach and allow modelers to insert qualitative relationships (expressed in general
quantitative forms). Like all simulation models, all results are contingent upon the assumed
inputs. General systems theory views assemblies of interrelated parts as having feedback loops
that respond to system conditions and provide a degree of self-correction and control.
Abdel-Hamid and Madnick in 1991 applied systems dynamics models in their pioneering work
on software process simulation. Their models captured relationships between personnel
availability, productivity, and work activities. Process details were not required. Systems
dynamics continues to be a very popular mode of software process modeling and has also been
applied in interactive simulations of projects for training project managers.
Continuous simulation model advantages include its ability to capture project level and
systems-related aspects of software process projects. Higher level issues such as learning curve
effects, differential production rates, and the impact of using different members of the
workforce are more naturally modeled with continuous simulation models than with discrete
event models. However, details relating to tasks and activities are more naturally dealt with
through discrete event simulation models.
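As a minimal sketch of the continuous-simulation style described above, the code below integrates a single stock ("work remaining") drained by a completion rate that grows with a simple learning-curve effect. All rates and coefficients are made-up illustrative values, not taken from the models cited above.

```python
# Minimal system dynamics sketch (hypothetical parameters): one stock
# ("work remaining") drained by a flow ("completion rate") that rises as the
# team gains experience. Integrated with simple fixed Euler time steps.

dt = 0.25                  # time step (weeks)
work_remaining = 100.0     # stock: tasks left
experience = 0.0           # stock: accumulated staff-weeks
staff = 5.0

t = 0.0
while work_remaining > 0 and t < 52:
    productivity = 1.0 + 0.02 * experience      # assumed learning-curve effect
    completion_rate = staff * productivity      # tasks per week
    work_remaining -= completion_rate * dt
    experience += staff * dt
    t += dt

print(f"Work finished after roughly {t:.1f} weeks")
```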
4. Economic models: present value analysis – NPV

THE MODERN ECONOMY is a complex machine. Its job is to allocate limited resources and
distribute output among a large number of agents—mainly individuals, firms, and
governments—allowing for the possibility that each agent’s action can directly (or indirectly)
affect other agents’ actions.
Adam Smith labeled the machine the “invisible hand.” In The Wealth of Nations, published in
1776, Smith, widely considered the father of economics, emphasized the economy’s
self-regulating nature—that agents independently seeking their own gain may produce the best
overall result for society as well. Today’s economists build models—road maps of reality, if you
will—to enhance our understanding of the invisible hand.
As economies allocate goods and services, they emit measurable signals that suggest there is
order driving the complexity. For example, the annual output of advanced economies oscillates
around an upward trend. There also seems to be a negative relationship between inflation and
the rate of unemployment in the short term. At the other extreme, equity prices seem to be
stubbornly unpredictable.
Economists call such empirical regularities “stylized facts.” Given the complexity of the
economy, each stylized fact is a pleasant surprise that invites a formal explanation. Learning
more about the process that generates these stylized facts should help economists and
policymakers understand the inner workings of the economy. They may then be able to use
this knowledge to nudge the economy toward a more desired outcome (for example, avoiding a
global financial crisis).
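Since the heading above refers to present value analysis, here is a short sketch of the standard net present value calculation, NPV = sum over t of CF_t / (1 + r)^t, applied to made-up cash flows.

```python
# Net present value of a stream of cash flows (hypothetical numbers).
# NPV = sum over t of CF_t / (1 + r)**t, where t = 0 is the initial investment.

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0: invest 1000; years 1 to 4: returns of 300 each.
flows = [-1000, 300, 300, 300, 300]
print(f"NPV at 10%: {npv(0.10, flows):.2f}")   # slightly negative, reject at 10%
print(f"NPV at  5%: {npv(0.05, flows):.2f}")   # positive, acceptable at 5%
```

The same cash flows can be attractive or unattractive depending on the discount rate, which is why present value analysis is sensitive to the rate chosen.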
5. Benefits and costs over time

A cost-benefit analysis (CBA) is a process that is used to estimate the costs and benefits of
decisions in order to find the most cost-effective alternative. A CBA is a versatile method that
is often used for business, project and public policy decisions. An effective CBA evaluates
the following costs and benefits:
Costs
● Direct costs
● Indirect costs
● Intangible costs
● Opportunity costs
● Costs of potential risks
Benefits
● Direct
● Indirect
● Total benefits
● Net benefits
We’ll expand on these costs and benefits in our cost-benefit analysis example below.
Keeping track of all these figures is made easier with project management software. For
example, ProjectManager has a sheet view, which is exactly like a Gantt but without a visual
timeline. You can switch back and forth from the Gantt to the sheet view when you want to
just look at your costs in a spreadsheet. You can add as many columns as you like and filter the
sheet to capture only the relevant data. Keeping track of your costs and benefits is what brings
in a successful project.
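To tie the cost and benefit categories above to benefits and costs that occur over time, here is a minimal sketch (the yearly figures and discount rate are invented) that discounts both streams and reports the net benefit and the benefit-cost ratio.

```python
# Cost-benefit analysis sketch with hypothetical yearly figures.
# Both streams are discounted to present value before comparison.

rate = 0.08
costs    = [500, 100, 100, 100]   # years 0..3
benefits = [  0, 300, 350, 400]   # years 0..3

def present_value(stream):
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

pv_costs = present_value(costs)
pv_benefits = present_value(benefits)

print(f"PV of costs:        {pv_costs:.1f}")
print(f"PV of benefits:     {pv_benefits:.1f}")
print(f"Net benefit:        {pv_benefits - pv_costs:.1f}")
print(f"Benefit-cost ratio: {pv_benefits / pv_costs:.2f}")
```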
6. Work and Cost breakdown structure.

A good Work Breakdown Structure is created using an iterative process by following these
steps and meeting these guidelines:

1. GATHER CRITICAL DOCUMENTS


1. Gather critical project documents.
2. Identify content containing project deliverables, such as the Project
Charter, Scope Statement and Project Management Plan (PMP)
subsidiary plans.
2. IDENTIFY KEY TEAM MEMBERS
1. Identify the appropriate project team members.
2. Analyze the documents and identify the deliverables.
3. DEFINE LEVEL 1 ELEMENTS
1. Define the Level 1 Elements. Level 1 Elements are summary deliverable
descriptions that must capture 100% of the project scope.
2. Verify 100% of scope is captured. This requirement is commonly
referred to as the 100% Rule.
4. DECOMPOSE (BREAKDOWN) ELEMENTS
1. Begin the process of breaking the Level 1 deliverables into unique lower
Level deliverables. This “breaking down” technique is called
Decomposition.
2. Continue breaking down the work until the work covered in each
Element is managed by a single individual or organization. Ensure that
all Elements are mutually exclusive.
3. Ask the question, would any additional decomposition make the project
more manageable? If the answer is “no”, the WBS is done.
5. CREATE WBS DICTIONARY
1. Define the content of the WBS Dictionary. The WBS Dictionary is a
narrative description of the work covered in each Element in the WBS.
The lowest Level Elements in the WBS are called Work Packages.
2. Create the WBS Dictionary descriptions at the Work Package Level with
detail enough to ensure that 100% of the project scope is covered. The
descriptions should include information such as, boundaries, milestones,
risks, owner, costs, etc.
6. CREATE GANTT CHART SCHEDULE
1. Decompose the Work Packages to activities as appropriate.
2. Export or enter the Work Breakdown Structure into a Gantt chart for
further scheduling and project tracking.

Caution: It is possible to break the work down too much. How much is too much? Since cost
and schedule data collection, analysis and reporting are connected to the WBS, a very detailed
WBS could require a significant amount of unnecessary effort to manage.

Cost Breakdown:

A Cost Breakdown Structure (CBS) is a breakdown or hierarchical representation of the various costs in a project. The Cost Breakdown Structure represents the costs of the components in the Work Breakdown Structure (WBS). The CBS is a critical tool in managing the project lifecycle, especially the financial aspects of any project, by creating a structure for applying measurable cost controls.
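A small sketch of how a CBS can mirror the WBS: costs are estimated at the work-package (leaf) level and rolled up through the hierarchy to give the cost of each higher-level element. The WBS elements and cost figures below are hypothetical.

```python
# Hypothetical WBS with costs attached at the work-package (leaf) level.
# The cost of any element is the sum of its children's costs: a simple CBS roll-up.

wbs = {
    "1 Build Website": {
        "1.1 Design": {"1.1.1 Wireframes": 4000, "1.1.2 Visual design": 6000},
        "1.2 Development": {"1.2.1 Front end": 12000, "1.2.2 Back end": 15000},
        "1.3 Testing": 5000,
    }
}

def rollup(element, name="TOTAL"):
    """Print each element's rolled-up cost and return it."""
    if isinstance(element, dict):
        total = sum(rollup(child, child_name) for child_name, child in element.items())
    else:
        total = element
    print(f"{name}: {total}")
    return total

rollup(wbs)
```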
Unit IV

1. Reliability, Availability, Maintainability, and Supportability models

This section sets forth basic definitions, briefly describes probability distributions, and then
discusses the role of RAM engineering during system development and operation. The final
subsection lists the more common reliability test methods that span development and
operation.

Basic Definitions

Reliability

Reliability is defined as the probability of a product performing its intended function under
stated conditions without failure for a given period of time. (ASQ 2022) A precise definition
must include a detailed description of the function, the environment, the time scale, and
what constitutes a failure. Each can be surprisingly difficult to define precisely.

Maintainability

The probability that a given maintenance action for an item under given usage conditions
can be performed within a stated time interval when the maintenance is performed under
stated conditions using stated procedures and resources. Maintainability has two
categories: serviceability (the ease of conducting scheduled inspections and servicing) and
repairability (the ease of restoring service after a failure). (ASQ 2022)

Availability

Defined as the probability that a repairable system or system element is operational at a given point in time under a given set of environmental conditions. Availability depends on reliability and maintainability and is discussed in detail later in this topic. (ASQ 2011)

Failure

A failure is the event(s), or inoperable state, in which any item or part of an item does not,
or would not, perform as specified. (GEIA 2008) The failure mechanism is the physical,
chemical, electrical, thermal, or other process that results in failure (GEIA 2008). In
computerized systems, a software defect or fault can be the cause of a failure (Laprie 1992)
which may have been preceded by an error which was internal to the item. The failure
mode is the way or the consequence of the mechanism through which an item fails. (GEIA
2008, Laprie 1992) The severity of the failure mode is the magnitude of its impact. (Laprie
1992)
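As a numerical companion to these definitions, the sketch below uses the classic constant-failure-rate (exponential) reliability model, R(t) = exp(−λt) with λ = 1/MTBF, and the steady-state availability formula A = MTBF / (MTBF + MTTR). The MTBF and MTTR values are made up for illustration.

```python
import math

# Reliability with an assumed constant failure rate (exponential model):
#   R(t) = exp(-lam * t), where lam = 1 / MTBF.
# Steady-state availability for a repairable item:
#   A = MTBF / (MTBF + MTTR).
# The numbers below are illustrative, not from the text.

mtbf = 2000.0   # mean time between failures, hours
mttr = 8.0      # mean time to repair, hours
failure_rate = 1.0 / mtbf

def reliability(t_hours):
    return math.exp(-failure_rate * t_hours)

availability = mtbf / (mtbf + mttr)

print(f"R(100 h)  = {reliability(100):.4f}")    # probability of surviving 100 h
print(f"R(1000 h) = {reliability(1000):.4f}")
print(f"Availability = {availability:.4%}")
```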
2. Stochastic networks and Markov models

A stochastic process, also known as a random process, is a collection of random variables that
are indexed by some mathematical set. Each random variable in the collection is uniquely
associated with an element in the set. The index set is the set used to index the random
variables. The index set was traditionally a subset of the real line, such as the natural numbers,
which provided the index set with time interpretation.
A stochastic process describes a system for which there are observations at certain times, and
for which the outcome, that is, the observed value at each time, is a random variable.
Each random variable in the collection of the values is taken from the same mathematical
space, known as the state space. This state-space could be the integers, the real line, or
η-dimensional Euclidean space, for example. A stochastic process's increment is the amount
that a stochastic process changes between two index values, which are frequently interpreted
as two points in time. Because of its randomness, a stochastic process can have many outcomes,
and a single outcome of a stochastic process is known as, among other things, a sample
function or realization.

Markov Processes
Markov processes are widely used in engineering, science, and business modeling. They are
used to model systems that have a limited memory of their past. For example, in the gambler’s
ruin problem discussed earlier in this chapter, the amount of money the gambler will make
after n + 1 games is determined by the amount of money he has made after n games. Any other
information is irrelevant in making this prediction. In population growth studies, the
population of the next generation depends mainly on the current population and possibly the
last few generations.
A random process {X(t), t ∈ T} is called a first-order Markov process if for any t0 < t1 < ⋯ < tn
the conditional CDF of X(tn) for given values of X(t0), X(t1), …, X(tn − 1) depends only on
X(tn − 1). That is,
P[X(tn) ≤ xn | X(tn−1) ≤ xn−1, X(tn−2) ≤ xn−2, …, X(t0) ≤ x0] = P[X(tn) ≤ xn | X(tn−1) ≤ xn−1]
This means that, given the present state of the process, the future state is independent of the
past. This property is usually referred to as the Markov property. In second-order Markov
processes the future state depends on both the current state and the last immediate state, and
so on for higher-order Markov processes. In this chapter we consider only first-order Markov
processes.
Markov processes are classified according to the nature of the time parameter and the nature
of the state space. With respect to state space, a Markov process can be either a discrete-state
Markov process or continuous-state Markov process. A discrete-state Markov process is called
a Markov chain. Similarly, with respect to time, a Markov process can be either a discrete-time
Markov process or a continuous-time Markov process. Thus, there are four basic types of
Markov processes:
1. Discrete-time Markov chain (or discrete-time discrete-state Markov process)
2. Continuous-time Markov chain (or continuous-time discrete-state Markov process)
3. Discrete-time Markov process (or discrete-time continuous-state Markov process)
4. Continuous-time Markov process (or continuous-time continuous-state Markov process)
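A small sketch of the first type listed, a discrete-time Markov chain, using a hypothetical two-state working/failed model: the state distribution after each step is obtained by multiplying the current distribution by the transition matrix.

```python
# Discrete-time Markov chain sketch with two hypothetical states: 0 = working, 1 = failed.
# P[i][j] is the probability of moving from state i to state j in one step; rows sum to 1.
# With a row-vector convention, the distribution after n steps is pi_n = pi_0 * P^n.

P = [
    [0.95, 0.05],   # working -> working / failed
    [0.60, 0.40],   # failed  -> working / failed (repair succeeds with prob 0.6)
]

def step(dist, P):
    """One step of the chain: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0]          # start in the working state
for _ in range(50):        # iterate toward the steady state
    dist = step(dist, P)

print(f"Long-run distribution: working={dist[0]:.3f}, failed={dist[1]:.3f}")
```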
3. Queuing network optimization

Queueing network modelling, the specific subject of this book, is a particular approach to
computer system modelling in which the computer system is represented as a network of
queues which is evaluated analytically. A network of queues is a collection of service centers,
which represent system resources, and customers, which represent users or transactions.
Analytic evaluation involves using software to solve efficiently a set of equations induced by the
network of queues and its parameters. (These definitions, and the informal overview that
follows, take certain liberties that will be noted in Section 1.5.)

1.2.1. Single Service Centers
Figure 1.1 illustrates a single service center. Customers arrive at the service center, wait in the
queue if necessary, receive service from the server, and depart. In fact, this service center and
its arriving customers constitute a (somewhat degenerate) queueing network model.
1.2.2. Multiple Service Centers

It is hard to imagine characterizing a contemporary computer
system by two parameters, as would be required in order to use the model of Figure 1.1. (In
fact, however, this was done with success several times in the simpler days of the 1960’s.)
Figure 1.3 shows a more realistic model in which each system resource (in this case a CPU and
three disks) is represented by a separate service center.
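To make the single service center of Figure 1.1 concrete, here is a minimal sketch of the standard M/M/1 formulas (Poisson arrivals, exponential service, arrival rate below service rate), which give the utilization, mean number in the system, and mean response time. The rates are illustrative, not from the text.

```python
# Standard M/M/1 single-service-center formulas (illustrative rates).
# Assumes Poisson arrivals at rate lam and exponential service at rate mu, with lam < mu.

lam = 8.0    # arrival rate (customers per second)
mu = 10.0    # service rate (customers per second)

rho = lam / mu                      # utilization of the server
n_in_system = rho / (1 - rho)       # mean number of customers in the system
response_time = 1 / (mu - lam)      # mean time a customer spends in the system
wait_time = response_time - 1 / mu  # mean time spent waiting in the queue

print(f"Utilization:     {rho:.2f}")
print(f"Mean in system:  {n_in_system:.2f} customers")
print(f"Mean response:   {response_time:.3f} s")
print(f"Mean queue wait: {wait_time:.3f} s")
```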

4. Time series and Regression models


5. Evaluation of large scale models
Unit V

1. Decision assessment types,


2. Five types of decision assessment efforts

There are five decision-making styles: Visionary, Guardian, Motivator, Flexible, and Catalyst.
Each style is a combination of preferences from a set of six pairs of opposing characteristics:
● prefers ad hoc or process
● prefers action or caution
● gathers information narrowly or widely
● believes corporate interests or personal interests prevail
● likes continuity or change
● prefers storytelling or facts
Although the authors stress that the research is still in an early stage, here is a summary of
what they have learned so far.
Decision makers all have particular ways they like to work and there are actions each should
take to keep their tendencies from undermining their intent.
Visionary
The visionary decision maker is "a champion of radical change with a natural gift for leading
people through turbulent times." Such people like change, gather information relatively
narrowly, and are strongly biased toward action but "may be too quick to rush in the wrong
direction."
If you are a visionary leader, you should seek the opinions and views of a broad group and
"encourage dissenters to voice their concerns." Only that way can you get a wider set of views
and information that can be critical to success.
Guardian
A guardian is a "model of fairness who preserves the health, balance, and values of the
organization." Such people have sound decision-making processes, try for fact-based choices,
and plan carefully. They like continuity, are moderately cautious, and gather information
relatively widely.
Those are fine characteristics for normal times. But the guardian can be too cautious and slow
moving during a crisis, when there is "desperate need for change." That is why a guardian
should talk to people outside the organization and have them "challenge deeply held beliefs
about the company and its industry." Task forces are then in order to "explore major changes
in the environment."
Motivator
Motivators are good choices for change. They are charismatic, can convince people of the need
for action, and build alignment among parts of the company. But like all good storytellers, they
risk believing the story in the face of countervailing facts. They gather information relatively
narrowly, and strongly believe that self-interest prevails over corporate interest.
Rather than looking simply for outside counsel, motivators need to explore the existing facts
and see if there are other ways to interpret them, ways that do not necessarily play into the
narrative they have created. Formal processes are a help. Motivators can use surveys to get a
realistic sense of the rest of the company.
Flexible
Flexible leaders are, as you might expect from the name, more versatile than other types of
leaders: "comfortable with uncertainty, open minded in adapting to circumstances, and willing
to involve a variety of people in the decision making." They mildly lean to ad hoc approaches
rather than formal processes and are fairly cautious.
The problem with flexible leaders is that they can become too open-minded. Looking at all the
potential issues, solutions, and outcomes can paralyze the decision-making process. They
should set deadlines for decisions before the paralytic debate can commence. It can also make
sense to create a framework for ordinary repetitive decisions, making them the subject of a set
of rules so as not to waste time on reconsidering.
Catalyst
The catalyst is an excellent person to lead the work of groups, whether making decisions or
implementing them. They are balanced, being in the middle on four out of the six
characteristics, although they slightly prefer action to caution and are slightly biased toward
broadly, rather than narrowly, gathering information. The more extreme the necessary
decision, the more they can naturally resist inherent biases.
That said, being middle of the road can yield only average results. To avoid that, a catalyst
should watch for circumstances that require high-stakes decisions and realize that they may
need a different type of decision process, like having a team look at the situation and suggest
potential approaches.
3. Utility theory

Utility theory bases its beliefs upon individuals' preferences. It is a theory postulated in economics to explain the behavior of individuals based on the premise that people can consistently rank order their choices depending upon their preferences. Each individual will show different preferences, which appear to be hard-wired within each individual. We can thus state that individuals' preferences are intrinsic. Any theory that proposes to capture preferences is, by necessity, an abstraction based on certain assumptions. Utility theory is a positive theory that seeks to explain individuals' observed behavior and choices.

The distinction between normative and positive aspects of a theory is very important in the discipline of economics. Some people argue that economic theories should be normative, which means they should be prescriptive and tell people what to do. Others argue, often successfully, that economic theories are designed to be explanations of the observed behavior of agents in the market, hence positive in that sense. A normative theory, by contrast, dictates that people should behave in the manner it prescribes. Because utility theory is positive, it is only after observing the choices that individuals make that we can draw inferences about their preferences.

When we place certain restrictions on those preferences, we can represent them analytically using a utility function: a mathematical formulation that ranks the preferences of the individual in terms of the satisfaction different consumption bundles provide. Thus, under the assumptions of utility theory, we can assume that people behave as if they have a utility function and act according to it. Therefore, the fact that a person does not know his or her utility function, or even denies its existence, does not contradict the theory. Economists have used experiments to decipher individuals' utility functions and the behavior that underlies individuals' utility.

To begin, assume that an individual faces a set of consumption "bundles." We assume that individuals have clear preferences that enable them to "rank order" all bundles based on desirability, that is, the level of satisfaction each bundle shall provide to each individual. This rank ordering based on preferences tells us the theory itself has ordinal utility; it is designed to study relative satisfaction levels. As we noted earlier, absolute satisfaction depends upon conditions; thus, the theory by default cannot have cardinal utility, or utility that can represent the absolute level of satisfaction. To make this theory concrete, imagine that consumption bundles comprise food and clothing for a week in all different combinations, that is, food for half a week, clothing for half a week, and all other possible combinations.

The utility theory then makes the following assumptions:

1. Completeness: Individuals can rank order all possible bundles. Rank ordering implies that the theory assumes that, no matter how many combinations of consumption bundles are placed in front of the individual, each individual can always rank them in some order based on preferences. This, in turn, means that individuals can somehow compare any bundle with any other bundle and rank them in order of the satisfaction each bundle provides. So in our example, half a week of food and clothing can be compared to one week of food alone, one week of clothing alone, or any such combination. Mathematically, this property wherein an individual's preferences enable him or her to compare any given bundle with any other bundle is called the completeness property of preferences.

2. More-is-better: Assume an individual prefers consumption of bundle A of goods to bundle B. Then he is offered another bundle that contains more of everything in bundle A; that is, the new bundle is represented by αA where α > 1. The more-is-better assumption says that individuals prefer αA to A, and since A is in turn preferred to B, αA is preferred to B as well. For our example, if one week of food is preferred to one week of clothing, then two weeks of food is a preferred package to one week of food. Mathematically, the more-is-better assumption is called the monotonicity assumption on preferences. One can always argue that this assumption breaks down frequently. It is not difficult to imagine that a person whose stomach is full would turn down additional food. However, this situation is easily resolved. Suppose the individual is given the option of disposing of the additional food to another person or charity of his or her choice. In this case, the person will still prefer more food even if he or she has eaten enough. Thus, under the monotonicity assumption, a hidden property allows costless disposal of excess quantities of any bundle.

3. Mix-is-better: Suppose an individual is indifferent to the choice between one week of clothing alone and one week of food alone. Thus, either choice by itself is not preferred over the other. The "mix-is-better" assumption about preferences says that a mix of the two, say half a week of food combined with half a week of clothing, will be preferred to both stand-alone choices. Thus, a glass of milk mixed with Milo (Nestlé's drink mix) will be preferred to milk or Milo alone. The mix-is-better assumption is called the "convexity" assumption on preferences, that is, preferences are convex.

4. Rationality: This is the most important and controversial assumption that underlies all of utility theory. Under the assumption of rationality, individuals' preferences avoid any kind of circularity; that is, if bundle A is preferred to B, and bundle B is preferred to C, then A is also preferred to C. Under no circumstances will the individual prefer C to A. You can likely see why this assumption is controversial. It assumes that the innate preferences (rank orderings of bundles of goods) are fixed, regardless of the context and time.

If one thinks of preference orderings as comparative relationships, then it becomes simpler to construct examples where this assumption is violated. Consider the relation "beats," as in team A beat team B in college football; such relationships are easy to see. For example, if the University of Florida beats Ohio State, and Ohio State beats Georgia Tech, it does not follow that Florida beats Georgia Tech. Despite the restrictive nature of the assumption, it is a critical one. In mathematics, it is called the assumption of transitivity of preferences.
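As a small illustration of the rationality (transitivity) assumption, the following Python sketch checks whether a stated set of pairwise preferences contains a violating cycle. The alternatives and the preference pairs are hypothetical, chosen only to show the check.

```python
# A minimal sketch (hypothetical pairwise preferences, not from the text):
# testing the transitivity (rationality) assumption.
from itertools import permutations

# (X, Y) means "X is preferred to Y".
prefers = {("A", "B"), ("B", "C"), ("A", "C")}

def is_transitive(prefers):
    """Return True if there is no chain x > y and y > z without x > z."""
    items = {a for pair in prefers for a in pair}
    for x, y, z in permutations(items, 3):
        if (x, y) in prefers and (y, z) in prefers and (x, z) not in prefers:
            return False
    return True

print(is_transitive(prefers))                               # True: A > B > C and A > C
print(is_transitive({("A", "B"), ("B", "C"), ("C", "A")}))  # False: preferences cycle
```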

Whenever these four assumptions are satisfied, the preferences of the individual can be represented by a well-behaved utility function. The assumption of convexity of preferences is not required for a utility function representation of an individual's preferences to exist, but it is necessary if we want that function to be well behaved. Note that the assumptions lead to "a" function, not "the" function. Therefore, the way that individuals represent preferences under a particular utility function may not be unique. Well-behaved utility functions explain why any comparison of individual people's utility functions may be a futile exercise (and the notion of cardinal utility misleading). Nonetheless, utility functions are valuable tools for representing the preferences of an individual, provided the four assumptions stated above are satisfied. For the remainder of the chapter we will assume that the preferences of any individual can always be represented by a well-behaved utility function. As we mentioned earlier, well-behaved utility depends upon the amount of wealth the person owns.
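To see what "a function, not the function" means in practice, the sketch below shows that any increasing transformation of a utility function ranks bundles identically, which is exactly the ordinal, non-unique character described above. The food-and-clothing bundles and the Cobb-Douglas form of the utility function are assumptions made purely for illustration.

```python
# A minimal sketch (assumed bundles and an assumed Cobb-Douglas utility):
# utility is ordinal, so a monotone transformation gives the same ranking.
import math

bundles = {                                    # hypothetical (food, clothing) in weeks
    "two weeks food, one week clothing":   (2.0, 1.0),
    "one week food, one week clothing":    (1.0, 1.0),
    "half week food, half week clothing":  (0.5, 0.5),
}

def u(food, clothing):
    """An assumed well-behaved utility function (Cobb-Douglas form)."""
    return food ** 0.5 * clothing ** 0.5

def v(food, clothing):
    """A monotone transformation of u; it represents the same preferences."""
    return math.log(u(food, clothing))

rank_u = sorted(bundles, key=lambda b: u(*bundles[b]), reverse=True)
rank_v = sorted(bundles, key=lambda b: v(*bundles[b]), reverse=True)
print(rank_u)
print(rank_u == rank_v)   # True: same ordinal ranking, different "util" numbers
```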

Utility theory rests upon the idea that people behave as if they make decisions by assigning imaginary utility values to the original monetary values. The decision maker sees different levels of monetary values, translates these values into different, hypothetical terms ("utils"), processes the decision in utility terms (not in wealth terms), and translates the result back to monetary terms. So while we observe inputs to and results of the decision in monetary terms, the decision itself is made in utility terms. And given that utility denotes levels of satisfaction, individuals behave as if they maximize the utility, not the level of observed dollar amounts. While this may seem counterintuitive, let's look at an example that will enable us to appreciate this distinction better. More importantly, it demonstrates why utility maximization, rather than wealth maximization, is a viable objective. The example is called the "St. Petersburg paradox." But before we turn to that example, we need to review some preliminaries of uncertainty: probability and statistics.
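A brief numerical sketch of the St. Petersburg game mentioned above shows why utility maximization and wealth maximization can point in different directions. It assumes the standard form of the game, in which a fair coin is tossed until the first head and a head on toss k pays 2^k, and it assumes logarithmic utility for the decision maker; neither detail is given in the text above.

```python
# A minimal sketch (standard St. Petersburg game, log utility assumed):
# the expected monetary payoff grows without bound, but expected utility converges.
import math

def st_petersburg(terms=40, utility=math.log):
    """Sum the first `terms` outcomes: a head on toss k pays 2**k with probability 2**-k."""
    expected_payoff = sum((0.5 ** k) * (2 ** k) for k in range(1, terms + 1))
    expected_utility = sum((0.5 ** k) * utility(2 ** k) for k in range(1, terms + 1))
    return expected_payoff, expected_utility

payoff, util = st_petersburg()
print(payoff)   # grows linearly with the number of terms considered (diverges)
print(util)     # approaches 2*ln(2) ≈ 1.386 as more terms are added
```

Because the expected utility settles at a modest value even though the expected payoff is unbounded, a utility-maximizing individual would pay only a limited amount to play the game.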

KEY TAKEAWAYS

● In economics, utility theory governs individual decision making. The student must understand an intuitive explanation for the assumptions: completeness, monotonicity, mix-is-better, and rationality (also called transitivity).

● Finally, students should be able to discuss and distinguish between the various assumptions underlying the utility function.

4. Group decision making and Voting approaches


5. Social welfare function

The social welfare function is analogous to the consumer theory of indifference-curve–budget constraint tangency for an individual, except that the social welfare function is a mapping of individual preferences or judgments of everyone in the society as to collective choices, which apply to all, whatever individual preferences are for (variable) constraints on factors of production. One point of a social welfare function is to determine how close the analogy is to an ordinal utility function for an individual with at least minimal restrictions suggested by welfare economics, including constraints on the number of factors of production.
There are two major distinct but related types of social welfare functions:
● A Bergson–Samuelson social welfare function considers welfare for a given set of
individual preferences or welfare rankings.
● An Arrow social welfare function considers welfare across different possible sets of
individual preferences or welfare rankings and seemingly reasonable axioms that
constrain the function.

Bergson–Samuelson social welfare function


In a 1938 article, Abram Bergson introduced the social welfare function. The object was "to
state in precise form the value judgments required for the derivation of the conditions of
maximum economic welfare" set out by earlier writers, including Marshall and Pigou, Pareto
and Barone, and Lerner. The function was real-valued and differentiable. It was specified to
describe the society as a whole. Arguments of the function included the quantities of different
commodities produced and consumed and of resources used in producing different
commodities, including labor.
Necessary general conditions are that at the maximum value of the function:
● The marginal "dollar's worth" of welfare is equal for each individual and for each
commodity
● The marginal "diswelfare" of each "dollar's worth" of labor is equal for each
commodity produced of each labor supplier
● The marginal "dollar" cost of each unit of resources is equal to the marginal value
productivity for each commodity.
Bergson showed how welfare economics could describe a standard of economic efficiency
despite dispensing with interpersonally-comparable cardinal utility, the hypothesization of
which may merely conceal value judgments, and purely subjective ones at that.

Arrow social welfare function (constitution)


Kenneth Arrow (1963) generalizes the analysis. Along earlier lines, his version of a social
welfare function, also called a 'constitution', maps a set of individual orderings (ordinal utility
functions) for everyone in the society to a social ordering, a rule for ranking alternative social
states (say passing an enforceable law or not, ceteris paribus). Arrow finds that nothing of
behavioral significance is lost by dropping the requirement of social orderings that are
real-valued (and thus cardinal) in favor of orderings, which are merely complete and transitive,
such as a standard indifference curve map. The earlier analysis mapped any set of individual
orderings to one social ordering, whatever it was. This social ordering selected the top-ranked
feasible alternative from the economic environment as to resource constraints. Arrow proposed
to examine mapping different sets of individual orderings to possibly different social orderings.
Here the social ordering would depend on the set of individual orderings, rather than being
imposed (invariant to them). Stunningly (relative to a course of theory from Adam Smith and
Jeremy Bentham on), Arrow proved the general impossibility theorem which says that it is
impossible to have a social welfare function that satisfies a certain set of "apparently
reasonable" conditions.

6. Systems Engineering methods for Systems Engineering Management
