SA Unit 2 Notes


QUALITY ATTRIBUTE WORKSHOP

Definition

Quality attribute workshops (QAWs) provide a method for analyzing a system’s architecture against a number of critical quality attributes, such as availability, performance, security, interoperability, and modifiability, that are derived from mission or business goals. The QAW does not assume the existence of a software architecture. It was developed to complement the Architecture Tradeoff Analysis Method (ATAM) in response to customer requests for a method to identify important quality attributes and clarify system requirements before there is a software architecture to which the ATAM could be applied.

Introduction

In software-intensive systems, the achievement of qualities—such as performance, availability, security, and modifiability—is dependent on the software architecture. In addition, the quality attributes of large systems can be strongly limited by a system’s requirements and constraints. Thus, it is in our best interest to try to determine as early as possible whether the system will have the desired qualities. Quality requirements should be described concretely before an architecture is developed. We distinguish system architecture from software architecture according to the following two definitions:

• system architecture: the fundamental and unifying system structure defined in terms of system elements, interfaces, processes, constraints, and behaviors [INCOSE 96]

• software architecture: the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them [Bass 03]

Development of software-intensive systems begins with a description of the system’s operation and high-level functional requirements, and any constraints on the system, such as legacy or new systems. From these items, the architect derives a system architecture and a software architecture that can then be used to drive detailed design and implementation (see Figure 1). The process of creating those architectures is often unstructured. Quality attributes could be missing from the requirements document, and even if they are addressed, they are often vaguely understood and weakly articulated.

The Quality Attribute Workshop (QAW) is a facilitated method that engages system stakeholders early in the system development life cycle to discover the driving quality attributes of a software-intensive system. The QAW is system-centric and stakeholder-focused; it is used before the software architecture has been created. The QAW provides an opportunity to gather stakeholders together to provide input about their needs and expectations with respect to key quality attributes that are of particular concern to them.

Both the system and software architectures are key to realizing quality attribute
requirements in the implementation. Although an architecture cannot guarantee
that an implementation will meet its quality attribute goals, the wrong architecture
will surely spell disaster. As an example, consider security. It is difficult, maybe
even impossible, to add effective security to a system as an afterthought.
Components as well as communication mechanisms and paths must be designed
or selected early in the life cycle to satisfy security requirements. The critical
quality attributes must be well understood and articulated early in the
development of a system, so the architect can design an architecture that will
satisfy them. The QAW is one way to discover, document, and prioritize a
system’s quality attributes early in its life cycle.
It is important to point out that we do not aim at an absolute measure of quality;
rather, our purpose is to identify scenarios from the point of view of a diverse
group of stakeholders (e.g., architects, developers, users, sponsors). These
scenarios can then be used by the system engineers to analyze the system’s
architecture and identify concerns (e.g., inadequate performance, successful
denial-of-service attacks) and possible mitigation strategies (e.g., prototyping,
modeling, simulation).

QAW Method

The QAW is a facilitated, early-intervention method used to generate, prioritize, and refine quality attribute scenarios before the software architecture is
completed. The QAW is focused on system-level concerns and specifically the
role that software will play in the system. The QAW is dependent on the
participation of system stakeholders—individuals on whom the system has
significant impact, such as end users, installers, administrators (of database
management systems [DBMS], networks, help desks, etc.), trainers, architects,
acquirers, system and software engineers, and others. The group of stakeholders
present during any one QAW should number at least 5 and no more than 30 for a
single workshop. In preparation for the workshop, stakeholders receive a
“participants handbook” providing example quality attribute taxonomies,
questions, and scenarios. If time allows, the handbook should be customized to
the domain of the system and contain the quality attributes, questions, and
scenarios that are appropriate to the domain and the level of architectural detail
available.

The contribution of each stakeholder is essential during a QAW; all participants
are expected to be fully engaged and present throughout the workshop.
Participants are encouraged to comment and ask questions at any time during the
workshop. However, it is important to recognize that facilitators may
occasionally have to cut discussions short in the interest of time or when it is clear
that the discussion is not focused on the required QAW outcomes. The QAW is
an intense and demanding activity. It is very important that all participants stay
focused, are on time, and limit side discussions throughout the day.

The QAW involves the following steps:

1. QAW Presentation and Introductions

2. Business/Mission Presentation

3. Architectural Plan Presentation

4. Identification of Architectural Drivers

5. Scenario Brainstorming

6. Scenario Consolidation

7. Scenario Prioritization

8. Scenario Refinement

The following sections describe each step of the QAW in detail.

Step 1: QAW Presentation and Introductions

In this step, QAW facilitators describe the motivation for the QAW and explain
each step of the method. We recommend using a standard slide presentation that
can be customized depending on the needs of the sponsor. Next, the facilitators
introduce themselves and the stakeholders do likewise, briefly stating their
background, their role in the organization, and their relationship to the system
being built.

Step 2: Business/Mission Presentation

After Step 1, a representative of the stakeholder community presents the business and/or mission drivers for the system. The term “business and/or mission drivers” is used carefully here. Some organizations are clearly motivated by business concerns such as profitability, while others, such as governmental organizations, are motivated by mission concerns and find profitability meaningless. The stakeholder representing the business and/or mission concerns (typically a manager or management representative) spends about one hour presenting

• the system’s business/mission context

• high-level functional requirements, constraints, and quality attribute requirements

During the presentation, the facilitators listen carefully and capture any relevant
information that may shed light on the quality attribute drivers. The quality
attributes that will be refined in later steps will be derived largely from the
business/mission needs presented in this step.

Step 3: Architectural Plan Presentation

While a detailed system architecture might not exist, it is possible that high-level
system descriptions, context drawings, or other artifacts have been created that
describe some of the system’s technical details. At this point in the workshop, a
technical stakeholder will present the system architectural plans as they stand
with respect to these early documents. Information in this presentation may
include

• plans and strategies for how key business/mission requirements will be satisfied

• key technical requirements and constraints—such as mandated operating systems, hardware, middleware, and standards—that will drive architectural decisions

• presentation of existing context diagrams, high-level system diagrams, and other written descriptions

Step 4: Identification of Architectural Drivers

During steps 2 and 3, the facilitators capture information regarding architectural
drivers that are key to realizing quality attribute goals in the system. These drivers
often include high-level requirements, business/mission concerns, goals and
objectives, and various quality attributes. Before undertaking this step, the
facilitators should excuse the group for a 15-minute break, during which they will
caucus to compare and consolidate notes taken during steps 2 and 3. When the
stakeholders reconvene, the facilitators will share their list of key architectural
drivers and ask the stakeholders for clarifications, additions, deletions, and
corrections. The idea is to reach a consensus on a distilled list of architectural drivers that includes high-level requirements, business drivers, constraints, and
quality attributes. The final list of architectural drivers will help focus the
stakeholders during scenario brainstorming to ensure that these concerns are
represented by the scenarios collected.


Step 5: Scenario Brainstorming


After the architectural drivers have been identified, the facilitators initiate the
brainstorming process in which stakeholders generate scenarios. The facilitators
review the parts of a good scenario (stimulus, environment, and response) and
ensure that each scenario is well formed during the workshop.

Each stakeholder expresses a scenario representing his or her concerns with respect to the system in round-robin fashion. During a nominal QAW, at least
two round-robin passes are made so that each stakeholder can contribute at least
two scenarios. The facilitators ensure that at least one representative scenario
exists for each architectural driver listed in Step 4. Scenario generation is a key
step in the QAW method and must be carried out with care. We suggest the
following guidance to help QAW facilitators during this step:

1. Facilitators should help stakeholders create well-formed scenarios. It is tempting for stakeholders to recite requirements such as “The system shall produce reports for users.”

While this is an important requirement, facilitators need to ensure that the quality
attribute aspects of this requirement are explored further. For example, the
following scenario sheds more light on the performance aspect of this
requirement: “A remote user requests a database report via the Web during peak
usage and receives the report within five seconds.”
Note that the initial requirement hasn’t been lost, but the scenario further explores
the performance aspect of this requirement. Facilitators should note that quality
attribute names by themselves are not enough. Rather than say “the system shall
be modifiable,” the scenario should describe what it means to be modifiable by
providing a specific example of a modification to the system in the form of a scenario.

2. The vocabulary used to describe quality attributes varies widely. Heated debates often revolve around which quality attribute a particular system property belongs to. It doesn’t matter what we call a particular quality attribute, as
long as there’s a scenario that describes what it means.

3. Facilitators need to remember that there are three general types of scenarios and to ensure that each type is covered during the QAW:

a. use case scenarios - involving anticipated uses of the system

b. growth scenarios - involving anticipated changes to the system

c. exploratory scenarios - involving unanticipated stresses to the system that can include uses and/or changes

4. Facilitators should refer to the list of architectural drivers generated in Step 4 from time to time during scenario brainstorming to ensure that representative scenarios exist for each one (a minimal sketch of such a coverage check follows this list).
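
The following is a minimal, hypothetical sketch (in Python) of the coverage check described in point 4: it flags any Step 4 driver that no brainstormed scenario represents yet. The Scenario record, the driver names, and the example scenario are illustrative assumptions, not part of the QAW method itself.

# Hypothetical sketch: checking that every architectural driver from Step 4
# has at least one representative scenario. All names are illustrative.
from dataclasses import dataclass
from enum import Enum, auto

class ScenarioType(Enum):
    USE_CASE = auto()     # anticipated uses of the system
    GROWTH = auto()       # anticipated changes to the system
    EXPLORATORY = auto()  # unanticipated stresses (uses and/or changes)

@dataclass
class Scenario:
    text: str
    scenario_type: ScenarioType
    drivers: set[str]  # the Step 4 drivers this scenario touches

def uncovered_drivers(all_drivers: set[str], scenarios: list[Scenario]) -> set[str]:
    """Return the architectural drivers that no scenario covers yet."""
    covered = set().union(*(s.drivers for s in scenarios)) if scenarios else set()
    return all_drivers - covered

drivers = {"performance", "security", "modifiability"}
scenarios = [
    Scenario("A remote user requests a database report via the Web during "
             "peak usage and receives the report within five seconds.",
             ScenarioType.USE_CASE, {"performance"}),
]
print(uncovered_drivers(drivers, scenarios))  # {'security', 'modifiability'} (order may vary)
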
Step 6: Scenario Consolidation

After the scenario brainstorming, similar scenarios are consolidated when reasonable. To do that, facilitators ask stakeholders to identify those scenarios that are very similar in content. Scenarios that are similar are merged, as long as the people who proposed them agree and feel that their scenarios will not be diluted in the
process. Consolidation is an important step because it helps to prevent a
“dilution” of votes during the prioritization of scenarios (Step 7).

Such a dilution occurs when stakeholders split their votes between two very
similar scenarios. As a result, neither scenario rises to importance and is therefore
never refined (Step 8). However, if the two scenarios are similar enough to be
merged into one, the votes might be concentrated, and the merged scenario may
then rise to the appropriate level of importance and be refined further.
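
As a toy illustration of this dilution effect (all numbers hypothetical): suppose two near-duplicate report scenarios each draw four votes while a failover scenario draws six. Split, neither report scenario wins; merged, the combined scenario tops the list.

# Toy illustration of vote dilution (hypothetical scenario names and counts).
split  = {"report scenario A": 4, "report scenario B": 4, "failover scenario": 6}
merged = {"merged report scenario": 8, "failover scenario": 6}

print(max(split, key=split.get))    # failover scenario (wins despite less total support)
print(max(merged, key=merged.get))  # merged report scenario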

Facilitators should make every attempt to reach a majority consensus with the
stakeholders before merging scenarios. Though stakeholders may be tempted to
merge scenarios with abandon, they should not do so. In actuality, very few
scenarios are merged.

Step 7: Scenario Prioritization


Prioritization of the scenarios is accomplished by allocating each stakeholder a number of votes equal to 30% of the total number of scenarios generated after consolidation. The actual number of votes allocated to stakeholders is rounded to an even number of votes at the discretion of the facilitators. For example, if 30 scenarios were generated, each stakeholder gets 30 x 0.3, or 9, votes, rounded up to 10. Voting is done in round-robin fashion, in two passes. During each pass, stakeholders allocate half of their votes. Stakeholders can allocate any number of their votes to any scenario or combination of scenarios. The votes are counted, and the scenarios are prioritized accordingly.

Step 8: Scenario Refinement

After the prioritization, depending on the amount of time remaining, the top four
or five scenarios are refined in more detail. Facilitators further elaborate each one,
documenting the following:

• Further clarify the scenario by clearly describing the following six things:

1. stimulus - the condition that affects the system

2. response - the activity that results from the stimulus

3. source of stimulus - the entity that generated the stimulus

4. environment - the condition under which the stimulus occurred

5. artifact stimulated - the artifact that was stimulated

6. response measure - the measure by which the system’s response will be evaluated

• Describe the business/mission goals that are affected by the scenario.

• Describe the relevant quality attributes associated with the scenario.

• Allow the stakeholders to pose questions and raise any issues regarding the
scenario. Such questions should concentrate on the quality attribute aspects of
the scenario and any concerns that the stakeholders might have in achieving the
response called for in the scenario.

See the example template for scenario refinement in Appendix A. This step
continues until time runs out or the highest priority scenarios have been refined.
Typically, time runs out first.
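
Since Appendix A is not reproduced here, the following is a hypothetical sketch of what a refinement record covering the six parts and the additional bullets might look like. The field values reuse the earlier report example; names such as RefinedScenario and “Reporting subsystem” are illustrative assumptions, not the SEI template.

# A minimal sketch of a scenario refinement record (illustrative, not Appendix A).
from dataclasses import dataclass, field

@dataclass
class RefinedScenario:
    stimulus: str            # the condition that affects the system
    source_of_stimulus: str  # the entity that generated the stimulus
    environment: str         # the condition under which the stimulus occurred
    artifact: str            # the artifact that was stimulated
    response: str            # the activity that results from the stimulus
    response_measure: str    # how the response will be evaluated
    business_goals: list[str] = field(default_factory=list)
    quality_attributes: list[str] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)  # stakeholder questions/concerns

report = RefinedScenario(
    stimulus="Request for a database report via the Web",
    source_of_stimulus="Remote user",
    environment="Peak usage",
    artifact="Reporting subsystem",
    response="Report is generated and returned",
    response_measure="Delivered within five seconds",
    business_goals=["Timely management reporting"],
    quality_attributes=["performance"],
)
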

2.2 Documenting Quality Attributes

“A science is as mature as its measurement tools” (Louis Pasteur, quoted in Ebert & Dumke). Measuring software quality is motivated by at least two reasons:

Risk Management: Software failure has caused more than inconvenience. Software errors have caused human fatalities. The causes have ranged from poorly designed user interfaces to direct programming errors. An example of a programming error that led to multiple deaths is discussed in Dr. Leveson’s paper.[4] This resulted in requirements for the development of some types of software, particularly and historically for software embedded in medical and other devices that regulate critical infrastructures: “[Engineers who write embedded software] see Java programs stalling for one third of a second to perform garbage collection and update the user interface, and they envision airplanes falling out of the sky.”[5] In the United States, within the Federal Aviation Administration (FAA), the Aircraft Certification Service provides software programs, policy, guidance, and training, with a focus on software and complex electronic hardware that has an effect on the airborne product (a “product” is an aircraft, an engine, or a propeller).

Cost Management: As in any other field of engineering, an application with good structural software quality costs less to maintain and is easier to understand and change in response to pressing business needs. Industry data demonstrate that poor application structural quality in core business applications (such as Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), or large transaction processing systems in financial services) results in cost and schedule overruns and creates waste in the form of rework (up to 45% of development time in some organizations). Moreover, poor structural quality is strongly correlated with high-impact business disruptions due to corrupted data, application outages, security breaches, and performance problems. However, the distinction between measuring and improving software quality in an embedded system (with emphasis on risk management) and software quality in business software (with emphasis on cost and maintainability management) is becoming somewhat irrelevant. Embedded systems now often include a user interface, and their designers are as much concerned with issues affecting usability and user productivity as their counterparts who focus on business applications. The latter are in turn looking at ERP or CRM systems as a corporate nervous system whose uptime and performance are vital to the well-being of the enterprise. This convergence is most visible in mobile computing: a user who accesses an ERP application on their smartphone is depending on the quality of software across all types of software layers.

Both types of software now use multi-layered technology stacks and complex architectures, so software quality analysis and measurement have to be managed in a comprehensive and consistent manner, decoupled from the software’s ultimate purpose or use. In both cases, engineers and management need to be able to make rational decisions based on measurement and fact-based analysis, in adherence to the precept “In God (we) trust. All others bring data” ((mis-)attributed to W. Edwards Deming and others).

CISQ's Quality Model

Even though "quality is a perceptual, conditional and somewhat subjective


attribute and may be understood differently by different people" (as noted in the
article on quality in business), software structural quality characteristics have
been clearly defined by the Consortium for IT Software Quality (CISQ). Under
the guidance of Bill Curtis, co-author of the Capability Maturity Model
framework and CISQ's first Director; and Capers Jones, CISQ's Distinguished
Advisor, CISQ has defined five major desirable characteristics of a piece of
software needed to provide business value.[16] In the House of Quality model,
these are "Whats" that need to be achieved:
1. Reliability: An attribute of resiliency and structural solidity. Reliability
measures the level of risk and the likelihood of potential application failures. It
also measures the defects injected due to modifications made to the software (its
“stability” as termed by ISO). The goal for checking and monitoring Reliability
is to reduce and prevent application downtime, application outages and errors that
directly affect users, and enhance the image of IT and its impact on a company’s
business performance.

2. Efficiency: The source code and software architecture attributes are the
elements that ensure high performance once the application is in run-time mode.
Efficiency is especially important for applications in high execution speed
environments such as algorithmic or transactional processing where performance
and scalability are paramount. An analysis of source code efficiency and
scalability provides a clear picture of the latent business risks and the harm they
can cause to customer satisfaction due to response-time degradation.

3. Security: A measure of the likelihood of potential security breaches due to poor coding practices and architecture. This quantifies the risk of encountering
critical vulnerabilities that damage the business.

4. Maintainability: Maintainability includes the notion of adaptability, portability, and transferability (from one development team to another).
Measuring and monitoring maintainability is a must for mission-critical
applications where change is driven by tight time-to-market schedules and where
it is important for IT to remain responsive to business-driven changes. It is also
essential to keep maintenance costs under control.

5. Size: While not a quality attribute per se, the sizing of source code is a software
characteristic that obviously impacts maintainability. Combined with the above
quality characteristics, software size can be used to assess the amount of work produced and still to be done by teams.

2.3 Six-Part Scenarios

Introduction

• Functionality and Quality Attributes are orthogonal

• Overall factors that affect run-time behavior, system design, and user experience

Architecture and Quality Attributes

• Architecture, by itself, is unable to achieve qualities

• Architecture should include the factors of interest for each attribute

Quality Attributes Scenario


➢ Is a quality-attribute-specific requirement

➢ It consists of six parts:

• Source of stimulus

• Stimulus

• Environment

• Artifact

• Response

• Response measure
Common Quality Attributes

➢ Quality attributes can be categorized into the following specific areas:

• Design qualities

• Runtime qualities

• System qualities
• User qualities

• Non-runtime qualities

• Architecture qualities

• Business qualities

Design Quality Attributes

➢ Conceptual Integrity:

• Defines the consistency and coherence of the overall design

• Includes the way that components or modules are designed

➢ Maintainability:

• Ability of the system to undergo changes with a degree of ease

➢ Reusability:

• Defines the capability for components and subsystems to be suitable for use in other applications
Runtime Quality Attributes

➢ Interoperability:

• Ability of a system or different systems to operate successfully by communicating and exchanging information with other external systems written and run by external parties

➢ Manageability:

• Defines how easy it is for system administrators to manage the application

➢ Reliability:

• Ability of a system to remain operational over time

➢ Scalability:

• Ability of a system either to handle increases in load without impact on the performance of the system, or to be readily enlarged

➢ Performance:
• Indication of the responsiveness of a system to execute any action

➢ Security:

• Capability of a system to prevent malicious or accidental actions outside of the designed usage

➢ Availability:

• Proportion of time that the system is functional and working

• Often derived from the total system downtime over a predefined period and expressed as an uptime percentage
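
A minimal sketch of that calculation, assuming downtime is tracked in minutes over the period (names and numbers are illustrative):

# Availability as the percentage of uptime over a predefined period.
def availability_pct(downtime_minutes: float, period_minutes: float) -> float:
    """Uptime as a percentage of the total period."""
    return 100.0 * (period_minutes - downtime_minutes) / period_minutes

# e.g., 43.8 minutes of downtime in a 30-day month is roughly 99.9% availability
print(round(availability_pct(43.8, 30 * 24 * 60), 2))  # 99.9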


System Quality Attributes

➢ Supportability:

• Ability of the system to provide information helpful for identifying and resolving issues
when it fails to work correctly

➢ Testability:
• Measure of how easy it is to create test criteria for the system and its components

User Quality Attributes

➢ Usability:

• Defines how well the application meets the requirements of the user and
consumer by being intuitive
Non-runtime Quality Attributes

➢ Portability:

• Ability of a system to run under different computing environments

➢ Reusability:

• Degree to which existing applications can be reused in new applications

➢ Integrability:
• Ability to make the separately developed components of the system work correctly together

➢ Modifiability:

• Ease with which a software system can accommodate changes to its software

Architecture Quality Attributes

➢ Correctness:

• Accountability for satisfying all requirements of the system

➢ Conceptual Integrity:
• Integrity of the overall structure that is composed from a number of small
architectural structures

Business Quality Attributes

➢ Cost and schedule:

• Cost of the system with respect to time to market, expected project lifetime,
and utilization of legacy and COTS systems

➢ Marketability:

• Use of the system with respect to market competition

CASE STUDY ON SOFTWARE QUALITY-USABILITY

Company A is a software development company consisting of several subsidiaries. Company A is used to conducting regular usability surveys about
their products already on the market. Recently, the company decided to initiate a
software quality improvement programme. Usability was identified as an aspect
of software quality that the company was already doing, but which could be done
better, and more rigorously.

The objectives Company A had in their usability project were the following:

1. Gain knowledge about newer techniques regarding ‘user-centered design’ and evaluation of software regarding ease of use.

2. Evaluate their already existing method for effectiveness and correctness.

3. Improve their existing methods by combining the best of the two methods.

4. Embed the methods in the software development process.

The trial application is a complex document management package for the technical engineering environment, which is being sold world-wide. Company A
sells the software using a distributor and dealer channel, causing a distance
between the developers and the end-users. Because of this, the ease of use when
leaving the factory is very important. The software is being developed at several
different geographical locations. This results in a complex process, in which
communication, version management, strict procedures, and long distance project
management are essential.
Embedding user-centered design, and the evaluation of the software in so-called usability tests, into this process calls for increased flexibility, stricter management, and much more communication. This was therefore the challenge.

The software package is very complex, and therefore very difficult for non-experienced users to fully understand and apply in their process. The software is the market leader in the market in which Company A sells it. At the same time, the market has matured during the last three years, forcing Company A to pay more attention to the user-friendliness and end-user acceptance of the software.

The project was divided into different parts to work towards an integration of the MUSiC techniques into the development process of Company A.

Usability requirement analysis

This task evaluated the existing problems of the software using the SUMI
questionnaire as a 'big bang' event and generated the work plan for
improvements to solve these problems.

Training course
The training course was meant to teach the developers the approach of user-centered design and evaluation of the software, and to show how the methods introduced related to the current evaluation methods used by the company. Heuristic analysis and co-evaluation were used.

Usability test on prototype

This test evaluated the results of the applied improvements to the software,
and the evaluation method improvements were applied during this test.
Principally, the method of performance analysis using video capture was
used, but SUMI was also used to quantify the gains.

The results were the following:

The software development process improvement program that was taking place
at Company A was sped up enormously.

The trial application was improved enormously regarding subjective usability (SUMI, before-after effect).
The development team was taught to use basic usability techniques and tools,
and the methodology was embedded in the company software development
process.

The consultants of Company A’s services are using the knowledge about the
Company A usability program to offer their clients the tools and methods to
evaluate the implementations they buy.

A previously unidentified type of application was discovered during the course of the Context of Use analysis. This has since been designed from the beginning using user-centered design techniques and is now selling as an adjunct to Company A’s product range.

The benefits for the organization are enormous. The result of the project is that all people are aware of usability, know what the word means, and also work accordingly. This means that the attitude of the developers, product marketers, and sales people has changed, which is the most important benefit for Company A.

The benefits for the market arise because users are becoming more and more mature and no longer accept bad software. The developers of software in this market, which used to be highly technically specialized, are therefore forced to create better, more generally usable software. This was the first reason why our company started a software quality improvement programme. Usability methods took our software quality improvement programme a quantum leap into the future.

The company knew it had a software quality problem in the making, maybe not
just now, but a few years down the road. In addition, multi-site development,
entered into for economic reasons, placed even greater stress on the software
lifecycle to produce quality results. Using the User-Centered Design approach,
the company was able to set a discipline on the development lifecycle and the
design teams were able to check the quality status of the work at well-defined
stages. This became a crucial aspect of the way the company dealt with the
'quality problem.'

Although Company A was keen to get more experience in user centered design,
they needed a focus. They were attracted to us because of our multi-national
reputation. The hard data that SUMI gave us about their chosen product was just
the right thing for that purpose, and it gave a meaning to the training workshops.
Designers knew what they needed to beat. The Context of Use analysis, first
carried out as a training exercise, suddenly revealed the need for a new product
to go with the rest of the product line. After this, we had a high standing all the
way with the company.
