
Architecture Evaluation

Related terms:

Internet of Things, Software Architecture, Architecture Design, Analysis Method, Architecture Analysis, Architecture Review, Evaluation Session, Reference Architecture


Using Reference Architectures for Design and Evaluation of Web of Things Systems
Muhammad Aufeef Chauhan, Muhammad Ali Babar, in Managing the Web of Things, 2017

7.3.3.2 Architecture Evaluation Activities


Architecture evaluation activities consisted of three stages. (i) Before the evaluation session, the groups prepared a short architecture evaluation questionnaire on the quality attributes considered in the architecture design, key architecture design decisions, strengths and weaknesses of the design decisions, sensitivity and trade-off points, and risks and non-risks in the architecture [26]. (ii) During the evaluation sessions, the group whose architecture was being evaluated presented the architecture, while the evaluating group asked questions based upon its initial preparation as well as from the integration perspective of its own architecture. During the architecture presentation, design artefacts were analysed for the evaluation of the IoT subsystem architectures, and some new artefacts specific to architecture evaluation, such as architecture utility trees [21], were generated to present the architecture design decisions corresponding to quality attributes. During the architecture evaluation sessions, sensitivity points, trade-off points, architecture risks, and architecture non-risks were discussed. (iii) After the individual architecture evaluation sessions, a joint session was conducted in which each group briefly presented its IoT subsystem architecture, the quality attributes that were considered in the architecture, and the feedback received during the individual architecture evaluation session. Each group member also prepared a written report on the evaluation of the architecture that the member had evaluated in the architecture evaluation session. The report was shared with the teaching staff responsible for the design and analysis of the WoT architecture. Table 7.3 shows the commonly reported missing quality requirements, risky design decisions, sensitivity and trade-off points, and common improvements suggested for the IoT subsystem architectures during the evaluation sessions.

Table 7.3. Summary of the Points Discussed in the Evaluation Sessions

Risks: Using the same communication protocol for inter-system and intra-system communication can make failure-safety procedures difficult. Using the Broker pattern can result in selection of services not fully compliant with QoS constraints. Data storage on centralised persistence units is a risk for availability. Communication overhead of using the Publisher/Subscriber pattern.

Sensitivity and Trade-off Points: Maintaining a Facade for continuously evolving services. Using the Singleton pattern instead of the Message Broker pattern for multi-tenant session management. Negative impact of the Layered pattern on performance. Trade-off between performance and modifiability when using either the Publisher/Subscriber or the Layered pattern. Misuse of the Pipes and Filters pattern where the Broker pattern could be used. A trade-off is needed between security and performance.

Commonly Missing Quality Attributes: Safety on failure. System availability when the Internet connection is not available.

IoT RA Shortcomings: Architecture and design patterns for implementing the different elements of the IoT RA are not specified in the IoT RA.
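The architecture utility trees generated during these sessions relate quality attributes, their concrete refinements, and the design decisions that address them. The following is a minimal sketch in Python of such a tree; the class and field names (QualityAttribute, Refinement, Decision) are illustrative assumptions, not the notation of [21].

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    name: str            # e.g., "Use a message broker between subsystems"
    rationale: str = ""  # why the decision was taken

@dataclass
class Refinement:
    description: str                 # concrete concern for one quality attribute
    decisions: List[Decision] = field(default_factory=list)

@dataclass
class QualityAttribute:
    name: str                        # e.g., "Availability"
    refinements: List[Refinement] = field(default_factory=list)

# A utility tree is simply the list of quality attributes with their
# refinements and the design decisions addressing each refinement.
utility_tree = [
    QualityAttribute(
        name="Availability",
        refinements=[
            Refinement(
                description="Subsystem keeps serving cached data when the broker is down",
                decisions=[Decision("Local persistence per IoT subsystem",
                                    "Avoids the centralised-storage availability risk")],
            )
        ],
    )
]

for qa in utility_tree:
    for ref in qa.refinements:
        for d in ref.decisions:
            print(f"{qa.name}: {ref.description} -> {d.name}")
```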


Making Software Architecture and Agile Approaches Work Together
Muhammad Ali Babar, in Agile Software Architecture, 2014

1.2.5 Software architecture evaluation


Software architecture evaluation is an important activity in the software architecting
process. The fundamental goal of architecture evaluation is to assess the potential
of a proposed/chosen architecture to deliver a system capable of fulfilling required
quality requirements and to identify any potential risks [51,52]. Researchers and
practitioners have proposed a large number of architecture evaluation methods for
which a classification and comparison framework has also been proposed [53]. The most widely used architecture evaluation methods are scenario-based. These methods
are called scenario-based because scenarios are used to characterize the quality
attributes required of a system. It is believed that scenario-based analysis is suitable
for development-time quality attributes (such as maintainability and usability) rather
than for run-time quality attributes (such as performance and scalability), which
can be assessed using quantitative techniques such as simulation or mathemat-
ical models [39]. Among the well-known, scenario-based architecture evaluation
methods are the SA analysis method (SAAM) [54], the architecture tradeoff analysis
method (ATAM) [55], the architecture level maintainability analysis (ALMA) [56], and
the performance assessment of SA (PASA) [57].
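Scenario-based methods depend on concretely phrased quality attribute scenarios. The sketch below records a scenario in the six-part form popularized by the SEI (source, stimulus, environment, artifact, response, response measure); the example values are invented for illustration and are not tied to any particular method in the list above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QualityAttributeScenario:
    quality_attribute: str  # e.g., "Performance"
    source: str             # who or what generates the stimulus
    stimulus: str           # the condition arriving at the system
    environment: str        # normal operation, overload, degraded mode, ...
    artifact: str           # the part of the system being stimulated
    response: str           # the desired behaviour
    response_measure: str   # how the response is judged

scenario = QualityAttributeScenario(
    quality_attribute="Performance",
    source="End user",
    stimulus="Initiates 1,000 search requests per minute",
    environment="Normal operation",
    artifact="Search service",
    response="Requests are processed",
    response_measure="Average latency under 2 seconds",
)
print(scenario)
```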

SAAM is the earliest method proposed to analyze architecture using scenarios. The
analysis of multiple candidate architectures requires applying SAAM to each of the
proposed architectures and then comparing the results. This can be very costly in
terms of time and effort if the number of architectures to be compared is large.
SAAM has been further extended into a number of methods, such as SAAM for
complex scenarios [58], extending SAAM by integration in the domain-centric and
reuse-based development process [59], and SAAM for evolution and reusability
[60]. ATAM grew out of SAAM. The key advantages of ATAM are explicit ways of
understanding how an architecture supports multiple competing quality attributes
and of performing trade-off analysis. ATAM uses both qualitative techniques, such
as scenarios, and quantitative techniques for measuring the qualities of the archi-
tecture.

Bengtsson and Bosch proposed several methods (such as SBAR [61], ALPSM [62], and ALMA [56]). All these methods use one or a combination of various analysis
techniques (i.e., scenarios, simulation, mathematical modeling, or experience-based
reasoning [39]). All of these methods use scenarios to characterize quality attributes.
The desired scenarios are mapped onto architectural components to assess the
architecture's capability to support those scenarios or identify the changes required
to handle those scenarios. PASA is an architecture analysis method that combines
scenarios and quantitative techniques [57]. PASA uses scenarios to determine a sys-
tem’s performance objectives and applies principles and techniques from software
performance engineering (SPE) to determine whether an architecture is capable
of supporting the performance scenarios. PASA includes performance-sensitive
architectural styles and anti-patterns as analysis tools and formalizes the architecture
analysis activity of the performance engineering process reported in [63].
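PASA itself is a structured review process, but the flavour of quantitative SPE reasoning it draws on can be illustrated with the textbook single-queue (M/M/1) approximation: with service demand D and arrival rate lambda, utilization is U = lambda * D and mean response time is roughly D / (1 - U). The sketch below is only that standard approximation used against a performance scenario, not the PASA method; the numbers are invented.

```python
def mm1_response_time(arrival_rate: float, service_demand: float) -> float:
    """Mean response time of a single M/M/1 queue (seconds).

    arrival_rate   -- requests per second (lambda)
    service_demand -- seconds of service per request (D)
    """
    utilization = arrival_rate * service_demand
    if utilization >= 1.0:
        raise ValueError("System is saturated (utilization >= 100%)")
    return service_demand / (1.0 - utilization)

# Performance scenario: 30 req/s, 20 ms of processing per request,
# objective: mean response time below 100 ms.
objective = 0.100
predicted = mm1_response_time(arrival_rate=30, service_demand=0.020)
print(f"predicted={predicted * 1000:.1f} ms, "
      f"meets objective: {predicted <= objective}")
```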


Lightweight Evaluation of Software Architecture Decisions
Veli-Pekka Eloranta, ... Kai Koskimies, in Relating System Quality and Software Architecture, 2014

6.1 Architecture Evaluation Methods


Software architecture evaluation is the analysis of a system's capability to satisfy the
most important stakeholder concerns, based on its large-scale design, or archi-
tecture (Clements et al., 2002). On the one hand, the analysis discovers potential
risks and areas for improvement; on the other hand, it can raise confidence in
the chosen architectural approaches. As a side effect, architecture evaluation also
can stimulate communication between the stakeholders and facilitate architectural
knowledge sharing.

Software architecture evaluations should not be thought of as code reviews. In architecture evaluation, the code is rarely viewed. The goal of architecture evaluation is to find out whether the architecture decisions that have been made support the quality requirements set by the customer, and to find signs of technical debt. In addition, decisions and solutions preventing road-mapped features from being developed during the evolution of the system can be identified. In other words, areas of further development in the system are identified.

In many evaluation methods, business drivers that affect the architectural design are
explicitly mentioned, and important quality attributes are specified. Given that these
artifacts are also documented during the evaluation, the evaluation may improve
the architectural documentation (AD) as well. In addition, as evaluation needs AD,
some additional documentation may be created for the evaluation, contributing to
the overall documentation of the system.

The most well-known approaches to architecture evaluation are based on scenarios, for example, SAAM (Kazman et al., 1994), ATAM (Kazman et al., 2000), ALMA (Architecture-level Modifiability Analysis) (Bengtsson, 2004), FAAM (Family-architecture Assessment Method) (Dolan, 2002), and ARID (Active Review of Intermediate Designs) (Clements, 2000). These methods are considered mature: They have been validated in the industry (Dobrica and Niemelä, 2002), and they have been in use for a long time.

In general, scenario-based evaluation methods take one or more quality attributes and define a set of concrete scenarios concerning them, which are analyzed against the architectural approaches used in the system. Each architectural approach is either a risk or a nonrisk with respect to the analyzed scenario. Methods like ATAM (Kazman et al., 2000) also explicitly identify decisions that are a trade-off between multiple quality attributes and decisions that are critical to fulfilling specific quality attribute requirements (so-called sensitivity points).
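The outcome of such an analysis can be recorded per (approach, scenario) pair. The sketch below is minimal bookkeeping with made-up labels, not any method's official template: an approach that affects several quality attributes in opposite directions is reported as a trade-off point.

```python
from collections import defaultdict

# Each analysis result: (architectural approach, scenario, quality attribute,
# effect on that attribute, risk classification).
results = [
    ("Layered pattern", "Add new protocol adapter", "Modifiability", "+", "nonrisk"),
    ("Layered pattern", "Handle 1,000 msg/s",       "Performance",   "-", "risk"),
    ("Message broker",  "Swap sensor vendor",       "Modifiability", "+", "nonrisk"),
]

risks = [r for r in results if r[4] == "risk"]

# A trade-off point: one approach that affects several quality attributes
# in opposite directions.
effects = defaultdict(set)
for approach, _scenario, attribute, effect, _cls in results:
    effects[approach].add((attribute, effect))

tradeoff_points = [a for a, e in effects.items()
                   if {eff for _, eff in e} == {"+", "-"}]

print("Risks:", [(r[0], r[1]) for r in risks])
print("Trade-off points:", tradeoff_points)
```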
Many of the existing architecture evaluation methods require considerable time and
effort to carry out. For example, a SAAM evaluation is scheduled for one full day with a wide variety of stakeholders present. The SAAM report (Kazman et al., 1997) shows that in 10 evaluations performed by the SEI, where projects ranged from 5 to 100 KLOC (thousands of lines of code), the effort was estimated to be 14 days. Also, a medium-sized ATAM might take up to 70 person-days (Clements et al., 2002). On the other hand,
there are some experience reports indicating that less work might bring results
as well (Reijonen et al., 2010). In addition, there exist techniques that can be uti-
lized to boost the architecture evaluation (Eloranta and Koskimies, 2010). However,
evaluation methods are often so time consuming that it is impractical to do them
repeatedly. Two- or three-day evaluation methods are typically one-shot evaluations.
This might lead to a situation where software architecture is not evaluated at all,
because there is no suitable moment for the evaluation. The architecture typically
changes constantly, and once the architecture is stable enough, it might be too late
for the evaluation because much of the system is already implemented.

Many scenario-based methods consider scenarios as refinements of the architecturally significant requirements, which concern quality attributes or the functionality the target system needs to provide. These scenarios are then evaluated against the decisions. These methods do not explicitly take other decision drivers into account, for example, expertise, organization structure, or business goals. CBAM (Cost Benefit Analysis Method) (Kazman et al., 2001) is an exception to this rule because it explicitly considers financial decision forces during the analysis.

The method presented in this chapter holistically evaluates architecture decisions in the context of the architecturally significant requirements and other important forces like business drivers, company culture and politics, in-house experience, and the development context.

Almost all evaluation methods identify and utilize architecture decisions, but they do not validate the reasoning behind the decisions. Only CBAM also operates partially in the problem space. The other methods merely explore the solution space and try to find out which consequences of the decisions are not addressed. In DCAR, architecture decisions are first-class entities, and the whole evaluation is carried out purely by considering the decision drivers of the decisions that have been made.

Architectural software quality assurance (aSQA) (Christensen et al., 2010) is an example of a method that is iterative and incremental and has built-in support for agile software projects. The method is based on the utilization of metrics, but it can be carried out using scenarios or expert judgment, although the latter option has not been validated in industry. It is also considered to be more lightweight than many other evaluation methods, because it is reported to take 5 h or less per evaluation. However, aSQA does not evaluate architecture decisions, but rather uses metrics to assess the satisfaction of the prioritized quality requirements.
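The general idea of scoring prioritized quality requirements with metrics can be sketched as follows. This is only a rough illustration with invented level values and names; the published aSQA procedure defines its own scales and steps.

```python
# Hypothetical status of prioritized quality requirements.
# Levels here are a simple 1-5 scale; the real aSQA scales differ.
quality_status = {
    # name: (priority 1-5, current level 1-5, target level 1-5)
    "Performance":   (5, 3, 4),
    "Modifiability": (4, 4, 3),
    "Availability":  (3, 2, 4),
}

def needs_attention(status):
    """Return requirements whose current level is below target,
    ordered by priority (highest first)."""
    lagging = [(name, prio, cur, tgt)
               for name, (prio, cur, tgt) in status.items() if cur < tgt]
    return sorted(lagging, key=lambda item: item[1], reverse=True)

for name, prio, cur, tgt in needs_attention(quality_status):
    print(f"{name}: level {cur} below target {tgt} (priority {prio})")
```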

Pattern-based architecture review (PBAR) (Harrison and Avgeriou, 2010) is another example of a lightweight method that does not require extensive preparation by the company. In addition, PBAR can be conducted in situations where no AD exists. During the review, the architecture is analyzed by identifying patterns and pattern relationships in the architecture. PBAR, however, also focuses on quality attribute requirements and does not regard the whole decision-making context. It also specializes in pattern-based architectures and cannot be used to validate technology- or process-related decisions, for instance.

Many of the existing evaluation methods focus on a certain quality attribute (such as maintainability in ALMA (Bengtsson, 2004) or interoperability and extensibility in FAAM (Dolan, 2002)) or on some other single aspect of the architecture, such as economics in CBAM (Kazman et al., 2001). However, architecture decisions are affected by a variety of drivers. The architect needs to consider not only the desired quality attributes and costs, but also the experience, expertise, organization structure, and resources, for example, when making a decision. These drivers may change during the system development, and while a decision might still be valid, new, more beneficial options might have become available and should be taken into consideration. We intend to support this kind of broad analysis of architecture decisions with DCAR. Further, we aim at a method that allows the evaluation of software architecture iteratively, decision by decision, so that it can be integrated with agile development methods and frameworks such as Scrum (Schwaber and Beedle, 2001).


Continuous Software Architecture Analysis
Georg Buchgeher, Rainer Weinreich, in Agile Software Architecture, 2014

7.3.2 Scenario-based evaluation methods


Scenario-based architecture evaluation is a specific kind of architecture review, which
is based on the notion of a scenario. A scenario is a “short statement describing an
interaction of one of the stakeholders with the system” [23]. Each identified scenario
is then checked to determine whether it is supported by a system’s architecture or
not. Well-known examples of scenario-based evaluation methods are ATAM [2] and
SAAM [1]. An overview of other existing scenario-based analysis methods can be
found in Refs. [24] and [25].

Scenario-based architecture analysis is typically performed as a one- or two-day workshop, where ideally all system stakeholders participate in the review. The workshop includes the explanation of the architecture, the identification of the most important scenarios, the analysis of the identified scenarios, and the presentation of the results.

Like other review-based methods, scenario-based evaluation methods are static and manual analysis approaches.


Lightweight Architecture Knowledge Management for Agile Software Development
Veli-Pekka Eloranta, Kai Koskimies, in Agile Software Architecture, 2014

8.3.1 Architecture evaluation methods, agility, and AKM


Software architecture is typically one of the first descriptions of the system to be built.
It forms a basis for the design and dictates whether the most important qualities and
functionalities of the system can be achieved. Architecture evaluation is a systematic
method to expose problems and risks in the architectural design, preferably before
the system is implemented.

ATAM is a well-known, scenario-based architecture evaluation method used in industry [20]. The basic idea of a scenario-based architecture evaluation method is to refine quality attributes into concrete scenarios phrased by the stakeholders (developers, architects, managers, marketing, testing, etc.). In this way, the stakeholders can present their concerns related to the quality requirements. The scenarios are prioritized according to their importance and expected difficulty, and highly prioritized scenarios are eventually used in the architectural analysis. The analysis is preceded by presentations of the business drivers and of the software architecture.
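The prioritization step can be sketched in a few lines: stakeholders vote on importance, the architects estimate difficulty, and the highest-ranked scenarios go forward to analysis. The scoring rule below is an assumption for illustration, not the voting scheme prescribed by ATAM.

```python
# (scenario, importance votes from stakeholders, expected difficulty 1-5)
scenarios = [
    ("Recover from broker failure within 30 s", 9, 4),
    ("Add a new sensor type in one sprint",     6, 2),
    ("Serve 500 concurrent dashboard users",    8, 5),
    ("Change the logging format",               1, 1),
]

def prioritize(candidates, top_k=3):
    # Rank primarily by importance, then by difficulty (harder first,
    # because difficult-and-important scenarios carry the most risk).
    ranked = sorted(candidates, key=lambda s: (s[1], s[2]), reverse=True)
    return ranked[:top_k]

for name, importance, difficulty in prioritize(scenarios):
    print(f"[importance {importance}, difficulty {difficulty}] {name}")
```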

Architectural evaluations not only reveal risks in the system design, but also bring
up a lot of central information about software architecture. The authors have carried
out approximately 20 full-scale scenario-based evaluations in the industry, and in
most cases the industrial participants have expressed their need for uncovering
architectural knowledge as a major motivation for the evaluation. A typical feedback
comment has been that a significant benefit of the evaluation was communication
about software architecture between different stakeholders, which otherwise would
not have taken place. Thus, software architecture evaluation has an important facet
related to AKM that is not often recognized.

Essential architectural information emerging in ATAM evaluations includes ASRs (and scenarios refining them), architectural decisions, relationships between requirements and decisions, analysis and rationale for the decisions, and identified risks of the architecture. Since the issues discussed in the evaluation are based on probable and important scenarios from the viewpoint of several kinds of stakeholders, it is reasonable to argue that the information emerging in ATAM (and other evaluation methods) is actually the most relevant information about the software architecture. Furthermore, this information is likely to actually be used later on and is therefore important to document.

From the viewpoint of agile development, the main drawback of ATAM (and sce-
nario-based architecture evaluation methods in general) is heavyweightness; sce-
nario-based methods are considered to be rather complicated and expensive to use
[25–27]. A medium-sized ATAM evaluation can take up to 40 person-days covering
the work of different stakeholders ([28], p. 41). In our experience, even getting all
the required stakeholders in the same room for 2 or 3 days is next to impossible in
an agile context. Furthermore, a lot of time in the evaluation is spent on refining
quality requirements into scenarios and on discussing the requirements and even
the form of the scenarios. Most of the scenarios are actually not used (as they
don’t get sufficient votes in the prioritization), implying notable waste in the
lean sense [16]. On the other hand, although communication about requirements
is beneficial, it is often time-consuming as it comes back to the question of the
system’s purpose. In an agile context, the question about building the right product
is a central concern that is taken care of by fast development cycles and incremental
development, allowing the customers to participate actively in the process. Thus,
a more lightweight evaluation method that concentrates on the soundness of the
current architectural decisions, rather than on the requirement analysis, would
better serve an agile project setup.

Techniques to boost architecture evaluation using domain knowledge have been proposed by several authors. So-called "general scenarios" [29] can be utilized in ATAM evaluation to express patterns of scenarios that tend to reoccur in different systems in the same domain. Furthermore, if multiple ATAM evaluations are carried out in the same domain, the domain model can be utilized to find the concrete scenarios for the system [30]. In this way, some of the scenarios can be found offline before the evaluation sessions. This will speed up the elicitation process in the evaluation sessions and create significant cost savings because less time is required for the scenario elicitation. However, even with these improvements, our experience and a recent survey [47] suggest that scenario-based architecture evaluation methods are not widely used in the industry because they are too heavyweight, especially for agile projects.

Another problem with integrating ATAM with agile processes is that ATAM is de-
signed for one-off evaluation rather than for the continuous evaluation that would
be required in Agile. ATAM is based on a holistic view of the system, starting
with top-level quality requirements that are refined into concrete scenarios, and
architectural approaches are analyzed only against these scenarios. This works well in
a one-off evaluation, but poses problems in an agile context where the architecture
is developed incrementally. The unit of architectural development is an architectural
decision, and an agile evaluation method should be incremental with respect to this
unit, in the sense that the evaluation can be carried out by considering a subset of
the decisions at a time.

The lack of incrementality in ATAM is reflected in the difficulty of deciding on the proper time for architectural evaluation in an agile project. If the architecture is designed up front, as in traditional waterfall development, the proper moment for evaluation is naturally when the design is mostly done. However, when using agile methods, such as Scrum, the architecture is often created in sprints. Performing an ATAM-like evaluation in every sprint that creates architecture would be too time-consuming. On the other hand, if the evaluation is carried out as a post-Scrum activity, the system is already implemented and the changes become costly.

A recent survey shows that architects seldom revisit the decisions made [25]. This
might be because the implementation is ready and changing the architectural
decisions would be costly. Therefore, it would be advisable to do evaluation of the
decisions right after they are made. This approach can be extended to agile practices.
If the architecture and architectural decisions are made in sprints, it would be
advisable to revisit and review these decisions immediately after the sprint.

Partly motivated by this reasoning, a new software architecture evaluation method called DCAR was proposed in [21]. DCAR uses architectural decisions as the basic concept in the architecture evaluation. Another central concept in DCAR is a decision force: any fact or viewpoint that has pushed the decision in a certain direction [31]. Forces can be requirements or existing decisions (e.g., technology choices), as well as previous experiences and political or economic considerations. A force is a basic unit of the rationale of a decision; essentially, a decision is made to balance the forces. In DCAR, a set of architectural decisions is analyzed by identifying the forces that have affected the decisions and by examining whether each decision is still justified in the presence of the current forces. Besides the evaluation team, only architects and chief developers are assumed to participate in the evaluation.
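A decision under this kind of review can be captured as a small record of the decision itself, the forces that pushed it, and the result of re-examining those forces. The field names and example below are illustrative assumptions, not the official DCAR templates.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Force:
    description: str          # e.g., "In-house C++ expertise"
    still_holds: bool = True  # re-checked during the evaluation

@dataclass
class ArchitectureDecision:
    name: str
    forces: List[Force] = field(default_factory=list)
    related_decisions: List[str] = field(default_factory=list)

    def verdict(self) -> str:
        """A decision whose forces all still hold is considered justified."""
        ok = all(f.still_holds for f in self.forces)
        return "justified" if ok else "reconsider"

decision = ArchitectureDecision(
    name="Use an in-house messaging library",
    forces=[Force("In-house C++ expertise"),
            Force("No mature open-source alternative", still_holds=False)],
    related_decisions=["Custom wire protocol"],
)
print(decision.name, "->", decision.verdict())
```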
DCAR is expected to be more suitable for agile projects than scenario-based methods
because of its lightweightness: a typical system-wide DCAR evaluation session can be
carried out in half a day, with roughly 15-20 person-hours of project resources [21].
Thus, performing a DCAR evaluation during a sprint is quite feasible—especially
when developing systems with long life spans or with special emphasis on risks (e.g.,
safety-critical systems). On the other hand, since DCAR is structured according to
decisions rather than scenarios, it can be carried out incrementally by considering a
certain subset of decisions at a time. If the forces of decisions do not change between
successive DCAR evaluations, the conclusions drawn in previous evaluation sessions
are not invalidated by later evaluations. Thus, in Scrum, decisions can be evaluated
right after the sprint they are made in. If the number of decisions is relatively small
(say, < 10 decisions), such a partial evaluation requires less than 2 h and can be done as part of the sprint retrospective in Scrum.

If new information emerges in the sprints, the old decisions can be revisited and
reevaluated. DCAR makes this possible by documenting the relationships between
decisions. If a decision needs to be changed in a sprint, it is easy to see which
earlier decisions might be affected as well and (re-)evaluate them. Additionally, as
the decision drivers (i.e., forces) are documented in each decision, it is rather easy
to see if the emergent information is going to affect the decision and if the decision
should be reevaluated.
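Because the relationships between decisions are documented, finding the decisions that a change may invalidate is a simple graph traversal. A minimal, self-contained sketch with hypothetical decision names:

```python
from collections import deque

# decision -> decisions it depends on (documented during the evaluation)
depends_on = {
    "Custom wire protocol":              ["Use an in-house messaging library"],
    "Binary message encoding":           ["Custom wire protocol"],
    "Use an in-house messaging library": [],
}

def affected_by(changed: str, graph: dict) -> set:
    """Decisions that directly or transitively depend on a changed decision."""
    affected, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for decision, deps in graph.items():
            if current in deps and decision not in affected:
                affected.add(decision)
                queue.append(decision)
    return affected

print(affected_by("Use an in-house messaging library", depends_on))
# reports both "Custom wire protocol" and "Binary message encoding"
```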

From the AKM viewpoint, a particularly beneficial feature of DCAR is that the
decisions are documented as part of the DCAR process, during the evaluation. This
decision documentation can be reused in software architecture documentation as
such. If a tool is used in DCAR to keep track of the evaluation and to record the
decision documentation, this information can be immediately stored in the AIR
without extra effort.


Architectural Debt Management in Value-Oriented Architecting
Zengyang Li, ... Paris Avgeriou, in Economics-Driven Software Architecture, 2014

9.6 Related work


Value-oriented software architecting is an important area in value-based software engineering (Boehm, 2006), especially for architecture practitioners, since it explicitly considers economic aspects as a driving factor within the whole architecting process. Practitioners and researchers in the software architecture community have already put considerable effort into this area and have investigated value and economic impact in architecture design. Kazman et al. proposed the Cost-Benefit Analysis Method (CBAM) for architecture evaluation (Kazman et al., 2001), which models and calculates the costs and benefits of architecture decisions to assist architecture evaluation from a cost-benefit perspective. Both CBAM and architecture evaluation
with DATDM evaluate architectural strategies from a cost-benefit perspective based
on scenarios. The major differences between CBAM and architecture evaluation
with DATDM are as follows: (1) CBAM evaluates the quality attribute benefit of an
architectural strategy, while our approach evaluates both the nontechnical benefit
(e.g., organizational benefit) and the quality attribute benefit of an architecture
decision; (2) CBAM estimates the cost of implementing an architectural strategy, but
our approach estimates the future cost of maintenance and evolution tasks, plus the
implementation cost of an architecture decision; and (3) our approach considers the
probability of a change scenario in the next release as a parameter when estimating
the cost of an ATD item. Martínez-Fernández et al. presented a reuse-based eco-
nomic model for software reference architectures (Martínez-Fernández et al., 2012).
This economic model provides a cost-benefit analysis for the adoption of reference
architectures to optimize architectural decision making. This model also estimates
the development and maintenance benefits and costs of a specific product based
on reuse of a candidate reference architecture, and the reference architecture with
highest ROI (return on investment) is selected. With this model, the benefits and
costs of a software architecture as a whole are calculated, while in our DATDM
approach, benefits and costs are measured based on architecture decisions and
incurred ATD items.
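The selection step of such an economic model can be sketched very simply: estimate the benefits and costs per candidate reference architecture and pick the one with the highest ROI. The figures, candidate names, and the flat ROI formula below are assumptions for illustration; the published model is considerably more detailed.

```python
# Estimated benefits and costs (e.g., in person-months) of adopting each
# candidate reference architecture; the numbers are made up.
candidates = {
    "RA-Broker":  {"benefit": 120.0, "cost": 45.0},
    "RA-Layered": {"benefit":  95.0, "cost": 30.0},
    "RA-None":    {"benefit":  60.0, "cost": 25.0},
}

def roi(benefit: float, cost: float) -> float:
    """Simple return on investment: net benefit relative to cost."""
    return (benefit - cost) / cost

best = max(candidates, key=lambda name: roi(**candidates[name]))
for name, figures in candidates.items():
    print(f"{name}: ROI = {roi(**figures):.2f}")
print("Selected:", best)
```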

Architectural technical debt management is an emerging research area in software architecture. To date, little research has been conducted on technical debt management at the architecture level, and the scope of architectural technical debt is not clear (Kruchten et al., 2012). Nord et al. employed an architecture-focused and
measurement-based approach to develop a metric to quantify and manage archi-
tectural technical debt (Nord et al., 2012). In their approach, architectural technical
debt is modeled as rework, and the amount of rework caused by a suboptimal
architecture design strategy is considered as the metric for architectural technical
debt measurement. This approach “can be used to optimize the cost of development
over time while continuing to deliver value to the customer” (Nord et al., 2012, p.
91). Measuring ATD incurred by different design paths in this approach provides a
good way to estimate the ATD incurred by a group of architecture decisions.



Archample—Architectural Analysis Approach for Multiple Product Line Engineering
Bedir Tekinerdogan, ... Onur Aktuğ, in Relating System Quality and Software Architecture, 2014

10.4 Archample Method


The activities of the Archample approach are shown in Figure 10.6. As the figure shows, Archample consists of four phases: Preparation, Design Documentation, Evaluation, and Reporting.

Figure 10.6. Archample process.

Archample is performed by a set of key stakeholders:

• Project decision makers: People interested in the result of the evaluation and
who can affect the project’s directions. These decision makers are usually the
project managers.
• MPL architect: A person or team responsible for design of the MPL architecture
and the coordination of the design of the subarchitecture.
• PL architect: A person or team responsible for design of a single PL architecture.
The single PL architect typically informs the MPL architect about the results
and if needed also adapts the architecture to fit the overall architecture.
• Architecture stakeholders: Developers, testers, integrators, maintainers, perfor-
mance engineers, users, builders of systems interacting with the one under
consideration, and others.
• MPL architecture evaluator(s): A person or team responsible for the evaluation
of the MPL architecture as well as the coordination of the evaluation of the PL
architectures.
• PL architecture evaluator(s): A person or team responsible for the evaluation of
the PL architecture as well as the coordination of the evaluation of the MPL
architectures.

In principle, all these stakeholders may apply to both viewpoints in the previous
section. In the following subsections, we elaborate on each phase of the method.

10.4.1 Preparation phase


During the Preparation Phase, first the stakeholders and the evaluation team (step 1)
are selected. The stakeholders are typically a subset of the stakeholders (including
project decision makers) listed above. After the stakeholders are selected, the sched-
ule for evaluation is planned (step 2). In general, the complete evaluation of the MPL will take more time than a single architecture evaluation. Hence, a larger timeframe than usual is adopted when defining the schedule.

10.4.2 Selection of feasible MPL decomposition


In this phase the different MPL architecture design alternatives are provided (step 3),
and the feasible alternative is selected (step 4). The MPL alternatives are described
using the MPL decomposition and uses viewpoints. Representation of the MPL
architecture in step 3 is necessary to ensure that the proper input is provided to
the analysis in step 4. At this stage, no detailed design of the MPL is necessary. This
is because designing an MPL is a time-consuming process. Only after the feasible
decomposition is found in step 4 will the design documentation be completed in
step 5.

For the selection of the feasible PL architecture in step 4 we adopt the Goal-Ques-
tion-Metric (GQM) approach, a measurement model promoted by Basili and others
(Roy and Graham, 2008). The GQM approach is based upon the assumption that for
an organization to measure in a purposeful way, the goals of the projects need to be
specified first. Subsequently, a set of questions must be defined for each goal, and
finally a set of metrics associated with each question is defined to answer each one in
a measurable way. For applying the GQM, usually a six-step process is recommended
where the first three steps are about using business goals to drive the identification
of the right metrics, and the last three steps are about gathering the measurement
data and making effective use of the measurement results to drive decision making
and improvements. The six steps are usually defined as follows (Roy and Graham,
2008; Solingen and Berghout, 1999):

1. Develop a set of corporate, division, and project business goals and associated
measurement goals for productivity and quality.
2. Generate questions (based on models) that define those goals as completely
as possible in a quantifiable way.
3. Specify the measures needed to be collected to answer those questions and
track process and product conformance to the goals.
4. Develop mechanisms for data collection.

5. Collect, validate, and analyze the data in real time to provide feedback to
projects for corrective action.
6. Analyze the data in a post mortem fashion to assess conformance to the goals
and to make recommendations for future improvements.
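The GQM artifacts produced by the first three steps form a simple three-level hierarchy of goals, questions, and metrics. The following is a minimal sketch in Python with an invented goal about comparing MPL decomposition alternatives; the class names and example values are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str            # what is measured
    unit: str = ""

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: List[Question] = field(default_factory=list)

goal = Goal(
    purpose="Select the MPL decomposition alternative with the least duplicated assets",
    questions=[
        Question(
            text="How much implementation is shared across product lines?",
            metrics=[Metric("Reused components per PL", "count"),
                     Metric("Duplicated code across PLs", "KLOC")],
        )
    ],
)

for question in goal.questions:
    print(question.text, "->", [m.name for m in question.metrics])
```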

10.4.3 Evaluation of selected MPL design alternative


Step 4 focuses on selecting a feasible MPL decomposition alternative. An MPL con-
sists of several PLs and thus multiple architectures. Likewise, in step 5 of Archample,
we focus on refined analysis of the selected MPL alternative. In fact, the selected
alternative can be a single PL architecture or different MPL architectures. In case
the alternative is a CPL, we apply a staged-evaluation approach in which the MPL
units (PLs or CPLs) are recursively evaluated. From this perspective, we distinguish between the following two types of evaluations: (a) top-down product evaluations and (b) bottom-up product evaluations.

In the top-down evaluation, the higher-level PLs are evaluated first. This is illustrated in Figure 10.7. Here, the evaluation order is indicated by the numbers in the filled circles. The evaluation starts with the evaluation of the top-level decomposition of the MPL architecture and continues with the subelements of the MPL, which can again be CPLs or single PLs.
Figure 10.7. Top-down MPL evaluation.

In the bottom-up approach, the leaf PLs are evaluated first, then the higher-level architectures. An example bottom-up specialization is shown in Figure 10.8. Obviously, other hybrid specialization approaches that fall between the top-down and bottom-up strategies can be applied. The selection of the particular evaluation strategy (top-down, bottom-up, or hybrid) depends on the particular constraints and requirements of the project. A hybrid approach can be preferred by considering the dependency relations among PLs, which are modeled in the PL dependency view.

Figure 10.8. Bottom-up MPL evaluation.
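The two evaluation orders correspond to a pre-order versus a post-order walk of the MPL decomposition tree. A minimal sketch with a hypothetical decomposition; the node names are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PLNode:
    name: str
    children: List["PLNode"] = field(default_factory=list)  # empty => single PL

def top_down(node: PLNode) -> List[str]:
    """Evaluate a CPL before its sub-product-lines (pre-order)."""
    order = [node.name]
    for child in node.children:
        order += top_down(child)
    return order

def bottom_up(node: PLNode) -> List[str]:
    """Evaluate leaf PLs first, the enclosing CPLs last (post-order)."""
    order = []
    for child in node.children:
        order += bottom_up(child)
    return order + [node.name]

mpl = PLNode("MPL", [PLNode("CPL-Radar", [PLNode("PL-Antenna"), PLNode("PL-Signal")]),
                     PLNode("PL-Console")])

print("top-down: ", top_down(mpl))
print("bottom-up:", bottom_up(mpl))
```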

The evaluation of the architecture can be done using any architecture evaluation
method (including GQM again). Over the last decade several different architecture
analysis approaches have been proposed to analyze candidate architectures with
respect to desired quality attributes (Babar et al., 2004; Dobrica and Niemela, 2002;
Kazman et al., 2005). The architecture evaluation methods can be categorized in different ways. Early evaluation methods evaluate the architecture before its implementation, while late architecture evaluation methods require the implementation to perform the evaluation. In principle, Archample does not restrict the selection of the evaluation method.

10.4.4 Reporting and workshop


In the last phase of Archample, a report of the evaluation results is provided and a
workshop with the stakeholders is organized. The stakeholders are typically a subset
of the list as defined in Section 10.4. A template for the report is given in Table 10.2.

Table 10.2. Outline of the Final Evaluation Report

Chapter 1 Introduction

Chapter 2 Archample Overview

Chapter 3 Context and Business Drivers

Chapter 4 MPL Architecture Alternatives

Chapter 5 GQM Analysis of MPL Alternatives

Chapter 6 Architecture Documentation of Selected Alternative

Chapter 7 Evaluation of Selected Alternative

Chapter 8 Overall Recommendations

Chapter 9 Conclusion

Appendix

Chapters 1–3 of the report provide the background information about the company and its business goals and describe the Archample method. Chapter 4 defines the different MPL architecture alternatives. Chapter 5 analyzes the MPL design alternatives and selects a feasible alternative. Chapter 6 presents the documentation of the selected alternative. In Chapter 7, the evaluation of the selected alternative is described, using the staged-evaluation approach (top-down, bottom-up, or hybrid), together with the evaluation results. Chapter 8 presents the overall recommendations, and Chapter 9 concludes the report. An appendix can consist of several sections and include, for example, the glossary for the project and explanations about standards, viewpoints, or other pertinent factors. After the first complete draft of the report, a workshop is organized to discuss the results. The discussions during the workshop are used to adapt the report and define the final version.



Quality concerns in large-scale and
complex software-intensive systems
Bedir Tekinerdogan, ... Richard Soley, in Software Quality Assurance, 2016

1.4 Addressing System Qualities


SQA can be addressed in several different ways and can cover the entire software development process.

Different software development lifecycles have been introduced, including waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and agile development. The traditional waterfall model is a sequential design process in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Analysis, Design, Implementation, Testing, and Maintenance. The waterfall model implies the transition to a phase only when its preceding phase is reviewed and verified. Typically, the waterfall model places emphasis on proper documentation of artefacts in the life cycle activities. Advocates of the agile software development paradigm argue that for any non-trivial project, finishing a phase of a software product's life cycle perfectly before moving to the next phase is practically impossible. A related argument is that clients may not know exactly what requirements they need, and as such, requirements need to change constantly.

It is generally acknowledged that a well-defined mature process will support the development of quality products with a substantially reduced number of defects. Some popular examples of process improvement models include the Software Engineering Institute's Capability Maturity Model Integration (CMMI), ISO/IEC 12207, and SPICE (Software Process Improvement and Capability Determination).

Software design patterns are generic solutions to recurring problems. Software quality can be supported by the reuse of design patterns that have been proven in the past. Related to design patterns is the concept of anti-patterns, which are common responses to recurring problems that are usually ineffective and counterproductive. A code smell is any symptom in the source code of a program that possibly indicates a deeper problem. Usually, code smells relate to certain structures in the design that indicate violations of fundamental design principles and thereby negatively impact design quality.

An important aspect of SQA is software architecture. Software architecture is a coordination tool among the different phases of software development. It bridges requirements to implementation and allows reasoning about satisfaction of systems' critical requirements (Albert and Tullis, 2013). Quality attributes (Babar et al., 2004) are one kind of non-functional requirement that are critical to systems. The Software Engineering Institute (SEI) defines a quality attribute as "a property of a work product or goods by which its quality will be judged by some stakeholder or stakeholders" (Koschke and Simon, 2003). They are important properties that a system must exhibit, such as scalability, modifiability, or availability (Stoermer et al., 2006).

Architecture designs can be evaluated to ensure the satisfaction of quality attributes. Tvedt Tesoriero et al. (2004) and Stoermer et al. (2006) divide architectural evaluation work into two main areas: pre-implementation architecture evaluation, and implementation-oriented architecture conformance. In their classification, pre-implementation architectural approaches are used by architects during initial design and provisioning stages, before the actual implementation starts. In contrast, implementation-oriented architecture conformance approaches assess whether the implemented architecture of the system matches the intended architecture of the system. Architectural conformance assesses whether the implemented architecture is consistent with the proposed architecture's specification and the goals of the proposed architecture.

To evaluate or design a software architecture at the pre-implementation stage, tactics or architectural styles are used in the architecting or evaluation process. Tactics are design decisions that influence the control of a quality attribute response. Architectural styles or patterns describe the structure of, and interaction between, collections of components, affecting a set of quality attributes positively but others negatively. Software architecture methods encountered in the literature either design systems based on their quality attributes, such as Attribute-Driven Design (ADD), or evaluate the satisfaction of quality attributes in a software architectural design, such as the Architecture Tradeoff Analysis Method (ATAM). For example, ADD and ATAM follow a recursive process based on the quality attributes that a system needs to fulfill. At each stage, tactics and architectural patterns (or styles) are chosen to satisfy some qualities.

Empirical studies have demonstrated that one of the most difficult tasks in software
architecture design and evaluation is finding out what architectural patterns/styles
satisfy quality attributes because the language used in patterns does not directly
indicate the quality attributes. This problem has also been indicated in the literature
(Gross and Yu, 2001; Huang et al., 2006).

Also, guidelines for choosing or finding tactics that satisfy quality attributes have been reported to be an issue, as has defining, evaluating, and assessing which architectural patterns are suitable to implement the tactics and quality attributes (Albert and Tullis, 2013). Towards solving this issue, Bachmann et al. (2003) and Babar et al. (2004) describe steps for deriving architectural tactics. These steps include identifying candidate reasoning frameworks, which include the mechanisms needed to use sound analytic theories to analyze the behavior of a system with respect to some quality attributes (Bachmann et al., 2005). However, this requires architects to be familiar with formal specifications that are specific to quality models. Research tools are being developed to help architects integrate their reasoning frameworks (Christensen and Hansen, 2010), but reasoning frameworks still have to be implemented, and the description of tactics and how they are applied has to be provided by the architect. It has also been reported by Koschke and Simon (2003) that some quality attributes do not have a reasoning framework.

Harrison and Avgeriou have analyzed the impact of architectural patterns on quality
attributes, and how patterns interact with tactics (Harrison and Avgeriou, 2007;
Harrison and Avgeriou). The documentation of this kind of analysis can aid in
creating repositories for tactics and patterns based on quality attributes.

Architecture prototyping is an approach for experimenting with whether architecture tactics provide the desired quality attributes, and for observing conflicting qualities (Bardram et al., 2005). This technique can be complementary to traditional architectural design and evaluation methods such as ADD or ATAM (Bardram et al., 2005). However, it has been noted to be quite expensive, and "substantial" effort must be invested to adopt architecture prototyping (Bardram et al., 2005).

Several architectural conformance approaches exist in the literature (Murphy et al., 2001; Ali et al.; Koschke and Simon, 2003). These check whether the software conforms to the architectural specifications (or models). These approaches can be classified by their use of static analysis (the source code of the system) (Murphy et al., 2001; Ali et al.), dynamic analysis (the running system) (Eixelsberger et al., 1998), or both. Architectural conformance approaches have been explicit in being able to check quality attributes (Stoermer et al., 2006; Eixelsberger et al., 1998), and specifically run-time properties such as performance or security (Huang et al., 2006). Also, several have provided feedback on quality metrics (Koschke, 2000).


Validating the Architecture


Murat Erder, Pierre Pureur, in Continuous Architecture, 2016

When Do We Need to Validate?


How can we “continuously” evaluate architectures in Continuous Architecture, and
what is the best approach for each stage of the delivery of a project? Let’s return to
the case study introduced in Chapter 2 to provide an answer to this question with
an example.

As you will remember from Chapter 2, an IT group in a large U.S. financial services
corporation has decided to leverage the Continuous Architecture process to deliver
its new “WebShop” system, a web-based system that allows prospective customers to compare its offerings with those of its competitors. As we saw in Chapter 3, the
team has started capturing both their functional and quality attribute requirements.
However, their decisions to leverage open source products such as the “MEAN”
stack (see Chapter 3 for a discussion of the MEAN stack), to use JavaScript as their
development language for both the User Interface and the server components, and
to leverage a cloud infrastructure for development and for most of their testing have
started worrying some of the IT leadership. As a result, the enterprise chief architect
is asked to conduct an architecture evaluation of the system to ensure that the project
is not in jeopardy.

Fortunately, the enterprise chief architect is familiar with Continuous Architecture and with the evaluation techniques we presented earlier in this chapter. She sets up a small review team with experienced architects, a trained facilitator, and a scribe. The review team is created as a peer review team by involving active architects from other project areas.

Initial Evaluation
The review team meets with the WebShop project team, and given that the project is
still in the early stages, decides to focus on reviewing architecture and design deci-
sions made so far. The WebShop team members (especially their solution architect)
are skeptical of the value of this exercise and initially participate reluctantly. However,
they agree to provide the review team with some documentation, including their
decision log and an early draft of their Quality Attribute utility tree. They also provide
a conceptual view of their proposed architecture and agree to participate in the
review session together with one of their business stakeholders.

Based on the business drivers, the review team decides to focus on a small subset of
Quality Attributes, including cost effectiveness, performance, and security. Using the
project’s architecture and design decision log, they group the decisions by Quality
Attribute and update the preliminary utility tree; see Figure 6.1 for an example. They
prioritize the Quality Attributes using the business drivers, which then enables them
to further create a list of prioritized decisions.
Figure 6.1. “WebShop” utility tree with decisions.
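Grouping the decision log by quality attribute and ordering the groups by the business-driven priority of each attribute can be sketched in a few lines. The priorities and most of the decision entries below are invented for illustration; they are not taken from the case study's actual log.

```python
from collections import defaultdict

# Business-driven priority of the selected quality attributes (higher = more important).
qa_priority = {"Cost effectiveness": 3, "Performance": 2, "Security": 1}

# Simplified architecture and design decision log: (decision, quality attribute).
decision_log = [
    ("Use the MEAN stack",                       "Cost effectiveness"),
    ("Run development and testing in the cloud", "Cost effectiveness"),
    ("Cache product comparisons in the database","Performance"),
    ("Reuse the enterprise authentication service", "Security"),
]

by_attribute = defaultdict(list)
for decision, attribute in decision_log:
    by_attribute[attribute].append(decision)

# Prioritized list of decisions, walking the quality attributes in priority order.
prioritized = [d for attribute in sorted(qa_priority, key=qa_priority.get, reverse=True)
               for d in by_attribute[attribute]]
print(prioritized)
```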

Using these prioritized decisions, the review team runs a short 3-hour architecture
evaluation session and is able to confirm that the decisions are appropriate and
do not conflict with the existing IT standards and practices. As new insights into
the system emerge during the review, the solution architect and the “WebShop”
project team get more comfortable with the evaluation process and realize that this
is not a “finger-pointing” exercise. On the contrary, they understand that this review
improves the architecture and the system that they are in the process of delivering.

For example, the review identified a potential issue with the lack of configurability
of the solution, which may not meet the system’s quality attribute requirements.
Because the review occurred early in the Software Development Life Cycle, the
project team was able to react quickly and find an appropriate solution to the
problem, by externalizing some key configuration parameters in a database table.
By the end of the review, the “WebShop” team members (including the solution
architect) are enthusiastically participating in the process.

Continuous Evaluation
A few weeks later, the team believes that they now have a solid architecture, and
they start delivering well-tested code. Their architecture and design decision log has
been growing rapidly since the initial review, and they seem to have reached a point
where they are not frequently adding decisions to the log. Based on their positive
experience with the initial review session, the team decides that it is time for another
review and meets with the enterprise chief architect and the architecture peer review
team.

Given that the architecture is now fairly stable and that Quality Attributes, associated
refinements, and associated scenarios have now been captured in a well-document-
ed utility tree, the enterprise chief architect suggests running a full-day evaluation
session based on a few scenarios designed to stress the architecture. The project
team agrees, and they proceed with the evaluation process. As part of this evaluation,
the validation team creates several tests based on the scenarios and tests the code
produced so far against those scenarios.

Two key risks are discovered during the evaluation session: performance and secu-
rity. As part of the analysis and tests of the scenarios associated with the Quality
Attributes, the evaluation team discovers that the “WebShop” system may not be able
to provide the expected response time with the anticipated load (Figure 6.2 shows
the utility tree with the performance and latency scenario). Further discussion with
the project team reveals that the architecture “sensitivity point” associated with this
risk is the set of back-end services that the “mobile shopping” system is planning to
use (Figure 6.3). In addition, the architecture peer review team has an extra concern:
If the business stakeholders decide to add a mobile interface to the system, the
usage frequency may significantly increase and exceed the scenario stimulus of
25 concurrent users accessing the system simultaneously. This in turn would cause
additional stress on the back-end services, possibly causing performance issues in
other systems that use the same services.

Figure 6.2. “WebShop” utility tree with scenarios.


Figure 6.3. “WebShop” architecture with sensitivity points.

Similarly, the validation team analyzes and tests the scenarios associated with secu-
rity and discovers that the security services may not be able to handle the access
control requirements associated with the system. Those services were designed to
control access from existing customers who have already established a security pro-
file. However, the “mobile shopping” system is expected to be used by prospective
customers without a preestablished security profile (Figure 6.2). Access control is
noted as a risk, with security services as the associated sensitivity point (see Figure
6.3).

Based on the feedback from the architecture validation session, the team addresses
the two risks flagged by the review. They meet with the team responsible for
maintaining the back-end services and jointly design some improvements that
greatly increase the performance and scalability of those services. Likewise, they
meet with the team responsible for maintaining the security services and negotiate
the inclusion of their requirements in a future release to be delivered in time to
support their delivery schedule.

Periodic Architecture Checkups


Based on the recommendation of the enterprise chief architect, the team decides to
schedule periodic architecture checkups, triggered by significant additions to the
architecture decision and design log.

The team monitors the decision log on at least a weekly basis and organizes a review
when the team believes that one or several decisions have significantly impacted the
architecture or the design of the WebShop system. These reviews can be suggested
by any member of the team who thinks that the architecture or the design has
significantly changed at any time. As a result, the team meets and decides whether
an architecture checkup is warranted.

These architecture checkups follow a decision-based approach and include tests run
against the code produced so far. By now, every team member is familiar with the
approach, and the enterprise chief architect and her team no longer need to facilitate
or even attend the review sessions. If possible, business stakeholders are invited to
participate in the architecture checkups to ensure that the system fulfills its business
drivers and objectives and to make the software delivery process as transparent as
possible. Each session is followed by a brief readout that summarizes the findings
of the session, any new risks or issues discovered in the session, and the plan of
action to address those risks and issues if applicable. These results are published in
the enterprise social media platform to provide full visibility.

Code Inspections
In addition to the decision-based architecture checkups, the team also conducts
periodic code inspections. Most of those reviews are automated using static code
analysis tools as part of the continuous deployment process (see Chapter 5 for a
discussion of the continuous deployment process), but there may be times when
a manual evaluation is required to supplement the static code analysis tools, for
example, when a component is unusually complex or exhibits some issues when
performance testing the system. These reviews are simple checklist-based valida-
tions that essentially ensure that the architecture decisions have been properly
implemented in the code and that the code is well written and easy to understand.

Code Inspections
Code inspections can be achieved by either manual code reviews or by using static
analysis tools.

Manual Code Reviews

A code review process is much simpler than an architecture review. A team of experts
gets together with the author of the code and manually inspects that code to discover
defects. It is, of course, much more efficient to discover defects before a system is
deployed than after deployment.

Depending on the programming language being used, a typical code inspection review would look at 200 to 400 lines of code, so a decision needs to be made before the meeting on which component needs to be manually inspected.

Static Code Analysis


Static program analysis is the analysis of computer software that is performed without actually executing programs (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code, and in the other cases on some form of the object code. The term is usually applied to the analysis performed by an automated tool, with human analysis being called program understanding, program comprehension, or code review.5,6

A number of static code analysis tools are available to save time on most code
reviews. Their advantage is that they are able to inspect 100% of the code, but they
may not be able to find every defect that an expert would, assuming the expert had
time to inspect 100% of the code. Static code analysis tools can be either open source
or proprietary. Please see Wikipedia7 for a list of tools available for each commonly
used programming language.
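As a flavour of what such tools automate, the following sketch uses Python's standard ast module to flag functions that exceed a size threshold in a single source file. It is a toy example; real static code analysis tools perform far richer checks (data-flow analysis, security rules, coding standards, and so on).

```python
import ast
import sys

def long_functions(path: str, max_lines: int = 50):
    """Report functions in a Python file that span more than max_lines lines."""
    with open(path, encoding="utf-8") as handle:
        tree = ast.parse(handle.read(), filename=path)
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                findings.append((node.name, node.lineno, length))
    return findings

if __name__ == "__main__":
    # Usage: python check_length.py some_module.py
    for name, line, length in long_functions(sys.argv[1]):
        print(f"{sys.argv[1]}:{line}: function '{name}' is {length} lines long")
```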

