IoT Architecture
Risks: Using the same communication protocol for inter-system and intra-system communication can make failure-safety procedures difficult. Using the Broker pattern can result in the selection of services that are not fully compliant with QoS constraints. Data storage on centralised persistence units is a risk for availability. The Publisher/Subscriber pattern introduces communication overhead.

Sensitivity and Trade-off Points: Maintaining a Facade for continuously evolving services. Using the Singleton pattern instead of the Message Broker pattern for multi-tenant session management. The negative impact of the Layered pattern on performance. The trade-off between performance and modifiability when choosing either the Publisher/Subscriber or the Layered pattern. Misuse of the Pipes and Filters pattern where the Broker pattern could be used. A trade-off is needed between security and performance.

Commonly Missing Quality Attributes: Safety on failure; system availability when the Internet connection is not available.

IoT RA Shortcomings: Architecture and design patterns for implementing the different elements of the IoT RA are not specified in the IoT RA.
SAAM is the earliest method proposed to analyze architecture using scenarios. The
analysis of multiple candidate architectures requires applying SAAM to each of the
proposed architectures and then comparing the results. This can be very costly in
terms of time and effort if the number of architectures to be compared is large.
SAAM has been further extended into a number of methods, such as SAAM for
complex scenarios [58], extending SAAM by integration in the domain-centric and
reuse-based development process [59], and SAAM for evolution and reusability
[60]. ATAM grew out of SAAM. The key advantages of ATAM are explicit ways of
understanding how an architecture supports multiple competing quality attributes
and of performing trade-off analysis. ATAM uses both qualitative techniques, such
as scenarios, and quantitative techniques for measuring the qualities of the archi-
tecture.
Bengtsson and Bosch proposed several methods, such as SBAR [61], ALPSM [62],
and ALMA [56]. All these methods use one or a combination of various analysis
techniques (i.e., scenarios, simulation, mathematical modeling, or experience-based
reasoning [39]). All of these methods use scenarios to characterize quality attributes.
The desired scenarios are mapped onto architectural components to assess the
architecture's capability to support those scenarios or identify the changes required
to handle those scenarios. PASA is an architecture analysis method that combines
scenarios and quantitative techniques [57]. PASA uses scenarios to determine a sys-
tem’s performance objectives and applies principles and techniques from software
performance engineering (SPE) to determine whether an architecture is capable
of supporting the performance scenarios. PASA includes performance-sensitive
architectural styles and anti-patterns as analysis tools and formalizes the architecture
analysis activity of the performance engineering process reported in [63].
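As an illustration of the quantitative side of SPE (this is a generic sketch, not an artifact of PASA itself), the snippet below estimates response time with a basic M/M/1 queueing formula; the arrival rate and service time are assumed numbers for illustration, not values from the text.

```typescript
// A minimal sketch of an SPE-style analytical check: an M/M/1 estimate of
// response time under an assumed workload.
const arrivalRate = 20;   // requests per second (assumed workload intensity)
const serviceTime = 0.03; // seconds of processing per request (assumed)

const utilization = arrivalRate * serviceTime;        // rho = lambda * S = 0.6
const responseTime = serviceTime / (1 - utilization); // R = S / (1 - rho)

console.log(`utilization: ${(utilization * 100).toFixed(0)}%`);            // 60%
console.log(`mean response time: ${(responseTime * 1000).toFixed(0)} ms`); // 75 ms
```

A check of this kind can confirm whether a performance scenario's response-time objective is feasible before any load testing takes place.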
In many evaluation methods, business drivers that affect the architectural design are
explicitly mentioned, and important quality attributes are specified. Given that these
artifacts are also documented during the evaluation, the evaluation may improve
the architectural documentation (AD) as well. In addition, as evaluation needs AD,
some additional documentation may be created for the evaluation, contributing to
the overall documentation of the system.
Almost all evaluation methods identify and utilize architecture decisions, but they do not validate the reasoning behind those decisions. Only CBAM also operates partially in the problem space; the other methods merely explore the solution space and try to find out which consequences of the decisions are not addressed. In DCAR, architecture decisions are a first-class entity, and the whole evaluation is carried out purely by considering the decision drivers of the decisions that have been made.
Many of the existing evaluation methods focus on a certain quality attribute (such as maintainability in ALMA (Bengtsson, 2004), interoperability and extensibility in FAAM (Dolan, 2002), or some other single aspect of the architecture, such as economics in CBAM (Kazman et al., 2001)). However, architecture decisions are affected by a variety of drivers. When making a decision, the architect needs to consider not only the desired quality attributes and costs, but also, for example, the available experience, expertise, organization structure, and resources. These drivers may change during system development, and while a decision might still be valid, new, more beneficial options might have become available; these should be taken into consideration. We intend to support this kind of broad analysis of architecture decisions with DCAR. Further, we aim at a method that allows the evaluation of a software architecture iteratively, decision by decision, so that it can be integrated with agile development methods and frameworks such as Scrum (Schwaber and Beedle, 2001).
Architectural evaluations not only reveal risks in the system design but also bring up a great deal of central information about the software architecture. The authors have carried out approximately 20 full-scale scenario-based evaluations in industry, and in most cases the industrial participants have named the need to uncover architectural knowledge as a major motivation for the evaluation. Typical feedback has been that a significant benefit of the evaluation was the communication about the software architecture between different stakeholders, which otherwise would not have taken place. Thus, software architecture evaluation has an important facet related to AKM that is not often recognized.
From the viewpoint of agile development, the main drawback of ATAM (and scenario-based architecture evaluation methods in general) is its heavyweight nature; scenario-based methods are considered rather complicated and expensive to use [25–27]. A medium-sized ATAM evaluation can take up to 40 person-days covering
the work of different stakeholders ([28], p. 41). In our experience, even getting all
the required stakeholders in the same room for 2 or 3 days is next to impossible in
an agile context. Furthermore, a lot of time in the evaluation is spent on refining
quality requirements into scenarios and on discussing the requirements and even
the form of the scenarios. Most of the scenarios are actually not used (as they
don’t get sufficient votes in the prioritization), implying notable waste in the
lean sense [16]. On the other hand, although communication about requirements
is beneficial, it is often time-consuming as it comes back to the question of the
system’s purpose. In an agile context, the question about building the right product
is a central concern that is taken care of by fast development cycles and incremental
development, allowing the customers to participate actively in the process. Thus,
a more lightweight evaluation method that concentrates on the soundness of the
current architectural decisions, rather than on the requirement analysis, would
better serve an agile project setup.
Another problem with integrating ATAM with agile processes is that ATAM is de-
signed for one-off evaluation rather than for the continuous evaluation that would
be required in Agile. ATAM is based on a holistic view of the system, starting
with top-level quality requirements that are refined into concrete scenarios, and
architectural approaches are analyzed only against these scenarios. This works well in
a one-off evaluation, but poses problems in an agile context where the architecture
is developed incrementally. The unit of architectural development is an architectural
decision, and an agile evaluation method should be incremental with respect to this
unit, in the sense that the evaluation can be carried out by considering a subset of
the decisions at a time.
A recent survey shows that architects seldom revisit the decisions made [25]. This
might be because the implementation is ready and changing the architectural
decisions would be costly. Therefore, it would be advisable to evaluate the decisions right after they are made. This approach can be extended to agile practices.
If the architecture and architectural decisions are made in sprints, it would be
advisable to revisit and review these decisions immediately after the sprint.
If new information emerges in the sprints, the old decisions can be revisited and
reevaluated. DCAR makes this possible by documenting the relationships between
decisions. If a decision needs to be changed in a sprint, it is easy to see which
earlier decisions might be affected as well and (re-)evaluate them. Additionally, as
the decision drivers (i.e., forces) are documented in each decision, it is rather easy
to see if the emergent information is going to affect the decision and if the decision
should be reevaluated.
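A minimal sketch of how such decision records and their relationships might be captured is shown below; the field names and the helper function are illustrative assumptions, not DCAR's prescribed template.

```typescript
// A sketch of DCAR-style decision records with documented drivers (forces)
// and relationships to other decisions.
interface DecisionRecord {
  id: string;
  title: string;
  drivers: string[];   // the forces that motivated the decision
  relatedTo: string[]; // ids of earlier decisions this decision builds on
  status: "approved" | "reconsider" | "rejected";
}

// If a decision changes in a sprint, collect the decisions linked to it
// (in either direction) as candidates for re-evaluation.
function candidatesForReevaluation(
  changedId: string,
  log: DecisionRecord[]
): DecisionRecord[] {
  const changed = log.find(d => d.id === changedId);
  return log.filter(
    d =>
      d.id !== changedId &&
      (d.relatedTo.includes(changedId) ||
        (changed !== undefined && changed.relatedTo.includes(d.id)))
  );
}
```

Because the drivers are stored alongside each decision, checking whether newly emerged information invalidates a decision amounts to scanning its `drivers` list rather than re-deriving the rationale from scratch.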
From the AKM viewpoint, a particularly beneficial feature of DCAR is that the decisions are documented as part of the DCAR process, during the evaluation. This decision documentation can be reused as is in the software architecture documentation. If a tool is used in DCAR to keep track of the evaluation and to record the decision documentation, this information can be stored immediately in the AIR without extra effort.
• Project decision makers: People interested in the result of the evaluation and
who can affect the project’s directions. These decision makers are usually the
project managers.
• MPL architect: A person or team responsible for the design of the MPL architecture and for coordinating the design of the subarchitectures.
• PL architect: A person or team responsible for the design of a single PL architecture. The PL architect typically informs the MPL architect about the results and, if needed, also adapts the architecture to fit the overall architecture.
• Architecture stakeholders: Developers, testers, integrators, maintainers, perfor-
mance engineers, users, builders of systems interacting with the one under
consideration, and others.
• MPL architecture evaluator(s): A person or team responsible for the evaluation
of the MPL architecture as well as the coordination of the evaluation of the PL
architectures.
• PL architecture evaluator(s): A person or team responsible for the evaluation of a single PL architecture, in coordination with the evaluation of the MPL architecture.
In principle, all these stakeholder roles may apply to both viewpoints described in the previous section. In the following subsections, we elaborate on each phase of the method.
For the selection of the feasible PL architecture in step 4 we adopt the Goal-Ques-
tion-Metric (GQM) approach, a measurement model promoted by Basili and others
(Roy and Graham, 2008). The GQM approach is based upon the assumption that for
an organization to measure in a purposeful way, the goals of the projects need to be
specified first. Subsequently, a set of questions must be defined for each goal, and
finally a set of metrics associated with each question is defined to answer it in a measurable way. To apply GQM, a six-step process is usually recommended, in which the first three steps use business goals to drive the identification of the right metrics, and the last three steps cover gathering the measurement data and making effective use of the measurement results to drive decision making and improvements. The six steps are usually defined as follows (Roy and Graham, 2008; Solingen and Berghout, 1999):
1. Develop a set of corporate, division, and project business goals and associated
measurement goals for productivity and quality.
2. Generate questions (based on models) that define those goals as completely
as possible in a quantifiable way.
3. Specify the measures needed to be collected to answer those questions and
track process and product conformance to the goals.
4. Develop mechanisms for data collection.
5. Collect, validate, and analyze the data in real time to provide feedback to
projects for corrective action.
6. Analyze the data in a post mortem fashion to assess conformance to the goals
and to make recommendations for future improvements.
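As a concrete (and purely illustrative) sketch of what steps 1–3 can produce, the goal, question, and metric definitions below are hypothetical examples for the PL-selection goal, expressed as a simple TypeScript data structure; none of these names or metrics come from the text.

```typescript
// A minimal GQM model as a data structure: goals refine into questions,
// questions refine into measurable metrics.
interface Metric {
  name: string;
  unit: string;
}

interface Question {
  text: string;
  metrics: Metric[];
}

interface Goal {
  description: string;
  questions: Question[];
}

// Hypothetical goal for selecting a feasible PL architecture alternative.
const feasibilityGoal: Goal = {
  description:
    "Select the PL architecture alternative with the best modifiability/cost balance",
  questions: [
    {
      text: "How costly is it to add a new product to the product line?",
      metrics: [
        { name: "estimatedChangeEffort", unit: "person-days" },
        { name: "numberOfModulesAffected", unit: "count" },
      ],
    },
  ],
};
```

Steps 4–6 then operate on data collected against these metrics, closing the loop from measurement back to decision making.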
In the top-down evaluation, the higher-level PLs are evaluated first, as illustrated in Figure 10.7, where the evaluation order is indicated by the numbers in the filled circles. The evaluation starts with the top-level decomposition of the MPL architecture and continues with the subelements of the MPL, which can again be CPLs or single PLs.
Figure 10.7. Top-down MPL evaluation.
In the bottom-up approach, the leaf PLs are evaluated first, followed by the higher-level architectures. An example bottom-up specialization is shown in Figure 10.8. Obviously, other hybrid approaches that fall between the top-down and bottom-up strategies can be applied. The selection of the particular evaluation strategy (top-down, bottom-up, or hybrid) depends on the particular constraints and requirements of the project. A hybrid approach can be preferred by considering the dependency relations among PLs, which are modeled in the PL dependency view.
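The difference between the two orders can be sketched as simple tree traversals over the product-line hierarchy; the structure and names below are illustrative assumptions, not part of Archample itself.

```typescript
// A product-line hierarchy: an MPL/CPL node with children, or a leaf
// (single) PL with no children.
interface ProductLine {
  name: string;
  children: ProductLine[]; // empty for a leaf (single) PL
}

// Top-down: evaluate the MPL/CPL first, then its subelements.
function topDown(pl: ProductLine, evaluate: (p: ProductLine) => void): void {
  evaluate(pl);
  pl.children.forEach(child => topDown(child, evaluate));
}

// Bottom-up: evaluate the leaf PLs first, then the higher-level architectures.
function bottomUp(pl: ProductLine, evaluate: (p: ProductLine) => void): void {
  pl.children.forEach(child => bottomUp(child, evaluate));
  evaluate(pl);
}
```

A hybrid strategy would interleave these two traversals, for example by evaluating a heavily depended-upon leaf PL early even while the rest of the hierarchy is evaluated top-down.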
The evaluation of the architecture can be done using any architecture evaluation
method (including GQM again). Over the last decade several different architecture
analysis approaches have been proposed to analyze candidate architectures with
respect to desired quality attributes (Babar et al., 2004; Dobrica and Niemela, 2002;
Kazman et al., 2005). Architecture evaluation methods can be categorized in different ways: early evaluation methods evaluate the architecture before its implementation, while late evaluation methods require the implementation to perform the evaluation. In principle, Archample does not restrict the selection to any particular method.
Chapters 1–3 of the report provide the background information about the company and its business goals and describe the Archample method. Chapter 4 defines the different MPL architecture alternatives.
Chapter 5 analyzes the MPL design alternatives and selects a feasible alternative.
Chapter 6 presents the documentation of the selected alternative. In Chapter 7, the evaluation of the alternative is described using a staged evaluation approach (top-down, bottom-up, or hybrid), together with the evaluation results. Chapter 8 presents the
overall recommendations, and Chapter 9 concludes the report. An appendix can
consist of several sections and include, for example, the glossary for the project,
explanation about standards, viewpoints, or other pertinent factors. After the first
complete draft of the report, a workshop is organized to discuss the results. The
discussions during the workshop are used to adapt the report and define the final
version.
Empirical studies have demonstrated that one of the most difficult tasks in software
architecture design and evaluation is finding out what architectural patterns/styles
satisfy quality attributes because the language used in patterns does not directly
indicate the quality attributes. This problem has also been noted in the literature (Gross and Yu, 2001; Huang et al., 2006).
Guidelines for choosing or finding tactics that satisfy quality attributes have also been reported to be an issue, as has defining, evaluating, and assessing which architectural patterns are suitable for implementing the tactics and quality attributes (Albert and Tullis, 2013). Toward solving this issue, Bachmann et al. (2003) and Babar et al. (2004) describe steps for deriving architectural tactics. These steps include identifying candidate reasoning frameworks, which include the mechanisms needed to use sound analytic theories to analyze the behavior of a system with respect to some quality attributes (Bachmann et al., 2005). However, this requires architects to be familiar with formal specifications that are specific to quality models. Research tools are being developed to help architects integrate their reasoning frameworks (Christensen and Hansen, 2010), but reasoning frameworks still have to be implemented, and the description of tactics and how they are applied has to be provided by the architect. It has also been reported by Koschke and Simon (2003) that some quality attributes do not have a reasoning framework.
Harrison and Avgeriou have analyzed the impact of architectural patterns on quality
attributes, and how patterns interact with tactics (Harrison and Avgeriou, 2007;
Harrison and Avgeriou). The documentation of this kind of analysis can aid in
creating repositories for tactics and patterns based on quality attributes.
As you will remember from Chapter 2, an IT group in a large U.S. financial services
corporation has decided to leverage the Continuous Architecture process to deliver
its new “WebShop” system, a web-based system to allow prospective customers to
compare their offerings with the competition's offerings. As we saw in Chapter 3, the
team has started capturing both their functional and quality attribute requirements.
However, their decisions to leverage open source products such as the “MEAN”
stack (see Chapter 3 for a discussion of the MEAN stack), to use JavaScript as their
development language for both the User Interface and the server components, and
to leverage a cloud infrastructure for development and for most of their testing have
started worrying some of the IT leadership. As a result, the enterprise chief architect
is asked to conduct an architecture evaluation of the system to ensure that the project
is not in jeopardy.
Initial Evaluation
The review team meets with the WebShop project team, and given that the project is
still in the early stages, decides to focus on reviewing architecture and design deci-
sions made so far. The WebShop team members (especially their solution architect)
are skeptical of the value of this exercise and initially participate reluctantly. However,
they agree to provide the review team with some documentation, including their
decision log and an early draft of their Quality Attribute utility tree. They also provide
a conceptual view of their proposed architecture and agree to participate in the
review session together with one of their business stakeholders.
Based on the business drivers, the review team decides to focus on a small subset of
Quality Attributes, including cost effectiveness, performance, and security. Using the
project’s architecture and design decision log, they group the decisions by Quality
Attribute and update the preliminary utility tree; see Figure 6.1 for an example. They
prioritize the Quality Attributes using the business drivers, which then enables them
to further create a list of prioritized decisions.
Figure 6.1. “WebShop” utility tree with decisions.
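As a rough illustration of this grouping-and-prioritization step, the sketch below models a decision log in TypeScript; the decision entries and driver weights are hypothetical, not taken from the WebShop project.

```typescript
// Group logged decisions by quality attribute and order them by
// business-driver priority.
type QualityAttribute = "security" | "performance" | "costEffectiveness";

interface LoggedDecision {
  description: string;
  qualityAttribute: QualityAttribute;
}

// Assumed priorities derived from the business drivers (higher = more important).
const driverWeight: Record<QualityAttribute, number> = {
  security: 3,
  performance: 2,
  costEffectiveness: 1,
};

function prioritizeDecisions(log: LoggedDecision[]): LoggedDecision[] {
  return [...log].sort(
    (a, b) => driverWeight[b.qualityAttribute] - driverWeight[a.qualityAttribute]
  );
}

const prioritized = prioritizeDecisions([
  { description: "Use the MEAN stack", qualityAttribute: "costEffectiveness" },
  { description: "Externalize access control", qualityAttribute: "security" },
]);
console.log(prioritized.map(d => d.description));
```

The prioritized list then determines which decisions receive attention first in the time-boxed evaluation session.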
Using these prioritized decisions, the review team runs a short 3-hour architecture
evaluation session and is able to confirm that the decisions are appropriate and
do not conflict with the existing IT standards and practices. As new insights into
the system emerge during the review, the solution architect and the “WebShop”
project team get more comfortable with the evaluation process and realize that this
is not a “finger-pointing” exercise. On the contrary, they understand that this review
improves the architecture and the system that they are in the process of delivering.
For example, the review identified a potential issue with the lack of configurability
of the solution, which may not meet the system’s quality attribute requirements.
Because the review occurred early in the Software Development Life Cycle, the
project team was able to react quickly and find an appropriate solution to the
problem, by externalizing some key configuration parameters in a database table.
By the end of the review, the “WebShop” team members (including the solution
architect) are enthusiastically participating in the process.
Continuous Evaluation
A few weeks later, the team believes that they now have a solid architecture, and
they start delivering well-tested code. Their architecture and design decision log has
been growing rapidly since the initial review, and they seem to have reached a point
where they are not frequently adding decisions to the log. Based on their positive
experience with the initial review session, the team decides that it is time for another
review and meets with the enterprise chief architect and the architecture peer review
team.
Given that the architecture is now fairly stable and that Quality Attributes, associated
refinements, and associated scenarios have now been captured in a well-document-
ed utility tree, the enterprise chief architect suggests running a full-day evaluation
session based on a few scenarios designed to stress the architecture. The project
team agrees, and they proceed with the evaluation process. As part of this evaluation,
the validation team creates several tests based on the scenarios and tests the code
produced so far against those scenarios.
Two key risks are discovered during the evaluation session: performance and secu-
rity. As part of the analysis and tests of the scenarios associated with the Quality
Attributes, the evaluation team discovers that the “WebShop” system may not be able
to provide the expected response time with the anticipated load (Figure 6.2 shows
the utility tree with the performance and latency scenario). Further discussion with
the project team reveals that the architecture “sensitivity point” associated with this
risk is the set of back-end services that the “WebShop” system is planning to
use (Figure 6.3). In addition, the architecture peer review team has an extra concern:
If the business stakeholders decide to add a mobile interface to the system, the
usage frequency may significantly increase and exceed the scenario stimulus of
25 concurrent users accessing the system simultaneously. This in turn would cause
additional stress on the back-end services, possibly causing performance issues in
other systems that use the same services.
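To illustrate how a scenario like this can be turned into an executable check, here is a minimal sketch; only the 25-concurrent-user stimulus comes from the scenario in the text, while the endpoint URL and the response-time threshold are assumptions for illustration.

```typescript
// A minimal scenario-based performance check. Requires Node 18+ for the
// built-in fetch API.
const ENDPOINT = "https://webshop.example.com/compare"; // hypothetical URL
const CONCURRENT_USERS = 25;   // scenario stimulus from the utility tree
const MAX_LATENCY_MS = 2000;   // assumed response-time objective

async function timedRequest(url: string): Promise<number> {
  const start = Date.now();
  await fetch(url);
  return Date.now() - start;
}

async function runScenario(): Promise<void> {
  // Fire all requests at once to simulate concurrent users.
  const latencies = await Promise.all(
    Array.from({ length: CONCURRENT_USERS }, () => timedRequest(ENDPOINT))
  );
  const worst = Math.max(...latencies);
  console.log(
    worst <= MAX_LATENCY_MS
      ? `PASS: worst latency ${worst} ms`
      : `FAIL: worst latency ${worst} ms exceeds ${MAX_LATENCY_MS} ms`
  );
}

runScenario().catch(console.error);
```

A test of this shape can be rerun at every architecture checkup, so the performance risk stays visible as the decision log grows.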
Similarly, the validation team analyzes and tests the scenarios associated with secu-
rity and discovers that the security services may not be able to handle the access
control requirements associated with the system. Those services were designed to
control access from existing customers who have already established a security pro-
file. However, the “WebShop” system is expected to be used by prospective
customers without a preestablished security profile (Figure 6.2). Access control is
noted as a risk, with security services as the associated sensitivity point (see Figure
6.3).
Based on the feedback from the architecture validation session, the team addresses
the two risks flagged by the review. They meet with the team responsible for
maintaining the back-end services and jointly design some improvements that
greatly increase the performance and scalability of those services. Likewise, they
meet with the team responsible for maintaining the security services and negotiate
the inclusion of their requirements in a future release to be delivered in time to
support their delivery schedule.
The team monitors the decision log on at least a weekly basis and organizes a review
when the team believes that one or several decisions have significantly impacted the
architecture or the design of the WebShop system. Such a review can be suggested at any time by any team member who thinks that the architecture or the design has significantly changed; the team then meets and decides whether an architecture checkup is warranted.
These architecture checkups follow a decision-based approach and include tests run
against the code produced so far. By now, every team member is familiar with the
approach, and the enterprise chief architect and her team no longer need to facilitate
or even attend the review sessions. If possible, business stakeholders are invited to
participate in the architecture checkups to ensure that the system fulfills its business
drivers and objectives and to make the software delivery process as transparent as
possible. Each session is followed by a brief readout that summarizes the findings
of the session, any new risks or issues discovered in the session, and the plan of
action to address those risks and issues if applicable. These results are published in
the enterprise social media platform to provide full visibility.
Code Inspections
In addition to the decision-based architecture checkups, the team also conducts
periodic code inspections. Most of those reviews are automated using static code
analysis tools as part of the continuous deployment process (see Chapter 5 for a
discussion of the continuous deployment process), but there may be times when
a manual evaluation is required to supplement the static code analysis tools, for
example, when a component is unusually complex or exhibits issues during performance testing of the system. These reviews are simple checklist-based validations that essentially ensure that the architecture decisions have been properly implemented in the code and that the code is well written and easy to understand.
Code Inspections
Code inspections can be performed either through manual code reviews or by using static analysis tools.
A code review process is much simpler than an architecture review. A team of experts
gets together with the author of the code and manually inspects that code to discover
defects. It is, of course, much more efficient to discover defects before a system is
deployed than after deployment.
A number of static code analysis tools are available to save time on most code
reviews. Their advantage is that they are able to inspect 100% of the code, but they
may not be able to find every defect that an expert would, assuming the expert had
time to inspect 100% of the code. Static code analysis tools can be either open source
or proprietary. See Wikipedia for a list of the tools available for each commonly used programming language.
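As an illustration only (ESLint is just one such tool, and the rule choices below are assumptions, not recommendations from the text), a static analysis gate in a JavaScript/TypeScript project might be configured like this:

```typescript
// eslint.config.mjs: a minimal sketch, assuming ESLint v9+ flat config.
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // Flag functions whose cyclomatic complexity suggests manual review.
      complexity: ["warn", 10],
      "no-unused-vars": "error",
    },
  },
];
```

A component flagged by the complexity rule would then be a natural candidate for one of the manual, checklist-based inspections described above.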