Quick Evaluation of a Software Architecture Using the Decision-Centric Architecture Review (Cruz, Salinas, Astudillo, ECSA 2020)
(camera-ready)
1 Introduction
There is wide agreement that software architecture is key to reaching the desired level of quality attributes in a software system. The quality attributes supported by a software architecture can be assessed systematically through software architecture evaluation methods.³

³ The literature of the software architecture evaluation community uses the phrases architecture evaluation and architecture review as synonyms; we follow this convention.
3 Evaluation context
Fig. 1. Deployment view of the system front and back (as of this writing).
15. Resources required: DCAR is considered suitable for projects that do not have budget, schedule, or stakeholders available for a full-fledged architecture evaluation; this was critical for us because (as mentioned earlier) architecture evaluation was not considered in the initial project planning.
DCAR was proposed in [18] as a lightweight method (in terms of time and resources) to support a decision-by-decision software architecture evaluation. Unlike scenario-based methods, in which reviewers test the architecture against scenarios, DCAR favors a per-decision approach that puts decision rationale at the core of the method.
A key DCAR concept is "decision force," defined as any non-trivial influence on an architect making decisions to find a solution for an architectural problem [17, 18]. Decision forces are discussed and then used to challenge decisions, to see whether they are fit for purpose or must be modified.
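As a rough illustration only (DCAR does not prescribe any tooling, and all names below are hypothetical), a decision and its forces can be captured as simple records, and "challenging" a decision then amounts to re-examining it against each force:

```python
from dataclasses import dataclass, field

@dataclass
class Force:
    """A non-trivial influence on the architect (e.g., a constraint or concern)."""
    name: str
    description: str

@dataclass
class Decision:
    """An architectural decision, its rationale, and the forces acting on it."""
    title: str
    rationale: str
    forces: list[Force] = field(default_factory=list)

def challenge(decision: Decision, still_holds: dict[str, bool]) -> str:
    """Toy 'challenge': the decision survives only if every force still holds."""
    unmet = [f.name for f in decision.forces if not still_holds.get(f.name, False)]
    return "keep" if not unmet else "revisit: " + ", ".join(unmet)

# Hypothetical example loosely based on the authentication decision discussed later.
auth = Decision(
    title="Hybrid authentication shared between front- and back-end",
    rationale="Reuse vendor components to shorten delivery",
    forces=[Force("security", "no authentication elements shared across tiers"),
            Force("compliance", "must use the organization's Active Directory")],
)
print(challenge(auth, {"security": False, "compliance": True}))  # -> revisit: security
```

In the actual method the "challenge" is a discussion among reviewers, not a mechanical check; the sketch only shows how forces and decisions relate.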
DCAR has nine steps:
The evaluation was carried out in a mixed offline-online fashion. The online part of the evaluation took two half-days and took place in a meeting room at the organization's site, with a whiteboard and a big screen for presentations. Five people were involved in the first online evaluation session: two of the client's project engineers (one as review leader and the other as customer representative), one supporting quality engineer, and the vendor's project manager and lead architect. The second online evaluation session took place in the same location two weeks later, but without the supporting quality engineer; his absence forced us to reorganize some activities related to (1) challenging decisions and (2) writing down key insights gathered from the evaluation.
The offline activities were mainly devoted to completing decisions, documenting decisions, and to retrospectives and reporting. Nevertheless, frequent emails and phone calls were required to ensure alignment between the two organizations. Finally, when prioritizing decisions, we used consensus rather than voting.
We present three architectural decisions that were challenged and then validated or changed. Figure 2 shows the relationships among these decisions, as recommended by DCAR, using a published documentation framework for architecture decisions [9].
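For readers unfamiliar with [9], the sketch below shows the kind of relationship view Figure 2 follows; the decision names and relation labels here are placeholders, not the actual content of Figure 2:

```python
# Hypothetical decision-relationship edges in the spirit of the framework in [9].
# Each edge is (from_decision, relation, to_decision); all names are placeholders.
relations = [
    ("D1: asynchronous project-profile save", "depends on", "D2: back-end service API"),
    ("D3: hybrid authentication (rejected)", "is related to", "D2: back-end service API"),
]

for src, rel, dst in relations:
    print(f"{src} --[{rel}]--> {dst}")
```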
Fig. 4. Partial documentation of the decision, with asynchronous project profile save (thin arrowhead).
Not all reviewed decisions were accepted. In particular, when we reviewed the proposed authentication solution, we immediately noticed some unusual elements. The system was required to use the organization's on-premise Active Directory, but the proposed system was to be tested and deployed in the Microsoft Azure cloud (see Figure 1). The vendor had proposed a non-standard hybrid approach that shared authentication elements between front- and back-end, which would have allowed some kinds of intrusion. We requested additional details on this proposed solution, eventually rejected the decision (which had already been made), and decided instead to keep an Azure-deployed Active Directory instance synchronized with the on-premise Active Directory instance, for which a standard authentication method (OAuth) was already in use. Moreover, we required the vendor to strictly follow Microsoft's documentation and guidance for authentication.
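For illustration only (this is not the vendor's code, and the paper does not specify which OAuth grant was used), a minimal sketch of one standard flow, the OAuth 2.0 authorization-code grant against Azure Active Directory's documented v2.0 endpoints, looks roughly as follows; the tenant, client, and redirect values are placeholders:

```python
import urllib.parse
import requests  # third-party; pip install requests

# Placeholder identifiers; real values come from the Azure AD app registration.
TENANT = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"
CLIENT_SECRET = "<client-secret>"
REDIRECT_URI = "https://app.example.org/auth/callback"
AUTHORITY = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

def build_authorize_url(state: str) -> str:
    """Step 1: redirect the user's browser here; Azure AD authenticates the user."""
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile offline_access",
        "state": state,
    }
    return f"{AUTHORITY}/authorize?{urllib.parse.urlencode(params)}"

def redeem_code(code: str) -> dict:
    """Step 2: the back-end exchanges the returned authorization code for tokens."""
    resp = requests.post(f"{AUTHORITY}/token", data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
    })
    resp.raise_for_status()
    return resp.json()  # contains id_token, access_token, refresh_token
```

The point of a standard flow like this is that credentials and token issuance stay with Azure AD, so no authentication elements are shared between front- and back-end, in contrast to the rejected hybrid scheme.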
We made this decision in the meeting room itself, with the participants already mentioned. However, synchronizing both Active Directory servers was outside our domain, so this decision forced us to start a follow-up conversation with the organization's IT department, which fortunately agreed to allow its accounts to synchronize with the cloud-deployed Active Directory.
An interesting issue is that when we rejected this decision, we did so almost intuitively and based on experience, not following any detailed guidance. DCAR does not define specific criteria for accepting or rejecting decisions (i.e., when a decision should be accepted with agreed improvements, and when it should simply be rejected). We did not find a workaround, so we believe this issue remains open and should be further developed in future work.
In this section we present the main lessons we learned and recommendations for
DCAR adopters.
By considering the sources of inputs to some activities and weighing the risks of offline working, some DCAR activities can be left for offline execution. We note, however, that this worked well in a case where development is carried out by a vendor, and it would not necessarily do so for an in-house development.
DCAR yields an evaluation report that includes all the reviewed decisions; this forced us to explicitly record the details of each architectural decision. When development finishes, we expect this report to contain a (final, updated) architecture description with minimal erosion (of course, it is our responsibility to keep it up to date by discussing it with the vendor). In our case, we will also require the vendor to sign it off.
In one case, we made a decision (see Section 6.3) that implied technical issues outside our domain. We do not regard this situation as accidental: in fact, we expect future quick reviews to face similar issues, given the natural trade-off between a quick evaluation and the time-consuming coordination needed to bring all related people into the review sessions.
Also, follow-up conversations should be carried out as soon as possible after a review session, so that in the next session the decisions' feasibility can be reassessed and the decisions reworked if necessary.
9 Threats to validity
Some concerns must be addressed regarding internal and external validity, especially for readers interested in leveraging this experience in future software architecture evaluations.
In terms of internal validity, i.e. the correctness of results for this specific case, perhaps the most important threat to be mentioned here is cognitive bias, especially confirmation and anchoring biases [19]. Two members of the evaluation
10 Conclusions
Architecture evaluation is a systematic approach to assess a software architecture's fitness to software requirements, and especially quality attributes.
While running an innovation project support platform that was developed by a vendor, we considered it necessary to evaluate its architecture. Among existing evaluation methods, we chose DCAR, a decision-analysis oriented method; we particularly appreciated that it is lightweight in terms of our resources, and that it would not impact the schedule of a project that had not included architecture evaluation to start with. We illustrated the method's use in our project with three modified decisions and one rejected decision.
With careful planning, we managed to evaluate the architecture in a mixed online-offline fashion, with the online part taking just two half-days; since the software is being developed by a vendor, this allowed us to minimize in-person participation by their employees. Nevertheless, we are unsure whether this approach required more or less effort from us than a fully online evaluation would have, and we leave this comparison as an invitation for future work.
Finally, based on our own experience, we recommend making frequent mini-reviews of architecture decisions, to understand the architecture, formalize it in the resulting reports, and raise its visibility within the team.
Acknowledgments
This work was partially supported by CCTVal (Centro Científico y Tecnológico de Valparaíso, ANID PIA/APOYO AFB180002), DGIIE-UTFSM, Project InES (PMI FSM 1402), and FONDECYT (grant 1150810). We also thank Rich Hilliard for his helpful comments on an earlier version of this article.
References
1. Babar, M.A., Gorton, I.: Software architecture review: The state of practice. Computer 42(7), 26–32 (July 2009)
2. Babar, M.A., Zhu, L., Jeffery, R.: A framework for classifying and comparing software architecture evaluation methods. In: 2004 Australian Software Engineering Conference. Proceedings. pp. 309–318 (April 2004)
3. Babar, M.A., Bass, L., Gorton, I.: Factors influencing industrial practices of software architecture evaluation: An empirical investigation. In: Overhage, S., Szyperski, C.A., Reussner, R., Stafford, J.A. (eds.) Software Architectures, Components, and Applications. pp. 90–107. QoSA 2007, Springer Berlin Heidelberg (2007)
4. Bass, L., Kazman, R.: Making architecture reviews work in the real world. IEEE Software 19(1), 67–73 (Jan 2002)
5. Bosch, J.: Software architecture: The next step. In: Oquendo, F., Warboys, B.C.,
Morrison, R. (eds.) Software Architecture. EWSA 2004, Springer (2004)
6. Clements, P., Kazman, R., Klein, M.: Evaluating Software Architectures: Methods
and Case Studies. Addison-Wesley Longman (2002)
7. Cruz, P., Astudillo, H., Hilliard, R., Collado, M.: Assessing migration of a 20-year-old system to a micro-service platform using ATAM. In: 2019 IEEE International Conference on Software Architecture Companion (ICSA-C). pp. 174–181 (2019)
8. Harrison, N., Avgeriou, P.: Pattern-based architecture reviews. IEEE Software
28(6), 66–71 (Nov 2011)
9. van Heesch, U., Avgeriou, P., Hilliard, R.: A documentation framework for architecture decisions. Journal of Systems and Software 85(4), 795–820 (2012)
10. Kazman, R., Bass, L., Abowd, G., Webb, M.: SAAM: a method for analyzing the
properties of software architectures. In: 16th International Conference on Software
Engineering. pp. 81–90. ICSE ’94 (May 1994)
11. Obbink, H., Kruchten, P., Kozaczynski, W., Hilliard, R., Ran, A., Postema, H., Lutz, D., Kazman, R., Tracz, W., Kahane, E.: Report on Software Architecture Review and Assessment (SARA), Version 1.0. At http://kruchten.com/philippe/architecture/SARAv1.pdf, p. 58 (June 2019)
12. Overdick, H.: The resource-oriented architecture. In: 2007 IEEE Congress on Services (Services 2007). pp. 340–347 (July 2007)
13. Parnas, D.L., Weiss, D.M.: Active design reviews: Principles and practices. In: 8th
International Conference on Software Engineering. pp. 132–136. ICSE ’85, IEEE
Computer Society Press, Los Alamitos, CA, USA (1985)
14. Reijonen, V., Koskinen, J., Haikala, I.: Experiences from scenario-based architecture evaluations with ATAM. In: Babar, M.A., Gorton, I. (eds.) Software Architecture. ECSA 2010. pp. 214–229. Springer (2010)
15. de Silva, L., Balasubramaniam, D.: Controlling software architecture erosion: A survey. Journal of Systems and Software 85(1), 132–151 (2012)
16. Tyree, J., Akerman, A.: Architecture decisions: demystifying architecture. IEEE
Software 22(2), 19–27 (March 2005)
17. van Heesch, U., Avgeriou, P., Hilliard, R.: Forces on architecture decisions - a
viewpoint. In: 2012 Joint Working IEEE/IFIP Conference on Software Architecture
and European Conference on Software Architecture. pp. 101–110 (Aug 2012)
18. van Heesch, U., Eloranta, V., Avgeriou, P., Koskimies, K., Harrison, N.: Decision-
centric architecture reviews. IEEE Software 31(1), 69–76 (Jan 2014)
19. Zalewski, A., Borowa, K., Ratkowski, A.: On cognitive biases in architecture decision making. In: European Conference on Software Architecture (ECSA), LNCS (2017)