Quality Measures and Attributes For Software Evaluation
Salihu, G. A. and Shiitu, A.
1.0 INTRODUCTION
One of the most tedious tasks in deriving software as a product for solving application
problems is for the software developers to meet the quality attributes expected by the
software users and other stakeholders. Montagud and Insfran (2012) point out that an attribute
of software quality can be conceptualised as “an abstract or physical property of an artefact
produced that is measurable and derivable during the software product derivation”. The quality
of a software product has been conceptualised differently by different stakeholders: for the
software engineer, quality means the software’s correspondence to requirements, while for the
end-user it is conceived as the usability of the system. Over the years, numerous sets of
measures for evaluating a software product have been established, and a significant number
of quality attributes have also been proposed by various authors.
However, despite the existence of these quality measures, it is obvious that the practice
of software evaluation in the industry is still in its infancy and still evolving (Tahir,
Rasool & Gencel, 2016). Software metrics have been described as measurements of the end
product and of the development process of software, and they are known to have guided different
software evaluation models and tools (Lee, 2014). Tahir, Rasool and Gencel (2016) argue that,
for proper evaluation of software characteristics (both the key and the general software
characteristics), software measures are an effective means of observing performance.
Umaru Ali Shinkafi Polytechnic Sokoto, Nigeria
SosPoly Journal of Science & Agriculture, Vol. 3(2), (Dec., 2020) ISSN: 2536-7161
Research Questions
RQ1: What are the software quality attributes necessary for software evaluation?
RQ2: How do the types of quality attributes and measures affect software evaluation?
RQ3: What are suitable factors for software evaluation implementation?
[Figure: distribution of the selected publications by year of publication, 2008–2017]
The result of the systematic review, broken down by the empirical method employed, shows that
the case study is the most widely used empirical method, as depicted in figure 3.2.
RQ1: What are the software quality attributes necessary for software evaluation?
Factor 1: Assessed Characteristics. According to the results, 77% of the measures assess
quality attributes relating to software maintenance. In general, the measures under software
maintainability are associated with modularity, reusability, analysability, modifiability,
testability, changeability and stability (Morasca, 2001). Of all the measures assessed, 13%
measure efficiency in software implementation; these cover software performance, time
behaviour, resource utilisation and capacity (Al-qutaish & Sarayreh, 2016). A further 3%
measure functional suitability characteristics, which relate to functional appropriateness,
correctness and completeness (ISO/IEC, 2011). Only 1% measure compatibility, namely
co-existence and interoperability (Al-qutaish & Sarayreh, 2016). Another 2% measure
reliability, covering fault tolerance, maturity and recoverability (ISO/IEC, 2011), and 2%
measure security; Mens and Demeyer (2008) identify the security-related measures as
confidentiality, integrity, non-repudiation, authenticity and accountability. A further 2%
measure usability, which relates to “recognisability, operability, user interface, user error
protection, learnability, aesthetics and accessibility” (ISO/IEC, 2011). Unfortunately, no
measure was found for portability, which concerns adaptability, installability and
replaceability.
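The characteristics and sub-characteristics enumerated above follow the ISO/IEC 25010-style grouping. As a minimal illustrative sketch (the data structure and the helper function below are hypothetical, introduced here for clarity rather than taken from the reviewed studies), the grouping can be arranged as a lookup table:

```python
from typing import Optional

# Quality characteristics and their sub-characteristics, as enumerated
# in the discussion of Factor 1 above. Illustrative sketch only.
CHARACTERISTICS = {
    "maintainability": ["modularity", "reusability", "analysability",
                        "modifiability", "testability", "changeability",
                        "stability"],
    "efficiency": ["performance", "time behaviour",
                   "resource utilisation", "capacity"],
    "functional suitability": ["appropriateness", "correctness",
                               "completeness"],
    "compatibility": ["co-existence", "interoperability"],
    "reliability": ["fault tolerance", "maturity", "recoverability"],
    "security": ["confidentiality", "integrity", "non-repudiation",
                 "authenticity", "accountability"],
    "usability": ["recognisability", "operability", "user interface",
                  "user error protection", "learnability", "aesthetics",
                  "accessibility"],
    "portability": ["adaptability", "installability", "replaceability"],
}


def characteristic_of(sub: str) -> Optional[str]:
    """Return the characteristic a given sub-characteristic belongs to."""
    for characteristic, subs in CHARACTERISTICS.items():
        if sub in subs:
            return characteristic
    return None
```

For example, `characteristic_of("testability")` returns `"maintainability"`, reflecting the grouping used when classifying the reviewed measures.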
RQ2: How do the types of quality attributes and measures affect software evaluation?
Factor 2: Types of Quality Attributes. Most of the evaluated attributes are internal
attributes (77%), while only 23% of the measures are external attributes. Internal attributes
are mostly measured because they are believed to influence the external attributes (ISO/IEC,
2011). Factor 3: Types of Measures. Most of the measures (83%) are found to be derived
measures. The remaining 17% can be obtained without employing additional measures from other
sources, and these are regarded as base measures. Abran and Al-qutaish (2010) point out that
“the base measure evaluates task time, tasks’ number, number of users’ errors and value
proportional to each missing or incorrect element”. Factor 4: Classification of the Outcomes
of the Measures. Most of the measures were calculated quantitatively (94%), while 4% were
determined using a qualitative approach and about 2% combined quantitative and qualitative
methods. Yusof, Papazafeiropoulou, Paul and Stergioulas (2008) observe that using a
qualitative approach can greatly enhance acceptance by software users and can prevent failure
of the system.
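The base/derived distinction in Factor 3 can be made concrete with a small sketch. Taking the base measures that Abran and Al-qutaish mention (task time, number of tasks, number of user errors), a derived measure is computed by combining them. The function names and formulas below are hypothetical illustrations, not measures defined in the cited sources:

```python
# Hypothetical sketch of base vs. derived measures: base measures are
# collected directly during an evaluation session; derived measures are
# computed by combining base measures.

def error_rate(user_errors: int, tasks_completed: int) -> float:
    """Derived measure: user errors per completed task."""
    if tasks_completed == 0:
        raise ValueError("at least one completed task is required")
    return user_errors / tasks_completed


def tasks_per_minute(tasks_completed: int, task_time_seconds: float) -> float:
    """Derived measure: throughput computed from two base measures."""
    return tasks_completed / (task_time_seconds / 60.0)


# Base measures gathered in a session (example values):
errors, tasks, seconds = 3, 12, 360.0
print(error_rate(errors, tasks))         # 0.25
print(tasks_per_minute(tasks, seconds))  # 2.0
```

This mirrors the proportion reported above: most measures in practice are derived, since raw base measures rarely answer an evaluation question on their own.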
5.0 CONCLUSION
Maintainability has been found to be the most desirable quality attribute when software is
being evaluated for adaptation, improvement and correction. Other identified quality
attributes, such as efficient performance, suitable functionality, software security,
reliability, usability and compatibility, have also been found to be important, playing
complementary roles in supporting proper evaluation of software products. Internal attributes
have also been identified as important in software evaluation because they influence external
attributes. The majority of quality attributes are not easy to derive directly and need to be
calculated from other measures, which makes the derived measure highly important in software
evaluation. Moreover, the use of qualitative approaches is becoming increasingly important in
effective software evaluation. Therefore, quality attributes and measures need to be applied
at all stages of the software development life cycle, and there is a need for structured,
organised and periodically reviewed software quality attributes and measures for effective
software evaluation.
6.0 RECOMMENDATION
Based on the available results of systematic reviews and conceptual analyses, it is
recommended that studies of software quality attributes and measures should always reflect
the current state of the art in the software industry and always be validated, to ensure that
correct measures, metrics and quality attributes are proposed for software evaluation.
Moreover, research should be encouraged in the industry to ensure that industrial experiences
complement academic research outputs.
REFERENCES
Abdellatief, M., Sultan, A. B. M., Ghani, A. A. A., & Jabar, M. A. (2013). A mapping study to
investigate component-based software system metrics. Journal of Systems and
Software, 86(3), 587–603. https://doi.org/10.1016/j.jss.2012.10.001
Abran, A., & Al-qutaish, R. E. (2010). ISO 9126: Analysis of Quality Models and Measures.
In A. Abran (Ed.), Software Metrics and Software Metrology. New York, USA: Wiley-
IEEE Computer Society. https://doi.org/10.1002/9780470606834.ch10
Al-qutaish, R. E., & Sarayreh, K. T. (2016). Software Process and Product ISO Standards: A
Comprehensive Survey. Retrieved from
https://www.researchgate.net/publication/237104465
Capra, E., Francalanci, C., Merlo, F., & Rossi-Lamastra, C. (2011). Firms’ involvement in
Open Source projects: A trade-off between software structural quality and popularity.
Journal of Systems and Software, 84(1), 144–161.
https://doi.org/10.1016/j.jss.2010.09.004
Chandrasekar, A., SudhaRajesh, M., & Rajesh, P. (2014). A Research Study on Software
Quality Attributes. International Journal of Scientific and Research Publications, 4(1),
1–4. Retrieved from http://www.ijsrp.org/research-paper-0114/ijsrp-p2535.pdf
Condori-Fernández, N., Panach, J. I., Baars, A. I., Vos, T., & Pastor, Ó. (2013). An empirical
approach for evaluating the usability of model-driven tools. Science of Computer
Programming, 78(11), 2245–2258. https://doi.org/10.1016/j.scico.2012.07.017
Dubey, S. K., Ghosh, S., & Rana, A. (2012). Comparison of Software Quality Models : An
Minhas, S. S., Sampaio, P., & Mehandjiev, N. (2012). A framework for the evaluation of
mashup tools. Proceedings - 2012 IEEE 9th International Conference on Services
Computing, SCC 2012, 431–438. https://doi.org/10.1109/SCC.2012.19
Montagud, S., & Insfran, E. (2012). A systematic review of quality attributes and measures for
software product lines. Software Qual J, (May 2014). https://doi.org/10.1007/s11219-
011-9146-7
Morasca, S. (2001). Handbook of Software Engineering and Knowledge Engineering. (U.
University of Pittsburgh, USA & Knowledge Systems Institute, Ed.) (1: Fundame).
USA: Knowledge Systems Institute. Retrieved from
https://www.uio.no/studier/emner/matnat/ifi/INF5181/h11/undervisningsmateriale/rea
ding-materials/Lecture-06/morasca-handbook.pdf
Olsina, L., Papa, M. F., & Becker, P. (2011). Assessing integrated measurement and evaluation
strategies: A case study. 2011 7th Central and Eastern European Software Engineering
Conference, CEE-SECR 2011. https://doi.org/10.1109/CEE-SECR.2011.6188462
Pfeifer, P., & Pliva, Z. (2014). A new method for in situ measurement of parameters and
degradation processes in modern nanoscale programmable devices. Microprocessors
and Microsystems, 38(6), 605–619. https://doi.org/10.1016/j.micpro.2014.04.008
Riaz, M., Breaux, T., & Williams, L. (2015). How have we evaluated software pattern
application? A systematic mapping study of research design practices. Information and
Software Technology, 65, 14–38. https://doi.org/10.1016/j.infsof.2015.04.002
Rodríguez, M., Oviedo, J. R., & Piattini, M. (2016). Evaluation of Software Product Functional
Suitability: A Case Study. Software Quality Professional Magazine, 18(3). Retrieved
from http://www.aqclab.es/images/AQCLab/Noticias/SQP/software-quality-
management-evaluation-of-software-product-functional-suitability-a-case-study.pdf
Rogers, P. J., Stevens, K., & Boymal, J. (2009). Qualitative cost-benefit evaluation of complex,
emergent programs. Evaluation and Program Planning, 32(1), 83–90.
https://doi.org/10.1016/j.evalprogplan.2008.08.005
Sethi, K., Cai, Y., Wong, S., Garcia, A., & Sant’Anna, C. (2009). From Retrospect to Prospect:
Assessing Modularity and Stability from Software Architecture. In Joint Working
IEEE/IFIP Conference on Software Architecture & European Conference on Software
Architecture (pp. 269–272). Retrieved from https://booksc.org/book/33823172/bbc9f2
Shariatzadeh, N., Sivard, G., & Chen, D. (2012). Software evaluation criteria for rapid factory
layout planning, design and simulation. Procedia CIRP, 3(1), 299–304.
https://doi.org/10.1016/j.procir.2012.07.052
Spinellis, D., Gousios, G., Karakoidas, V., Louridas, P., Adams, P. J., Samoladas, I., &
Stamelos, I. (2009). Evaluating the Quality of Open Source Software. Electronic Notes
in Theoretical Computer Science, 233(C), 5–28.
https://doi.org/10.1016/j.entcs.2009.02.058
Tahir, T., Rasool, G., & Gencel, C. (2016). A systematic literature review on software
measurement programs. Information and Software Technology, 73, 101–121.
https://doi.org/10.1016/j.infsof.2016.01.014
Tochot, P., Junpeng, P., & Makmee, P. (2012). Measurement Model of Evaluation Utilization:
External Evaluation. Procedia - Social and Behavioral Sciences, 69(Iceepsy), 1751–
1756. https://doi.org/10.1016/j.sbspro.2012.12.124
Ullah, M. I., Ruhe, G., & Garousi, V. (2010). Decision support for moving from a single
product to a product portfolio in evolving software systems. Journal of Systems and
Software, 83(12), 2496–2512. https://doi.org/10.1016/j.jss.2010.07.049
Unterkalmsteiner, M., Gorschek, T., Islam, A. K. M. M., Cheng, C. K., Permadi, R. B., & Feldt,
R. (2012). Evaluation and measurement of software process improvement-A systematic
literature review. IEEE Transactions on Software Engineering, 38(2), 398–424.
https://doi.org/10.1109/TSE.2011.26
Upadhyay, N. (2016). SDMF: Systematic Decision-making Framework for Evaluation of
Software Architecture. Procedia Computer Science, 91(Itqm), 599–608.
https://doi.org/10.1016/j.procs.2016.07.151
Van Hulse, J., & Khoshgoftaar, T. M. (2008). A comprehensive empirical evaluation of
missing value imputation in noisy software measurement data. Journal of Systems and
Software, 81(5), 691–708. https://doi.org/10.1016/j.jss.2007.07.043
Vavpotic, D., & Bajec, M. (2009). An approach for concurrent evaluation of technical and
social aspects of software development methodologies. Information and Software
Technology, 51(2), 528–545. https://doi.org/10.1016/j.infsof.2008.06.001
Weinreich, R., & Buchgeher, G. (2012). Towards supporting the software architecture life
cycle. Journal of Systems and Software, 85(3), 546–561.
https://doi.org/10.1016/j.jss.2011.05.036
Yi, X. J., Chen, L., Shi, J., Hou, P., & Lai, Y. H. (2018). A maintenance evaluation method for
complex systems with standby structure based on goal oriented method. IEEE
International Conference on Industrial Engineering and Engineering Management,
2017-Decem, 2130–2134. https://doi.org/10.1109/IEEM.2017.8290268
Yoon, I., Sussman, A., Memon, A., & Porter, A. (2008). Effective and Scalable Software
Compatibility Testing. In Proceedings of the ACM/SIGSOFT International Symposium
on Software Testing and Analysis. Seattle, WA, USA: ISSTA.
https://doi.org/10.1145/1390630.1390640
Yusof, M. M., Papazafeiropoulou, A., Paul, R. J., & Stergioulas, L. K. (2008). Investigating
evaluation frameworks for health information systems. International Journal of
Medical Informatics, 77, 377–385. https://doi.org/10.1016/j.ijmedinf.2007.08.004
APPENDIX
SELECTED PAPERS