
SosPoly Journal of Science & Agriculture, Vol. 3(2), (Dec., 2020) ISSN: 2536-7161

QUALITY MEASURES AND ATTRIBUTES FOR SOFTWARE EVALUATION

*Salihu, G. A. and Shiitu A.


Umaru Ali Shinkafi Polytechnic, Sokoto-Nigeria
salihu_ganiyu@yahoo.com, shituabdullahi4u@gmail.com
ABSTRACT
Software measures provide the scientific basis necessary for evaluating and regulating
the quality of software products. Numerous measures and attributes for evaluating
software products have been discussed by various authors and practitioners, but a
preliminary study by the authors reflects that few studies have articulated the current
state of the art of the software attributes and measures necessary to evaluate software
products. This study employs a systematic review of selected primary papers, published
from 2008 to 2017, that discuss software quality measures and attributes necessary for
evaluating software. Ninety-six (96) measures relating to thirty-two (32) quality
attributes were found as a result of the systematic review. The results indicate that the
highest percentage of the measures assess attributes that promote software
maintainability, followed by performance efficiency, while other attributes are also found
to be important at different scales. Internal attributes were found to be more important,
as they influence the external attributes. The study also discovered that the measures are
applied at every phase of the System Development Life Cycle but are mostly applied at
the evaluation phase. The results summarily provide a global conceptualisation of the
state of the art in quality measures and attributes necessary for evaluating software
products by various stakeholders.
Keywords: Software, Quality, Measures, Attributes, Product, SDLC

1.0 INTRODUCTION
One of the most tedious tasks when deriving software as a product for solving application
problems is for software developers to meet the quality attributes expected by the software
users and other stakeholders. Montagud and Insfran (2012) point out that an attribute
of software quality can be conceptualised as “an abstract or physical property of an artefact
produced that is measurable and derivable during the software product derivation”. The quality
of a software product has been conceptualised differently by different stakeholders. For example,
for the software engineer, quality means the software’s correspondence to requirements, while
for the end-user it is conceived as the usability of the system. Over the years, numerous sets
of measures for evaluating a software product have been established, and a significant number
of quality attributes have also been proposed by various authors.
However, despite the existence of these quality measures, the practice
of software evaluation in industry is still in its infancy and still evolving (Tahir,
Rasool & Gencel, 2016). Software metrics have been described as measurements of the end
product and of the software development process, and they are known to underpin various
software evaluation models and tools (Lee, 2014). Tahir, Rasool and Gencel (2016) argue that,
for proper evaluation of software characteristics (both key and general), software measures
are an effective means of observing performance, comprehending software functions,
controlling activities, making predictions, and testing software derivation and software
review exercises.
To adequately capture the up-to-date state of the art in software attributes and measures,
a systematic review was employed in this research. The method has been helpful in identifying,
measuring, and interpreting the quality measures and attributes that are essential for software
product evaluation. The remainder of this paper is organised as follows: Section 2 presents the
materials and methods employed; Section 3 presents the results; Section 4 discusses their
implications for researchers and practitioners; and Section 5 concludes the paper.

2.0 MATERIALS AND METHODS


One major method employed in this research is the systematic review. A systematic review is
carried out in three main steps, following the guidelines provided by Kitchenham and
Pfleeger (2000).

Fig. 2.1: Stages of Systematic Review (Kitchenham & Pfleeger, 2000).

Research Questions
RQ1: What are the software quality attributes necessary for software evaluation?
RQ2: How do the types of quality attributes and measures affect software evaluation?
RQ3: What are suitable factors for software evaluation implementation?

2.1 Data Sources


The following digital libraries were explored to get the primary studies from journals, articles
and conference proceedings: IET INSPEC, ACM DL, Springerlink, ScienceDirect and IEEE
Xplore Digital Library. The search covered the period 2008 to 2017. Various concatenated
search strings were tested, and the following string returned the largest number of
relevant papers: ((software! AND (evalu* OR measu* OR quality*))
AND (met* OR improve* OR goal* OR challenge* OR problem* OR issue* OR lessons OR
experience* OR succ* OR failure OR test*)). The search considered the
titles of the research papers and their associated abstracts. The string's notation was
adapted appropriately for each digital library because of variations in query syntax. The
systematic review was also extended to cover relevant grey literature to widen the scope of
selected studies.
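Each digital library has its own query syntax. Purely as an illustration (not the authors' tooling), the concatenated string above can be approximated as a keyword filter over a record's title and abstract, with the wildcards treated as word-prefix patterns:

```python
import re

# Hypothetical approximation of the search string in Section 2.1: the
# wildcards (*) become regex word-prefix patterns, and every AND-group
# must match somewhere in the combined title + abstract text.
QUALITY_TERMS = r"\b(evalu|measu|quality)\w*"
TOPIC_TERMS = (r"\b(met|improve|goal|challenge|problem|issue|lessons|"
               r"experience|succ|failure|test)\w*")

def matches_search_string(title: str, abstract: str) -> bool:
    """Screen one bibliographic record against the concatenated string."""
    text = f"{title} {abstract}".lower()
    return (re.search(r"\bsoftware\w*", text) is not None
            and re.search(QUALITY_TERMS, text) is not None
            and re.search(TOPIC_TERMS, text) is not None)

# Example screening of two candidate records
papers = [
    ("A survey of software quality metrics", "We review measures used ..."),
    ("Crop rotation strategies", "Yield gains from rotation ..."),
]
selected = [title for title, abstract in papers
            if matches_search_string(title, abstract)]
# 'selected' keeps only the software-quality record
```

In practice each library's advanced-search form would be used directly; the sketch only shows the AND/OR/wildcard logic the string encodes.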

2.2 Primary Studies Selection Criteria


Both the syntactic and semantic search methods were applied and some of the papers were fully
scanned to assure that the data satisfy criteria for selection and non-exclusion of papers. Table
2.1 shows the factors for selecting and excluding papers. Table 2.2 shows data extraction form:
Table 2.1: Selection and Non-Inclusion Criteria

Criteria for Selecting a Paper:
- Papers which show software evaluation techniques
- Papers which show quality measures/assessment for software evaluation
- Papers which highlight relevant attributes that measure software quality

Criteria for Non-Inclusion of a Paper:
- Papers written in a non-English language
- Papers presenting measures which have no direct relationship with software evaluation
- Papers that present measures without explaining how to apply them
- Papers proposing quality measures that are not related to software

Table 2.2: Extraction of Data Template


Factor 1: Characteristics Assessed
- Functional Suitability
- Performance Efficiency
- Security
- Maintainability
- Reliability
- Operability
- Compatibility
- Transferability

Factor 2: Type of Quality Attribute
- Internal
- External

Factor 3: Forms of Measures
- Base Measures
- Derived Measures

Factor 4: Classification of Outcomes of the Measures
Types of Evaluation
- Qualitative Evaluation
- Quantitative Evaluation
- Hybrid Evaluation
Form of Precision
- Exact
- Probabilistic

Factor 5: Stages of Software Development Life Cycle in which Measures are Applied
- Analysis
- Design
- Development
- Software Testing
- Implementation
- Evaluation
- Maintenance
- Software Use

Factor 6: Validation Methods
Theoretical Method
- Property-Theory Validation
- Measurement-Based Validation
Practical Method
- Case Studies
- Survey
- Experiment

Factor 7: Tool Support
- Manual
- Automatic

Factor 8: Areas of Usage
- Academic
- Industrial


2.3 Conducting the Systematic Review


A total of 27 papers were selected as primary studies, chosen in accordance with the
selection and non-inclusion criteria described in section 2.2. The summary of papers found
and selected by source is shown below and attached as Appendix A.

Source            Prospective Papers   Duplicated Papers   Final Selected
ACM               63                   11                  3
IEEE Xplore       78                   19                  10
Inspec            80                   13                  1
Science Direct    28                   7                   8
Springer Link     18                   8                   1
Grey Literature   -                    -                   4
Total             267                  58                  27

3.0 RESULTS OF THE SYSTEMATIC REVIEW


The distribution of the final selected primary studies over the ten-year period 2008 to
2017, which shows that an average of 2-3 papers were published yearly, is presented in Figure 3.1.

[Bar chart: x-axis shows year of publication (2008-2017); y-axis shows number of papers (0-8).]

Fig. 3.1: Distribution of Primary Studies over the period 2008-2017

The review of the empirical methods employed shows that the case study is the most widely
used empirical method, as depicted in Figure 3.2.

Figure 3.2: Distribution of Studies based on Empirical Method


Table 3.1 presents the extracted data on software evaluation measures and quality
attributes used to answer the research questions:


Table 3.1: Data extraction criteria results


Considered Factors / Possible Answer(s)    Frequency of Measures    Percentage (%)

Factor 1: Assessed Characteristics
Functional Suitability        3     3
Performance Efficiency        12    13
Compatibility                 1     1
Usability                     2     2
Reliability                   2     2
Security                      2     2
Maintainability               74    77
Portability                   0     0
Factor 2: Types of Quality Attributes
Internal attributes 74 77
External attributes 22 23
Factor 3: Forms of Measures
Base measures 16 17
Derived measures 80 83
Factor 4: Classification of Outcomes of the Measures
Types of Quality Evaluation
Qualitative Evaluation 4 4
Quantitative Evaluation 90 94
Hybrid Evaluation 2 2
Form of Precision
Exact 92 96
Probabilistic 4 4
Factor 5: Stages of Software Development Life Cycle in which Measures are Applied
Analysis 4 4
Design 25 26
Development 3 3
Software Testing 9 9
Implementation 4 4
Evaluation 37 39
Maintenance 2 2
Software Use 12 13
Factor 6: Validation Methods
Theoretical Method
Yes 28 29
No 68 71
Practical Method
Yes 33 34
No 63 66
Factor 7: Tool support
Manual 45 47
Automatic 51 53
Factor 8: Areas of Usage
Academic 80 83
Industrial 16 17
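The Percentage column in Table 3.1 can be reproduced from the Frequency column: each frequency is divided by the total of 96 measures and expressed as a whole percent. A minimal sketch, assuming half-up rounding (which matches the published figures):

```python
import math

# Total number of measures found by the systematic review (Table 3.1)
TOTAL_MEASURES = 96

def as_percentage(frequency: int) -> int:
    """Frequency -> whole percent of the 96 measures, rounding .5 upward."""
    return math.floor(frequency / TOTAL_MEASURES * 100 + 0.5)

# Spot-checks against Table 3.1
print(as_percentage(74))  # maintainability        -> 77
print(as_percentage(12))  # performance efficiency -> 13
print(as_percentage(90))  # quantitative outcomes  -> 94
```

Half-up rounding is an assumption inferred from the table (e.g. 12/96 = 12.5% is reported as 13%); the paper does not state its rounding rule.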


Table 3.2: Discovered Attributes for Software Quality Evaluation

# of Measures
S/N   Attributes                        # of Repetitions   # of Measures
1. Accessibility 1 0
2. Accountability 0 0
3. Adaptability 0 0
4. Analysability 5 10
5. Appropriateness Recognisability 3 1
6. Authenticity 1 0
7. Availability 0 0
8. Capacity 1 0
9. Co-existence 1 0
10. Complexity 10 6
11. Confidentiality 1 1
12. Fault Tolerance 8 2
13. Functional Appropriateness 6 0
14. Functional Completeness 3 1
15. Functional Correctness 5 2
16. Installability 1 0
17. Integrity 1 1
18. Interoperability 3 1
19. Learnability 4 0
20. Maturity 1 0
21. Modifiability 2 33
22. Modularity 1 15
23. Non-repudiation 1 0
24. Operability 1 0
25. Recoverability 0 0
26. Replaceability 1 0
27. Resource Utilisation 3 2
28. Reusability 2 1
29. Testability 13 15
30. Time Behaviour 9 4
31. User Error Protection 1 0
32. User Interface Aesthetics 3 1


4.0 DISCUSSION OF RESULTS OF SYSTEMATIC REVIEW


The systematic review revealed that the average number of publications over the 10 years
stands at 2-3 papers per year. This is surprising, because software evaluation as a practice
has been known for a long period and more research papers were expected over a decade. The
lack of adequate research on software measures has been a limiting factor reducing the
acquisition of experience in the software evaluation discipline (Sethi, Cai, Wong, Garcia,
& Sant'Anna, 2009).

4.1 Answering Research Questions

RQ1: What are the software quality attributes necessary for software evaluation?
Factor 1: Assessed Characteristics: The results show that 77% of the measures assess
quality attributes relating to software maintainability. In general, the measures under
software maintainability are associated with modularity, reusability, analysability,
modifiability, testability, changeability and stability (Morasca, 2001). Of all the
measures assessed, 13% measure performance efficiency; the associated measures concern
time behaviour, resource utilisation and capacity (Al-qutaish & Sarayreh, 2016). A further
3% measure functional suitability characteristics; these measures are related to functional
appropriateness, correctness and completeness (ISO/IEC, 2011). Another 1% measure
compatibility; the related measures are co-existence and interoperability (Al-qutaish &
Sarayreh, 2016). Reliability accounts for 2% of the measures, relating to fault tolerance,
maturity and recoverability (ISO/IEC, 2011). Security also accounts for 2%; Mens and
Demeyer (2008) identify the related measures as confidentiality, integrity, non-repudiation,
authenticity and accountability. Usability likewise accounts for 2%, with related measures
being “appropriateness recognisability, operability, user interface aesthetics, user error
protection, learnability and accessibility” (ISO/IEC, 2011). Unfortunately, no measure was
found for portability, which concerns adaptability, installability and replaceability.

RQ2: How do the types of quality attributes and measures affect software evaluation?
Factor 2: Types of Quality Attributes. Most of the evaluated attributes are internal
attributes (77%), while only 23% of the measures concern external attributes. Internal
attributes are mostly measured because they are believed to influence the external
attributes (ISO/IEC, 2011). Factor 3: Forms of Measures. Most of the measures (83%) were
found to be derived measures. The remaining 17% can be obtained without employing
additional measures from other sources; these are regarded as base measures. Abran and
Al-qutaish (2010) point out that “the base measure evaluates task time, tasks’ number,
number of users’ errors and value proportional to each missing or incorrect element”.
Factor 4: Classification of Outcomes of the Measures: Most of the measures were calculated
in a quantitative manner (94%), while 4% were determined using a qualitative approach and
about 2% combined both quantitative and qualitative methods. Yusof, Papazafeiropoulou,
Paul and Stergioulas (2008) observe that using a qualitative approach can greatly enhance
acceptance by software users and can prevent failure of the system.
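The base/derived distinction can be sketched with a small worked example. The session record and the derived measures below are illustrative choices built on the base measures named in the Abran and Al-qutaish quotation (task time, number of tasks, number of user errors), not measures taken from the reviewed studies:

```python
from dataclasses import dataclass

@dataclass
class UsabilitySession:
    # Base measures: obtained by direct observation, nothing else needed
    task_time_s: float      # total time taken on the tasks
    tasks_completed: int    # number of tasks completed
    user_errors: int        # number of user errors observed

def error_rate(s: UsabilitySession) -> float:
    """Derived measure: combines two base measures (errors per task)."""
    return s.user_errors / s.tasks_completed

def mean_task_time(s: UsabilitySession) -> float:
    """Derived measure: average time per task, from two base measures."""
    return s.task_time_s / s.tasks_completed

session = UsabilitySession(task_time_s=600.0, tasks_completed=10, user_errors=3)
print(error_rate(session))      # 0.3 errors per task
print(mean_task_time(session))  # 60.0 seconds per task
```

The point of the sketch is only that a derived measure has no meaning until its constituent base measures have been collected, which is why derived measures dominate (83%) yet still depend on the 17% of base measures.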

RQ3: What are suitable factors for software evaluation implementation?


Factor 5: Stages of Software Development Life Cycle: The largest share of the measures,
39%, is applied during the evaluation stage of the software development life cycle.
Another 26% of the measures relate to the design stage, and the remaining 35% is shared
among the other stages of the cycle. It has been unequivocally asserted by Kayarvizhy,
Tessier, Sauer, and Walton (2016) that software measures are applicable to the whole
software development life cycle. Factor 6: Validation Methods. 71% of the measures were
not theoretically proven, while 29% were validated theoretically. With respect to empirical
validation, 34% of the measures were validated empirically while 66% were not. Montagud
and Insfran (2012) assert that an unvalidated “measure tends to measure whatever it wants
to measure”. Factor 7: Tool Support: 47% of the measures are not supported by tools during
evaluation, while 53% are. Kayarvizhy et al. (2016) explain that there are tools that
extract metrics from software. Some of these tools
include SDMetrics, Eclipse plug-ins, JMetric, JHawk and JDepend.
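As an illustration of what such metric-extraction tools automate (a minimal sketch, not the implementation of any tool named above), two elementary measures can be pulled from source code with Python's standard ast module:

```python
import ast

def simple_metrics(source: str) -> dict:
    """Extract two elementary code measures of the kind metric tools
    automate: non-blank lines of code and number of function definitions.
    Illustrative only; real tools compute far richer metric suites."""
    tree = ast.parse(source)
    loc = len([line for line in source.splitlines() if line.strip()])
    n_funcs = sum(isinstance(node, ast.FunctionDef) for node in ast.walk(tree))
    return {"loc": loc, "functions": n_funcs}

sample = """
def add(a, b):
    return a + b

def sub(a, b):
    return a - b
"""
print(simple_metrics(sample))  # {'loc': 4, 'functions': 2}
```

Tool support matters precisely because collecting such counts by hand across a whole codebase is error-prone, which is consistent with the 53% of measures the review found to be tool-supported.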

4.2 Conceptual Analysis of Software Measures Evaluated


Of the 96 measures, 3% measure the quality attribute known as functional suitability. The
functional suitability measures include functional completeness, functional correctness and
functional appropriateness (Rodríguez, Oviedo, & Piattini, 2016). A further 13% measure the
performance efficiency characteristic. “Performance efficiency is a
measure that determines the level at which software product provides expected level of
performance in relation to the amount of resources employed under certain conditions”
(Miguel, Mauricio, & Rodríguez, 2014). Another 1% measure
compatibility. The compatibility attribute measures the “effectiveness of components of a
software to pass information between/among themselves and/or to carry out their expected
functions while sharing the same resources” (Yoon, Sussman, Memon, & Porter, 2008).
Moreover, 2% of the measures assess usability. Berander et al. (2005)
advise that software should possess usability quality to the extent that it assures
reliable and socially acceptable use. Additionally, 2% of the measures assess reliability.
This quality attribute measures the extent to which “a developed software can sustain an
expected level of performance when being used in a stated conditions” (Yoon et al., 2008).
Furthermore, 2% of the measures assess security. Security quality measures “the level of
protection of system items from unauthorised access or usage in unauthorised manner” (Yoon
et al., 2008). Lastly, 77% of the measures assess maintainability attributes. The percentage
is not surprising because maintainability, according to Chandrasekar, SudhaRajesh, and Rajesh
(2014), is “the level at which a software can be easily modified to correct and adjust
performance based on various need”. It is obvious that proper evaluation is necessary for
effective software maintenance.
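Maintainability, the dominant attribute in the review, is typically quantified through derived indices that combine several base measures. As an illustration only (the formula below is the well-known classic Maintainability Index, not a measure proposed in this study):

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: int) -> float:
    """Classic (unnormalised) Maintainability Index: a derived measure
    built from three other code measures. Illustrative only."""
    return (171.0
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code))

# A hypothetical module with moderate volume and complexity
mi = maintainability_index(halstead_volume=1000.0,
                           cyclomatic_complexity=10.0,
                           lines_of_code=200)
print(round(mi, 1))  # lower values indicate harder-to-maintain code
```

The example shows why derived measures dominate the maintainability category: each index input (Halstead volume, cyclomatic complexity, lines of code) must itself be measured first.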


5.0 CONCLUSION
Maintainability has been found to be the most desirable quality attribute when software is
evaluated for adaptation, improvement and correction. Other identified quality attributes,
such as performance efficiency, functional suitability, security, reliability, usability and
compatibility, have also been found to be important, playing complementary roles in
supporting proper evaluation of software products. Internal attributes have also been
identified as important in software evaluation because they influence external attributes.
The majority of quality measures are not easy to derive directly and need to be calculated
from other measures, which makes the derived measure highly important in software
evaluation. Moreover, the use of the qualitative approach is becoming increasingly important
for effective software evaluation. Quality attributes and measures therefore need to be
applied at all stages of the software development life cycle, and there is a need for
structured, organised and periodically reviewed software quality attributes and measures
for effective software evaluation.

6.0 RECOMMENDATION
Based on the available results of systematic reviews and conceptual analyses, it is
recommended that Software quality and measures studies and researches should always reflect
the current state-of-art in the software industry and always be validated to assure that correct
measures, metrics and quality attributes are proposed for software evaluation. Moreover,
studies and researches should be encouraged in the industry to assure that industrial experiences
complement academic research outputs.

REFERENCES
Abdellatief, M., Sultan, A. B. M., Ghani, A. A. A., & Jabar, M. A. (2013). A mapping study to
investigate component-based software system metrics. Journal of Systems and
Software, 86(3), 587–603. https://doi.org/10.1016/j.jss.2012.10.001
Abran, A., & Al-qutaish, R. E. (2010). ISO 9126 : Analysis of Quality Models and Measures.
In A. Abran (Ed.), Software Metrics and Software Metrology. New York, USA: Wiley-
IEEE Computer Society. https://doi.org/10.1002/9780470606834.ch10
Al-qutaish, R. E., & Sarayreh, K. T. (2016). Software Process and Product ISO Standards : A
Comprehensive Survey, January(November). Retrieved from
https://www.researchgate.net/publication/237104465
Capra, E., Francalanci, C., Merlo, F., & Rossi-Lamastra, C. (2011). Firms’ involvement in
Open Source projects: A trade-off between software structural quality and popularity.
Journal of Systems and Software, 84(1), 144–161.
https://doi.org/10.1016/j.jss.2010.09.004
Chandrasekar, A., SudhaRajesh, M., & Rajesh, P. (2014). A Research Study on Software
Quality Attributes. International Journal of Scientific and Research Publications, 4(1),
1–4. Retrieved from http://www.ijsrp.org/research-paper-0114/ijsrp-p2535.pdf
Condori-Fernández, N., Panach, J. I., Baars, A. I., Vos, T., & Pastor, Ó. (2013). An empirical
approach for evaluating the usability of model-driven tools. Science of Computer
Programming, 78(11), 2245–2258. https://doi.org/10.1016/j.scico.2012.07.017
Dubey, S. K., Ghosh, S., & Rana, A. (2012). Comparison of Software Quality Models: An
Analytical Approach. International Journal of Emerging Technology and Advanced
Engineering, 2(2), 111–119. Retrieved from www.ijetae.com (ISSN 2250-2459,
Volume 2, Issue 2, February 2012)
Duran, M. B., Pina, A. N., & Mussbacher, G. (2015). Evaluation of reusable concern-oriented
goal models. 5th International Model-Driven Requirements Engineering Workshop,
MoDRE 2015 - Proceedings, 53–62. https://doi.org/10.1109/MoDRE.2015.7343876
Garousi Yusifoğlu, V., Amannejad, Y., & Betin Can, A. (2015). Software test-code
engineering: A systematic mapping. Information and Software Technology, 58(July),
123–147. https://doi.org/10.1016/j.infsof.2014.06.009
Glogger, I., Holzäpfel, L., Kappich, J., Schwonke, R., Nückles, M., & Renkl, A. (2013).
Development and Evaluation of a Computer-Based Learning Environment for
Teachers: Assessment of Learning Strategies in Learning Journals. Education Research
International, 2013, 1–12. https://doi.org/10.1155/2013/785065
Horsky, J., McColgan, K., Pang, J. E., Melnikas, A. J., Linder, J. A., Schnipper, J. L., &
Middleton, B. (2010). Complementary methods of system usability evaluation: Surveys
and observations during software design and development cycles. Journal of
Biomedical Informatics, 43(5), 782–790. https://doi.org/10.1016/j.jbi.2010.05.010
ISO/IEC. (2011). INTERNATIONAL STANDARD ISO/IEC: PRODUCT EVALUATION,
2011. Retrieved from https://www.iso.org/standard/24907.html
Jadhav, A. S., & Sonar, R. M. (2009). Evaluating and selecting software packages: A review.
Information and Software Technology, 51(3), 555–563.
https://doi.org/10.1016/j.infsof.2008.09.003
Kayarvizhy, N., Tessier, J., Sauer, F., & Walton, L. (2016). Systematic Review of Object
Oriented Metric Tools. International Journal of Computer Applications, 135(2), 8–13.
Retrieved from
https://pdfs.semanticscholar.org/73c9/df066e77c6bda57f1427b2287bd8688b1b97.pdf
Kitchenham, B., & Pfleeger, S. L. (2000). SOFTWARE QUALITY : THE ELUSIVE
TARGET, 12–21. Retrieved from
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.471.9089&rep=rep1&type=
pdf
Lee, M. (2014). Software Quality Factors and Software Quality Metrics to Enhance Software
Quality Assurance. British Journal of Applied Science & Technology, 4(21), 3069–
3095. Retrieved from
http://www.journalrepository.org/media/journals/BJAST_5/2014/Jun/Lee4212014BJ
AST10548_1.pdf
Mens, T., & Demeyer, S. (2008). Software Evolution. New York, USA: Springer-Verlag Berlin
Heidelberg.
Miguel, J. P., Mauricio, D., & Rodríguez, G. (2014). A Review of Software Quality Models
for the Evaluation of Software Products. International Journal of Software Engineering
& Applications (IJSEA), 5(6), 31–53. Retrieved from
https://arxiv.org/ftp/arxiv/papers/1412/1412.2977.pdf

Minhas, S. S., Sampaio, P., & Mehandjiev, N. (2012). A framework for the evaluation of
mashup tools. Proceedings - 2012 IEEE 9th International Conference on Services
Computing, SCC 2012, 431–438. https://doi.org/10.1109/SCC.2012.19
Montagud, S., & Insfran, E. (2012). A systematic review of quality attributes and measures for
software product lines. Software Qual J, (May 2014). https://doi.org/10.1007/s11219-
011-9146-7
Morasca, S. (2001). Handbook of Software Engineering and Knowledge Engineering. (U.
University of Pittsburgh, USA & Knowledge Systems Institute, Ed.) (1: Fundame).
USA: Knowledge Systems Institute. Retrieved from
https://www.uio.no/studier/emner/matnat/ifi/INF5181/h11/undervisningsmateriale/rea
ding-materials/Lecture-06/morasca-handbook.pdf
Olsina, L., Papa, M. F., & Becker, P. (2011). Assessing integrated measurement and evaluation
strategies: A case study. 2011 7th Central and Eastern European Software Engineering
Conference, CEE-SECR 2011. https://doi.org/10.1109/CEE-SECR.2011.6188462
Pfeifer, P., & Pliva, Z. (2014). A new method for in situ measurement of parameters and
degradation processes in modern nanoscale programmable devices. Microprocessors
and Microsystems, 38(6), 605–619. https://doi.org/10.1016/j.micpro.2014.04.008
Riaz, M., Breaux, T., & Williams, L. (2015). How have we evaluated software pattern
application? A systematic mapping study of research design practices. Information and
Software Technology, 65, 14–38. https://doi.org/10.1016/j.infsof.2015.04.002
Rodríguez, M., Oviedo, J. R., & Piattini, M. (2016). Evaluation of Software Product Functional
Suitability : A Case Study. Software Quality Professional Magazine, 18(3). Retrieved
from http://www.aqclab.es/images/AQCLab/Noticias/SQP/software-quality-
management-evaluation-of-software-product-functional-suitability-a-case-study.pdf
Rogers, P. J., Stevens, K., & Boymal, J. (2009). Qualitative cost-benefit evaluation of complex,
emergent programs. Evaluation and Program Planning, 32(1), 83–90.
https://doi.org/10.1016/j.evalprogplan.2008.08.005
Sethi, K., Cai, Y., Wong, S., Garcia, A., & Sant’Anna, C. (2009). From Retrospect to Prospect :
Assessing Modularity and Stability from Software Architecture. In Joint Working
IEEE/IFIP Conference on Software Architecture & European Conference on Software
Architecture (pp. 269–272). Retrieved from https://booksc.org/book/33823172/bbc9f2
Shariatzadeh, N., Sivard, G., & Chen, D. (2012). Software evaluation criteria for rapid factory
layout planning, design and simulation. Procedia CIRP, 3(1), 299–304.
https://doi.org/10.1016/j.procir.2012.07.052
Spinellis, D., Gousios, G., Karakoidas, V., Louridas, P., Adams, P. J., Samoladas, I., &
Stamelos, I. (2009). Evaluating the Quality of Open Source Software. Electronic Notes
in Theoretical Computer Science, 233(C), 5–28.
https://doi.org/10.1016/j.entcs.2009.02.058
Tahir, T., Rasool, G., & Gencel, C. (2016). A systematic literature review on software
measurement programs. Information and Software Technology, 73, 101–121.
https://doi.org/10.1016/j.infsof.2016.01.014

Tochot, P., Junpeng, P., & Makmee, P. (2012). Measurement Model of Evaluation Utilization:
External Evaluation. Procedia - Social and Behavioral Sciences, 69(Iceepsy), 1751–
1756. https://doi.org/10.1016/j.sbspro.2012.12.124
Ullah, M. I., Ruhe, G., & Garousi, V. (2010). Decision support for moving from a single
product to a product portfolio in evolving software systems. Journal of Systems and
Software, 83(12), 2496–2512. https://doi.org/10.1016/j.jss.2010.07.049
Unterkalmsteiner, M., Gorschek, T., Islam, A. K. M. M., Cheng, C. K., Permadi, R. B., & Feldt,
R. (2012). Evaluation and measurement of software process improvement-A systematic
literature review. IEEE Transactions on Software Engineering, 38(2), 398–424.
https://doi.org/10.1109/TSE.2011.26
Upadhyay, N. (2016). SDMF: Systematic Decision-making Framework for Evaluation of
Software Architecture. Procedia Computer Science, 91(Itqm), 599–608.
https://doi.org/10.1016/j.procs.2016.07.151
Van Hulse, J., & Khoshgoftaar, T. M. (2008). A comprehensive empirical evaluation of
missing value imputation in noisy software measurement data. Journal of Systems and
Software, 81(5), 691–708. https://doi.org/10.1016/j.jss.2007.07.043
Vavpotic, D., & Bajec, M. (2009). An approach for concurrent evaluation of technical and
social aspects of software development methodologies. Information and Software
Technology, 51(2), 528–545. https://doi.org/10.1016/j.infsof.2008.06.001
Weinreich, R., & Buchgeher, G. (2012). Towards supporting the software architecture life
cycle. Journal of Systems and Software, 85(3), 546–561.
https://doi.org/10.1016/j.jss.2011.05.036
Yi, X. J., Chen, L., Shi, J., Hou, P., & Lai, Y. H. (2018). A maintenance evaluation method for
complex systems with standby structure based on goal oriented method. IEEE
International Conference on Industrial Engineering and Engineering Management,
2017-Decem, 2130–2134. https://doi.org/10.1109/IEEM.2017.8290268
Yoon, I., Sussman, A., Memon, A., & Porter, A. (2008). Effective and Scalable Software
Compatibility Testing. In Proceedings of the ACM/SIGSOFT International Symposium
on Software Testing and Analysis. Seattle, WA, USA: ISSTA.
https://doi.org/10.1145/1390630.1390640
Yusof, M. M., Papazafeiropoulou, A., Paul, R. J., & Stergioulas, L. K. (2008). Investigating
evaluation frameworks for health information systems. International Journal of
Medical Informatics, 7, 377–385. https://doi.org/10.1016/j.ijmedinf.2007.08.004

APPENDIX
SELECTED PAPERS

