
Review on OOP code Metrics

Software Metrics - PA 2559 - Review Assignment - GROUP 2

T.A.J.S. Athukorala Rahul Mohan Gouravarapu Sai Akhil


Dept. of Computer Science Dept. of Computer Science Dept. of Computer Science
Blekinge Institute of Technology Blekinge Institute of Technology Blekinge Institute of Technology
Karlskrona, Sweden Karlskrona, Sweden Karlskrona, Sweden
that22@student.bth.se ramo22@student.bth.se sago22@student.bth.se


Abstract—This document consists of two parts: Part 1, a review
of OO code metrics based on ten selected research papers, and
Part 2, the practical exercises which extract OO metrics using
the Source Monitor and CCCC tools.
Index Terms—OO metrics, goals, attributes, OO software
systems, Metrics maturity assessing criteria

I. INTRODUCTION

Software metrics are measures of quantifiable software
characteristics. The goal of identifying and analyzing software
metrics is to assess current projects, products or processes
and improve the quality of the overall software system while
providing insights into its performance, reliability and
maintainability.

As usage of the Object-Oriented (OO) paradigm has become
widespread nowadays, this review focuses on OO design
metrics, which are important tools for assessing and measuring
the quality of OO applications. Not only potential design flaws
or anti-patterns which can impact the code quality, but also
potential performance bottlenecks which can impact the
scalability and responsiveness of the system, can be identified
in the early stages of the software development process using
OO metrics. Similarly, OO metrics can be used to assess the
complexity of the code. Moreover, the reliability (error
handling, fault tolerance, defensive programming techniques,
etc.) and the maintainability (readability, ease of modification,
reusability, etc.) of the code can be evaluated using OO
measures.

Hence, the intention is to review the selected research
papers for (a) identification and categorization of OO metrics,
(b) proposing maturity criteria to assess the identified metrics,
(c) using the proposed criteria to reflect on OO metrics, (d)
identifying and categorizing the goals of the used OO metrics,
(e) identifying and classifying attributes of the code, and
(f) identification and categorization of the systems studied in
the given papers. Finally, a description of using tools to extract
OO metrics for analyzing software systems is also provided.

II. OO METRICS

A. Description

Object-oriented (OO) metrics are measurements used
to measure properties of OO software applications. Many
software metrics have been developed for OO paradigm
concepts such as abstraction, class, object, inheritance, etc.
to compute various attributes like software quality, coupling,
cohesion, etc. [13]. TABLE I shows all the metrics identified
in each research paper.

B. Characterization

Product, process and project metrics are the three main
categories of metrics. Similarly, OO metrics can be classified
into two major categories.
• Static metrics - Measure the inherent properties of the
source or compiled code of a program. They can be used to
evaluate the quality of code at the design stage or during
code review.
Ex: Size metrics, Complexity metrics, Cohesion metrics,
Coupling metrics, etc.
• Dynamic metrics - Measure the behavior of the code
during the execution of the program. They can be used to
evaluate performance, improve the accuracy of static
metrics, and identify potential issues that may not be
caught during the design stage.
Ex: Test coverage metrics, Execution metrics, Memory
metrics, etc.
However, as the given research papers focus on the static
metrics which assess the quality of the code, the following
categories of metrics will be considered in this context.
• Size metrics - Measure the size (population, volume,
length, and functionality) of the code
• Complexity metrics - Measure the complexity of the
code, e.g., the number of independent control-flow paths
through it
TABLE I: Description of OO Metrics
Metrics Description (Context) Paper(s)
NOF No of files (No of files in application) [1]
LOC Lines of code per class (All nonempty, non-comment lines of class and its methods) [1] [2] [4] [7] [8]
STAT No of statements (No of statements in a class) [1]
CLOC No of lines with comments (No of lines of code in each class which contain comments) [1] [9]
PLOC Percentage of lines with comments (Percentage of lines with comments in a class) [1]
NOF No of functions (No of functions declared in a class) [1]
NCLASS No of classes (No of classes in a file) [1]
NINT No of interfaces (No of interfaces in a file) [1]
NSTR No of structures (No of structures in a file) [1]
NOM Avg No of methods per class (Avg No of methods per class) [1]
NOS Avg No of statements per method (Avg No of statements per method) [1]
NOC Avg No of calls per method (Avg No of calls per method) [1]
PBS Percentage of branching statements (Percentage of branching statements) [1]
MBD Maximum block depth (Maximum block depth) [1]
ABD Avg block depth (Avg block depth) [1]
CC Max Maximum cyclomatic complexity (Maximum cyclomatic complexity of a single method of a class) [1] [4]
Avg CC Avg cyclomatic complexity (Avg cyclomatic complexity of non-abstract methods in each class) [1] [4] [9]
DIT Depth of the inheritance tree (The length of the longest path from a sub-class to its base class) [2] [3] [5] [6] [7] [8]
NOC No of children (No of direct sub-classes that the class has) [2] [3] [5] [6] [7] [8][9]
MPC Message-passing coupling (No of send statements defined in the class) [2] [9]
RFC Response for a class (Total No of local methods including inherited methods) [2] [3] [5] [6] [7] [8] [9]
LCOM Lack of cohesion of methods (No of null pairs of methods that do not have common attributes) [2] [3] [5] [6] [7] [8] [9] [10]
DAC Data abstraction coupling (No of abstract data types defined in the class) [2] [8]
WMC Weighted methods per class (Sum of McCabe’s cyclomatic complexity of local methods in class) [2] [4]
NOM/NMC No of methods per class (No of local methods in the class) [2] [3] [8]
SIZE2 No of properties (Total No of attributes and the No of local methods in the class) [2] [8]
CHANGE No of lines changed in the class (No of insertions, deletions and change of the content) [2]
CBO Coupling between objects (No of distinct non-inheritance-related classes on which a class depends) [3] [5] [6] [7] [8] [9]
FANIN Fan in (Count of calls by higher modules/The No of inputs a function uses) [3] [5]
WMPC Weighted methods per class (A count of local methods implemented/ defined within a class) [3] [5] [7] [8]
SLOC Source lines of code (No of executable lines of source code (code and white space only)) [3] [5] [9]
CC McCabe McCabe’s Cyclomatic complexity (No of independent paths through a program unit) [3] [4] [5] [10]
SDMC SD method complexity (Standard deviation method complexity in a class) [4]
WMC Weighted methods per class (Sum of complexities of local methods of a class) [4] [6]
NIM No of instance methods (No of methods in an instance object of a class) [4]
NTM No of trivial methods (No of local methods in the class) [4]
ALOC Avg lines of code (Avg of the executable lines of code of a class) [4]
Modified Modified cyclomatic complexity (CC except that each case statement is not counted) [5]
Strict Strict cyclomatic Complexity (Identical to cyclomatic complexity (except AND and OR operators)) [5]
Essential Essential cyclomatic complexity (Code structuredness by counting cyclomatic complexity) [5]
CountPath CountPath complexity (No of unique decision paths through a body of code) [5]
Nesting Nesting complexity (The maximum nesting level of control constructs in the function) [5]
FanOut Fan out (The No of outputs that are set) [5]
HK Henry Kafura (Measures information flow relative to function size) [5]
CBC Count of base classes (The No of base classes) [5]
LCOMN Lack of cohesion of methods negative (LCOM with negative values) [7]
NOQ No of queries per class (No of queries for each class) [9]
Dcy No of dependencies (No of dependencies between modules/components with each other) [9]
CONS No of Constructors (Total No of constructors declared for each class) [9]
NOOC No of operations overridden (Total No of operations (methods) overridden by this class) [9]
NPM No of Public methods (All the methods in a class that are declared as public) [9]
JLOC JavaDoc lines of Code (No of lines of code in each class which contain javadoc comments) [9]
NOAC No of Operations added (Total No of operations (methods) added by this class) [9]
PDcy No of package dependencies (No of packages on which each class directly or indirectly depends) [9]
NAA No of attributes added (Total No of attributes (fields) added by this class) [9]
TCC Tight Class Cohesion (The cohesion of a class as a ratio) [10]
DCD Degree of Cohesion Direct (Ratio of method pairs of a class that are directly related) [10]
DCI Degree of Cohesion Indirect (Ratio of method pairs of a class that are directly or transitively related) [10]
This table contains the OO metrics mentioned in selected research papers
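To make the size-metric definitions in TABLE I concrete, the following sketch classifies the lines of a Java-like source string into LOC, CLOC, and PLOC as defined there. This is our own simplified line classifier written for illustration (comment markers inside string literals are not handled); it is not the exact counting algorithm of Source Monitor or CCCC.

```python
def size_metrics(source: str) -> dict:
    """Simplified LOC/CLOC/PLOC counter for Java-like code.

    LOC  : non-empty, non-comment lines (as in TABLE I)
    CLOC : non-empty lines containing a comment
    PLOC : percentage of comment-bearing lines among non-empty lines
    Block comments (/* ... */) are tracked with a simple flag.
    """
    loc = cloc = total = 0
    in_block = False
    for raw in source.splitlines():
        line = raw.strip()
        if not line:
            continue                      # blank lines count toward nothing
        total += 1
        started_in_block = in_block       # line begins inside /* ... */ ?
        if "/*" in line:
            in_block = True
        if "*/" in line:
            in_block = False
        has_comment = started_in_block or "//" in line or "/*" in line
        comment_only = (started_in_block or line.startswith("//")
                        or line.startswith("/*"))
        if has_comment:
            cloc += 1
        if not comment_only:
            loc += 1
    ploc = round(100 * cloc / total, 1) if total else 0.0
    return {"LOC": loc, "CLOC": cloc, "PLOC": ploc}

sample = """\
// header comment
public class Foo {
    int x = 1;  // inline
    /* block
       comment */
    void bar() { x++; }
}
"""
metrics = size_metrics(sample)  # → {'LOC': 4, 'CLOC': 4, 'PLOC': 57.1}
```

On the sample class, four lines carry executable code and four carry comments, so roughly 57% of the non-empty lines are comment-bearing.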
TABLE II: Categories of OO Metrics
Category Metrics
Size NOF, LOC/SIZE1, STAT, CLOC/LCOMM, PLOC, NOF, NCLASS, NINT, NSTR, NOM, NOS, NOC, PBS, MBD, ABD, NIM,
NTM, ALOC, NPM, SIZE2, CHANGE, WMPC, HK, CONS, NOOC, NPM, JLOC, NOAC, NAA
Complexity CC McCabe, WMC, AMC, NOC, RFC, SDMC, CC Avg, CC Max, Modified, Strict, Essential, CountPath, Nesting, CBC
Inheritance CBC, NOC, DIT
Coupling DIT, NOC, MPC, RFC, DAC, CBO, FanIn, FanOut, CBC, Dcy, PDcy
Cohesion LCOM/LCC, LCOMN, TCC, DCD, DCI
Categories of OO metrics identified in research papers
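Many of the complexity metrics above derive from McCabe's cyclomatic complexity, defined in TABLE I as the number of independent paths through a program unit. For a single method this equals one plus the number of decision points, which can be approximated by counting decision keywords and operators. The sketch below is our own textual approximation, not the algorithm used by Source Monitor or CCCC; occurrences inside strings or comments would be (incorrectly) counted too.

```python
import re

# Decision points that each add an independent path in Java-like code.
_DECISIONS = re.compile(r"\b(if|for|while|case|catch)\b|\?|&&|\|\|")

def cyclomatic_complexity(func_body: str) -> int:
    """Approximate McCabe CC of one method as 1 + #decision points."""
    return 1 + len(_DECISIONS.findall(func_body))

body = """
if (x > 0 && y > 0) {
    for (int i = 0; i < x; i++) {
        if (i % 2 == 0) total += i;
    }
}
"""
# two ifs, one &&, one for → 4 decision points → CC = 5
cc = cyclomatic_complexity(body)
```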

TABLE III: Metrics assessing criteria


Criteria Description
Validity This refers to the degree to which a metric (measure) accurately represents the concept it is intended to measure. This criterion
can be used to assess whether a particular metric is theoretically sound and based on established software engineering principles.
Similarly, it can be used to ensure that the identified metrics measure what they claim to measure. Additionally, this ensures that
there is a clear understanding of the relationship between a metric and the software attributes it represents.
Reliability This refers to the degree to which a measure or metric produces consistent results over time and across different contexts. This
criterion can be used to assess whether a particular metric is consistent and repeatable across different software projects and different
environments. Similarly, it is also useful to ensure that a particular metric produces consistent results when applied to the same
software context by different evaluators.
Sensitivity This refers to the ability of a metric (measure) to detect small changes in the characteristic being measured. This will ensure that a
particular metric is sensitive to changes in its own type (size, complexity, cohesion, etc.). Similarly, it assesses whether the identified
metrics are able to differentiate between software artifacts at different levels of each type.
Correlation This refers to the degree to which two measures are related to each other. This criterion evaluates whether the selected metrics are
well correlated with software maintainability or other relevant software attributes. In addition, it checks whether there is empirical
evidence that the metric is predictive of software quality or maintainability.
Consistency This refers to the degree to which a metric (measure) produces similar results across different raters or evaluators. This criterion can
be used to assess a size metric by its consistency with other size metrics used within the same software
project or organization. Similarly, it checks whether a size metric produces similar results to other established size metrics
when applied to the same software artifact.
Criteria to assess maturity of identified metrics

TABLE IV: Justifications on criteria


Criteria Justification
Validity It is important to ensure that the software metrics being used are valid and truly reflect the characteristics they are intended to
measure. For example, if a metric is used to measure the size of a software module, it is essential to ensure that the metric accurately
captures the size of the module. Hence, this criterion is a vital aspect to consider when assessing a metric.
Reliability Ensuring that the software metrics being used are reliable is important, by proving that the same results will be produced consistently
when measuring the same characteristic of the software in different contexts. For example, if a metric is used to evaluate the complexity
of a software module, it is required to ensure that consistent results are produced across different software modules and
programming languages. Hence, this criterion is proposed.
Sensitivity It is important to ensure that the software metrics being used are sensitive enough to detect small changes in the software being
measured. For example, if a measure is used to quantify the coupling between two software modules, it is necessary to ensure that the
metric is sensitive enough to detect small changes in the coupling between the two modules. So, this will be vital in assessing the
maturity of metrics.
Correlation It is better to evaluate the correlation between the different software metrics being used to ensure that they are measuring complementary
characteristics of the software. For example, if both size and complexity metrics are being used to evaluate software modules, it is
essential to evaluate the correlation between these metrics to ensure that they are measuring different aspects of the software. Hence,
this criterion is worth considering.
Consistency Ensuring that the software metrics being used are consistent is a critical aspect to be considered, meaning that different evaluators
or raters would produce similar results when measuring the same characteristic of the software. For example, if a metric is used to
measure the cohesion of a software module, ensuring that different evaluators would produce similar results when measuring the
cohesion of the same module is an essential point to be considered.
Justifications on proposed criteria to assess maturity of identified metrics
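The Correlation justification above can be checked mechanically: collect two metric series per class and compute their correlation coefficient. The sketch below uses a plain Pearson coefficient over hypothetical per-class LOC and WMC values; the numbers are illustrative only and are not drawn from the reviewed papers.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two metric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-class measurements (illustrative only):
loc = [120, 300, 80, 450, 200]   # size metric (lines of code)
wmc = [10, 22, 7, 35, 15]        # complexity metric (weighted methods)
r = pearson(loc, wmc)
```

An |r| close to 1 suggests the two metrics largely overlap, while a low |r| suggests they capture complementary attributes of the software, which is what the criterion asks us to verify.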

• Cohesion metrics - Measure the degree to which the
elements within a module or class are related to each
other
• Coupling metrics - Measure the degree of interdependence
(the physical connections) between OO design elements
such as modules or classes
• Inheritance metrics - Measure the use of class hierarchies,
e.g., the depth of the inheritance tree and the number of
child classes

TABLE II shows the classification of the OO metrics identified
in the given research papers.

III. PROPOSED CRITERIA

A. Description

Depending on the identified metrics and the purpose of
their use, the assessment criteria for OO metrics will differ.
Validity, reliability, sensitivity, understandability, actionability,
scalability, etc. are some of the general assessment criteria that
can be applied to each OO metric category. These assessment
criteria can be used to evaluate each type of OO metric and
to help in determining the usefulness and applicability of that
particular metric for a given software system. By analyzing the
TABLE V: Reflections on OO metrics
Metrics Criteria Reflections
Size Validity An ideal size metric must be based on established software engineering principles and have a solid theoretical
foundation. Its intended purpose must align with its actual measurement, and there should be a clear correlation
between the metric and the software attributes it represents. The precision of size metrics can be verified by
ensuring their ability to accurately determine the size of the software component they’re measuring, such as
the number of function points or lines of code.
Reliability The consistency of size metrics can be assessed by verifying their ability to generate reliable outcomes across
diverse software systems and programming languages.
Sensitivity One way to assess sensitivity is by verifying whether the metric has the ability to identify minor alterations in
the dimensions of the software element.
Correlation To determine correlation, it is possible to examine whether the dimensions of the software component are
linked to other pertinent software metrics, such as complexity.
Consistency To assess consistency, it is necessary to verify whether multiple evaluators yield comparable outcomes while
measuring the dimensions of the identical component.
Complexity Validity For a complexity metric to be considered mature, it must adhere to established software engineering principles
and be grounded in sound theory. The metric should precisely gauge the attribute it purports to measure, and
the correlation between the metric and the software attributes it represents must be apparent. To confirm the
accuracy of complexity metrics, it is necessary to ensure that they effectively capture the complexity of the
software component they measure, such as the number of decision points or the cyclomatic complexity.
Reliability To determine the reliability of complexity metrics, it is necessary to examine if they generate consistent outcomes
across various programming languages and software systems.
Sensitivity To assess sensitivity, it is possible to verify if the metric can identify minor alterations in the complexity of
the software component.
Correlation To determine correlation, it is necessary to examine whether the complexity of the software component is
linked to other pertinent software metrics, such as coupling or inheritance.
Consistency To assess consistency, it is possible to examine whether various assessors yield comparable outcomes while
gauging the intricacy of a particular component.
Cohesion Validity For a cohesion metric to be considered mature, it ought to be grounded on well-established software engineering
principles and possess a solid theoretical foundation. Its intended measurements should align with its actual
assessments, and there must be a definite comprehension of the link between the metric and the software
characteristics it represents. To validate the effectiveness of cohesion metrics, it is essential to confirm that
they precisely capture the level of cohesion of the software component being evaluated, which may include
elements such as the number of methods in a class or the LCOM (Lack of Cohesion of Methods) metric.
Reliability To determine the reliability of cohesion metrics, it is necessary to examine whether they generate consistent
outcomes when applied to various programming languages and software systems.
Sensitivity To assess sensitivity, we can examine whether the metric is capable of identifying minor modifications in the
cohesion of the software component.
Correlation To assess correlation, it is possible to examine whether the cohesion of the component aligns with other pertinent
software metrics, such as coupling or inheritance.
Consistency To evaluate consistency, it is possible to examine whether diverse assessors generate comparable outcomes
when measuring the cohesion of a specific component.
Inheritance Validity For an inheritance metric to be considered mature, it should be grounded on well-established software
engineering principles and have a solid theoretical foundation. It should accurately measure what it claims
to measure, and there should be a clear understanding of the correlation between the metric and the software
attributes it represents. To validate the effectiveness of inheritance metrics, it is necessary to confirm that they
precisely capture the level of inheritance between software components, such as the depth of the inheritance
tree or the number of child classes.
Reliability To determine the reliability of inheritance metrics, it is necessary to test whether they generate consistent results
when applied to various programming languages and software systems.
Sensitivity It is possible to examine whether the metric is capable of identifying minor modifications in the level of
inheritance between software components.
Correlation One way to assess correlation is by examining whether the inheritance of a component is associated with other
pertinent software metrics, such as coupling or cohesion.
Consistency Consistency can be evaluated by checking if different evaluators produce similar results when measuring the
inheritance of the same components.
Coupling Validity For a coupling metric to be considered mature, it should be grounded on well-established software engineering
principles and possess a solid theoretical foundation. Its intended measurements should align with its actual
assessments, and there must be a clear comprehension of the correlation between the metric and the software
characteristics it represents. To validate the effectiveness of coupling metrics, it is essential to confirm that
they precisely capture the level of coupling between software components, such as the number of direct or
indirect dependencies between two modules.
Reliability To determine the reliability of coupling metrics, it is necessary to test whether they generate consistent results
when applied to various programming languages and software systems.
Sensitivity It is possible to examine whether the metric is capable of identifying minor modifications in the level of
coupling between software components.
Correlation It is possible to examine whether the coupling of the component aligns with other pertinent software metrics,
such as cohesion or inheritance.
Consistency We can examine whether diverse assessors generate comparable outcomes when measuring the coupling of
the same components.
Reflect on identified OO metrics
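TABLE V repeatedly refers to the LCOM metric, which per TABLE I counts the pairs of methods in a class that share no attributes. A minimal sketch of the Chidamber–Kemerer formulation (LCOM = max(P − Q, 0), where P is the number of method pairs sharing no attribute and Q the number sharing at least one) follows; representing a class as a mapping from method names to used attributes is our own simplification for illustration.

```python
from itertools import combinations

def lcom(methods: dict) -> int:
    """Chidamber-Kemerer LCOM.

    `methods` maps each method name to the set of instance
    attributes it uses. P = method pairs sharing no attribute,
    Q = pairs sharing at least one; LCOM = max(P - Q, 0).
    """
    p = q = 0
    for (_, a), (_, b) in combinations(methods.items(), 2):
        if a & b:          # the pair shares at least one attribute
            q += 1
        else:              # the pair is fully disjoint
            p += 1
    return max(p - q, 0)

# Hypothetical class: two methods touch `balance`, one is unrelated.
account = {
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "log":      {"history"},
}
# pairs: (deposit, withdraw) share -> Q = 1; the other two pairs
# share nothing -> P = 2; LCOM = max(2 - 1, 0) = 1
score = lcom(account)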
code through the identified metrics, it is possible to identify
areas of the code that need improvement, such as excessive
complexity, low cohesion, or high coupling. TABLE III
contains the criteria which are proposed to assess the maturity
of the identified metrics, covering size, complexity, cohesion,
coupling, and inheritance.

B. Justifications

Metrics such as size, complexity, cohesion, inheritance,
and coupling are commonly used in software engineering to
assess the quality of software design and implementation.
However, it is important to evaluate these metrics using
criteria such as validity, reliability, sensitivity, consistency,
correlation, usability, and generalizability. TABLE IV contains
justifications for the criteria proposed in TABLE III, using
which the maturity of the identified metrics can be assessed.

IV. REFLECTIONS ON OO METRICS

Object-oriented metrics are utilized to assess the quality of
software systems developed using an object-oriented approach.
Over the years, various researchers and practitioners have
proposed different types of metrics to evaluate different
aspects of OO design such as complexity, cohesion, coupling,
inheritance, and polymorphism. The maturity of OO metrics
has been well-established through both theoretical and
empirical research. While many metrics have been validated
through empirical studies, there is an ongoing debate about
which metrics are most appropriate for different types of
software systems, and some metrics have been questioned for
their validity and reliability.

Additionally, it is important to recognize that OO metrics
should be used alongside other software engineering practices,
such as code reviews, testing, and refactoring, to ensure the
high quality and maintainability of software systems.

In summary, several criteria such as validity, reliability,
sensitivity, consistency, correlation, and usability are used
to evaluate the quality and effectiveness of a software
system. Similarly, metrics such as size, complexity, cohesion,
inheritance, and coupling are used to quantify different
aspects of software code. Hence, a set of criteria has been
proposed to evaluate the maturity of each identified metric.
TABLE V provides an explanation of how these quality
criteria can be applied to extracted code metrics.

V. THE GOALS

A. Description

Goal-based measurements focus on the improvement which
needs to be carried out in a particular context. One always
sets goals, determines what should be measured by identifying
and classifying the entities to be examined, and determines the
process to be measured by identifying and assigning relevant
metrics. The goals or purposes of using OO metrics in the
research papers are presented in TABLE VI.

TABLE VI: Description of goals
RP Goal Description
[1] Applying machine learning techniques to unveil predictors of yearly cumulative code churn of software projects on the basis of metrics extracted from revision control systems
[2] Constructing an OO software maintainability prediction model using a technique known as Bayesian network
[3] Investigating the relationship between OO metrics and the detection of the faults in the OO software
[4] Exploring the predictive ability of several complexity-related metrics for OO software that have not been heavily validated
[5] Empirically validate the framework and prediction accuracy, by presenting a framework to automatically predict vulnerabilities based on CCC metrics
[6] Empirically investigate the correlation between metrics and the number of fine-grained source code changes in interfaces of ten Java open-source systems
[7] Empirical validation of OO metrics on open source software for fault prediction
[8] Empirically investigate the relationship of existing class-level object-oriented metrics with a quality parameter (maintainability)
[9] Utilizing data mining of enhanced metrics apart from traditionally used software metrics for predicting maintainability of software systems
[10] Improving the applicability of the class cohesion metrics by defining their values for such special classes and theoretically and empirically validating the improved metrics
Goals identified in papers

B. Characterization

As mentioned earlier, metrics can be used to measure the
quality of software design in OO programming, and the
classification of goals in OO metrics is useful to identify the
different aspects of software quality that need to be measured.
Complexity, failures, quality, assessment, and change are some
of the common goal categories in OO metrics. The goals
identified in the given papers are summarized in TABLE VII.

TABLE VII: Classification of goals
Title Research Paper(s) Category
Software fault prediction [7] Failures
Code churn estimation [1] Change
Software change proneness [6] Change
Software maintainability [2], [8], [9] Maintainability
Software fault-proneness [3], [4] Reliability
Software vulnerabilities [5] Security
Software cohesion [10] Cohesion
Categories of goals identified in research papers

VI. THE CODE ATTRIBUTES

A. Description

In OOP, code attributes refer to various characteristics or
properties of code that can be measured using OO metrics.
These attributes are useful for providing insight into the quality
of software design. They also help to identify areas that may
need improvement. Complexity, cohesion, size, reuse, etc. are
some of the common code attributes that can be measured
using OO metrics. TABLE VIII depicts the attributes identified
in the selected papers.
TABLE VIII: Identified attributes
Attribute Description Paper(s)
Size The size (population, volume, length, and functionality) of the code [1], [2], [3], [4], [5], [6], [7], [8], [9]
Complexity The intricacy of the software code [1], [3], [4], [5], [6], [7], [8], [9]
Cohesion The degree to which the components of a software module are related to each other [5], [6], [8], [9], [10]
Coupling The degree to which the components of a software module are interdependent [3], [5], [6], [7], [8], [9]
Inheritance The reuse of code in OOP [3], [5], [6], [7], [8]
Maintainability How easily a software can be modified/updated [1], [2], [8], [9]
Reliability How well a software performs under different conditions [3], [4], [7], [10]
Effectiveness The ability of a software system to fulfill its intended purpose or objectives [5]
Changeability The ability of a software system to accommodate changes/modifications easily and efficiently [6]
Description of attributes

B. Characterization

Code attributes can be categorized into two main parts. The
categorization of the attributes identified in the given papers
is shown in TABLE IX.
• Internal attributes - Can be measured entirely in terms
of the process, product or resource itself. They can be
measured by examining the product, process or resource
on its own, separate from its behavior.
Ex: Product size
• External attributes - Can be measured only with respect
to how the process, product or resource relates to its
environment. Here, the behavior of the process, product
or resource is important, rather than the entity itself.
Ex: Product quality

TABLE IX: Categories of attributes
Paper(s) Internal attributes External attributes
[1] Size, Complexity Maintainability
[2] Size Maintainability
[3] Coupling, Complexity, Size, Inheritance Reliability
[4] Size, Complexity Reliability
[5] Cohesion, Coupling, Complexity, Inheritance, Size Effectiveness
[6] Cohesion, Size, Complexity, Inheritance Changeability
[7] Size, Complexity, Coupling, Inheritance Reliability
[8] Complexity, Cohesion, Size, Coupling, Inheritance Maintainability
[9] Size, Coupling, Complexity Maintainability
[10] Cohesion Reliability
Categories of attributes identified in research papers

VII. STUDIED SYSTEMS

A. Description

TABLE X presents the systems identified in the analyzed
research papers.

TABLE X: Identified systems
System Description Paper(s)
UIMS User Interface Management System [2], [8]
QUES Quality Evaluation System [2], [8]
KCI NASA KCI dataset at the NASA Metrics Data Program [3], [7]
Rhino Rhino software [4]
Mozilla Mozilla Firefox releases [5], [7]
Eclipse Eclipse plug-in [6], [9]
Hibernate Hibernate projects [6]
Lucene Apache Lucene Core [9]
JHotdraw An open-source software framework [9]
JEdit A free and open-source text editor [9]
JTreeview An open-source software tool [9]
Openbravo A web-based ERP system [10]
JabRef Bibliography reference manager [10]
GanttProject Project management software [10]
Art of Illusion 3D modeling and rendering studio [10]
Description of identified systems

B. Characterization

The identified systems were categorised by the language in
which they are implemented, as shown in TABLE XI.

TABLE XI: Classification of systems
Systems Category
UIMS, QUES Classical-Ada
KCI NASA, Mozilla C++
Eclipse, Rhino, Lucene, JHotdraw, JEdit, JTreeview, Openbravo, JabRef, GanttProject, Art of Illusion Java
Categories of systems identified in papers

VIII. USING THE METRICS TOOLS

A. Discussion

DESCRIPTION OF THE SELECTED OSS – STRONGBOX

Strongbox is an open-source password manager that enables
users to store their passwords and other sensitive information
securely. It was developed using the Java programming
language and is built on top of the Spring Framework.
Strongbox creates a virtual encrypted container where users
can store files and directories, and it also supports secure
collaboration by allowing multiple users to access and modify
the same encrypted container. The program is designed to be
easy to use for non-technical users while maintaining high
levels of security. As of version 1.0.0 Snapshot, there are a
total of 755 Java files in the system, comprising 60,926 lines
of code (LOC) spread across multiple packages and classes.
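The file and LOC totals quoted above can be reproduced approximately with a short script. The sketch below is our own simplification that counts every non-blank line in each .java file; the tools apply different counting rules (e.g., excluding comments), so exact figures will differ.

```python
from pathlib import Path

def java_stats(root: str) -> tuple:
    """Count .java files and their non-blank lines under `root`.

    A rough re-creation of file/LOC totals such as those reported
    for Strongbox; counting rules vary by tool, so numbers are
    only indicative.
    """
    files = loc = 0
    for path in Path(root).rglob("*.java"):   # recursive scan
        files += 1
        text = path.read_text(encoding="utf-8", errors="ignore")
        loc += sum(1 for line in text.splitlines() if line.strip())
    return files, loc
```

Running this over a checkout of the analyzed system gives a (files, LOC) pair comparable to, but not identical with, the tool-reported totals.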
B. Reflections on Tools

Tool 1: Source Monitor

• Source Monitor is a software tool for measuring various software metrics such as lines of code, cyclomatic complexity, and code churn. Its installation, configuration, and usage can be considered relatively easy, with a straightforward installation process and a user-friendly interface.
• The tool covers various object-oriented metrics such as Lines of Code (LOC), Number of Comments, Percentage of Branches, Number of Calls, Percentage of Comments, Number of Classes, Methods/Class, Average Statements/Method, Max Complexity, Max Depth, Average Depth, and Average Complexity.
• SourceMonitor can generate several types of reports related to software source code, including:
  – Reports presented in a tabular format with the metrics displayed as columns. The rows of the table correspond to different source code files or functions, depending on the level of detail selected by the user.
  – Reports that include charts or graphs to visualize the data, such as a Kiviat graph showing the distribution of function points by type.
• SourceMonitor provides metrics at both the class and package level.

Tool 2: CCCC (C and C++ Code Counter) is an open-source tool that measures code complexity in C and C++ programs.

1) The CCCC tool’s installation and configuration process was easy, which had a positive impact on our analysis of the Strongbox software. The tool’s user-friendly interface allowed us to generate various reports, including tabulated results and diagrams, giving a comprehensive view of the software’s metrics.
2) The CCCC tool generated appropriate metrics such as cyclomatic complexity, lines of code, LOC/COM, MVG/COM, and number of functions. These metrics provided a comprehensive view of the software’s complexity and maintainability. However, interpreting some of the more complex metrics was challenging.
3) The CCCC tool generates various reports, including the Summary Report and the File Report:
  • The Summary Report gives an overview of the software’s complexity, including total functions, cyclomatic complexity, and maintainability index.
  • The File Report provides detailed information on each file’s complexity, highlighting the files that require attention. From it we obtained a table of four of the six metrics proposed by Chidamber and Kemerer in ‘A Metrics Suite for Object Oriented Design’.
4) The CCCC tool is capable of measuring metrics at both the class and package levels. It analyzes attributes such as complexity and size at the class level and aggregates the metrics of all classes in a package at the package level.
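The MVG figure mentioned above is McCabe’s cyclomatic complexity, V(G) = decision points + 1. A rough approximation of this rule for a single method body can be sketched as a keyword count over Java-like source text; this is a simplification of what CCCC’s parser actually does, and the keyword set chosen here is our assumption.

```python
import re

# Decision constructs that each add one independent path through a method
# (a simplification of McCabe's V(G) = decision points + 1).
DECISION_RE = re.compile(r"\b(if|for|while|case|catch)\b|&&|\|\|")

def approx_cyclomatic_complexity(method_source: str) -> int:
    """Estimate V(G) for one method body from its raw source text."""
    return 1 + len(DECISION_RE.findall(method_source))
```

A straight-line method scores 1; each branch keyword or short-circuit operator adds one. Real tools work on the parse tree, so this estimate can diverge on strings or comments containing keywords.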

Fig. 1: CCCC Software Metric Report
Fig. 2: CCCC Software Metric Report
Fig. 3: CCCC Software Metric Report
Fig. 4: CCCC Software Metric Report
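Among the Chidamber–Kemerer metrics that appear in the CCCC File Report, depth of inheritance tree (DIT) is simple to sketch: given a map from each class to its superclass, DIT is the number of links from the class up to the root. The class names below are illustrative, not taken from Strongbox, and the map is assumed to be acyclic with root classes absent from it.

```python
def depth_of_inheritance(cls: str, parent: dict[str, str]) -> int:
    """DIT: number of superclass links from a class up to the root of its tree."""
    depth = 0
    while cls in parent:  # root classes have no entry in the map
        cls = parent[cls]
        depth += 1
    return depth
```

A deeper tree means more inherited behavior to reason about, which is why DIT is commonly read as a maintainability risk indicator.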


REFERENCES

[1]. Karus, S. and Dumas, M., 2012. Code churn estimation using organisational and code metrics: an experimental comparison. Information and Software Technology, 54(2), pp.203–211.
[2]. Van Koten, C. and Gray, A.R., 2006. An application of Bayesian network for predicting object-oriented software maintainability. Information and Software Technology, 48(1), pp.59–67.
[3]. Goel, B. and Singh, Y., 2008. Empirical investigation of metrics for fault prediction on object-oriented software. Computer and Information Science, pp.255–265.
[4]. Olague, H.M., Etzkorn, L.H., Messimer, S.L. and Delugach, H.S., 2008. An empirical validation of object-oriented class complexity metrics and their ability to predict error-prone classes in highly iterative, or agile, software: a case study. Journal of Software Maintenance and Evolution: Research and Practice, 20(3), pp.171–197.
[5]. Chowdhury, I. and Zulkernine, M., 2011. Using complexity, coupling, and cohesion metrics as early indicators of vulnerabilities. Journal of Systems Architecture, 57(3), pp.294–313.
[6]. Romano, D. and Pinzger, M., 2011. Using source code metrics to predict change-prone Java interfaces. In: 2011 27th IEEE International Conference on Software Maintenance (ICSM). IEEE, pp.303–312. doi:10.1109/ICSM.2011.6080797.
[7]. Gyimóthy, T., Ferenc, R. and Siket, I., 2005. Empirical validation of object-oriented metrics on open source software for fault prediction. IEEE Transactions on Software Engineering, 31(10), pp.897–910.
[8]. Kumar, L., Naik, D.K. and Rath, S.K., 2015. Validating the effectiveness of object-oriented metrics for predicting maintainability. In: 3rd International Conference on Recent Trends in Computing 2015 (ICRTC-2015). Elsevier, pp.798–806. doi:10.1016/j.procs.2015.07.479.
[9]. Kaur, A., Kaur, K. and Pathak, K., 2014. Software maintainability prediction by data mining of software code metrics. In: 2014 International Conference on Data Mining and Intelligent Computing (ICDMIC). IEEE, pp.1–6. doi:10.1109/ICDMIC.2014.6954262.
[10]. Al Dallal, J., 2011. Improving the applicability of object-oriented class cohesion metrics. Information and Software Technology, 53, pp.914–928. doi:10.1016/j.infsof.2011.03.004.
[11]. Nuñez-Varela, A.S., Pérez-Gonzalez, H.G., Martínez-Pérez, F.E. and Soubervielle-Montalvo, C., 2017. Source code metrics: A systematic mapping study. Journal of Systems and Software, 128, pp.164–197.
[12]. Jabangwe, R., Börstler, J., Šmite, D. and Wohlin, C., 2015. Empirical evidence on the link between object-oriented measures and external quality attributes: a systematic literature review. Empirical Software Engineering, 20(3), pp.640–693. doi:10.1007/s10664-013-9291-7.
[13]. Ponnala, R. and Reddy, C.R.K., 2019. Object oriented dynamic metrics in software development: A literature review. International Journal of Applied Engineering Research, 14(22), pp.4161–4172.
[14]. Harrison, R., Counsell, S. and Nithi, R., 1997. An overview of object-oriented design metrics. In: Proceedings Eighth IEEE International Workshop on Software Technology and Engineering Practice incorporating Computer Aided Software Engineering. London, UK, pp.230–235. doi:10.1109/STEP.1997.615494.
