
Improving Brownfield .NET Apps with Code Analysis and Metrics

Course Overview

Course Overview

Hi everyone. My name is Stephen Haunts, and welcome to my course, Improving Brownfield .NET

Applications with Code Analysis and Metrics. For many people, maintaining Brownfield or legacy

software is a large part of their job. When you maintain this kind of code, having the ability to analyze

code quality and set up automated quality checks can be invaluable. This isn't just valuable for legacy

code though. It's just as relevant when developing new code in your solutions. In this course, we'll look at the

tools built into Visual Studio to help you improve your Brownfield application quality with code metrics

and static code analysis. This course will use practical demos to illustrate how using these metrics and

analysis tools can help you improve the structure, readability, and quality of your code. In this short

course, we will cover the following topics. First, we'll look at what quality actually means to software and

its users. We'll also look at the different types of testing and also the financial impact of poor quality

software to our users. Next, we'll start looking at some of the tools built into Visual Studio. We'll first look

at the built-in code metrics that you can generate from your code, what they mean, and how to apply

them. Then we'll look at the in-built static code analysis tools built into Visual Studio, how to set them up,

and interpret the rules that are flagged in your builds. By the end of the course, you'll have all the theory and

practical skills you need to use code metrics and static code analysis in your software solutions. You will

know what each of the code metrics means and how your code affects them. You'll also know how to

enable static code analysis rules, set rules for inclusion, and also suppress rules in your builds, which is

needed from time to time. This course is ideal for software developers who work on large and small

enterprise systems and who want to apply what they learn about metrics and static code analysis. The examples in this course are based around the .NET Framework and C#, built using Visual Studio 2017

Community Edition, but everything you learn is also applicable to earlier versions of Visual Studio, such

as 2015 and earlier. I hope that you will join me on this journey to learn about Improving Brownfield .NET Applications with Code Analysis and Metrics, here at Pluralsight.

What Is Quality?

Course Overview

Hello, and welcome to my course, Improving Brownfield .NET Apps with Code Analysis and Metrics. My name is

Stephen Haunts. This short course is split into four modules. By the end of it, you'll be familiar with the

code metrics and static code analysis tools that are built into Visual Studio. In this first module, we're

going to explore what software quality is by looking at what quality actually means when applied to

software. We'll talk about different testing techniques used by developers to try and increase software

quality. We'll also look at the cost of buggy software overall. In the second module, we'll introduce the

software quality metrics that are available in Visual Studio. I'll discuss what they are and what I believe

are sensible values to aim for. I will also show you some practical demonstrations on how to view these

metrics and how changing your code will affect these metrics. I'll finish this module by discussing how

you can incorporate metrics into your code review process. In the third module of this course, we'll talk

about static code analysis. We'll cover what it is and what's available to you in Visual Studio. I'll also do

another practical demonstration on how to set this up and create a custom rule set. In the fourth and final

module of this course, I'll wrap up the key findings from what we have learned so far. This course is

aimed at developers who are already experienced with C# and the .NET Framework using

Visual Studio. I will not go into depth about any programming constructs using the demonstrations

unless it directly relates to the use of code metrics or static code analysis. If you need to increase your

skills in C# and .NET, then Pluralsight has a huge range of courses to help you out. Everything I

demonstrate in this course has been done with Visual Studio 2017 Community Edition, which is the free
edition of Visual Studio, so you don't need any of the more expensive editions to use the tool set. So

everything in this course is available to everyone. Let's get started with the course and take a look at

software quality.

What Is Software Quality?

When we look at quality in the context of software development, we are thinking of three main areas.

These are: quality as conformance to requirements, which means does the software do what is required of it; quality in terms of the non-functional qualities of the system, which is about whether the software conforms to certain non-functional requirements; and finally, we have quality of

implementation, and this is about the quality of the actual code that you write. Has what you've written been written to best practice? For the first area, conformance to requirements, we have many

different types of testing like unit testing, integration testing, and automation testing, which we'll look at

in more detail in a moment. These types of testing are here to answer one question. Does the software

do what we expect it to do? Is it fit for purpose against a requirement specified by the end user or

customer? For the second area about non-functional requirements, this is more about those areas that

sit outside the main system requirements. These include things like performance, stability, scalability,

security, interoperability, and many more areas. The software may conform to the main requirements

and features of the system, but performance may be very slow. There might be a non-functional

requirement around retrieving a customer record that should return a result within 2 seconds. So if this

retrieval operation takes 10 seconds, then it has failed that non-functional requirement. The third and

final area is about the quality of implementation, so it doesn't focus on adherence to

requirements which we've already covered. The quality of implementation is about the quality of your

code. Are you writing good, maintainable code? Does it adhere to known quality standards? Is there

consistency in our approach across the whole code base? It's these types of questions that the

remainder of this course will help you with using the tools that come with Visual Studio. Let's now take a
look at some of the standard ways of testing software to help with our other two types of quality control,

adherence to requirements and non-functional requirements.

Typical Quality Techniques

Automated testing helps us build high-quality software as it reduces the feedback loop from detecting an

error to fixing it. There are many different types of automated testing that you can perform. Let's

summarize these now. First of all, we have unit testing. Unit testing is about isolating small areas of

software and testing them in isolation from the rest of the system to make sure they perform as you

expect them to. Unit testing is excellent at identifying defects early on in your development cycle.

Detecting problems early is important, as these issues are cheaper and easier to fix the earlier they are

detected. The most common approach to unit testing requires drivers and stubs to be written. The driver

simulates a calling unit and the stub simulates a unit being called. The investment of developer time in

this activity sometimes results in demoting unit testing to a lower level of priority, and that is almost

always a mistake. Even though the drivers and stubs cost time and money, unit testing provides some

undeniable advantages to the overall quality of your software solutions. It allows for automation of the testing process, reduces the difficulty of discovering errors contained in more complex pieces of the

application, and test coverage is often enhanced because attention is given to each unit. Next, we have

integration testing. Integration testing is a natural extension of unit testing. In its simplest form, two

areas of code that are being unit tested are then tested together. Integration testing is about testing the

interfaces between different components. In a realistic scenario, many units are combined into

components, which are in turn aggregated into even larger parts of the system. Integration testing

combines different pieces of code together so that eventually the entire code base has been tested and

you achieve a higher overall test coverage. This integration testing shows issues with code components

when they are combined through their public interfaces. Next, we have automation testing. Automated

UI testing is where you test your application from the outside in. This means you have a series of tests

that use the user interface of your application like a real user would. Testing frameworks for UI
automation testing normally allow you to work in two different modes. The first is where you record tests

in the test recorder while actually using the software and recording your mouse clicks and keyboard

presses. You can then play back these tests repeatedly. This type of testing is a great way of getting

into automated UI testing, as it doesn't require much intervention from the software development team.

Your existing testing team can start recording the tests and using them in the future. One complication

here is that if a tester records a test for a particular set of data, and that data is persisted in a database,

then the next time you run that test, the test might be invalid. In this scenario, it is normal to do an

environment reset to restore your data back to a known point. This can be done by either a database

restore or starting with a blank database and then seeding enough data to make the application

functional. Another way of tackling this problem is to have your test create new user accounts in the

system each time they are run, but this is harder to do if you're recording playback style tests. The next

level of automating UI testing is to write test scripts that use an API that let you simulate button clicks

from code. This is preferable to the record and play back method, as the code for these tests can then

be checked in along with your regular source code, and you can get these automation tests to run inside

of your continuous delivery build process when deploying into an environment. Using scripted test

automation makes it easier to generate unique data per test, which means you don't need to resort to

restoring the database each time. Standard tools for automated UI testing are tools like Selenium or

WatiN for web applications, and tools like Coded UI for testing thick client applications written in WPF or

WinForms. Of course, that is a very Microsoft .NET approach, but all development environments

that let you produce UI applications will have their own testing frameworks. While they may be different

in execution, fundamentally they all follow similar principles. Next, we have performance testing. Having

an application that performs well is imperative to delivering a usable system. Performance testing is the

testing process that allows you to determine your overall speed or performance of the system under

test. Application performance testing is a type of testing that is known as non-functional requirement

testing. Non-functionals are requirements that are not directly tied to business functionality, but more of a set

of expectations about how the system should perform and behave. You can also detect and diagnose

other bottlenecks in your software, such as communications bottlenecks using performance testing. It is
better if you can resolve performance problems early, as most often a system will work much better once a problem is fixed at a single point or in a single component. There are three main areas of performance

testing, and these are load testing, stress testing, and soak testing. So, the first one is load testing. This

is performed to see how a system acts when it is put under a lot of load. You usually subject the system

to more load than may be expected in production. When load testing, you'll often test for an expected

number of concurrent users, each performing standard operations in your system. For example, in an

online store, this might be a user searching for an item, putting it in their shopping basket, and then

checking out to purchase that item. This testing will give you the response times of all the important

business critical transactions being carried out. If you want to test the upper acceptable limits of your

system, then you perform stress testing on that system. Stress testing helps you determine the overall

robustness of the application as far as extreme load goes. This will help the production system

administration teams understand how the system will perform under excessive pressure. You can also

perform tests to measure the endurance of your system. Endurance testing is also called soak

testing, and this type of testing is used to check if the system can withstand a continuous load of traffic.

During a soak test, you'll be measuring the utilization of memory to detect if you are suffering any

memory leaks, or if memory is not being cleared down correctly. You'll also be looking to see if the

overall performance of the system will be degrading over time. We have now talked about what quality is

in regards to software development, and discussed some common testing techniques that are

employed. Let's now take a look at some of the impacts of software that hasn't had a good level of

testing and scrutiny applied to it.
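Tying the unit testing discussion earlier in this clip to something concrete, here is a minimal sketch of a test acting as the driver, with a hand-written stub standing in for a dependency. The example, the type names, and the use of NUnit are purely illustrative; the course does not prescribe a particular framework or domain.

```csharp
using NUnit.Framework;

// The unit under test depends on an abstraction rather than a real service.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceConverter
{
    private readonly IExchangeRateProvider _rates;

    public PriceConverter(IExchangeRateProvider rates)
    {
        _rates = rates;
    }

    public decimal Convert(decimal amount, string from, string to)
    {
        return amount * _rates.GetRate(from, to);
    }
}

// The stub simulates the unit being called, returning a fixed, known value.
public class FixedRateStub : IExchangeRateProvider
{
    public decimal GetRate(string fromCurrency, string toCurrency) => 1.25m;
}

[TestFixture]
public class PriceConverterTests
{
    // The test itself plays the role of the driver, simulating a calling unit.
    [Test]
    public void Convert_UsesTheProvidedExchangeRate()
    {
        var converter = new PriceConverter(new FixedRateStub());

        decimal result = converter.Convert(100m, "GBP", "USD");

        Assert.AreEqual(125m, result);
    }
}
```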

The Cost of Buggy Software

We've now talked about what quality means with regards to software development and some of the

programming practices that are used to develop test suites for our software solutions. What about when

things go wrong? Failing software can be very expensive, both for the end user and the software

development company. The National Institute of Standards and Technology, otherwise known as NIST,
back in 2002 published a report called The Economic Impact of Inadequate Infrastructure for Software

Testing. You can see a link to the report on the screen, and I highly recommend downloading it and

having a read. The report discusses the point that bugs in software cost the economy of the United

States around $59.5 billion each year, with around half of that cost being borne by the software

end user, and the rest of the money by developers and vendors of the application. By improving testing

practices, these costs could be reduced by about a third, which is around $22.5 billion. But this would

not remove all of the software errors. Of the total $59.5 billion cost, users incur 64% of the cost and

developers 36%. In a case study from the financial services sector, 4 software developers and around

100 users of their software were interviewed. In this study, it was agreed by developers that they

needed a better testing system that could be used to track a bug from when it first occurred and how it

influenced the rest of the system. An ideal testing system could solve problems in real time while the

system is being developed, rather than requiring the software developers to wait for the final version of

the software to be delivered to testers. The report stated that the total cost to financial services companies from inadequate software testing was estimated at around $3.3 billion. The key message here is that

defects and quality issues in software can have a severe financial impact on the companies that use the

software, as well as businesses that build the software and its developers. It's in our hands as

developers to help prevent these defects. This course isn't about unit testing and other software testing

specifics though. Instead, this course is about writing code to best practices and using code metrics to

help guide the quality of the code you write. Metrics and static code analysis isn't a silver bullet, but it's

a tool you can use in combination with good unit testing practices to increase the quality of your code.

Types of Application

The title of this course is Improving Brownfield .NET Apps with Code Analysis and Metrics. Before we move on to the next module, where we'll look at some of the code metrics in Visual Studio, I first want to talk

about what Brownfield and Greenfield software development is. We use the Brownfield angle for this

course because that is probably the most common scenario for developers in a modern enterprise
software development environment. But this course is just as relevant to Greenfield software

development too. So let's take a quick look at what Brownfield application development is, followed by

Greenfield development. Brownfield application development is a term commonly used to describe the

development of already existing legacy systems. In other words, extending and modifying an already

existing system that's in production. Brownfield software development can be very tricky. The system

we're trying to change might not be the best quality to start with. I've worked for many companies where

we are doing Brownfield development and the systems we were extending felt like they were held

together with sticky tape and bubble gum. At one company I worked at, we had to maintain a point of sale system that was physically deployed into our retail stores, around 700 stores at the time, with

up to 5 PCs in each store. So, as you can imagine, as well as software development complexity, we

also had quite a big software deployment complexity problem as well. Deployment complexity aside, the

actual system we had to extend was very fragile and there were no unit tests anywhere. We were

changing the system to be compliant with some new financial regulations that were being launched in

the United Kingdom. There was a lot of spaghetti code that was very hard to read. We even had some methods that were over 1,000 lines long with a lot of deep nesting of if statements. Part of our approach

was to try and break these methods down into more manageable and understandable chunks, and it

was code metrics that we used to help us do this. So let's now take a quick look at what Greenfield

development is. Greenfield development happens when you start to build a brand-new project, as in you

start with a completely clean slate. With Greenfield development, you have no legacy code lying around

to get in your way. You are starting afresh. For many developers, myself included, this is preferable, as

it is easier to build your system quickly, and you don't have all the complexities in trying to support and

not break an existing system. The reality, though, is that in the modern commercial world, Greenfield

development isn't as common as Brownfield development. Sure, companies do build brand-new

systems, but the vast majority of the time you'll be maintaining an already existing system, which is

either a well-built implementation, or you'll be supporting a poorly implemented and very low-quality

system. Greenfield is more common if you are building a personal system as a pet project or you are

working for a startup. In fact, at the time of writing this course, I am working for a startup that is writing a
new online multi-tenant, cloud-based insurance claims handling system. At the moment, this is

Greenfield as we are building a new system and we are trying to incorporate best practices into our

code as we go along. At some point in the future, though, this will be a Brownfield development

exercise, as we'll need to extend a system that is already in production and being used extensively by

our end users. Now that we've talked about software quality and testing, let us move on to our next

module, which looks at using the code metrics tools that are built into Microsoft's Visual Studio.

Using Code Metrics in Visual Studio

Introduction

Hi, my name is Stephen Haunts. Welcome back to my course, Improving Brownfield .NET Apps with

Code Analysis and Metrics. In this module, we're going to take a closer look at code metrics that are

available to you within Visual Studio. We'll start out by discussing what the different types of metrics are.

These include the maintainability index, cyclomatic complexity, depth of inheritance, class coupling, and

finally, lines of code. We'll then finish up with a demo of using some of these code metrics so that you

can see them in action. First of all, let's take a look at what code metrics are and where to find them.

Code metrics in Visual Studio are a way to allow developers to help measure the quality of the code

they are writing. These metrics give a visual indicator to the developer to help them understand what

parts of their code could be refactored to aid readability. The metrics we're going to cover in this module

help to give you immediate feedback on your code to allow you to see areas of code that might be the

hardest to understand. Let's look at how to access these metrics now and look at where each of them

are.

Accessing the Metrics

You can find the code metrics by clicking on the Analyze menu and then selecting Calculate Code

Metrics. From here, you then click the For Solution option. These metrics may take a while to calculate
and display depending on the size of the solution that you're analyzing. If you're looking at a huge

legacy Brownfield application, then this might take several minutes. If so, just be patient and let it

complete. That might be a good time to go grab a cup of coffee. Once it has finished calculating the

metrics, you will see a screen like the following. This example is from a real project that I have put live,

hence why I have had to block out some of the object names. But I wanted to show you a real example

from a real project that is in production. This screen has a project file list on the left-hand side of the

screen, and the metrics are displayed in each column on the right side. From here, you can then drill

down into each project as shown on the screen. Now that we've looked at how to access these metrics,

let's go through each of the metrics displayed here one by one, as this view does contain a lot of

numbers so we need to break it down.

Maintainability Index

The maintainability index will look at a block of code and give it a numerical score between 0 and 100.

This score represents the relative ease of maintaining that block of code. The higher the number, the

better maintainability. As well as assigning a numerical score, that block is also given a traffic light style

color coding. Green is a rating between 20 and 100. Because this is marked as green, the code is

deemed to have a good maintainability index. A yellow rating is a score between 10 and 19. This color

coding indicates that the code is only moderately maintainable. If the color coding is red, then this

indicates a rating between 0 and 9. And this indicates there is a maintainability problem with your code.

This color coding gives a good at-a-glance indication of the maintainability of your code. In calculating

this metric, the cyclomatic complexity and lines of code are taken into account directly, along with another metric value that is not exposed directly by the tools in Visual Studio. This non-exposed value is

called the Halstead Complexity Measure, whereby only the Halstead volume is used for calculation of

the maintainability index. The Halstead Volume Metric was introduced by Maurice Halstead in 1977 as an empirical method to measure software development effort. His intention was to identify and

measure software properties, as well as show relationships between them. The calculation used by
Visual Studio can be seen on the screen, but you don't really need to understand how this works in

depth to make use of the metric. What is important is using the color-coded visual indicator as a sign

that there might be a problem lurking in your code. If you're interested to read up more about Halstead

Complexity Measures, then I highly recommend the links that are on the screen. The screenshot you

can see now represents a real project that I'm working on. Where possible, I'll show real-world

projects here instead of just a small sample app. I've had to obscure a few details, but you get the idea.

This is a website project that is actually in production at the time of recording. I've shown the

maintainability index for each of the sub projects included on the solution. At a glance, you can see that

the project is in a general state of good health. But don't let this metric confuse you into thinking that

everything is great with the code itself. The metric makes no judgments about the quality of your

implementation. You should use it only as a visual indicator of any code smells due to over complexity.

If any areas are showing as yellow or red, then these are good places to tackle first. This screenshot

shows the solution at a project level. Let's now drill down into one of the projects a little further. Here we

are exposing three classes and their methods. This lets you visualize the maintainability index for each

class and method. As you can see in this example, everything is green. Let's now take a look at

cyclomatic complexity.
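For reference, the calculation itself isn't reproduced in this text, but the formula Microsoft documents for Visual Studio's maintainability index is, to the best of my knowledge, the following; treat it as background rather than something you need to memorize:

Maintainability Index = MAX(0, (171 - 5.2 * ln(Halstead Volume) - 0.23 * Cyclomatic Complexity - 16.2 * ln(Lines of Code)) * 100 / 171)

Day to day, the color-coded indicator is what you'll actually use.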

Cyclomatic Complexity

The cyclomatic complexity code metric measures the structural complexity of the code block being

inspected. This metric is determined by calculating the number of different code paths through the code

block being inspected. A block of code that has more complex paths through it will require many more

unit tests to be developed because the overall complexity is increased. The lower the number for this

metric, the better. Later on in this course, I'll discuss what I personally think are good numbers to aim for

based on my experience over the last 20 years. In the screenshot you can see at the moment, you can

see the overall combined complexity for each project in the solution. At this level, the metric is not very

useful, as a combined cyclomatic complexity for an entire project doesn't really tell you much. The
metric is more useful when you dive into individual class methods. In the screenshot you can see now,

you can see class methods for three classes, Aes, BlockEncrypter, and ByteHelpers. If you look at the

constructor for Aes, it has a complexity of 1, whereas if you look at the decrypt method it has a

complexity of 9. This indicates that this method is much more complex. But what does this really mean?

Let's look at a simple example. If we take this simple piece of code from a console application where the

main method is defined, but doesn't do anything, you can see that the main method has a cyclomatic

complexity of 1. Now, let's add an if statement to the main method and see what happens. You can

see that the complexity has increased to 2 because we have added a branch level. Let's now embed

another if statement into this method. Because we've added another level of branching, the complexity of Main has increased to 3. Now let's add a switch statement and see what happens. By adding a switch

statement with 4 case blocks, we've increased the complexity from 3 to 7. As you can see, cyclomatic

complexity is a count of conditional complexity in your code. The higher the count, the more complex

your method is, so a lower number is more favorable. Let's now take a look at the depth of inheritance

metric.
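To make the walkthrough above concrete, here is a minimal sketch of the kind of console application being described. The actual demo code isn't reproduced in the course text, so treat the names and values as illustrative:

```csharp
using System;

namespace Cyclomatic_Complexity
{
    class Program
    {
        // An empty Main starts with a cyclomatic complexity of 1 (a single path).
        static void Main(string[] args)
        {
            int x = 1;

            if (x == 1)                     // +1: first branch, complexity is now 2
            {
                if (x > 0)                  // +1: nested branch, complexity is now 3
                {
                    Console.WriteLine("Positive");
                }
            }

            switch (x)                      // each case adds a path: complexity rises to 7
            {
                case 1: Console.WriteLine("One"); break;
                case 2: Console.WriteLine("Two"); break;
                case 3: Console.WriteLine("Three"); break;
                case 4: Console.WriteLine("Four"); break;
            }
        }
    }
}
```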

Depth of Inheritance

The depth of inheritance metric indicates how many classes are inherited from the object being

inspected back to the root of the hierarchy. A larger number indicates increased complexity due to the number of classes being inherited, and the deeper this hierarchy, the harder it could be to

understand where different methods are defined or even redefined. In the screenshot you can see now,

you can see I have the depth of inheritance metric being shown from my real-world software project.

This view is showing the total depth of inheritance at a project level in the solution. This isn't as useful as

looking at it at a class level. The view on the screen now shows some of the cryptographic classes in

the project. At the class level, we can see that the depth of inheritance for Aes, BlockEncrypter, and

ByteHelpers is 1. This means that those classes do not inherit from any other classes meaning that

there is lower coupling at the inheritance level. Let's look at this as a more concrete example. Let's
say we have three simple classes, ClassA, ClassB, and ClassC. If we look at the depth of inheritance

metric, you can see that each has a depth of 1. Let's now make ClassC inherit from ClassB with the

following code. When we recalculate the metric, we see that ClassC has a depth of inheritance of 2

because it now inherits ClassB. Let's now make ClassA inherit from ClassC. This means ClassA inherits

from ClassC, which in turn inherits from ClassB. If we look at the metrics, we can see that ClassA now

has a depth of 3. This is because ClassA inherits from ClassC, which also in turn inherits from ClassB.

The higher the depth, the more complex your class structure is. So a lower number is more favorable for

less complexity. But this is a tradeoff between complexity and good object-oriented design. A well-designed inheritance structure is still a good thing, but these metrics let you see at a glance the complexity of

this depth. Three or 4 classes deep is probably not a bad thing, but if it was 10, I'd start worrying as this

is a lot to keep an eye on. Let's now take a look at the class coupling metric.
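Here is a minimal sketch of the ClassA, ClassB, and ClassC example described above (illustrative only; the actual demo code isn't reproduced in the course text):

```csharp
namespace Depth_Of_Inheritance
{
    // With no explicit base class, a class reports a depth of inheritance of 1
    // (everything ultimately derives from System.Object).
    class ClassB { }

    // ClassC inherits from ClassB, so its depth of inheritance becomes 2.
    class ClassC : ClassB { }

    // ClassA inherits from ClassC, which in turn inherits from ClassB,
    // so ClassA's depth of inheritance becomes 3.
    class ClassA : ClassC { }
}
```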

Class Coupling

The next metric to cover is the class coupling metric. This metric measures the coupling to other classes through variables, method calls, base classes, or anything else that can couple classes together.

Ideally, you want to have low coupling between classes, as this means they are easier to reuse and

maintain. This is, of course, quite subjective and depends on what it is you are trying to do. But as a

general rule, a lower class coupling number is more desirable. In this screenshot on the screen at the

moment, you can see we have a combined set of class coupling metrics rolled up for each project, as displayed in the code metrics results view. If we drill down into one of these projects, the cryptography project in this case, you can see the class coupling details for each class and method in

that class. If we look at the method CheckHmac in the Aes class, we can see that there is a class

coupling metric of 4. This means the CheckHmac method has variables, method calls, base classes, or anything else that couples it to four other classes. Let's look at a simple example to illustrate

this further. Here we have three simple classes, ClassA, ClassB, and ClassC as illustrated on the slide.

If we now generate the metrics, we can see that each class has a coupling score of 0. Let's now make
ClassB store an internal variable reference to ClassA. If we go and regenerate the metrics again, we'll

see that ClassB's coupling metric goes up to 1. This is because we have forced a coupling between

ClassB and ClassA. Let's now extend this further by making ClassC store a reference to ClassA and

ClassB. When we regenerate the metrics, you'll see that ClassC now has a coupling score of 2 because

we are referencing both ClassA and ClassB. Don't forget that this doesn't just apply to references

between classes, it also includes method calls, base classes, or anything else that couples that

method to another class. Let's now take a look at our final code metric, which is lines of code.
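Again, a minimal sketch of the coupling example described above (illustrative only):

```csharp
namespace Class_Coupling
{
    class ClassA { }

    // ClassB holds a field of type ClassA, coupling it to one other class,
    // so its class coupling metric becomes 1.
    class ClassB
    {
        private ClassA _a = new ClassA();
    }

    // ClassC references both ClassA and ClassB, so its class coupling metric becomes 2.
    class ClassC
    {
        private ClassA _a = new ClassA();
        private ClassB _b = new ClassB();
    }
}
```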

Lines of Code

The next and final code metric is sometimes thought of as a more controversial metric, but it really need

not be. This metric is the lines of code metric, and it simply counts the number of lines of code in the

block of code being inspected. These are not the actual lines of C# or VB code, though, but the

underlying IL code that is generated by the compiler. A higher number for this metric might indicate that

a method or class is doing too much and should be split up and refactored. It is good to think about the

single responsibility principle when looking at the lines of code metric. A class or method should really

just do one thing and do it well. If you have a high line count, then this might be an indicator that the

class or method is doing too much and it might be hard to maintain. It is important to make sure that if

you are using the lines of code metric, you are using it as a measure of software complexity and not

a measure of productivity by measuring how many lines of code a developer is writing. If you take a look

at the screenshot on the screen now, you can see the lines of code metric plotted against each project

in the solution. This view is useful if you want to compare lines of code between projects. If we drill down

again into the cryptography class, you can see the lines of code measured in IL Assembler against each

method. This then becomes a useful indicator of complexity where it could indicate that a class or

method is doing too much work. A class or method should ideally do just one thing and do it well. A high

number of lines of code could indicate that the class or method has many more than one concern and it

needs breaking down. Let's again look at a simple example that we can build up. On the screen, you
can see a very simple method called Main. This method currently isn't doing anything. If we look at the

code metrics, we can confirm it is doing nothing because the metric for lines of code is 0. Let's now

add a simple variable definition and assign it the value of 0. If we go and regenerate the lines of code

metric, we'll see this increase to 1. Let's now add a calculation to our variable that will then print the

result to the console window. If we go and regenerate the metric, we can see the lines of code has

gone up to 3. We've now covered the main metrics included in Microsoft's Visual Studio. Earlier on in

this module, I said I would define some sensible defaults to look out for as warning signs when using the

metrics. Let's take a look at some of these now.
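A minimal sketch of the lines of code walkthrough above (illustrative; because the metric counts the generated IL rather than source lines, the exact numbers you see can vary slightly):

```csharp
using System;

namespace Lines_Of_Code
{
    class Program
    {
        static void Main(string[] args)
        {
            // With an empty body, the lines of code metric for Main is 0.

            int a = 0;                 // a simple assignment takes the metric to 1

            a = a + 10 * 2;            // a calculation takes it to 2

            Console.WriteLine(a);      // printing the result takes it to 3
        }
    }
}
```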

Using Metrics to Spot Problems

We have covered the metrics that were included in Visual Studio, but as you have seen, they are

actually very easy to generate and read. But when you're reading the metrics, what constitutes a good

number and a bad number? Unfortunately, this is very hard to give a definitive right answer to and it

depends on your project and what you are doing. But what I can do is tell you the numbers I've used

with my teams over the years. And you can use that as a starting point. You may decide to use lower

numbers, or you may decide that higher numbers are more appropriate for your project. So, it's very

subjective. But let me give you an example from my experience and you can take it from there. Let's

review the metrics again and state some sensible defaults. Maintainability index calculates an index

value between 0 and 100 that represents relative ease of maintaining the code. A higher value means

better maintainability. Color coded ratings can be used quickly to identify a trouble spot at a glance in

your code. A green rating is between 20 and 100 and indicates that the code has good maintainability.

The yellow rating is between 10 and 19 and indicates that the code has moderate maintainability. And a

red rating is a rating between 0 and 9 and indicates low maintainability. These are not numbers that you

can set yourself, but are calculated using the Halstead algorithm that we discussed earlier in the module.

Cyclomatic complexity is a software measurement metric that is used to indicate the complexity of a

program. It directly measures the number of linearly independent paths through a program's source code.
Cyclomatic complexity may also be applied to individual functions, modules, methods, or classes within

a program. A higher number is bad. I generally direct my teams to keep this value below 7. If the

number creeps up high, it may mean your method is getting too complex and you could do with

refactoring by extracting code into a separate well-named method. This will help increase the readability

of your code. Depth of inheritance is defined as the maximum length from your class to its root node in the hierarchy. A low number for this depth implies less complexity, but also the possibility of less code

reuse through inheritance. High values for depth of inheritance mean the potential for errors is also high.

Low values reduce the potential for errors. I find keeping the value below 5 is a good measure. Class

coupling is a measure of how many classes a single class uses. A high number is bad and a low

number is generally good for this metric. Class coupling has been shown to be an accurate predictor of

software failure. An upper limit value of 9 is a good guide to follow. And finally, the lines of code metric

indicates the approximate number of lines of code in a class or method. The count is based on the IL

code and is therefore not an exact number of lines of code in the source code file. A very high count

might indicate that a type or method is trying to do too much work and should be split up. It might also

indicate that a type or method might be hard to maintain. Trying to give a good value here is very hard to quantify, but I've used a value of 40 before when looking at code, though it is certainly not a hard and fast

rule. It is important to make sure that if you are using the lines of code metric, that you are using it as a

measure of software complexity and not a measure of productivity by measuring how many lines of

code the developer is writing. Now that we've talked about metrics a lot, let's do a demo to show you all

this working before we move onto our next module about static code analysis.
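As an aside to the cyclomatic complexity guidance above, here is a hedged sketch of the kind of extract-method refactoring being suggested when a method's complexity creeps up. The order-processing example is hypothetical, not taken from the course:

```csharp
public class Order
{
    public decimal Weight { get; set; }
    public bool IsExpress { get; set; }
    public string Destination { get; set; }
}

public class ShippingCalculator
{
    // Before: one method carries all of the branching, so its cyclomatic complexity climbs.
    public decimal CalculateShipping(Order order)
    {
        decimal cost = order.Weight > 20 ? 10 : 5;
        if (order.IsExpress) { cost += 15; }
        if (order.Destination == "International") { cost *= 2; }
        return cost;
    }

    // After: the branches are extracted into small, well-named methods. Each method
    // now has a low complexity of its own and is easier to read and unit test.
    public decimal CalculateShippingRefactored(Order order)
    {
        decimal cost = BaseCostForWeight(order.Weight);
        cost += ExpressSurcharge(order);
        return ApplyDestinationMultiplier(cost, order.Destination);
    }

    private decimal BaseCostForWeight(decimal weight) => weight > 20 ? 10 : 5;

    private decimal ExpressSurcharge(Order order) => order.IsExpress ? 15 : 0;

    private decimal ApplyDestinationMultiplier(decimal cost, string destination) =>
        destination == "International" ? cost * 2 : cost;
}
```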

Demo

Loaded up into Visual Studio, I have a solution for module 2 of this course. And in this solution, I have a

number of projects to help demonstrate each of the code metrics. So first of all, we have a project for

cyclomatic complexity, then I have a project for demoing depth of inheritance, and another project for demoing class coupling, and then finally, a project for demoing the lines of code metric. So first of all, let's
show you how to bring up the code metric's results. So let's just make sure that the project is first built.

Then I'm going to go to the Analyze menu and then I'm going to go over to this menu here, which says

Calculate Code Metrics, and then I'm going to click on For Solution. Now this has gone off and

calculated the code metrics for our solution. So, let's first take a look at cyclomatic complexity. So, if I

open up the Program.cs file for that project, you can see here we have a very simple console

application, and I have a static main method that's been defined. Now in this method, I first declare a

variable called X and then I have an if statement which checks if the X is equal to 1. So it's a very simple

piece of code, it doesn't actually really do anything. But what we've done is we've added a single branch

complexity with the if statement. So, let's first go to our cyclomatic complexity project in the code metrics

result and open it up. And I'm going to drill down, so first of all we have the Cyclomatic_Complexity

namespace, then we have Program, which is the name of our class, and then we have Main, which is

the method we're looking at. So, if we now go along this line, we can see that cyclomatic complexity for

this method is currently set to 2. So that's the initial complexity of the method itself, and then an additional one added to that metric for the if statement. So let's now add another if

statement, which I already have prepared here. So now I've added that in, I can click on this button to

recalculate the code metrics. Now you can see that the cyclomatic complexity for our main method has

increased from 2 to 3 because we've added this additional piece of complexity. So now let's add this

switch statement which adds four case statements. Now I've recalculated the metrics, and you can see

that we have now jumped up to a cyclomatic complexity of 7. And that's because we have the switch

statement and all of the individual case statements, which is an individual branch of complexity in the

code. So we've just demonstrated that by changing our code by adding additional if statements and

switch statements that that complexity level is going to go up. So let's now look at our second project,

which is depth of inheritance. So, I'm going to load up the code, and we can see here that we have

three classes that have been defined. So, I'm just going to go and comment out these bits here first. So,

we have ClassA, ClassB, and ClassC. And now I'm going to go and recalculate the code metrics

because I've just changed the code. I'm going to open up our depth of inheritance project. I'm going to

go into the namespace, and then we can see here that for ClassA we have a depth of inheritance of 1,
for ClassB we have 1, and for ClassC we have 1. So now, let's first of all go and make ClassC

inherit from ClassB. So, let's recalculate those metrics. So we've just changed ClassC, we've made it

inherit from ClassB, so we can see that our depth of inheritance has gone up to 2 because there's 2

levels in the class hierarchy. So now let's go and change ClassA so that it also inherits from ClassC.

Now by virtue of inheriting from ClassC, we're also inheriting from ClassB. So, let's see what that looks

like. Okay, so ClassC still has a depth of inheritance of 2. And ClassA now has a depth of inheritance of

3 because ClassA inherits from ClassC, and ClassC in turn inherits from ClassB. So that's a very simple

example that shows that by changing the inheritance level for a class that that metric then gets

increased. Okay, let's now look at our next metric, which is class coupling. I'm first just going to go in

and comment out these few lines of code. Okay, so again, we have three classes, we have ClassA,

ClassB, and ClassC. So, let's just recalculate these metrics, I've just changed the code, and let's drill in.

So, we're going to go down into our namespace, and then here we can see that we have, inside of our

program class, we have our three sub classes, so ClassA, ClassB, and ClassC. So now if we look at the

class coupling scores for each of those classes that are currently set to 0 and that's because those

classes don't do anything, they're not talking to anything else. So let's go back in, let's go re-add in this

line in which we initially commented out. So, ClassB now has a reference to ClassA, and we've created

that reference by creating an instance variable. So, let's recalculate our metrics and look at ClassB. We can

now see that the class coupling score for ClassB has increased to 1 because we are including a

reference to ClassA. So, let's illustrate this point further. ClassC is now going to have references to

ClassA and ClassB. So, let's recalculate that metric and come down and look at ClassC in the code

metric's results. So we can see here that the class coupling for ClassC has now gone up to 2 because

we have a reference to ClassA and a reference to ClassB. So you can now see that those classes have

now become dependent on each other or coupled by introducing those instance variables. So now let's

look at our final metric, which is lines of code. Okay, and then I'm just going to comment out these lines

of code so we have somewhere to start with. So, let's recalculate our metrics, drill into lines of code,

down into the namespace, and into our class called Program. So if we look at our method called Main,

we can see that Lines of Code is currently set to 0. So let's start off by adding in an instance variable a,
which we're going to set to 0. And let's recalculate the metric. And you can see down here that our lines

of code has increased to 1. So now let's do something with that variable, so we're going to do a simple

mathematical expression. So we'll uncomment that and just recalculate the metrics. So, you can now

see that Lines of Code is increased to 2. And finally, just to show it again, we're now going to print the

content of variable a out to the console window. So let's just recalculate the metrics and you can see

that our Lines of Code has increased to 3. So those were four sort of quite simple demos, but they

illustrate the points, or they illustrate how these metrics all work. So, let's now move onto our next

module where we're going to look at static code analysis within Visual Studio.

Using Static Code Analysis

Introduction

Hi, my name is Stephen Haunts, and welcome back to my course, Improving Brownfield .NET Apps

with Code Analysis and Metrics. In this module, we're going to take a closer look at the static code

analysis tools that are available to you in Visual Studio. We will then be covering the following topics.

First, we'll look at what static code analysis is, then we'll talk about why you'd want to use static code

analysis, then we'll discuss how to enable the analysis tools, followed by a deeper look at the different

categories of rules available. Next, we'll take a look at suppressing certain rules in your code base. And

then we'll finish up with a demo. This is not meant to be an in-depth look at every single rule included in

the rule sets. If we did that, then this course would be about 8 hours long. We'll look at the general

categories as well as some specific examples and where to look in the documentation if you come across

an unfamiliar error. Let's start by looking at what static code analysis is.

What Is Static Code Analysis?

Visual Studio provides a feature called Static Code Analysis that is designed to analyze your code

looking for problems with design, security, performance, globalization, and interoperability. The static
code analysis system has many rules built into it that will be run against your code and generate a

report of code that fails the rules. We'll take a look at these different categories of rules later on in this

module. The rules out of the box in Visual Studio are designed to target best practices as defined by

Microsoft for what constitutes good code and design. So, what do we mean by static when we talk

about static code analysis? By static, we mean that the code is analyzed without it being executed. This

means that it is an offline process executed by a tool in the Visual Studio ecosystem, as opposed to

code being executed and then analyzed. A system that analyzes the code by running it would be

referred to as a dynamic code analysis system, which the Visual Studio tools are not. Because static

code analysis happens as an offline process, that means this analysis process is good to run as part of

your build pipeline. So, every time your code is built, you could run the static code analysis before you

execute your unit tests.

Why Use Static Code Analysis?

We have talked about what static code analysis is, but why is it a good idea to use it? Let's distill this

down into a few points. First, static code analysis is good because it will run its analysis rules all over

your code, and it will do it fairly quickly, which makes it repeatable. This analysis will be checking for any

styling issues or vulnerabilities in your project, as defined by a set of rules. By automating this process, you can

easily detect and fix issues as they appear instead of finding issues in production. The second benefit of

static code analysis is that using the tools in Visual Studio, you can define custom rule sets to be

executed against different projects. Not every project is crafted the same, and some categories of rules

you may determine to be not important to your project. If any of the rules that you do choose are broken

though, then this can be made very apparent by breaking the build, forcing developers to address the

issue straight away. By running static code analysis and fixing any rules, over time the quality of your

code will improve. Static code analysis can have its bad points though. If you have all the rules turned

on, then this can generate a huge amount of warnings or errors. Some of these might be false positives,

which can hide bigger issues in the sea of error messages. The static code analysis rules in Visual
Studio do come with some prebuilt templates that let you start off with a minimum rule set, which can

gradually increase in coverage over time. This is a good way of introducing static code analysis, start off

gradual, and then increase over time. I once worked for a financial services company that had a legacy

point of sale system and back-end services. For the services, we decided to start using static code

analysis and we jumped straight into the full rule sets. The amount of errors that were generated was

very disheartening. After spending a week trying to work through the errors, we then decided to

customize the rule set to something more appropriate to our needs, which helped reduce the size of the

problem. Once the developers had gotten used to working with that rule set over time, we gradually

added in new rules. That highlights another interesting issue. Some of these errors that come from the

analyzer might be new to the developers, so they need to be given some time to adjust to them. If you

are starting off with a nice, new Greenfield project, then this is easier. But if you have a huge legacy

Brownfield application, then the rules can be counterproductive if you go in too complex too quickly. So,

the overall message here is start gradual and then increase over time. Now that we've looked at what

static code analysis is and why we should use it, let's take a look at how to enable it in your project.

How to Enable Static Code Analysis?

Enabling static code analysis is very straightforward in Visual Studio. To enable it, you first need to go to

the properties of your project by right-clicking on the project in the Solution Explorer. This will bring up

the Properties page. On the left of the window, you'll see a series of option tabs. Click on the option that

says Code Analysis. This will display the window that you can see on the screen now. If you want to

enable the running of static code analysis on your build, which I recommend you do, then click the

checkbox that says Enable Code Analysis on Build. This will now execute the analysis each time you

run a build. The checkbox under this will suppress showing code analysis results from autogenerated

code. So, this might include WCF generated service proxies, for example. In the middle of the window,

we have an area for specifying rule sets. If you click on the drop-down box, you can see all the rule sets that are provided by Microsoft. Let's look at what some of these are, with the definitions set by Microsoft. I
won't cover every single rule set, but what we will cover will give you a very good idea of what is

available. You can access the documentation for these descriptions with the link that's on the screen at

the moment. First, we have the Microsoft All Rules set. Microsoft defines these as the rule set that

contains all the rules. Running this rule set may result in a large number of warnings being reported.

Use this rule set to get a comprehensive picture of all the issues in your code. This can help you

decide which of the more focused rule sets are most appropriate to run for your projects. Next, we have

Basic Correctness Rules. Microsoft defines these rules as rules that focus on logic errors and common

mistakes made in the usage of framework APIs. Include this rule set to expand on the list of warnings

reported by the minimum recommended rule sets. Next we have Basic Design Guideline Rules.

Microsoft defines these rules as focusing on enforcing best practice to make your code easy to

understand and use. Include this rule set if your project includes library code or if you want to reinforce

best practices for easily maintainable code. Then we have Extended Correctness Rules. These rules

expand on the basic correctness rules to maximize the logic and framework usage errors that are

reported. Extra emphasis is placed on the specific scenarios, such as COM Interop and mobile

applications. Consider including this rule set if one of these scenarios applies to your project or to find

additional problems with your project. Then we have the Extended Design Guideline Rules. These rules

expand on the basic design guideline rules to maximize usability and maintainability issues that are

reported. Extra emphasis is placed on naming guidelines. Consider including this rule set if your project

includes library code or if you want to enforce the highest standards for writing maintainable code. Finally,

let's look at the Globalization Rules. These rules focus on problems that prevent data in your application

from displaying correctly when using different languages, locales, and cultures. Include this rule set if

your application needs to be localized or globalized. The Rule Set drop-down box lets you pick any of

these rule sets, or you can pick multiple rule sets that feel appropriate. If you are feeling quite brave, then try going for the more exhaustive rules. Whenever I start a brand-new project, I always go for the All Rules

by default. If I'm tackling a legacy project, then I introduce basic correctness, followed by basic design.

Once they are both bedded in, I then add the extended correctness rules. Once these are incorporated,

I will then add on the globalization rules. When we access the properties page to enable static code
analysis, we went in via the Properties menu in the Solution Explorer. The other way to access it is

from the Analyze menu. Select the Configure Code Analysis menu, and then the For option followed by whatever your project is called. When you have selected a rule set to use from the properties

window, you can click on the Open button, which will then open up the rules editor. This editor view will

show you a tree containing all the rules in that rule set. From here you can enable and disable rules. If

you are using one of the provided rule sets and make a change, then when you close that window, you'll

be prompted to save the rule set. This is how you can create your own custom rule sets. If you do this,

then save the rule set alongside your solution and check it into your source control repository to ensure

everyone on your team uses the same set of rules. If you look at the example on the screen now, the

Action column on the right specifies what happens when the rule is triggered. The example on the

screen has its rule set to warnings. This means that when this rule is triggered, then it will be reported in

the error list in Visual Studio when you do a build, but a warning won't fail the build, it will just let you

know that there's a problem. If you want the build to break because you deem the warning bad enough,

then you need to set this to Error. If you set the action to None, then this will disable the rule completely.

There are many different categories of rules. You can filter down these by using the category column

filter. If you click on the column header, then the rules will be grouped by category and sorted. If you

want to hide particular categories so that you can break down the number of rules in the view, then you

click on the little arrow on the Categories column, which will display a tick list of categories for you to

choose from. When static code analysis is set up using the steps shown in the previous screens, the

analysis rules will be executed when you build your project. If you want to run these rules without

building your project, then you can do so from the Analyze menu in Visual Studio. When the menu is open, you then use the Run Code Analysis submenu and select On Solution to run the analysis against the whole solution. When you run the static code analysis rules as part of a build, or by manually

triggering them, you'll see the results in the error list window like you do with a normal build. If the rules

were set to report warnings, then you'll see them appear under the Warning section. This won't fail a

build, but serves as a guideline to the developers. If the rule was set to error, then it will break the build

and appear as a build error in the errors category. If you apply static code analysis to a large legacy
project for the first time, then you are likely to see a lot of errors or warnings appear. This can feel quite

scary at first. Sometimes the description message for the rules can be self-explanatory, and these are

easy to fix. When you double-click on the error, it will take you to the affected code and you can make

the correction. Other times the rule description isn't that obvious as to what you're supposed to do. You

do get better at it over time, but to start with it can feel a little opaque. When this happens, Visual

Studio tries to help. In the Code column of the error or warning list, you'll see a green label with a rule

identifier in it. So, for example, on the screen you can see that one of them says CA1014. If you click on

this code, the browser will open up and you'll be directed to the official Microsoft documentation for that

rule on MSDN. If you scroll down the page, you will normally see a code sample explaining what to do.

In this example, the page is telling us to mark the assembly with the CLSCompliant attribute. These code samples on MSDN for the analysis rules are invaluable when you're trying to get started with static code analysis; when I first started, I relied on these help pages all the time. At the end of this module, I'll do a code demonstration showing you how to use what we have

been discussing in this module so far. Now we've looked at how to enable the static code analysis

engine in Visual Studio, and how to get help with the rules, let's now take a look at some of the different

categories of rules that are available.

The Different Rules Categories

There are a huge number of rules that come with the Visual Studio static code analysis system. What

we're not going to do is go through each of these rules one by one in this course. This would make the

course very hard to watch and it would most likely end up being 8 hours long. What we will do instead is

look at the different general categories as defined by documentation on MSDN. And from there on, you

can dig deeper into the rules if you so wish. The link on the screen will take you to the MSDN page that

lists the main rule categories. From here, you can click on each category and drill down deeper. Let's

quickly look at some of these different categories. First we have cryptography warnings. These rules

warn you of the incorrect use of cryptography in your code. For example, you'll get a warning if you use
older encryption standards like DES in your code instead of AES. Then we have design warnings.

These check that you are following the .NET Framework design guidelines. Next, we have globalization warnings. These rules help ensure your application is ready to be used with multiple languages and cultures. Then we have the interoperability warnings. These rules help ensure that you can interact

with older COM clients. Then we have maintainability warnings. These warnings help support library

and application maintenance like avoiding excessive inheritance and complexity. Next, we have mobility

warnings. These rules help you make applications that are more battery efficient. Then we have naming

warnings. These enforce naming rules in your code like Microsoft's standard conventions and spellings.

Then we have performance warnings. These rules look for code that might affect performance. For

example, there's a rule that will warn you if you are using a multidimensional array where a more space-efficient jagged array would do, as that could affect

performance. Next, we have portability warnings. These rules help you make your programs more

portable across operating systems. Then we have reliability warnings. These rules help you produce

more reliable software with rules, such as correct disposing of objects before losing scope. Then we

have security warnings. These rules let you build safer and more secure libraries and applications. Then

we have usage warnings. These rules enforce correct and appropriate usage of the. NET Framework.

As you can see, there are many different categories of rules that cover many different types of usage.

Not every rule will be relevant all the time. If you know something is just never going to be relevant to

you in your project, then you can exclude those rules from the rule set completely. But you need to exercise some thought around this. This is something that is good to discuss with your team.

There may be some rules that are relevant most of the time, but in some circumstances, you feel it valid

to break them. In that case, you can suppress rules at compile time. Let's take a look at how that is

done in more detail.

Suppressing Rules

As we have previously discussed, not every rule is relevant all of the time to your project. Over time

you'll gain familiarity with the different rules and decide what is appropriate. They are there as a guide. If
you know a rule is never going to be relevant to you, then you can just disable it in the rules file. But

sometimes things are not as clear cut and you may want to disable the rule on a more ad hoc basis.

There are two ways in which we can tackle this: using a suppression file or in-source suppression. A

suppression file is where you want to stop a warning or error for an entire project in Visual Studio. You

access this by clicking on the error or warning in the error list, opening the Suppress menu, and then

selecting In Suppression File. This is highlighted in the screenshot on the screen. This will create a file called GlobalSuppressions.cs, or amend it if it is already there. This file contains an attribute that specifies a rule category and the rule ID to suppress. Suppressing rules in a file like this has to be treated carefully. I've worked at many companies where developers were under pressure and just started suppressing rules all the time. I recommend having a process in place where you regularly review the contents of this file to make sure each suppression still makes sense in the context of your project.
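To give you a feel for what ends up in that file, here is a minimal sketch of a GlobalSuppressions.cs entry. The rule ID is the CA1014 rule we will meet again in the demo, but the justification text is purely illustrative.

    using System.Diagnostics.CodeAnalysis;

    // Suppresses rule CA1014 for the entire assembly. The Justification records the reason,
    // which is exactly what you want to be able to review later.
    [assembly: SuppressMessage("Microsoft.Design",
        "CA1014:MarkAssembliesWithClsCompliant",
        Justification = "Illustrative example: this assembly is never consumed from other CLS languages.")]

Because the attribute is applied at the assembly level, the rule is silenced for the whole project, which is why these entries deserve regular review.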

If a rule is being suppressed globally like this, then should it be removed from the rule set? Or has the

rule been suppressed because the developers are under pressure and didn't want to fix it there and

then? If you start to suppress lots of rules that are there to help you build better systems, then you're not

getting the full benefit out of the tool. The next level of rule suppression is the in source suppression.

This is also accessed from the error list like the global suppression. If there is an error that you want to

suppress in a specific place, then right-click on the error or warning and select In Source. This will place

a suppression attribute above your code, as you can see on the screen now. Again, you should be

careful about how often this happens in your code. As a rough illustration, an in-source suppression looks something like the sketch below, and then we can move on to a demo of using static code analysis on some badly written code.
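This is a minimal sketch rather than code from the course project; the class, the method, and the CA1822 rule are chosen purely for illustration.

    using System.Diagnostics.CodeAnalysis;

    public class ReportPrinter
    {
        // Suppresses a single rule on this one member only. The attribute is only compiled in
        // when the CODE_ANALYSIS symbol is defined.
        [SuppressMessage("Microsoft.Performance",
            "CA1822:MarkMembersAsStatic",
            Justification = "Illustrative example: kept as an instance method for a planned interface.")]
        public void Print()
        {
            System.Console.WriteLine("Printing report...");
        }
    }

The suppression is scoped to just that member, so the rule stays active everywhere else in the code base.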

Demo

Okay, so what I want to do now is run through a quick demo that shows you how to enable and use

static code analysis in Visual Studio. So, before we jump into that, let's just look at the sample code that

I have here. So, what we have is a very small program, which is effectively a to-do list which allows you

to add to-do items to that list. And it's a console application, so there's no fancy user interface on it, but
let's just quickly look at the code that we've got. So, first of all, we have a ToDoItem, and this is a public

class and we have a string that's defined in that class, and then we have a public constructor where we

pass in a description of our ToDoItem, and then that is set to the private description. And then we have

a public property which returns that description back to the caller. Then we have a class called

ToDoList, and in this list we are storing a private generic list of to-do items, which we initialized when we

called the constructor for the ToDoList. Then we have a public method called AddItem, and this simply

takes a to-do item and adds it to the list. And then we have a method called GetList, which returns our

list to the caller. And this is all called from Program.cs in our Main method. So, here we are creating five to-do items: buy milk, buy bread, put the bins out, take the kids to school, and walk the dog. We then create a to-do list and add each of these five items to it. We get the list, iterate through it, and print each of the to-do items to the screen. Pulled together, the starting code looks roughly like the sketch below.
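This is a reconstruction from the walkthrough above rather than the exact course files, so names and details may differ slightly, but it captures the shape of the legacy code we're about to analyze.

    using System;
    using System.Collections.Generic;

    public class ToDoItem
    {
        private string _description;

        public ToDoItem(string description)
        {
            _description = description;
        }

        // Read-only property that hands the description back to the caller.
        public string Description
        {
            get { return _description; }
        }
    }

    public class ToDoList
    {
        private List<ToDoItem> _toDoItems;

        public ToDoList()
        {
            _toDoItems = new List<ToDoItem>();
        }

        public void AddItem(ToDoItem item)
        {
            _toDoItems.Add(item);
        }

        // Returns the internal list directly - one of the things the analysis rules will flag later.
        public List<ToDoItem> GetList()
        {
            return _toDoItems;
        }
    }

    public class Program
    {
        public static void Main(string[] args)
        {
            var toDoList = new ToDoList();
            toDoList.AddItem(new ToDoItem("Buy milk"));
            toDoList.AddItem(new ToDoItem("Buy bread"));
            toDoList.AddItem(new ToDoItem("Put the bins out"));
            toDoList.AddItem(new ToDoItem("Take the kids to school"));
            toDoList.AddItem(new ToDoItem("Walk the dog"));

            foreach (var item in toDoList.GetList())
            {
                Console.WriteLine(item.Description);
            }
        }
    }

So, let's just quickly run it to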

prove that it works. Okay, so here we can see that each of our to-do items has been printed out to the

screen. Great, so this now represents our legacy Brownfield application. So, although this is a small,

trivial example for the purpose of this course, imagine this scaled out to a larger system that you're having to deal with. So now we're going to enable static code analysis. To do that, I'm going to

right-click on the project and select Properties. And I'm going to go down to the bottom of this list and

I'm going to select Code Analysis from the tabs menu. And what I'm going to do is I'm going to say

Enable Code Analysis on Build, and then for the rules I'm going to be brave and I'm going to set

Microsoft All Rules. So, for the purpose of this small sample, All Rules is fine, but if you have an

application where you have thousands of classes and millions of lines of code, going to All Rules first of

all might be a bit too much, but for the purpose of what we're doing here that is absolutely fine. So what

I'm going to do is I'm going to now rebuild the application and see what happens. (Rebuilding) Okay, so

if we look at the error list, we can see that we now have six warnings as a result of enabling static code

analysis. That's pretty good, but I don't really want them to be warnings, I want them to be errors so that

the developers working on this don't ignore them. So what I'm going to do is I'm going to open up the

editor for the rule set, and then for each of the rule categories we have here I'm going to change the action to Error. So, we're taking a full-on approach here, we're setting everything to Error. Now we've
done that, and because I've modified the default rule set, it's asking me to save it since it's now a

custom rule set. So I'm going to hit Yes to save that. As we can see, that has been added to our project.

Okay, so now let's do a full rebuild of the solution. Right, so now instead of getting warnings, we have a

full set of errors. So this is great. So now let's go through these errors one by one and

see if we can fix them. So, this first one that comes up is telling us that it wants us to sign the application

with a strong name key. Now, for the sake of this application I don't want to do that, and you know, as a

developer I'm saying that's fine. So what I'm going to do in this case is I'm going to suppress the rule.

So, we right-click on it and click Suppress. Now that I've done that, it's created a GlobalSuppressions.cs file. So if we go and have a look in that file, we can see that the error has indeed

been suppressed. So that is now going to be ignored for my entire application. Okay, so let's look at the

next error. So, 'Mark Module3.exe with CLSCompliant(true) because it exposes externally visible types.' Okay, so maybe it's not totally obvious what's going on there, so let's this time consult the

documentation and see what that says. So, I'm going to click on the warning or the raw code here,

which is CA1014, which is going to load up the MSDN documentation for that rule and we're going to

see what to do. Here we can see we have the cause, we have a rule description, there's some details

on how to fix the violations, some guidance on when to suppress the warning, but most usefully, we

have some sample code telling us what to do. So, it's telling us to add this attribute above the namespace, which is great. So, I'm going to be a lazy developer, or an efficient developer, take your pick: I'm going to copy that sample code, close this, go back to Program.cs, and paste the attribute in above my namespace.
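As a rough sketch, and assuming the project's default namespace is Module3 (taken from the error message above), the fix looks like this; the exact snippet on the documentation page may differ slightly.

    using System;

    // Rule CA1014: mark the whole assembly as CLS compliant.
    [assembly: CLSCompliant(true)]

    namespace Module3
    {
        // ... the existing Program, ToDoItem, and ToDoList classes live here ...
    }

So, let's do a rebuild. Great, so our errors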

have now gone down to five, so we're on the right track. So, let's look at the next one. So, because the

program contains only static members, mark it as static to prevent the compiler from adding a default

public constructor. Okay, so what this is telling us here is that it wants us to make the Program class static. That's fine, so we can do that. Rebuild again. Perfect. Okay, next one. So, Parameter args of

Program.Main is never used; remove the parameter or use it in the method body. So, in the context of what

we're doing here, I don't actually need the arguments, I'm not passing anything in, so I'm going to do

what it suggests and I'm just going to delete them. It's a case of use it or lose it. Okay, next one. So,
Use expression body for properties, so let's double-click on that and see where we are. Okay, so it's highlighting this line here in my property, which returns the description. And I can see that I have a little light bulb tip coming up, so it's going to give me some suggestions on what to do. And it's telling me to use an expression body for this property. So, I'm going to take its advice and click that. Okay, and it has now modified the code for me, so this is the tooling telling me that there is a better way of expressing this property. Roughly, the change looks like this.
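A minimal before-and-after sketch, assuming a simple backing field called _description:

    // Before: a classic read-only property with a statement body.
    public string Description
    {
        get { return _description; }
    }

    // After: the same property written with an expression body (C# 6 and later).
    public string Description => _description;

So that's great, so let's do a build. Okay, so now we have this one here, so Change List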

ToDoItem in GetList to use a Collection of T, a ReadOnlyCollection of T, or a KeyedCollection of K and

V. So, what this is telling me is that as part of our to-do list we have a private list of to-do items. Now,

what we're doing in GetList, we are then taking this private list and then we are returning it to a caller.

Now what this means is that the person who retrieves that list can then go and modify the list. So,

technically, we're breaking encapsulation, and that's most likely not what we want; in this case we don't want someone to change the list. So, what I'm going to do here is convert it to a read-only collection so that the person who retrieves my to-do items only has a read-only view of it. That's fine, so I'm going to change the return type to ReadOnlyCollection. It's going to want me to add a using in there for

that. Great, so I've done that. Okay, so now the next line has been highlighted because a list of to-do

items is not the same thing as a read-only collection. So, what I need to do is create a read-only

collection and pass our list into it. Okay, so now I've done that, I can see that my calling program is complaining: GetList is now returning a read-only collection, but we're trying to assign it to a list of to-do items, so I need to change this bit of code to ReadOnlyCollection as well and add in our using. Let's now run a build. It says we have an unnecessary using, which is another rule that's been triggered, so I don't actually need that extra using after all. Okay, so now we've done that, we're returning a read-only collection back to our calling program, so that's great. At this point, ToDoList looks roughly like the sketch below.
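Here is a hedged sketch of where ToDoList has got to at this point in the demo, reconstructed from the narration, so the details may vary from the actual course files.

    using System.Collections.Generic;
    using System.Collections.ObjectModel;

    public class ToDoList
    {
        private List<ToDoItem> _toDoItems;

        public ToDoList()
        {
            _toDoItems = new List<ToDoItem>();
        }

        public void AddItem(ToDoItem item)
        {
            _toDoItems.Add(item);
        }

        // Callers now receive a read-only wrapper, so they can no longer modify the internal list.
        public ReadOnlyCollection<ToDoItem> GetList()
        {
            return new ReadOnlyCollection<ToDoItem>(_toDoItems);
        }
    }

So, we have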

one error left. So, Change ToDoList.GetList to a property if appropriate. So, what we have here is a standard C# method returning our list. So, what this is telling us is, well, you know, C# provides properties for doing exactly this, so why not use a property if it's appropriate? In this case it is

appropriate. So, I'm going to turn this into a standard C# getter. And I need to change my calling

program from calling a method to using a property. Okay, so now that I've done that, it's coming up with other errors; we've gone from one error back to two. So, what's it telling us here? Now we've changed this to a property, it's saying that's great, but there's actually a better way. As you will notice, we have the small yellow light bulb come up again, and if we click on it and look at the suggestion, it's telling us to use an expression body for this property. So, there's nothing inherently wrong with what we did by converting that method into a property, but now that we've done that, there's a better way it suggests. So, I've heeded the compiler's and the static code analysis tool's advice and done what it said. The refactored member ends up roughly as the sketch below.
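As a final hedged sketch, the method ends up as an expression-bodied property returning the read-only view; the property name here is my own choice, and the course code may use a different one.

    // GetList() has become a read-only, expression-bodied property.
    public ReadOnlyCollection<ToDoItem> Items => new ReadOnlyCollection<ToDoItem>(_toDoItems);

So, let's just do another quick build just to make sure we've got everything. Perfect, so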

we now have no errors, no warnings, and we have a completed application. So now let's just double

check that it still runs, which it does. So, we still have all of our to-do items printed onto the screen. Let's just recap what we've done. So, we started off with a simple to-do

item list, but it wasn't written that great. So, what we've done is we've turned on static code analysis and

we've let the analysis rules guide us to refactor this application using best practices as defined by Microsoft. So, we enabled the All Rules rule set. It gave us a whole bunch of errors,

which we have now fixed. And now our application is in a much better position than when it started. It's

more readable and it uses more best practices as defined by Microsoft.

Wrapping Up

Introduction

Congratulations, you have now completed my course, Improving Brownfield .NET Apps with Code

Analysis and Metrics. In this final module, we'll summarize some of the key points you have learned so

far. Let us start with a final recap about general software quality. Software quality is a very important

issue that we as developers have to focus on. Broadly, our testing efforts fall into three general

categories. First, we are testing that our software conforms to requirements, and we do this with a

mixture of manual and automated testing, an outside-in style of testing that is normally conducted by
specialist quality testers. For internal finer-grain testing, we rely on techniques like unit testing and

integration testing. These are generally performed by the software developers who are implementing

the system. Next, we have non-functional requirements. And these are all about testing areas of the

system outside of the main business requirements. These include things like performance, scalability,

sustained load, and memory leak detection through soak testing. The third category for software quality is

around quality of implementation. And this is about the quality of the code that we write. By that I mean,

is the code readable and maintainable, and does it conform to best practice? This has been the main focus of this course, and we tackled it through the use of code metrics and static code analysis. Although the

title of this course refers to Brownfield applications, the techniques taught in this course are just as

valuable to Greenfield application development too. But what does Brownfield and Greenfield actually

mean? Brownfield development is where you're extending or supporting older legacy systems that are

already in production. These systems can vary in quality, as they've often been extended, and sometimes badly extended, at pace. This means Brownfield development can be tricky, especially as these systems tend to lack unit tests. Greenfield development, on the other hand, is about developing brand-

new projects. Most developers prefer Greenfield work, myself included. But this is not as common as

maintaining older Brownfield code bases. Let's now recap what we learned about code metrics in Visual

Studio.

Code Metrics

Code metrics in Visual Studio are a way to allow developers to help measure quality in the code they

are writing. These metrics give a visual indicator to the developer to help them understand which parts

of the code could be refactored to aid readability. The metrics we covered in this course help to give you

immediate feedback on your code to allow you to see where areas of code might be hard to understand.

The first metric we looked at was the maintainability index. This is essentially a traffic-light style metric

that gives you an at-a-glance view of potential problems in your code. If the class or method is green,

then quality is generally good. If it is yellow, then there are some potential problems that you need to
look at. If it is red, then there are some serious issues that you need to address. When calculating this metric, the cyclomatic complexity and the lines of code are taken into account directly, along with one value that the Visual Studio tools do not expose on its own: the Halstead volume, which comes from the Halstead complexity measures. These measures were created by Maurice Halstead in 1977 as an empirical way to measure software development effort. His intention was to identify and measure software properties, as well as to show relationships between them.
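For reference, the formula Microsoft documents for the maintainability index combines those three inputs and rescales the result to a 0-to-100 range. Expressed as a rough C# sketch, with the published constants treated as indicative rather than definitive:

    using System;

    public static class MetricsSketch
    {
        // A sketch of the documented maintainability index calculation, rescaled to 0-100 with a floor of zero.
        public static double MaintainabilityIndex(double halsteadVolume, int cyclomaticComplexity, int linesOfCode)
        {
            double raw = 171
                         - 5.2 * Math.Log(halsteadVolume)
                         - 0.23 * cyclomaticComplexity
                         - 16.2 * Math.Log(linesOfCode);

            return Math.Max(0, raw * 100 / 171);
        }
    }

Higher values indicate more maintainable code, and Visual Studio maps the result onto the green, yellow, and red ratings you see in the metrics window. The next metric we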

looked at was the cyclomatic complexity metric. The cyclomatic complexity code metric measures the structural complexity of the code block being inspected. This metric is determined by counting the number of distinct code paths, created by constructs like if statements and switches, through the block of code being inspected. A block of code with more paths through it will require many more unit tests to be developed because the overall complexity has increased. The lower the number for this metric, the better. As a quick illustration, consider the small method below.
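This is just an illustrative method of my own, not taken from the course code; each decision point adds one to the base complexity of one.

    // One straight-line path, plus one for each of the three if statements: cyclomatic complexity of 4.
    public static string DescribeTemperature(int celsius)
    {
        if (celsius < 0) return "Freezing";      // +1
        if (celsius < 15) return "Cold";         // +1
        if (celsius < 25) return "Comfortable";  // +1
        return "Hot";
    }

Covering every path through that method takes four unit tests, which is exactly why a lower number is easier to live with. The next metric we covered was the depth of inheritance measure. The depth of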

inheritance metric indicates how many classes are inherited from the object being inspected back to the

root of the hierarchy. A larger number indicates increased complexity due to the number of classes being inherited, and the deeper the hierarchy, the harder it can be to understand where

different methods are defined. The next metric we looked at was a class coupling measure. This metric

measures the coupling to other classes through variables, method calls, base classes, or anything else that

can couple the classes together. Ideally, you'll want to have low coupling between classes, as this

means they are easier to reuse and maintain. This is, of course, quite subjective and it depends on what

it is you are trying to do. But as a general rule, a lower class coupling number is more desirable. The

next and final code metric is sometimes thought of as a more controversial metric, but it really need not

be. This metric is the lines of code metric, and it simply counts the number of lines of code in a block of

code being inspected. These are not actual lines of C# or VB code, but lines of the underlying IL (intermediate language) code that is generated by the compiler. A higher number for this metric might indicate that a method or

class is doing too much and should be split up and refactored. It's good to think about the single

responsibility principle when looking at the lines of code metric. A class or method should ideally do just
one thing and do it well. If you have a high line count, then this might be an indicator that the class or

method is doing too much and might be harder to maintain. It's important to make sure that if you are

using the lines of code metric, that you are using it as a measure of software complexity and not as a

measure of productivity by measuring how many lines of code the developer is writing. We have

covered the metrics that are included in Visual Studio. And as you have seen, they are actually very

easy to generate and read. But when you are reading the metrics, what constitutes a good number and

a bad number? Unfortunately, it is very hard to give a definitive answer; it depends on the project

and what you are doing. But what I can tell you is the numbers that I have used with my teams over the

years, and you can use these as a starting point. You may decide to use lower numbers, or you may

decide that higher numbers are more appropriate for your project. Let's review what I proposed as sensible values, based on my own experience, to aim for as a starting point. For cyclomatic complexity, I

suggested that a good measure to use for an individual method was a number lower than seven. This

should help you focus on keeping methods short and therefore making them easier to test. For depth of

inheritance, I recommended a value lower than five. This means there are five levels of class

inheritance, which is generally more than sufficient, but your mileage may vary. For class coupling, I set a maximum recommendation of nine. This means that a class or method should have no more than nine couplings to other classes. And then finally, we have the lines of code

metric, which I set to a limit of less than 40 for lines of code in a method. As I said previously, these are

just guidelines based on my own experience and they're not hard and fast rules. Just use them as a

basis to start from and then adjust as appropriate for your solution. Now that we've done a quick recap

on code metrics, let's take a final look at static code analysis.

Static Code Analysis

Visual Studio provides a feature called static code analysis, which is designed to analyze your code,

looking for problems with design, security, performance, globalization, and interoperability. The static

code analysis system has many rules built into it that will be run against your code and generate a
report of code that fails the rules. The rules out of the box in Visual Studio are designed to target best

practice as defined by Microsoft for what constitutes good code and design. So, what do we mean by

static when we talk about static code analysis? By static, we mean that the code is analyzed without it

being executed. This means it is an offline process executed by a tool in the Visual Studio ecosystem, as

opposed to the code being executed and then analyzed. A system that analyzes the code by running it

would be referred to as a dynamic code analysis system, which the Visual Studio tools are not. Because

static code analysis happens as an offline process, that means this analysis process is good to run as

part of your build pipeline. So every time your code is built, you could run the static code analysis before

you execute your unit tests. We talked about what static code analysis is, but why is it a good idea to

use it? Let's distill this down to a few points. First, static code analysis is good because it will run its

analysis rules all over your code, and it will do it fairly quickly, which makes it repeatable. This analysis will be checking for style issues and potential vulnerabilities in your project. By automating this process, you can

easily detect and fix issues as they appear instead of finding issues in production. The second benefit of

static code analysis is that using tools like Visual Studio, you can define custom rule sets to be executed

against different projects. Not every project is crafted the same, and some categories of rules you may

determine not to be important to your project. If any of the rules you choose are broken, then this can be

made very apparent by breaking the build, forcing developers to address the issues at the time. Over

time, the overall quality of your code should improve. The rules provided by Visual Studio are split into

many categories that you can see on the screen. These are designed to give you a good range of

coverage across best-practices defined by Microsoft. As we have previously discussed, not every rule is

relevant all the time to your project. Over time you'll gain familiarity with the different rules and decide what

is appropriate. They are there as a guide. If you know a rule is never going to be relevant to you, then

you can just disable it in the rules file. Sometimes things are not as clear cut and you may want to

disable the rule on a more ad hoc basis. There are two ways in which we can tackle this: using a suppression file or in-source suppression. In my experience, developers under pressure will start suppressing rules instead of fixing them, so it's important that you regularly review your suppression file, or search for the suppression attributes in your code base, to make sure that suppressed rules still make sense

in the context of your application.

Finish

You have now reached the end of this course on code metrics and static code analysis, and I thank you

for taking the time to allow me to teach you about these fantastic tools provided within Visual Studio. If

you'd like to contact me privately, you can do so from my blog, Coding in the Trenches, at www.stephenhaunts.com. I love connecting with people, so if you have enjoyed this course, then I would

love to interact with you via Twitter. You can follow me on Twitter by using the handle @stephenhaunts.

If you have enjoyed this course, then I would very much appreciate you using the ratings button on this

course page at Pluralsight. Also, it would be great to hear your feedback in the course discussion. My

name is Stephen Haunts, and you have been watching Improving Brownfield .NET Apps with Code

Analysis and Metrics. Thanks for watching.
