
Introduction To Static Analysis

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This first module provides background on the concept of Static Code Analysis that is vital to

understanding what NDepend does. The focus of this course is on doing Static Analysis with

NDepend, but before we get to the specific tooling it's important to understand what Static Analysis is, in

general. I will cover that in this module by going over the following topics. First, I'll talk about Static

Analysis in broad terms, for definition purposes. Next, I'll talk about the idea of upfront Static Analysis

vs. after-the-fact inspection of results. I will also talk about the difference between Static and Dynamic

Analysis of code bases. After that, I'll talk about the concept of Simple Source Code Parsing

vs. Compile-Time Code Analysis. With the background in place, I will discuss the different types of

Static Analysis that are performed and then, I'll give some examples of Static Analysis

Metrics. Finally, I'll wrap up with a justification of why one should bother with Static Code Analysis at

all. This course assumes that you are an intermediate-level programmer or above and also that you are comfortable with the .NET Framework and .NET programming. If you are a beginner, I would suggest

looking through the Pluralsight Library for introductory programming courses, as well as the design

patterns and SOLID principles courses. All coding in this course will be done in C#, but NDepend is capable of working with any .NET language, so if you program in VB.NET or Managed C++, you can still get

plenty out of this course.

Static Analysis in Broad Terms

Rather than the traditional lecture approach of providing an official definition and then discussing the

subject in more detail, I'm going to show you what Static Analysis is and then define it. Take a look at

the following code and think for a second about what you see. What's going to happen when we run this

code? I bet you saw this coming. In a program that does nothing but set x to 1 and then throw an
exception if x is in fact 1, it isn't hard to figure out that the result of running it will be an unhandled

exception. What you just did there was static analysis. Static analysis comes in many shapes and

sizes. When you simply inspect your code and reason about what it will do, you are performing static

analysis. When you submit your code to a peer and have her review it, she does the same thing. Like you and your peer, compilers perform static analysis, through automated analysis instead of manual. They

check the code for syntax errors or linking errors that would guarantee failures and they will also provide

warnings about potential problems such as unreachable code or assignment instead of

evaluation. Products also exist that will check your source code for certain characteristics and stylistic guideline conformance, rather than worrying about what happens at runtime. And in managed languages, products exist that will analyze your compiled IL or byte code and check for certain

characteristics. It is these latter forms of automated analysis that will occupy the majority of this

course. The common thread here is that all of these examples of static analysis involve analyzing your

code without actually executing it.
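The listing shown on screen isn't reproduced in this transcript, but from the description (set x to 1, then throw if x is 1) it was presumably something along these lines; treat it as a sketch, not the exact course code:

```csharp
using System;

class Program
{
    static void Main()
    {
        int x = 1;

        // Static reasoning alone tells you this branch always executes:
        // the program is guaranteed to end in an unhandled exception.
        if (x == 1)
        {
            throw new Exception("x was 1");
        }
    }
}
```

Predicting that unhandled exception without running anything is exactly the kind of reasoning a static analysis tool automates.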

Analysis vs Reactionary Inspection

People's interactions with their code tend to gravitate away from analysis. Whether it's unit tests and TDD, integration tests, or simply running the application to see what happens, programmers tend to run experiments with their code and then see what happens. This is known as a feedback loop and

programmers use the feedback to guide what they're going to do next. While obviously some thought is given to what impact changes to the code will have, the natural tendency is to adopt an "I'll believe it when I see it" mentality. We tend to ask, "What happened?" and we tend to orient our code in such a

way as to give ourselves answers to that question. In this code sample, if we want to know what

happened, we execute the program and see what prints. This is the opposite of static analysis in that

nobody is trying to reason about what will happen ahead of time, but rather the goal is to do it, see what

the outcome is, and then react as needed in order to continue. Reactionary inspection comes in a

variety of forms such as debugging, examining log files, observing the behavior of a GUI, etc.
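Again the on-screen sample isn't captured in the transcript; a minimal stand-in for the "run it and see what prints" style might be:

```csharp
using System;

class Program
{
    static void Main()
    {
        int x = 1;

        // Instead of reasoning about the value ahead of time,
        // we just execute the program and observe the output.
        Console.WriteLine(x == 1 ? "x is 1" : "x is not 1");
    }
}
```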
Static vs. Dynamic Analysis

The conclusions and decisions that arise from the reactionary inspection question of "What happened?" are known as dynamic analysis. Dynamic analysis is, more formally, inspection of

the behavior of a running system. This means that it is an analysis of characteristics of the program that

include things like how much memory it consumes, how reliably it runs, how much data it pulls from the

database, and generally whether it correctly satisfies the requirements or not. Assuming that static

analysis of a system is taking place at all, dynamic analysis takes over where static analysis is not

sufficient. This includes situations where unpredictable externalities such as user inputs or hardware

interrupts are involved. It also involves situations where static analysis is simply not computationally

feasible, such as in any system of real complexity. As a result, the interplay between static analysis and dynamic analysis tends to be that static analysis is a first line of defense designed to catch obvious

problems early. Besides that, it also functions as a canary in the coal mine, to detect so-called code

smells. A code smell is a piece of code that is often, but not necessarily, indicative of a problem. Static

analysis can thus be used as an early detection system for obvious or likely problems and dynamic

analysis has to be sufficient for the rest.

Source Code Parsing vs. Compile-Time Analysis

As I alluded to in the Static Analysis in Broad Terms section, not all static analysis is created equal. There are types of static analysis that rely on simple inspection of the source code. These

include the manual source code analysis techniques, such as reasoning about your code or doing code

review activities. They also include tools such as StyleCop that simply parse the source code and make

simple assertions about it to provide feedback. For instance, it might read a code file containing the

word class, see that the next word after it is not capitalized, and return a warning that the class name should be capitalized. This stands in contrast to what I'll call compile-time analysis. The difference is that

this form of analysis requires an encyclopedic understanding of how the compiler behaves, or else the

ability to analyze the compiled product. This set of options obviously includes the compiler, which will fail on showstopper problems and generate helpful warning information as well. It also includes

enhanced rules engines that understand the rules of the compiler and can use this to infer a larger set

of warnings about potential problems than those that come out of the box with the compiler. Beyond

that is a set of IDE plugins that perform asynchronous compilation and offer real-time feedback about

possible problems. Examples of this in the .NET world include ReSharper and CodeRush, and finally

there are analysis tools that look at the compiled assembly outputs and give feedback based on

them. NDepend is an example of this, though it includes other approaches mentioned here as well. The

important compare and contrast point to understand is that source analysis is easier to understand

conceptually and generally faster, while compile-time analysis is more resource-intensive and generally more thorough.
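The class-name example described above can be made concrete. A purely textual checker needs no compiler knowledge to flag code like this (the class name here is invented for illustration):

```csharp
// A source-level style checker simply sees the token after "class"
// and notices it doesn't start with a capital letter.
class orderProcessor   // would draw a warning: class names should be capitalized
{
}
```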

The Types of Static Analysis

So far, I've compared static analysis to dynamic and ex-post-facto analysis, and I've compared

mechanisms for how static analysis is conducted. Let's now take a look at some of the different kinds of

static analysis from the perspective of their goals. This list is not necessarily exhaustive, but rather a

general categorization of the different types of static analysis, with which I've worked. Style checking is

examining source code to see if it conforms to cosmetic code standards. Best practices checking is

examining the code to see if it conforms to commonly accepted coding practices. This might include

things like not using goto statements or not having empty catch blocks. Contract programming is the

enforcement of preconditions, invariants, and postconditions. Issue and bug alerting is static analysis

designed to detect likely mistakes or error conditions. Verification is an attempt to prove that the

program is behaving according to specifications. And fact-finding is analysis that lets you statically retrieve information about your application's code and architecture. There are many tools out there that

provide functionality for one or more of these, but NDepend provides perhaps the most comprehensive

support across the board for different types of static analysis goals of any .NET tool out there. You will thus get to see in-depth examples of many of these, particularly the fact-finding and issue-alerting types of

analysis.
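As a concrete instance of the best-practices category mentioned above, here is the kind of empty catch block such checkers flag (the file name is illustrative):

```csharp
using System.IO;

class Cleanup
{
    static void DeleteTempFile()
    {
        try
        {
            File.Delete("temp.dat");
        }
        catch (IOException)
        {
            // Empty catch block: the error is silently swallowed.
            // Best-practices checkers flag this as a likely mistake.
        }
    }
}
```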

A Quick Overview of Some Example Metrics

Up to this point, I've talked a lot in generalities, so let's look at some actual examples of things that you

might learn from static analysis about your code base. The actual questions you could ask and answer are pretty much endless, so this is intended just to give you a sample of what you can know. Is

every class and method in the code base in Pascal case? Are there any potential null dereferences of

parameters in the code? Are there any instances of copy and paste programming? What is the average

number of lines of code per class or per method? How loosely or tightly coupled is the

architecture? And what classes would be the most risky to change? Believe it or not, it is quite possible

to answer all of these questions without compiling or manually inspecting your code base in time-consuming fashion. There are plenty of tools out there that can offer answers to some of these

questions that you might have, but in my experience, none can answer as many in as much depth and

with as much customizability as NDepend.
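To make one of those questions concrete, here is the shape of a potential null dereference of a parameter that an analyzer can spot without running anything (the class, method, and parameter names are made up for illustration):

```csharp
class Formatter
{
    // If any caller can pass null, "name.Length" is a potential
    // NullReferenceException; analyzers flag the dereference statically.
    public static int NameLength(string name)
    {
        return name.Length;
    }
}
```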

Why Do This? What Does This Prove?

So all that being said, is this worth doing? Should we do static analysis? Why should you watch the

subsequent modules if you aren't convinced that this is something that's even worth learning? It's a valid

concern, but I assure you that it is most definitely worth doing. Here are some benefits. The later you find

an issue, typically the more expensive it is to fix. Catching a mistake seconds after you make it, as with a typo, is as cheap as it gets. Having QA catch it a few weeks later, after the fact, means that you have

to remember what was going on, find it in the debugger, and then figure out how to fix it, which means

more time and cost. Fixing an issue that's blowing up in production, costs time and effort, but also

business and reputation. So anything that exposes issues earlier saves the business money, and static analysis is all about helping you find issues, or at least potential issues, as early as possible. Beyond
just allowing you to catch your mistakes earlier, static analysis actually reduces the number of

mistakes that can happen in the first place. The reason for this is that static analysis helps

developers discover mistakes right after making them, which reinforces cause and effect a lot

better. The end result: they learn faster not to make the mistakes they've been making, causing fewer

errors overall. Another important benefit is that maintenance of the code becomes easier. By alerting you to the presence of code smells, static analysis tools are giving you feedback as to which areas of

your code are difficult to maintain, brittle, and generally problematic. With this information laid bare and

easily accessible, developers naturally learn to avoid writing code that is hard to maintain. Exploratory static analysis turns out to be a pretty good way to learn about your code base as well. Instead of the

typical approach of opening the code base in an IDE and poking around or stepping through

it, developers can approach the code base instead by saying, "Show me the most heavily used classes, and then which classes use those." Some tools also provide visual representations of the flow of an

application and its dependencies, further reducing the learning curve developers face with a large code

base. And a final and important benefit is that static analysis improves developers' skills and makes

them better at their craft. Developers don't just learn to avoid mistakes, as I mentioned in the mistake

reduction bullet point, but they also learn which coding practices are generally considered good ideas

by the industry at large, and which practices are not. The compiler will tell you which things are illegal and

warn you that others are probably errors, but static analysis tools often answer the question, "Is this a good idea?" Over time, developers start to understand the subtle nuances of software

engineering. There are a couple of criticisms of static analysis; the main ones are that the tools can be expensive and that they create a lot of noise in the form of false positives. The former is a problem for obvious

reasons and the latter can have the effect of counteracting the time savings by forcing developers to

weed through non-issues in order to find real ones. However, good static analysis tools mitigate the

false positives in various ways that we will look at, an important one being to allow the shutting off of

warnings and the customization of what information you receive. NDepend turns out to mitigate both. It

is highly customizable and not very expensive.


Summary

In this module, I started off by defining static analysis and covering it in broad terms. I then discussed

the difference between up front analysis and inspecting results after the fact. From there, I took a dive

into the difference between static and dynamic analyses of code bases and then I talked about the

difference between static analysis via source code parsing and static analysis via compile-time analysis. With basic differences sorted out, I covered some different types of static analysis in terms of

the goals of the activities. Then I gave some examples of metrics that can be evaluated using static

analysis and finally, I concluded with an argument as to why this practice is beneficial and worth doing.

A Gentle Introduction To NDepend

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This second module is an introduction to the basics of NDepend; it builds on the introduction to static analysis by introducing NDepend itself. First, I'll do what you might expect

with any introduction and provide some background and an overview of NDepend. Next, I'll talk about

the specifics of the version that I'm using for the purpose of this course. With the backstory told, I'll show

you how to install NDepend and then I'll talk about the different operational modes of the tool. Next up,

I'll cover the NDepend start page and the dashboard, and then I'll walk you through your

first NDepend project. In order to have a code analysis project, you need to choose what you're going to

be analyzing, and then I'll take you through performing the actual analysis. Once analysis is

complete, I'll go through some highlights of what it produces and finally, I'll provide some links to some

NDepend documentation and tutorials.

NDepend Backstory and Overview


In the last module, I covered the subject of static code analysis in general including what motivates its

use. But whatever specific goal you might have in mind with regard to your code base (fact finding, tracking down bugs, promoting maintainability, etc.), it all boils down to hunting for information about your code base, and NDepend was created to help with just that. It aims to take your extremely

complex and information-dense software project and allow you to extract important information from it

easily. Patrick Smacchia, the creator of NDepend, started writing the tool over 10 years ago. He was

working on a large and messy code base, while simultaneously reading Agile Software Development

Principles, Patterns, and Practices by Bob Martin, in which there are descriptions of ways to analyze a

code base. Patrick wanted to see the metrics Bob described applied to the code base he was working on,

and the first kernel of NDepend was born. He open sourced the tool and it started to become

popular, garnering feature requests from the community. He eventually commercialized the tool and its

first official release was April 26, 2004. In the near decade that has followed, the product has matured

and expanded significantly, growing into the comprehensive and indispensable code base analysis tool

that it is today. Currently, NDepend is mainly a static analysis tool and it covers a lot of ground in that

space. It allows you to generate code quality metrics and see how well your code base conforms to

predefined rules, but it also allows you to write your own custom rules. With NDepend, you can visualize

your code base and its dependencies using a series of matrices and graphs as well, and it does more

than just providing snapshots of your code base at moments in time. It integrates into your build, and

allows you to monitor the evolution of your code, and to decide whether you think that you're

making progress toward quality or not.

This Version Is A Beta

For the majority of the course, I will be using version 5 of NDepend. When I started recording, this

version was in Beta. All major GUI and flow work has been completed as I work on this. So anything

that changes between now and release time will be relatively minor. You aren't seeing a different

product than the one that will be live, but there may be slight differences the way there would be if a
video were made about version 1 of a project and you had the updated version 1.1 when watching the

video. I just want to be clear upfront that this is a beta, so that you understand why I will switch between versions in the clips about installing and finding information on the website. I am

demonstrating in version 5, but for the purposes of the NDepend site and documentation, version 5 is not yet available at the time of this module's recording.

Installation

Now without further ado, let's actually go and install NDepend. What I'm going to do is launch a new

browser window and then navigate to the NDepend website. And here I'll be confronted with a series of

menu options. I'm going to go to the Download one, and this is pretty straightforward and simple

here. You have the option of STARTING a 14-DAY TRIAL or you can DOWNLOAD

the PROFESSIONAL EDITION if you enter your license key. I'm not going to enter my license key here,

so I'll opt for the 14-DAY TRIAL. I'll put in my email address and then I'll click the button. And what this

does is it dispatches an email to me, but it also gives you the ability to download right from the link

so, the email itself looks like this and it gives you a little bit of instruction as to how to go about

downloading and how to get started and all that. For the purposes of this demonstration, I'm just going

to click here and say download. And as you can see in Chrome, it's going to start a download. However,

this is going to take a few minutes, so I have actually already downloaded it and I will go with that

version and I'm going to Cancel this one. Now that NDepend is downloaded, I'm going to open up the

Downloads folder and see that here it is in zip format. And then, what I'm going to do is install it by

putting it into a directory and unzipping it and this is a directory I've already picked out, which is an Apps

directory and I'm going to create a New Folder and I'm going to call this NDepend 5, and then I'm going

to bring this guy over and Cut it from my Downloads folder and put it in here, and then I'm going to do an Extract All and just have it go into this flat directory, not into a subdirectory of it, and this will be the NDepend installation folder itself. Basically, the way this works is that there's no MSI installer

and that's because the NDepend user base actually prefers it that way and once you get used to it, it's
kind of nice and it's a lot simpler in some ways because there are no registry keys, there's nothing like that

going on, it's kind of what you see is what you get from the download. So once you install it, all the

executables appear and you'll see that there's nothing magic going on behind the scenes. All these

executables, the installer, the whole nine yards, that's it, you don't have to worry about anything else. So

there are now a series of executables in this folder and you have NDepend installed. That's all there is

to it. There is however one more item after the install that you're going to want to take care of before

you really get started in earnest and that's the licensing angle. So if I double click on the

VisualNDepend.exe, I'm going to be prompted that I'm ready to use the trial version, although the first

thing that's going to come up is this screen about publisher verification. You're going to find you can uncheck Always Ask Before Opening This File because you're going to be running this a lot and you're not

going to want to see this message every time, so I'm going to uncheck that, and now what I'll see is a message indicating that this is the evaluation version. However, I'm a licensed user, so I don't want to

use the evaluation version. What I want to do is actually install my license file, so I'm going to Exit here

and then I'm going to show you how to do that. When you purchase your copy of NDepend, you will

receive an email containing an actual license file. This comes as an XML file and for example

purposes, I have it in my Downloads folder, right here. All you have to do is Cut it out of this Downloads

folder and then you just Paste this into the root directory of your NDepend install and you are now a

licensed user. So if I go back and I launch VisualNDepend, I will no longer see a message about a trial

version. I will now be a fully licensed user, launching the normal NDepend version. And there you have

it, no more licensing messages, you're all set and ready to go.
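The xcopy-style install described above boils down to a few steps. This sketch uses example paths matching the demo; your folder names and license file name will differ:

```shell
REM Sketch of the install steps from the demo (no MSI, no registry keys).
mkdir "C:\Apps\NDepend 5"
REM 1. Extract the downloaded zip flat into that folder (no subdirectory).
REM 2. To license it, copy the license XML file from the purchase email
REM    into the root of that same install directory.
```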

Different Operational Modes

Now that NDepend is installed, let's take a look at the installation directory. Notice that there are four

executables. The first one to consider is NDepend.Console.exe, which is as simple as it gets. This is

NDepend's equivalent of building a .NET assembly using MSBuild at the command line. In other

words, you use the NDepend Console executable to perform the core functionality of NDepend, but
without a visual interaction. This is how you might use it, say, on a build machine. The NDepend Console executable takes various command line arguments that allow you to customize its functionality. Let's

take a look. What I'm actually interested in here is NDepend.Console.exe, but we're not going to

execute that file here within the shell, it's actually going to be necessary to open up a command line

window to do this and then execute it from the command line, as I mentioned the way you would in a

build or on the build machine. So here we are in the NDepend directory, which corresponds to this

folder right here. So I can actually execute NDepend.Console.exe from right here. And what we

get here is a message that kind of explains what's going on. If I scroll up just a bit, you get some version

information about NDepend up at the top and then what you might expect from a command page or

a general command line help statement, which is to say here are some options that you can execute

from the command line. One thing to note is that when you are executing this from the command

line, you're going to generally follow the pattern NDepend.Console.exe followed by your target NDepend project, and then any of these options here, like ViewReport, or Silent, or HideConsole; those will come

after the target file. So I happen to have the path to an NDepend project in my paste

buffer here, so let's go ahead and see what that looks like, just as an example and we'll cover this more

in detail in the section about integrating NDepend with your build. But just to see what happens

here: there is the console command, there's my project, and let's do the ViewReport option and we will

kick that off and what you're going to see is actually the output of this project here. There are some info

messages, some warnings; this is a project for a moderately sized application that I wrote at one

point. So you're actually seeing NDepend go through the process of analyzing this and spitting out all

these messages to the console instead of in Visual Studio, as you might get more used to seeing them

over the course of time, and then up pops the report as per our command line option here. And again, this is another thing that I will come back to in more detail later, which is the NDepend report. But in this section, I just wanted to demonstrate what it meant to run NDepend from the command

line so that you could just get a brief glimpse of what that would look like. In addition to the console,

there is NDepend.PowerTools.exe, which is a set of open source static analysis tools that come

along with NDepend. These make use of the NDepend API, which is a way to integrate NDepend's
analysis functionality into your own custom applications. The tools showcase the API and have some

cool features like searches for dead code, searches for duplication, etc. This is more or less bonus

functionality that you could tweak, if you would like. So let's take a look at that. So what we're going to

look at here is the NDepend.PowerTools.exe executable. Like NDepend.Console.exe, this can be

run from the command line. However, unlike that executable it doesn't need to be. So I'm going to

actually just launch it here from within the shell, and then maximize the window that comes up. Now this

is the NDepend PowerTools; it's a pretty simple and straightforward interface that gives you access to a

number of different utilities and like I said, these all make use of the NDepend API, so not only are they

useful, but you can get a feel for what this API looks like. And as an example, let's go to A here, query the code with CQLinq, and you get some choices here: we can do querying against an existing NDepend project, Visual Studio solutions, or .NET assemblies. For the sake of this quick example, I'm just going

to pick this same thing that I picked in the last brief demo, and I'm going to launch my query explorer

tool here. And this is a CQLinq query, which has some cool stuff in it. It basically says here that we're going to

warn if there are any types from JustMyCode.Types, which is by the way a really cool feature of NDepend that lets you separate the code that you've written from library code. So, warn if any type in JustMyCode.Types is more than 500 lines of code or more than 3,000 lines of IL code, so

basically, have I written any very large classes here. I'm going to hit F5 to run it and I'm going to see that

it looks like I haven't, go me. So let's just get a feel for what maybe happens if I were to make this

number smaller, let's try running it with 100 lines of code and there's one, so it is working. We are

getting to see through the PowerTools actual real live information about a code base and there's any

number of other PowerTools as you could see from the main menu in there. I'm not going to go through

those all in detail at this point. The kind of rules and querying and a lot of this functionality is interwoven

with the tool itself, but it is always interesting to play around with the PowerTools and you can even add

onto them or tweak them or completely write your own if you wanted. Next up, consider the

NDepend.Install.VisualStudioAddin.exe. This is the executable that you will use to configure the

NDepend plugin for Visual Studio. You'll only want to run it when you want to change which versions of

Visual Studio have the NDepend plugin. For the purposes of this course, we will work mainly with Visual
Studio, using the plugin. Let's take a look at the install. So from here, I'm going to click on the

NDepend.Install.VisualStudioAddin.exe and observe what comes up, which is a map of the different

versions of Visual Studio for which I can install NDepend. In the case of my current situation here on

this machine, you'll notice that only one Install button is enabled, that's the Visual Studio 2012 Install

button and the reason for that is that that's the only version of Visual Studio that I have installed on this

machine. So let me Close this and come back to that in a moment. For now, I'm going to bring up

Visual Studio that I have running to demonstrate something which is that there's no NDepend

menu option right now in Visual Studio. So as you can see, nothing going on there. I'm going to close

Visual Studio and then I'm going to re-launch the installer and actually install for Visual Studio

2012. And it says the installation has been successful. And now, you'll notice that this is updated here

where I can Re-Install or Uninstall and then you also have this option here to Disable NDepend

Shortcuts. This is something that you would want to do if the NDepend keyboard shortcuts were

getting in the way of Visual Studio shortcuts or other plugin shortcuts that you liked on the keyboard, but

it's probably best to leave them enabled by default unless you have a reason not to. There's also a link

down here where you can see a Getting Started video, it'll actually take you to the NDepend

website. So I'm going to Close this and then I'm going to Re-Launch Visual Studio and we'll see that in

Visual Studio, there's going to be a new menu item now for NDepend. And that's going to be the case in

any of the versions of Visual Studio listed here whether it's 2010, 2008, 2013, and once it gets done

launching you'll actually see the NDepend menu pop-up and that's how you'll know it's been

successfully installed, and there it is. If we were to now go back and uninstall NDepend's Visual Studio

plugin, we would see Visual Studio back in the state that it was when I started showing this section. The

last operating mode that I'll show is Visual NDepend mode. Visual NDepend mode is the stand-alone

tool for interacting with NDepend that operates outside of Visual Studio. You might find this useful if you

are using a different IDE than Visual Studio or if you simply prefer to keep the number of plugins within

Visual Studio to a minimum. Let's take a look at that. Not surprisingly, to get to Visual NDepend's

executable, we're going to click on VisualNDepend.exe, which will launch Visual NDepend. If you're

anything like me, you don't want to keep seeing this all the time, so you can go ahead and uncheck that
box and just let NDepend run whenever you want it to without being bothered, it's perfectly safe I assure

you. So, this launches Visual NDepend and it also launches the start page, which I'm going to cover in

the next section, so I won't go into too much detail here, but suffice it to say, what we're looking at here

is a completely stand-alone self-service version of NDepend that you can use to run on your code

base. Here you have options to Create New Projects, or Open Projects, or do some analysis, and

various other things, and more of these menu options become enabled once you have a project

loaded and some actual code that it's taking a look at, but for the rest of this module, I'm going to be

focusing mainly on the Visual Studio plugin, just because that's the place that it's most likely you're

going to be using NDepend if you are a heavy Visual Studio user, but just know, that anything you want

to do through the plugin, you can generally also do in Visual NDepend, it's a very useful tool especially if

you don't want all the baggage that comes along with Visual Studio for some reason and you want kind

of a lighter weight way to analyze your code.

NDepend Start Page

Now that we've seen the different NDepend operating modes, let's take a look at the NDepend Start

Page, from within Visual Studio because this is the most likely scenario in which you'll encounter it if

you're sort of a garden variety user. I'm just going to launch Visual Studio without any sort of project

open and then I'm going to go into the NDepend start page. Now once it loads, you know, not

surprisingly, just like Visual Studio loads its own start page, NDepend also has the same sort of

concept, where you have kind of a quick go to set of things here. So, the first thing to notice is that up in

the upper left-hand corner you can see which version of NDepend you're running and what version it is in terms of release and installation levels. So, this is version 5.0 and it's the Professional paid

Edition. Right below that, you have sort of standard links that you might expect on a start page like the

ability to open a project or Create a new one, as well as the ability to dive right into Analyzing or doing

Comparisons. And below that is the sort of quintessential, you know, what are the Recent things you've

worked on and a quick jumping off point for that. And then, you've got Getting Started, which actually
links to a number of useful places on the NDepend website, and you've also got this kind of How Do I help link in there as well. Below that are some add-ins that you can install, the Reflector add-in and the

NDepend for Visual Studio add-in that we had taken a look at before, and when you launch one of these

guys, it actually pops open an options window and you can see here that it pre-selects the option that

you're interested in. So if we wanted to do some disassembly we could install this add-in here. And then

finally, this is an interesting thing: in the lower right corner of the start page here is the little green circle that

tells me I'm running the latest version of NDepend and it's kind of cool to have a definitive and quick

source of truth as to whether you've got the most recent bits or not. And then, you can also get to the

release notes from here if you'd like. And if you are not running the most recent version of NDepend,

you have the ability to download it.

The Dashboard

Next up, let's take a look at the NDepend Dashboard. Even if you are an NDepend veteran, this is going

to be a new and cool feature for you. The Dashboard is brand new and it's going to be the nerve center for tracking the status and quality of your code over the course of time, within your project. So,

let's open a project, using the AutotaskQueryExplorer here, and as you can tell by the brief kind of

blink out and refresh, we've actually opened a project, so now we have all options available to us. I'm

going to go to the Dashboard and there are going to be a few things here that are going on, a few main

categories of things. The first thing that should jump out at you immediately is what we're seeing here

with the stats, IL instructions, Method Complexity etc. This tells you some basic kind of at a glance

statistics about your code. Next, at the top you'll notice that there's some kind of You are Here, sort of

information where you see the Project Name, the actual file of the Project that's loaded in NDepend, last

Analysis Date, etc. This is kind of a quick way to orient yourself with what's going on. Another thing that

you'll notice is that there are a handful of links and buttons sprinkled throughout this area that let

you take various actions. You can drill into Code Rules and the Rules Explorer from Code Rules section

or you can import code coverage data to get your unit test coverage imported into NDepend's
analysis. For this particular project, I haven't imported coverage data, but you could click one of these

links to do just that and it brings up a window and the other links that are sprinkled throughout here do

similar sorts of things. And then, as you scroll down, you see a really cool series of graphs. These are

designed to show you how your code base is developing and changing over time, and this is a real

difference maker in the new version of NDepend and in general. Instead of just having a general feeling

of foreboding about your code, you can actually look at it and say, wow, our unit test coverage is slipping

and average method complexity is growing over the last six months. And not only can you use the trending charts offered here, but you can actually define your own. This is really kind of your cockpit for

code quality. And one thing that I do want to point out briefly is that I'm recording on a pretty low

resolution for the sake of recording integrity on all monitors, but on a much higher resolution, you

actually see these graphs kind of stacked up at the top. So if you're on 1920x1200 or something

like that, pretty much all of this information is there, right at a glance.

Your First Project

So now, that you've seen the NDepend Start Page and the Dashboard Page and have sort of a feeling

for what the tool has to offer, you're probably interested in getting started with your first project. Now, I'll

say at this point that there are various ways to do that. The tool is pretty flexible and there are a lot of entry points. For example, you can do this through Visual NDepend, you can do this through the NDepend

menu at the top, but for the purposes of this demo, I've opened back up the NDepend Start Page, since that's kind of what it's intended for: initial use and a jumping-off point. I'm just going to create an NDepend project through the NDepend start page, which will actually let you understand what's going on from a file-system perspective too, and perhaps take some of the mystery out of it, the way it might happen if you created it specifically through the plugin and had everything done automatically. When I click on New NDepend Project, I'm presented with a sort of standard file location browser window. I'm going to navigate to a folder I have in my copy buffer, and that's the

one that contains this AutotaskQueryExplorer project. So I'm going to select this folder and for the name
of the project, I'm just going to call it BrandNewNDependProject and I'm going to click OK. And really,

that is all there is to creating a brand new project. That's going to now exist on disk and we're ready for

the next section, which is choosing code to analyze.

Choosing What To Analyze

Now that we're getting ready to choose what to analyze, you'll notice that there isn't really anything

seemingly available here so far. You do have the option, from this NDepend project properties page, of browsing to assemblies that you might want to analyze. NDepend is quite flexible: not only does it let you set up an NDepend project in order to have things to analyze, but it can also just do one-off analyses of assemblies at any point that you'd like. You have another option here that says Add

Assemblies From Visual Studio Solution, and that's what we want to do here; it's the easiest thing to do. We're looking at the AutotaskQueryExplorer project as something that we want to analyze. So I

just chose that and now, all of a sudden I have these options of different things that I might like to

analyze. So, what I want to do is choose all of the Assemblies in this solution and then I want to go

ahead and have those be part of this NDepend project that I've just created. So I'm going to Select them

and I'm going to Save and I did that by hitting Ctrl+S and you can see now that in the NDepend project

properties, I have all of the projects within my solution now added to the NDepend project and I've

effectively here chosen what it is that I want to analyze. If I were now to go ahead and Click this Run

Analysis on Current Project, it would run the analysis on my solution and this is also saved for later, so if

I go up here and say let's Close this Project, the BrandNewNDepend analysis project and I get out

of here and then I re-launch the project, we can see that back here in the project properties

my selections for what to analyze are still valid. It's going to analyze the same stuff over and over

again, unless you change what it should analyze from this Project Properties window. Now at this point,

before moving on from the choosing what to analyze section, I would like to point something out. There

is a slightly more straightforward way to go about choosing what to analyze if you're just getting started

with a project. I wanted to show you what I did, so that you would understand the idea of creating
an NDepend project on disk and then attaching it to a solution and choosing what to analyze, but there

is a quicker way, which is what I'll show you now. I'm going to close the project properties window and

then I'm going to actually close the Project itself, the NDepend project and I'm going to Open up the

Recent Project AutotaskQueryExplorer and I'm going to select and say that I want to actually detach

this NDepend project from this solution. So, I'm going to go down here to Detach Project and there is

the dialog again indicating that the solution file is being changed. So now, this project has been

detached from the current solution. So now, I'm going to close this project properties page and I'm going

to go the NDepend Menu and I'm going to tell it that I want to attach a New NDepend Project to a

Current Visual Studio Solution. Now, you'll observe here that this is a bit of a shortcut because instead

of having to separately create the NDepend project and then modify its properties and go find which

Visual Studio solution I wanted to have analyzed, now if I say I want to attach a brand new project to an

existing solution, it actually pops up all the different Assemblies within that solution. So here, in order to

do this in my new solution, all I have to do is select the Assemblies that I want and then I can

analyze them. Generally, if you're going to attach an NDepend project to a solution that you have in Visual Studio, this is probably the easiest way to do it, and it's the way that I generally do it. I just wanted to show both ways for the sake of clarity.

Your First Code Analysis

Now it's time to show you your very first code analysis and I have this same window open and I'm just

going to Cancel out of here and go back and reopen this BrandNewNDependProject, but first I'm going

to go and I'm going to attach the BrandNewNDependProject, back to the solution and as we've seen

before, that's going to prompt the project to reload with the reattachment, and so here we are back in

the Project Properties window and what I'm going to do is perform a code analysis from

here. Now, there are lots of ways throughout the course of NDepend that you can perform analyses and

this is one of them; this play indicator that mirrors Visual Studio's is what you're going to use to run

analysis and you have a couple of options here, you can run an analysis or you can run the full analysis
with the build report. I'm going to click on Run Analysis and what's going to happen is a window is going

to pop-up here that you can dock, using the normal Visual Studio way of doing things, and it's going to

show you some information about what just happened and what's going on with the analysis. Now

what's happened with this analysis is that a familiar-looking window of errors and warnings and whatnot has been generated, and this indicates what NDepend is spitting out as it's analyzing. So, here are some messages. It's telling you that it Began analysis and that we

haven't loaded a baseline and so on and so forth, and just like what you're used to in Visual Studio, you

can get rid of say the Messages and see only the Warnings and here are a series of Warnings about

PDB files, and it's also telling you that there are some critical rules violated. NDepend distinguishes between different kinds of rules, and it is actually going to flag the critical rule violations as warnings. In

this particular analysis, we can also see that there aren't any errors. This is going to be the initial interaction with the analysis, where you're going to see sort of critical information about it, much the same way you do with the build of your source, and then you'll see more as you actually run the application. The same goes here: errors are things that are going to stop the analysis from proceeding, and then you have Warnings and Messages and Interactive Warnings, and this is going to be your dashboard for all of that. And that's really kind of it for the actual mechanics of the analysis itself, but as for the output of the analysis, the results, there is plenty to that, and I'm going to cover it briefly in this next section, where we take a brief tour through the analysis report, and then in a lot more detail later.

A Quick Tour through The Build Report

What you're looking at here is the NDepend report. This was generated during the analysis that we just did

and it popped up off screen, so I brought it on screen to show you. At the top, there's the report

summary and kind of a handy header, which contains information about the application and the

analysis, when it was done, how long it took, and so on and so forth. There are also some links up at

the top and a couple of these have nice pop-ups here where you get some information. Here's where to

start, things to do, things to look at, and here are some tips as well that pop up right in the frame here. And then this guy, I'm not going to click on right now, but this will actually take you to the NDepend

documentation. You'll notice at the left here there's a navigation option, this will be available to you

throughout the report and everything that you're doing with it. This is extremely handy and if you drill

down into stuff, which I'll show you briefly here momentarily, you can always get back to the Main Menu

by clicking up at the top and you will wind up here. You will also, at a lot of these screens, see breadcrumb trails at the top here to get you back as well. The next section down contains diagrams, which is

a pretty interesting thing to take a look at right here in the context of this HTML based report. You have

the option to view these as scaled, in which case they'll pop up here within the page, and then you can

X out of them or you can pop them up in a full mode, which is actually going to open up another

browser tab and show them to you with a bit more scale and you can zoom in and everything. What

we're actually looking at here in this case is a Dependency Graph. This is your application and its

assemblies and how they depend on each other and how they depend on third party assemblies. You

also have this Matrix where it breaks this out for you in a grid like form, it's the same information and

then, you have this Tree Map view, which shows you some information about your code base that I'll go

into a lot more detail about later, and there are other views as well; these will be covered later. Down

here, we have Application Metrics, which is showing you some interesting information about the

code, how many Lines of Code there are, Types, how many Comments there are and so on and so

forth. And then going down a little bit further there's a Rules Summary, which talks about actual code

rules that you should be following. So in the case of this, we have here Base classes should not use

derivatives. So if we take a look at this, it actually drills into greater detail and it has the language of

a query here and the query defines a rule and then it has types in your code base that run afoul of this

query. So we've drilled in here for some more detail, some statistics, some violations, all of which I'll get

into more detail about later. But at any point, we have the option to go back and to go back to the top of

the Main Menu, so you can navigate around pretty freely throughout the course of this report. And

there's one more property of this report that I'd like to mention, I alluded to it a little bit, but it's important

to note that this is an HTML file based report, locally on your disk and what that means is that it's very

easy to port this, send it places, or store it, whatever you'd like to do with it. So you
can publish this to a server and let people take a look at the actual report going forward throughout the

course of your application: day-to-day historical information about what's going on. And you don't need a

web server or anything like that to do it, it can be a file share and you can just look at it in a browser. So

this report is very flexible, very easy to use, and very handy to have around.

NDepend Documentation and Tutorials

Now that I've walked you through getting started and getting your feet wet with a gentle introduction to

NDepend, I'm going to wrap up with some sources of documentation, tutorials, and general information

about the tool. First of all, not surprisingly, the actual website is an excellent source of information, but

it's a great source of information in more ways than just simply being the product's website. As you can see in the graphic here, the documentation section is actually very involved and hierarchical; from the menu, the NDepend site has tutorials in the form of videos, as well as extensive documentation. It's really a terrific

resource. And then of course, there's the NDepend FAQ, which helps you get started as well. NDepend

is also on Twitter, @ndepend, and there's a Stack Overflow NDepend tag as well that tends to be pretty helpful. In addition, you'll be able to find various blogs Googling around the internet about NDepend. It's

a pretty well documented and trafficked product that is heavily in use by a number of prominent .NET

architects. So you can certainly find information and actually a lot of pretty ringing endorsements, it's a

very well regarded tool. The link that comes next is about the NDepend API, which is how you can

interact with NDepend programmatically and that is a pretty cool way to be able to query your code

base and really customize the tool as much as possible. And then the last two links are interviews with

the creator of NDepend that talk about the backstory and some interesting information. One is a

transcript of an interview and the next one is a Hanselminutes podcast, and that's where I got a good bit

of my information about the backstory of NDepend, as well as some additional information that you

might simply find interesting.

Summary
In this gentle introduction to NDepend, I started off, with NDepend's backstory and overview. Next, I

talked about how this is a version that is currently in Beta for my demonstration purposes and then I

went over how one would install NDepend. Once installed, I covered the different operating modes of

the tool, and then I dove into the Visual Studio plugin version, by showing off the NDepend Start

Page. From there, I covered the new and enhanced Dashboard and then I talked about creating

your first NDepend project. From there, I talked about choosing what assemblies in your solution to

analyze and then, I went through a code analysis itself. After that, I did a quick tour through the Build

Report, generated by the analysis and finally, I wrapped up with the NDepend documentation and some

tutorials.

Querying Your Code Base

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This third module is all about using NDepend to query your code base. First I'll describe what

I mean philosophically, by querying a code base. Then I'll talk about how yes, you can actually do that

with NDepend. I'll then introduce the NDepend CQLinq construct by showing you how to write SQL

style queries. And of course, where there's SQL-style LINQ, there's fluent interface LINQ, so I'll show

you how to use that as well. Up next, is a demo where I'll show you actual examples of interacting with

the code base using CQLinq. With an understanding of CQLinq in place, I'll show you some actual

NDepend code rules that come out of the box and then I'll actually show you how you would go

about creating your own code rules. Finally, I'll pull back a bit and discuss the importance and

ramifications of querying your code.

What Do You Mean By 'Query'


You may, at some point, have heard people talk about declarative and imperative styles of

programming. A CliffsNotes, abbreviated version of the difference between these two concepts is that

declarative programming expresses what but not how, whereas imperative programming expresses

how and leaves the reader or person running the program to figure out what. As a concrete example,

consider programming in a language like C# or Java, versus writing queries in SQL. With the former,

you use classes, methods, and patterns to tell the compiler how to do something, whereas with the

latter, you simply tell the RDBMS what it is you're looking for and you let the tool figure out how to take

care of the details. At the core of declarative programming lies data, whereas at the core of imperative programming lie algorithms. We are used to equating code with algorithms and procedures, our work

product. But what if we were to think of it as data? What if we were to think of our code, not in terms of

files on disk, but as a collection of concepts such as method, variable, type, and namespace? And what

if we were to access that data declaratively? NDepend answers those questions and it answers them

convincingly. It allows you to treat your code as data and ask it questions the same way that relational databases allow you to ask questions about your relational data. It allows you to say, show me all the methods that are 10 lines or fewer, in the same way that an RDBMS allows you to say, show me all the customers with an outstanding balance of 10 dollars or more. NDepend allows you broad freedom to

query your code.
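To make that concrete, here is a hedged sketch of what the "methods of 10 lines or fewer" question might look like as an NDepend query. `Methods` and the `NbLinesOfCode` metric are part of CQLinq's built-in vocabulary, but treat the exact spelling as illustrative; this runs inside NDepend's query editor, not as a standalone C# program:

```csharp
// CQLinq sketch (illustrative): "show me all the methods that are 10 lines or fewer"
// Methods is CQLinq's predefined set of all methods in the analyzed code base.
from m in Methods
where m.NbLinesOfCode <= 10
select m
```

The point is the shape of the thing: a selection over your code's elements, exactly like a SQL selection over rows.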

You Can Do That?

I've given this slide the title "You can do that?" because quite frankly this was my reaction years back when I first heard of NDepend and what it did. I don't mean the idea that you can figure out, by hook or by crook, the answers to questions about your code; I mean the idea that you could do it all without a

spreadsheet or some kind of scratch paper to make notes on, and without a lot of time and whatever equivalent of find in all files your IDE has. So when I discovered this, I was blown

away. Perhaps you're savvier now than I was then and you're not blown away, but I assure you, if

you're not familiar with the tool, that you're going to be impressed. NDepend will answer all of the
questions you might have about your code base and it will also answer questions you never thought to

ask, but really need answers to and you'll find it's surprisingly easy to do so. The reason that it's so easy

is because NDepend actually introduces a SQL-like declarative querying language and a set of semantics for assessing your code base. The style of it is quite familiar to anyone comfortable with LINQ or SQL, but instead of using selection and projection across rows and columns, you filter based

on assemblies, namespaces, types, methods, and fields. So yes, using NDepend you can absolutely

query your code base.

Introducing CQLinq: SQL-Style Queries

This next slide is about CQLinq, but in order to understand what that is and what it means, some

background is in order. In 2008, Microsoft rolled out a technology called LINQ or Language Integrated

Query. Modeled on traditional SQL, LINQ aimed higher in terms of integrating query semantics into a

programming language. Historically, a programmer in a language like C# would write C#, except when it

came time to talk to the database. For this, the programmer would create parameterized strings and pass

them to the database engine, relying on a driver to return results. LINQ takes that query and moves it

into the actual language, meaning so-called first-class queries are supported in C#. In other words, the SQL concepts of select, from, and where become actual language keywords. In light of this, you're

probably asking yourself which database is supported, I mean if the C# language authors are moving

query semantics into C# itself, you might wonder if they've decided to support SQL Server and nothing

else. But no, that isn't how it works at all. In fact, if you're familiar with the language concept of

interface, LINQ is actually an interface. It's a common API against which you can work with a lot

of different kinds of persistent models such as XML files, SQL Server databases, plain old object

collections of memory, and more. In fact, you can write LINQ queries against anything for which there is

a LINQ provider implementation. A LINQ provider is an implementation of the IQueryable interface. IQueryable defines what you need to implement, and then you, as the implementer, define how it actually works. Using this, people have created all sorts of interesting tools, such as LINQ-to-Twitter and
LINQ-to-Amazon and with the more recent changes to NDepend, LINQ-to-NDepend or maybe LINQ-to-

code is now in that list with the name CQLinq. You can actually write LINQ queries against your code

base.
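As an illustrative, hedged sketch of what such a SQL-style query might look like (the `Methods` set and the `CyclomaticComplexity` metric come from CQLinq's code model, but the exact shape here is mine, not a quote from the tool), consider asking for overly complex methods:

```csharp
// CQLinq, SQL-style syntax (runs inside NDepend's query editor):
// find complex methods, most complex first, and project the metric alongside each.
from m in Methods
where m.CyclomaticComplexity > 10
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity }
```

Notice how closely this tracks an ordinary SQL SELECT, except the "table" is the set of methods in your code base.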

Fluent Interface Queries Using CQLinq

When it comes to LINQ, there is another option. What I showed in the previous slide is the language

keyword construct, generally known as the LINQ query syntax. The other option is the LINQ extension

methods. In C#, an extension method is a special case of a static public method, where an instance is

passed to the method, but this is expressed in a way that makes it look like an instance method

call. Common examples of this include methods like ToList for arrays. ToList is not actually an instance

method of the array class, but rather a special static method that takes an array as input and returns a

list. It's called an extension method. LINQ has a series of extension methods that make it extremely

powerful, particularly because they let you chain calls together. So instead of a query resembling a SQL

statement, you can take a list and then call where and select on it, to express selection and

projection. In pretty much every code base I've seen over the last few years, people have come to favor the LINQ extension methods over the LINQ query syntax, and it seems to have become a de facto standard. Of course, there may well be exceptions, but I personally only ever use the extension

methods anymore and really, that's all I see. For people like me, NDepend and its CQLinq also support

this extension method Fluent Interface style of querying. This lets you write the query from the previous

slide, as seen here. Given the relative popularity of this style of querying, this is what I'm going to favor,

for the rest of this course.
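For example, a "complex methods" query in the fluent, extension-method style might be sketched like this (again, `Methods` and `CyclomaticComplexity` are part of CQLinq's vocabulary, but the specific query is my own illustration, not one of NDepend's shipped rules):

```csharp
// CQLinq, fluent extension-method style (runs inside NDepend's query editor):
// the same selection and projection as a SQL-shaped query, expressed as a chain.
Methods
    .Where(m => m.CyclomaticComplexity > 10)
    .OrderByDescending(m => m.CyclomaticComplexity)
    .Select(m => new { m, m.CyclomaticComplexity })
```

The chain reads top to bottom as filter, then order, then project, which is a big part of why this style has caught on.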

Let's Take CQLinq For A Test Drive

So without further ado, let's take CQLinq for a test drive. I have once again fired up my open source

project, the AutotaskQueryExplorer for a quick demonstration here. But that's the only thing that I've got

going. This is a version where I've reset everything to the way it would be normally. By default, there is no NDepend project attached to this. The reason I did that is so that I can show how easy it is to attach

a project and get started with querying right out of the gate. So I'm going to go back here to Attach a

New NDepend Project to the Current Visual Studio Solution, and I'm going to get this familiar looking

window where it pops up all of the Assemblies within the solution, and I'm going to tell it to analyze

six Assemblies and I'm going to uncheck the build report, for the simple fact that that's not really what

I'm looking for at the moment. So I'm going to let it analyze, and then I'm going to be in a position where I'm ready to start doing the query exploring and actually run some queries. So once that's finished, I've

got my full suite of NDepend options at the Menu and I'm going to go in here to Rule and I'm going to go

and say let's Create a New Code Query, and this is going to open a window, which I can center here, and this is going to be where I can start creating actual queries, live against this code base. As you can

see, this is not exactly complete. NDepend starts you off here with the keyword from and this is favoring

the query style CQLinq. I'm going to opt instead for the fluent interface style, that I mentioned before. So

either way, looking down at the bottom, you can see here pretty clearly that there is a syntax error and

the reason you're seeing that, even though I haven't tried anything yet is because this is actually

continuously circling around to build and query your code base as you go. So this is actually

interactive. So I'm going to get rid of the from here and I'm going to say that I think I want to do a query

about Methods, and the reason that I want to do this is because I seem to remember that every time I create a test project, one of the first things I do when I'm using MSTest is to create a series of extensions in order to have an extended assert class to address some shortcomings that I perceive in the MSTest framework, one of which is the ability to have an Assert.Throws, so I want to see if in fact I actually

created a method along these lines in the code base. So, what I'm going to do is I'm going to query all

of the possible methods here and then I'm going to start narrowing things down using the CQLinq

semantics. So, I recall that the method I'm looking for has the name Throws, so what I'm going to do is go after the methods where, using this lambda expression, I can say NameLike and we'll put Throws in there, and as you can see, this actually updates live as we go. But that gives me more results than I want; it looks like I have various unit test method names in here and whatnot. So if memory serves, I believe this is also a generic method, so I'm going to add an and condition in here and say IsGeneric and see what comes back. Now there's only one match, and let's go see what it

is. And there it is, right there in the code base. I can pull this up and see that I'm in a class called ExtendedAssert and this is a public static void Throws, and that's exactly what I'm looking for, actually. So

I have used very quickly the NDepend CQLinq in order to find something of interest in the code base

here. So at this point, let's say that I'm pretty happy with my result here. I found the method that I'm

interested in finding, but I'd like to drill a little deeper and see where this method is actually used, this

utility method, in my code base. I want to see which tests are using it, and so what I'm going to do is close

this window and go back to my query window here and I'm going to add a little bit onto this query. You

can, in general, drill further into things and make your results more interesting by expanding and

chaining together these queries and I'm going to do exactly that right here. So what I'm going to do is

I'm going to say that I want to select, not just the default, which is the actual method, but I want to Select

the method and also something in the method called MethodsCallingMe and that's the end of my Select

and now down here in the results section I actually have the method again, but then I can go and see

where all I'm being called, by hovering over. And here we have a window that pops-up all these

interesting matches, which I can now go and see yes, sure enough, here this method is being called. So

you can drill pretty far into things here using this style of querying and getting additional results chaining

more onto the query, exploring in more detail. You can navigate pretty nimbly through your code base

and so some extremely interesting things.
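
The query built up over the course of this demo can be sketched in CQLinq roughly as follows. This is a reconstruction from the narration, not the exact text typed in the course; NameLike, IsGeneric, and MethodsCallingMe are standard CQLinq members.

```linq
// Narrow all methods down to generic ones whose name matches "Throws"
from m in Methods
where m.NameLike("Throws") && m.IsGeneric
// Select the method plus everything that calls it,
// the chained step added at the end of the demo
select new { m, m.MethodsCallingMe }
```

The first two clauses alone reproduce the single ExtendedAssert.Throws match; the anonymous-object Select is what adds the hoverable list of callers to the results pane.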

Out Of The Box Code Rules

I just demonstrated a process whereby you fire up a window and start writing queries against your code, the way you might against a database. This is great for ad hoc querying of the code base, out of curiosity, but now you're probably thinking that you'd like to store these queries for future use, and probably that you'd like to start defining them as ways to provide feedback about the direction and quality of your code base. That is, you're probably thinking about how you can define rules for your code base in a way that you never have previously. Well, you are certainly free to do that, and you have the ability to do it quite easily, but you don't need to reinvent the wheel. Out of the box, NDepend defines a number of code rules that are, basically, just stored queries with a built-in mechanism for generating warnings based on the results of the queries. This is accomplished via a warnif keyword. So when you install NDepend, you have a number of these predefined queries, or rules, and you can browse through them, as shown in the picture, to see how your code stacks up. These rules are largely non-controversial and based on well-defined and widely accepted design metrics. For instance, methods and classes should be neither too large nor too complex, dead, unused code should be avoided, methods that could be private should be private, etc. So let's take a look in more detail. I'm back in Visual Studio, with my NDepend project and my solution for the AutotaskQueryExplorer open once again, and what I'm going to do now is navigate to Rule once again, but here I'm going to view Rules Violated. In general, this is going to pop up a screen. It popped up out of frame here, but I brought it back in, and I'm going to anchor it in the middle of my Visual Studio window. What you have here is a whole series of different rules that you can look at. In this case, the AutotaskQueryExplorer project actually has none of these code quality rules violated at all, nor are there any regression rules, because this is the first go-around with it, but there are a few rules here that are violated in terms of object-oriented design. Looking through, and this is just meant to be a quick overview, we have rules like Base class should not use derivatives, Class shouldn't be too deep in inheritance tree, Class with no descendants should be sealed if possible, and so on and so forth. These rules are split into categories, as you can see on the left, called Groups, and you have the ability to create groups, delete groups, and generally customize things to your heart's content. So, this is just a general overview of the NDepend Rules Explorer, and these are the groups of things that you can use to define rules about your code base. A similar kind of thing goes on with something like FxCop, where you can turn rules on and off and customize them as much as you want, but this is a much more detailed and granular tool, as you saw from the query window, where you can really drill into your code, see some impressive and interesting things, and set rules on the basis of those things.

Demo: Create Your Own Code Rules


Next up, I'd like to demonstrate how you would actually go about creating your own rule. So far, I've covered some general querying and I've shown the rules that come out of the box with NDepend, but it's actually pretty easy to create your own rules as well. So let's go ahead and do that. To make this happen, I'm going to click on Create Query here at the top, and it's going to pop me back into a familiar window that will allow us to write an actual query, but I'm going to go through the process of making it a rule by adding a warnif condition. However, before I get to that, let's decide what we want the rule to be. Revisiting what we were doing before in this demonstration, let's say that we had a weird and tyrannical person administering code reviews, and they decided that there's absolutely no way that a generic method named Throws should be called from more than five places, and if that does happen, we're going to consider it a critical violation. So I'm going to say that we want to go back here and find methods where the method's name is like Throws and, once again, the method is generic, and this time we don't need a separate clause; we can simply say that the count of MethodsCallingMe is greater than 5. And there is indeed a match for such a method, but instead of just calling it a day, what we actually want to do is say, at the beginning here, warnif count > 0, and there's a match again. So at this point, we're going to say this is going to be a critical rule violation, and we're going to save it. Now it's saved and stored as a rule that we can use later. So where does that show up? Well, if we go back to the queries and rules Explorer, here is our newly created rule, and clicking on it takes us right to it in that window. Now, as you can see here, we have one thing left to do, which is that we probably don't want to leave this named TODO short description, so the way you name your rule is by filling in this name tag. Let's call it Weird tyrannical rule and then we'll save it; I just hit Ctrl+S to do that this time, and here it is, updated. Now, whenever that rule is violated, from now on, you will see a critical rule violation flagged. If you decide that you don't actually want weird tyrannical rules in your code base, you can right-click on it, say Delete, confirm, and it goes away. And that's about it for creating rules in NDepend. It really is that simple. You can create some pretty nice rules right off the bat to customize your code, to show you things that you might want to know about when rules of yours are violated. One other thing I'd like to discuss about the queries and rules Explorer and creating your own queries and rules is the difference between a rule violation and a critical rule violation, aside from the little red indicator. Well, one important difference is that when you're running the analysis from the command line, a critical rule violation produces a different return code. If critical rule violations exist, the command-line console app returns 1, whereas no critical rule violations returns 0. This is very important if you want to set up a critical rule violation as something that will trigger a failure of your build. You designate something as a critical rule violation in an integrated build if you want that rule violation to mean that the build fails; if you just want a warning, that's a normal rule violation, which will not trigger a failure of the build.
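
Putting the pieces of this demo together, the finished rule might read roughly as follows. This is reconstructed from the narration; the Name comment at the top is a sketch of the name tag filled in at the end of the demo.

```linq
// <Name>Weird tyrannical rule</Name>
warnif count > 0
from m in Methods
where m.NameLike("Throws")
   && m.IsGeneric
   && m.MethodsCallingMe.Count() > 5
select m
```

Compared with the ad hoc query from earlier, the only structural additions are the warnif prefix, which makes it a rule, and the name tag, which is what shows up in the Rules Explorer.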

Understanding The Ramifications Of Code Querying

Now that we've seen how to create queries and how to turn them into rules for your code base, it's time to stop for a moment and consider the ramifications of what we're doing here. To really get your head around it, think about how you would have gone about answering the question, how many public static methods are there in my code base? I imagine that this would involve an immediate Ctrl+Shift+F, or a Find All, into which you type the words public static. From there, you'd skim through the results, ignoring public static properties and classes and ticking off each method on paper. Maybe you'd get more sophisticated and write some little program or shell script that examined the source code files for you and reported the count back, but that's probably about it. But what about when there aren't simple finds that you can do? What about the question, how many methods in my code base have more than 20 lines of code? What Find All do you do for that? The answer quickly starts to involve concepts like the Code DOM, reflection, or other things that are likely to make you say, who has time for this? NDepend gives you fast answers to these questions, and it also gives you an entire engine for turning those answers into meaningful and actionable data about the quality of your code. You can discuss your code base in informed and specific terms, citing actual facts and figures, instead of saying things like, geez, the whole accounting module seems like it's over-engineered. You can instead report to someone interested that it depends on 225 external dependencies, has 28 types with more than 1,000 lines of code, and has only 20% test coverage, and that is a powerful way to think about and discuss your code. NDepend is like a detailed code review, but an automated one that always has time to look at your work and give you feedback whenever you need it.
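
Both of the questions posed above collapse into one-line CQLinq queries. These sketches use standard CQLinq method properties (IsPublic, IsStatic, NbLinesOfCode); they are illustrations, not queries quoted from the course.

```linq
// How many public static methods are there?
from m in Methods
where m.IsPublic && m.IsStatic
select m

// How many methods have more than 20 lines of code?
from m in Methods
where m.NbLinesOfCode > 20
select m
```

The result pane reports the match count directly, so the "tick them off on paper" step disappears entirely.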

Summary

In this module, I started off by talking about what it means, conceptually, to query your code base. I then went on to assure you that yes, this is in fact possible. I introduced the concept of CQLinq and its SQL-style queries first, and then also talked about fluent-interface-style queries using extension methods. Then I did a demonstration of creating a query using CQLinq. I talked about code rules and showed off the ones that come out of the box with NDepend, and then I demonstrated how to go about creating your own code rules. Finally, I discussed the philosophical ramifications of code querying.

Metrics in Depth

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical NDepend. This fourth module is a deep dive into the concept of code metrics. In this module, I'm going to go into a great deal of detail about code metrics and how they naturally arise from the concept of querying and the idea of rules that I discussed in the previous module. I'll start out talking about NDepend's implementation of code metrics in general, and then I'll talk about the significance of even simple metrics. I will then discuss stylistic metrics and the importance of consistency for reading code. Next up, I'll move on to more architectural metrics, talking first about Inheritance Depth and then Cyclomatic Complexity. I'll talk about Coupling and Cohesion, and then I'll cover Method and Type Rank. From there, I'll move on to Nesting Depth and Abstractness, and I'll round out the detailed coverage of metrics by talking about Test Coverage. Then I'll switch gears a bit and do a demo of defining my own hypothetical code metric. Finally, I'll talk philosophically about the importance of this information.

NDepend's Implementation of Code Metrics

As I mentioned in the last slide, the previous module was a discussion of the concepts of code queries and code rules within NDepend. In this module, I'm going to talk about code metrics. A code query is simply a way of extracting information about your code base; you're asking NDepend questions about your code. A code rule is a Boolean flag based on the results of a query. In other words, you define a query that returns a Boolean, and then you warn based on the value of that Boolean. In the silly example from the last module, I defined a rule to warn if an auxiliary test method was used more than 5 times. The query returned true if the count of usages was more than 5, and I instructed NDepend to warn in cases where that was true. A code metric goes a step further. Queries are ways to ask questions about the code base, and rules are the subset of your queries about which you'd like to be notified in the case of certain results. Metrics introduce quantifiable judgments about your code. Instead of true or false, they generally assign numerical scores of some kind or another for comparison, and, in a manner of speaking, in some cases they are opinionated. That is, a lot of metrics carry with them implicit judgments about the code. An iconic example is unit test coverage of your code, where there is near universal agreement that the closer you are to 100%, the better off your code base is. So let's take a look at some metrics in NDepend. Here in Visual Studio, I've launched the

AutotaskQueryExplorer project once again, which should be sort of familiar by now, and in order to get to the metrics section, we're going to click on the NDEPEND menu; coming down here, this is the Metrics menu. The first item is probably the most comprehensive one, which is the Code Metrics View, and I'll get to that shortly, but these other items you have here include Code Quality Metrics, which covers a series of interesting things: methods that are too big or too complex, types that are too big, and so on and so forth. This is a quick list of things that you might be interested in seeing, and then right below it, you've got what I'll call the superlatives: Largest, Most Complex, Most Coupled, Most Popular. This is a quick, and relatively new, way of getting at the information that is probably of most immediate interest; it's sort of like looking at the statistics pages and eye-catching rankings you might see on blogs and things of that nature. You might be immediately interested in seeing what in your code base is the largest or the most complicated, and you have very quick and immediate access to all of that information here at this menu. Then at the bottom, you can set some Code Metrics Options, which are just a few brief Booleans that you can toggle on and off, such as visual settings and whatnot. And finally, if I open the menu back up and go back into Metrics, at the bottom there is a section for Online Documentation, where you can look up information about how to visualize code metrics with the Tree Map, which I'm going to show momentarily, and the Code Metrics Definitions, which will take you to a page on the NDepend website containing all sorts of helpful information describing the metrics in a lot of detail. So now, let's take a look at the actual visual representation of the metrics. I'm going to go back into the Metrics menu, and what I'm going to do is choose the largest types in my application, let's say. These menu options really all open the same view; they just open it with different initial parameter sets. So let's take a look at the largest types in the application; that's easy to understand and to reason about. The first thing that pops up is a couple of windows, in sort of a split-window sense, and I'm going to hide the Solution Explorer to make this a little easier to see. Right now, I'm on the recording settings of 1024x768, which makes this look pretty cramped. If you're on a higher resolution monitor, it's a lot easier to see things, so you will probably have a different visual experience than the one you're seeing right here, which is a little pressed for space. What we're looking at here is the Tree Map view in NDepend, and what this does is organize your code into a sort of spatial representation, so you see these large rectangular designations. The largest level is the assembly level, and then within the assemblies you have rectangles for namespaces, types, and then methods, and the small rectangles that you're seeing are what we're actually looking at here, which is method size. So, for instance, every little small rectangle within these is a method, and what we're looking at by way of evaluation is the number of IL instructions per method, so all of these tiny rectangles represent small methods. You can see them popping up on the screen as we scroll around: 5 IL instructions, 5, 5, 7, 7, etc. If you look down here where there are bigger rectangles, that's actually a very large method with 80 IL instructions, so you can get an at-a-glance visual representation of how your code base shakes out in terms of the amount of code. It's really interesting because you can start to see, as you go along, which methods disproportionately eat up lots of the code in your code base. If you have a very large rectangle and a ton of small ones all around it, you're seeing that a disproportionate amount of code is in the method represented by that large rectangle. Now, you might be wondering why, when I chose to launch a metric view of the largest types in the assembly, we're seeing things in terms of methods here. Well, if you look at the window on the right, the search results, Search Types by Size is what it says at the top, and this is where you actually see the metric that's defined. In this case, we said let's see the largest types, and over here on the right it is showing all of the types, sorted in descending order from largest to smallest. So the metric, when you launch this window, is going to rank the types or methods or whatever you've selected, on the right. And this gives you a way to actually select how many or how few things in the code base are currently shown as selected. To illustrate what I mean, I'm going to move this slider up from showing everything to, let's say, around 500. Now, you can see that there aren't too many types that have more than 500 lines of code in them. In fact, there are only these two, and they're represented as very large types up here, these ATWS types. If we then drag the slider back down a bit, you can see the number of types grow, and here are some additional types represented with their methods as rectangles. Another thing that you're probably wondering about is this Level setting here at the top. This is independent from what you're searching and sorting by. Level is what you're setting the granularity of this rectangular view to be. Right now, it is set to Method; you can set it to Field, which is going to show you fields in each of your types instead of methods, so both of those are subsets of type. But now, if we go up here and say show us granularity at the level of Type, you can see that there are fewer rectangles, because all we're really being shown now are rectangles that represent types at the finest granularity, rather than all the individual internal rectangles you'd see at the Method level. If we crank this up to Namespace, it gets a lot less granular, especially in this application, where there are only a few namespaces, and if we go all the way up to Assembly, it gets even less granular. So that's really a good illustration of what's happening with these rectangles in the Tree Map view: you're controlling what you see as the finest unit of granularity, in order to help you visualize your code. Going back to Method, we can see that over here on the right, we are searching and selecting by type, but we're seeing things at the granularity of methods. A few more interesting things to note in this view. One that's important to understand is that there is a system of ranking here, where you can instruct this window to select the Top 10, Top 20, Top 50, etc. It's important to note that this refers to the level that you have selected, rather than what's over here, where we Search by Type. So, for instance, if we say let's take a look at the Top 10, it's actually going to pop up a window showing you, as a code query, that it's going to select the Top 10 by number of IL instructions per method, descending, which is different from what we were looking at over here, which is the Top 10, or top however many, in this case 13, types. So, here there are 12 types matched, and we're sorting and selecting by type. The metric in question when selecting the top X number of things corresponds to the Level currently selected over on the left, and that is independent of the search results Order By window over there. Another interesting little tidbit is that if you expand this a bit (it's hard to see in this view), you do get some options here. Fit in Window just controls how this all appears. You have this cool option to take a snapshot of this view at any time, so a screenshot of the actual view, for the purposes of reporting or what have you, is just a click away. You also have the ability to use the mouse wheel, or explicitly these buttons here, to zoom in and out and to move the view around as you choose, and this gives you all manner of different ways to drill in and view the code. Another really cool feature is how interactive it is when it comes to your code. If we bounce around here and find any particular method, let's say this BuildResultSets method, I can double-click on it and it will kick me into the actual code. So there we have the BuildResultSets method. Not only can you visualize your code base from here, but you can poke around, find methods or types, and be taken to them at your convenience.
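
The ranking that the Top 10 shortcut generates can also be written by hand as a CQLinq query. This sketch uses CQLinq's NbILInstructions property, the same metric the Tree Map was visualizing; it is a reconstruction, not the exact query NDepend emits.

```linq
// Rank methods by size, measured in IL instructions,
// and keep the ten largest
(from m in Methods
 orderby m.NbILInstructions descending
 select new { m, m.NbILInstructions }).Take(10)
```

Changing the Level dropdown is conceptually just swapping Methods for Types, Namespaces, or Assemblies in a query like this one.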


Simple Metrics And Their Significance

How many lines of code are in this class? How about in this namespace? How many types are there in this namespace? How about in this whole assembly or solution? How many methods are there? What is the percentage of methods that have some kind of code comments? How many lines of comments are there in total? These are what I'll call simple metrics. They give you basic information about the code base that speaks for itself and doesn't require learning or understanding new terms. Though simple, these metrics are both telling and can be somewhat hard to figure out. Visual Studio may tell you the number of lines of code in an assembly or type, but beyond that you usually have to do some digging if you want access to this simple information. NDepend gives it to you in straightforward fashion. It is important to have answers to these questions. It may not seem critical to know this at first, but there are a couple of important reasons that you do want to know. First, you start to get a sense of the averages for code elements in your assembly, and that helps you look for potential red flags or anomalies. If you know that most of your classes, for instance, are between 50 and 150 lines, and you see a class with 3,000 lines, you'll be inclined to do some digging to see why this class is so much different. The second important reason that you want to know this information is that you start to understand, in general, what the characteristics of good code and shoddy code are. In other words, you'll start to know intuitively that you're in a legacy code minefield by virtue of it being an outlier in these common statistics. You'll start to say, uh oh, the fact that this class has 3,000 lines is not a good sign. Let's take a look at how you can see some of these metrics. For example purposes during the rest of the module, I've created a simple solution in which I have two projects, one called CodeMetrics and another called CodeMetrics.Test, for the part where I get to talk about code coverage; but for now, I've just created these simple projects and I'm going to go back through the brief process of attaching a new NDepend project to the current solution. So, I will do that once again, and here are my two assemblies that I want to analyze. I don't want to do a build report, I just want to analyze and get started. What I'm going to demonstrate here is the process of observing simple metrics within this solution. Let's say that I'm interested in probably the simplest metric of all: I just want to see how many lines of code are in various elements of my code base. I'm going to go in here to Metric, and I'm going to go back to the now familiar Code Metrics View window, and I see a breakdown by namespace. So here's the first thing I can look at. I can see, in terms of namespaces, that this CodeMetrics.Cohesion namespace has 10 lines of code, and I can sort of traverse things visually that way. I can also take a look by type, by method, and so on and so forth. But in addition to that, I'm going to open up another window that is often part and parcel with this window, even though it's in another menu section, and that's the Search View, which pops in docked next to this Tree Map view. What you can see here is the default: it's showing me namespaces, with lines of code next to them, in descending order. Now, I'm also going to say maybe I want to see this by type, and it shows me the types, sorted once again in descending order. Same kind of thing with Method, although these are now nested, so you can see the assembly, and then namespace, type, and method, by line count as well. So these are a couple of different ways that you can easily visualize lines of code, which is one of the simplest, most elementary metrics you can possibly see. At this point, I'd like to demonstrate a little bit of the feedback loop that goes on between you and your code with NDepend and its analysis. In terms of simple metrics, I'm going to scroll down to a class I created here, in a namespace called SimpleMetrics, called LinesofCode, and I have a method that is one line long called DoSomeStuff, which I'm going to open, and as you can see it does something and it has a comment in it, but only one line of code. What I'm going to do here is add a second line of code, something pretty much equally simple. Once that's written, I'm going to go ahead and do a build, and after that, I'm going to hit Alt+F5, which reruns the NDepend analysis. You see the metrics window that's still showing go momentarily blank, and then it comes back with the new information. So I'm going to close this class, and here are our search results once again, and now I'm going to scroll back down to my one-line method called DoSomeStuff, and there it is, and now I observe that there are in fact two lines of code. So that is the feedback loop: you make your code changes, then you build, then you do an NDepend analysis, and then all of your information in these types of windows is updated. One thing worth pointing out, as I'm doing a demonstration surrounding lines of code, is that there are two ways of measuring lines of code: physical lines of code and logical lines of code. Physical lines of code is, if you open a source file, how many actual lines appear. This is probably the easiest way to conceptualize lines of code, and the way you usually see it demonstrated. The other is logical lines of code, which is more what you could think of as how many actual source code statements there are. So, for instance, if you crammed a loop and the body of the loop all onto one line, this would be a single physical line of code, albeit a long one, and two or more logical lines of code. The final thing that I'd like to demonstrate in this simple metrics section is to take a look at the percentage of comments. Down here in the SimpleMetrics namespace, I also have this MethodsWithComments class that I've created, so let's take a look: there's a constructor and then two methods, one of which has a comment and the other of which does not. So, what I want to do is take a look, in an actual window, at what kind of statistics we have about comments. Now, instead of using the Metric or Search window, I'm actually going to go up to a different window to showcase, and that's one that we've seen before, which is the window for creating a code query. This should look familiar from Module 3. Here, what I'd like to do is take a look at what kind of comment percentage we have on some of our classes. So, whereas before I typed in Methods, and that was kind of exclusively what we looked at in this window, you can also type in Types. You have the option to type in Namespaces and Assemblies as well, so Methods is obviously not the only basic selector here. What I'm going to say is I want Types, and then I want to OrderByDescending, and what I want to order by is the type's PercentageComment, and then from there I'd like to do a Select, and what I want to select is a new anonymous object with the type and the type's PercentageComment. And one really cool thing that I like is look how fast that updates, kind of in real time as you go. That's a really nice feature to have as you're going along, to see if you're getting the feedback that you want. So now that we've ordered the types by PercentageComment, let's take a look at what we have here, and we observe that they are in fact sorted in this sense. Looking down, we see that these are namespaces, and then underneath those, you see the different types in descending order.
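
The comment-percentage query dictated above can be sketched as follows. PercentageComment is a standard CQLinq property on types; the exact query text is reconstructed from the narration rather than copied from the course.

```linq
// Rank types by how heavily they are commented,
// using the fluent-interface style dictated in the demo
Types
  .OrderByDescending(t => t.PercentageComment)
  .Select(t => new { t, t.PercentageComment })
```

As noted in the demo, the result pane re-evaluates this live while you type, so the descending ordering is visible as soon as the OrderByDescending clause is complete.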
Cosmetic/Stylistic Consistency

Beyond Simple Metrics, it is entirely possible to use NDepend as a tool to expose metrics about

Cosmetic and Stylistic Consistency. In other words, one of easiest ways to move beyond simple counts

of code properties is to find counts of Stylistic source code and consistencies. This can be used to

identify the inconsistencies, correct them, and then eventually to prevent them from happening in the

first place. Stylistic consistency is important in code bases. It would be ideal when reading source

code, if it all looked as though it had been written by the same person. This isn't just a matter of being a

stickler for consistency, it's important for the readability of the code. If half the developers on your team

pre-pend class fields with underscores and the other half does not, anyone reading the code will have a

hard time knowing at a glance if a variable is a filed or a local variable. While this isn't the end of the

world, keeping track of stylistic consistency metrics costs almost nothing to do and it eliminates this

source of confusion. NDepend doesn't have any metrics out of the box that address this per say, though

it does provide code rules about naming conventions, but I wanted to address stylistic metrics early in

this module, to make clear that this is possible using NDepend and that you don't need to look to a tool

like Style Cop to it, NDepend can handle it. I also wanted to use it as an example of how to define your

own metrics, for use. One of the conventions I commonly follow, by way of my own coding standard, is

to pre-pend class level fields with an underscore. There's probably endless debate over this subject, but

the reason that I do that is so that I can distinguish fields from method parameters or from local

variables and methods. Let's say that as the architect of a project, I wanted to make sure that this was

happening in my project, but I didn't want to go around exhaustively examining all the source code files

and doing code reviews for trivial things like this, I just want some kind of rule. Well, even though in this

metric screen we're looking at, we don't have any such metrics, it's pretty easy to define one and then

save it for later. What I'm actually going to do is go in here and create a New Code Query and I'm just

going to start typing, and I'm going to call this something along the lines of types with bad

prefixing. What I've been doing so far is doing

queries that sort of returned simple things, you know maybe a count or a true and false, we looked at
warnif kind of paradigm, but this is where I want to do something that says give me a type and let's look

at some fields in it and then make note of those types. So what I'm going to do is I'm going to say I want

Types and I want Types Where the type has Fields, and there's any field where that field does not have

a Name Like and then I'm going to use the @ and a little bit of regular expression syntax here and

then what I want to do is select the type in question, and now I can see that this has returned, in my

assembly, a single class, and I'm going to look in that class and see what's going on here, and there

sure enough, I'm not matching the correct naming convention, so it looks like this rule has worked. Now

it's a pretty trivial thing for me to go ahead and persist this rule and then it will appear in general, in the

rules. So this is a metric of sorts. I could actually introduce more things to say perhaps I want to see how

many types within the assembly do this and then I can create a metric on assemblies, how many types

violate the field naming convention, something along those lines. You can get extremely creative; my

point here is just to show you what you're able to do by way of defining metrics that relate, not only to

the nuts and bolts of the code, but to stylistic and cosmetic concerns as well.
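As a rough sketch, the field-prefix query described above might look something like this in CQLinq (the regex, the `warnif` threshold, and the exact property names are my own illustration of what was typed in the demo, so verify them against your NDepend version):

```csharp
// Flag types containing any field whose name does not start with
// an underscore (the regex encodes a hypothetical convention; adjust to taste).
warnif count > 0
from t in Types
where t.Fields.Any(f => !f.NameLike(@"^_"))
select t
```

Saving a query like this as a rule makes it appear alongside the built-in rules and, from then on, in your reports.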

Inheritance Depth

This is a pretty straightforward metric, when it comes to your code. It tells you where your class resides

and this so called inheritance tree, in terms of depth. For instance, if your class inherits directly from

object in C#, it has inheritance depth of two. If you then define an inheritor class for that class, it has an

inheritance depth of three. This is a metric intended to give you information about the maintainability

of your code base. Deep inheritance hierarchies used to be quite popular back in the late 1990s, when

object-oriented programming was a bit younger in its popular adoption, and people found, over the

course of time, that they created some maintenance headaches. While inheritance can help you avoid

duplication, an important code problem to avoid, it spreads the functionality of a class across other

classes and files, and in the case of a deep hierarchy it spreads the functionality over many files and

classes. These days, most object oriented programmers have come to favor flat inheritance hierarchies

and you may hear the wisdom "favor composition over inheritance. " This metric is designed to help you
know when you're running afoul of that wisdom. In order to demonstrate depth of inheritance, like with

many of the other examples, I've given it its own namespace and put some example types in here, and

in order to view what's happening I'm going to go back into the Metrics screen and into the Tree Map view, and

you'll notice that what comes up here is level and the metric selected is depth of inheritance. I had pre-

populated this, but you can come in and find it here, if you so choose, and what you'll notice is that we

have this rectangular view, once again. In the top left-hand corner here is the actual --- namespace that I'm

interested in with some types. You can see a large rectangle corresponding to Son and a smaller one to

Father and another to Grandfather and the reason for that is that these classes all inherit from each

other, which I did deliberately. You'll notice back here that it says Son 3 units, this is how many things

are above it in the inheritance hierarchy chain. So, as per my last slide, the actual inheritance depth per

se of Son is 4 and what it's showing here is that there are 3 units above it. There's also a code rule that

is supposed to warn you if you are at a greater level than 3 in the inheritance hierarchy. So if we were

going to create a second Son or something that inherited from Son, you would then have something

that violated that rule.
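A code rule along those lines might be sketched in CQLinq as follows (the threshold of 3 mirrors the rule mentioned above; `DepthOfInheritance` is the property name as I recall it, so treat this as an approximation):

```csharp
// Warn about types sitting too deep in the inheritance tree.
warnif count > 0
from t in Application.Types
where t.DepthOfInheritance > 3
select new { t, t.DepthOfInheritance }
```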

Cyclomatic Complexity

Cyclomatic Complexity is an impressive sounding term with what turns out to be a pretty simple actual

meaning. It is the number of paths through the code. So in a method that simply prints a few things to

the console, there's a Cyclomatic Complexity of 1. In a method that takes a Boolean parameter and

prints out 1 thing if the parameter's true and another if it is false, the cyclomatic complexity is

2. Complexity grows as you have more possible paths through the code base such as with nested

conditionals or switch statements. Loops also affect cyclomatic complexity, since any given loop

represents an if condition in the form of the loop's condition, and that's another possible branch through the

code. Other keywords that add to cyclomatic complexity include continue, goto, catch, and the ternary

operator. If that seems as though it's a little complicated, you needn't worry too much about the edge

cases. The important thing to remember is that cyclomatic complexity grows as there are more
decisions made that create alternate paths through the code. NDepend also has a cyclomatic

complexity based on the IL or Intermediate Language. Regular cyclomatic complexity is language

dependent, whereas IL cyclomatic complexity just evaluates all languages that compile to the .NET

bytecode on an even playing field. In order to demonstrate cyclomatic complexity, I'm going to go a different route

here and I'm actually going to select from this METRIC menu, one of these out of the box things that

you've been seeing here. In this case, I'm going to select the Most Complex Methods in the code

base. But this gives me a good opportunity to point out the way something actually works here. You've

most likely noticed me moving sort of fluidly between the Tree Map View titled Metrics, the Search

Results View and then the Code Query view. The reason I've been doing this all within the Metrics

heading is because those out of the box choices that you have that pop-up, they actually just populate

whichever combination of the three windows I mentioned, makes the most sense for showing off what

needs to be shown off. All those windows predated the out of the box options and so basically, you're

just getting nice shortcuts here in later versions of NDepend to be able to pull that open immediately

and have it at a glance. So at a glance here, we have in descending order by types the complexity of

methods. So, at the top here is this Inception method and then below that I have some complexity

methods here in a class called Decision Decisions, and what this demonstrates here is how we can

take a look, and you notice the highlighting that goes on as I do this, how we can take a look at

comparably complex methods, in terms of cyclomatic complexity. So there's this complexity of four with

nested ifs, a complexity of two with a for loop, and so on down the line. And if we open this class up, you

can actually see that it corresponds. We start where one complexity has nothing going on and then you

move on down the line to a single if condition and then to a for loop and then eventually to nested if

conditions and you can see the cyclomatic complexity increasing the way you could see it decreasing,

when it was in descending order in the previous view.
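A query along the lines of that out-of-the-box "most complex methods" view could be sketched like this (a hedged illustration, not necessarily the exact shipped query):

```csharp
// List methods in descending order of complexity, showing both the
// source-level and the IL-level cyclomatic complexity figures.
from m in Application.Methods
orderby m.CyclomaticComplexity descending
select new { m, m.CyclomaticComplexity, m.ILCyclomaticComplexity }
```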

Cohesion
Cohesion is an interesting and somewhat subtle metric. In short, it measures whether things that are

found together, belong together. Generally, things that are cohesive abide by the single responsibility

principle of software design, in that they have only a single reason to change. An easy way to

understand this is to consider two hypothetical classes. Let's say that the first one is called telephone

call and it has three fields: telephone, caller 1, and caller 2. Let's then say that every single method in this

class, operates on all three of those fields. This is a highly cohesive class, since it defines a series of

methods that are all concerned with all of the class's dependencies. By contrast, consider a class

called Office that has the same variables as Telephone call and many others besides. This class has

dozens of methods, a few of which are responsible for making phone calls, but others of which operate

on completely different fields and are responsible for maintaining printers, building cubicle walls, paging

people over the intercom and a whole host of other responsibilities. This is not a cohesive class

because many of its methods do not operate on many of its fields; it's a hodgepodge of

functionality. NDepend introduces two cohesion metrics: relational cohesion on assemblies and Lack of

Cohesion Of Methods, or LCOM. The first metric measures how interconnected the types of an

assembly are by quantifying the ratio of relationships between types in the assembly to the number of

types in the assembly, discarding external relationships. A very low score indicates a very non-cohesive

assembly made of a hodgepodge of types, while an extremely high score indicates that the

assembly is cohesive and possibly even too interdependent. The second metric, LCOM, is a measure

on types in the vein of the example of Telephone Call and Office. A high score indicates that the type

is a hodgepodge and non-cohesive, whereas a low score is indicative of a cohesive type. There are

different established ways for computing LCOM and NDepend makes use of two of them, LCOM and

LCOM HS, which stands for Henderson-Sellers. Going into extensive detail of the philosophy behind

these metrics is beyond the scope of this course, but suffice it to say that there are two slightly

different methods for evaluating the same concept and they return slightly different ranges of score. I

have another example here to help understand the idea of cohesion of methods or lack thereof and

that's in this cohesion namespace here's a class that's cohesive, it has two fields and every method in

the class uses both fields, and now here's a type that is not very cohesive at all. There's a method that
uses one of the fields, another method that uses another, and so on and so forth, and then there's a method

that actually doesn't use any of the fields in this type. So in order to demonstrate what this cohesion

metric is really all about, let's go here and we're going to bring up the Treeview and then I'm actually

going to go and I'm going to say, let's see the Top 20 least cohesive methods in this code base. And so

it's organizing by type here and then it's bumping the ones with the least cohesion up to the top. What

we're interested here is in this cohesion type, so non-cohesive the class that I showed you most

recently, this one here, with the various cohesion problems has a score of .8, whereas cohesive has a

score of 0 and remember, like golf here, lower is better. Now interestingly, there's a bunch of

classes that have a score of 1, which is the worst possible score on this scale, so it's interesting to see

what those look like and what you've got here is actually a class that has fields in it and no methods at

all operating on them, so that interestingly enough defaults to 1; that's sort of a fascinating tidbit, but the

long and short of it is that the more methods you have, that are actually operating on your class

fields, the more cohesive your class will be and the closer your LCOM score is going to be to 0.
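A hedged CQLinq sketch of a lack-of-cohesion check (the 0.8 threshold is arbitrary, and the guards on field and method counts are my own addition, to sidestep the degenerate score-of-1 cases noted above):

```csharp
// Flag types with a high Lack of Cohesion Of Methods score,
// reporting both the LCOM and Henderson-Sellers variants.
warnif count > 0
from t in Application.Types
where t.LCOM > 0.8 && t.NbFields > 1 && t.NbMethods > 1
select new { t, t.LCOM, t.LCOMHS }
```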

Coupling

Coupling is a subtly different animal than cohesion. Whereas cohesion is generally desirable, in

that things are where they belong, coupling often has a mildly pejorative connotation. Coupling is the

degree to which elements in your code base depend on one another and while elements of your code

base depending on one another to some degree is inevitable, each dependency, each coupling,

becomes a maintenance liability. After all, if your code depends on a lot of things, then there are a lot of

things that can break your code, when they change or if a lot of things depend on your code and you

change your code, there are a lot of things that can break. These two types of coupling are known as

Efferent and Afferent Coupling, respectively. Efferent coupling of a code element is the number of

elements upon which it depends and Afferent coupling of a code element is the number of elements

that depend on it. Efferent coupling is something that is entirely within your control at design time and

there's a threshold beyond which an increasing score is indicative of a design problem. Afferent
coupling can also be problematic, but it's harder to combat at design time, since it generally

accumulates over the course of time, following initial design. Afferent coupling is generally indicative of

types that are risky to change. Let's take a look now at a practical example of coupling. I'm going to

open a class first, to show you what we're dealing with here, and this file contains several

classes, which I don't generally condone as good practice in C#, but for the purposes of this

example, it makes things easier. We have three classes A, B, and C that don't do anything. Then

there's a class called Coupling that uses one each of A, B, and C, and I use the term use loosely here

and then there's a class called LessCoupling that just has an integer. So let's go to the Metric and look

at the Most Coupled Types in this assembly and see where we find these types on here. And right here

is the Coupling Namespace and there is the Coupling Type. As you might expect, this is one of the

more coupled types, that we have here in this very simple assembly, and the LessCoupling type has

fewer couplings although what you might find is that this seems like a surprising amount, given that the

only thing LessCoupling had here was actually an integer, but once you account for the fact that the

class is going to be counting itself and also the object type that it inherits from, that's where you get the

usage of 3 types. Same thing goes for the Coupling class, that's why you get 5 instead of 3. So, this is a

simple demonstration of the types used kind of coupling.
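Both kinds of coupling can be queried directly; here's a sketch (property names are as I recall them from CQLinq, so verify against your version):

```csharp
// Efferent coupling: how many types this type uses.
// Afferent coupling: how many types use this type.
from t in Application.Types
orderby t.NbTypesUsed descending
select new { t, Efferent = t.NbTypesUsed, Afferent = t.TypesUsingMe.Count() }
```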

Method And Type Rank

While afferent coupling provides some insight into how risky it is to change a type, Method and Type

Rank are metrics specifically designed to measure this concept. Afferent coupling gives you a superficial

glance in that you only get one level of risk. Method and Type Rank make use of the Google PageRank

algorithm to traverse the entire dependency graph. This allows NDepend to identify a type as

Risky by seeing not only what depends on it directly, but what depends on things that depend on it and

so on. These rank metrics help to identify code in your code base that you should test a lot when

changing or possibly even avoid changing. In addition, they also signify areas where code elements are

overexposed and thus risk becoming bottlenecks. To borrow from Star Wars, it's best not to build your Death Star
with a point where shooting it causes the entire thing to blow up. Types and methods with high rank

scores are considered critical weak points in your code. Rank as it turns out is one of the more

interesting and popular features to take a look at. So there's a nice out of the box Metric that comes

right here, which is called Most Popular and Most Popular is going to show you types and methods by

rank. So this will bring up the Metric Tree Map screen and the Search results screen and if you look

here, right at top, there's a very high Type Rank, as far as this assembly goes and it's called

Popular. So let's take a look and see what makes this so popular. Well Popular is used by some class,

some other class, and yet another class, and then this class at the bottom uses two of those classes that

are also using Popular, so this is a nice demonstration of what exactly Rank means. It measures not

just what is using this class Popular, but all of the things that are using classes that are using Popular, and

so on and so forth. It gives you a good window into how integral this type is inside of the

code base. If I go back to the Search screen here, you can see that this is in fact using the rank property to

give me these results.
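The query behind that "Most Popular" view might be sketched like so (an illustration using the rank property, not necessarily the exact shipped query):

```csharp
// Most "popular" types, as scored by the PageRank-style algorithm.
from t in Application.Types
orderby t.TypeRank descending
select new { t, t.TypeRank }
```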

Nesting Depth

Have you ever seen methods that look like they have a mountain range turned on its side, in

them? What I mean is that there are places where conditionals become nested 5, 6, 7 deep, creating

the appearance of a sideways mountain range in your code base because of all the indentation. These

are methods with a high score for nesting depth. Nesting depth is the number of encapsulated scopes

within your method and you can extract this information from NDepend. Most programmers have a

sense that too much nesting depth makes things hard to maintain and anyone who writes a lot of unit

tests, will definitely advise keeping nesting depth to a minimum. Personally, I try to limit my methods to

a single nesting depth and often, even a single conditional statement, for the sake of readability,

testability, and maintainability. Your mileage may vary on that exact limitation, but NDepend will alert you to

methods where this is a problem. NDepend measures this metric in IL, so you may sometimes

encounter small surprises as the compiler might optimize away a scope. You might also be caught off
guard to see that conditionals with several conjunction operations incur additional nesting depth, since

each of these conditions could be decomposed into individually nested scopes. As you're now used to

seeing, I have created a namespace for Nesting Depth to show an example of it. So I'm going to go

here and I'm going to go into the Code Metrics View and you'll notice that pre-selected here is IL

Nesting Depth, which you can find normally through the drop down, if you so choose. Over here on the

right, I have a type called Nesting Depth and it has a method called ifception in it. So let's take a look at

that. It says that there are three nested scopes here and that is in fact what we find. So that is a quick

example of how you can go in and take a look at the different amounts of nesting and methods in your

code base. In general, you may see this when you're running a report and you'll be flagged for methods

that have too high of a nesting depth.
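A rule for this could be sketched as follows (the threshold of 2 reflects my own preference stated above, not an NDepend default):

```csharp
// Warn on methods whose IL nesting depth suggests heavy indentation.
warnif count > 0
from m in Application.Methods
where m.ILNestingDepth > 2
select new { m, m.ILNestingDepth }
```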

Abstractness

Abstractness is an assembly-level metric that assesses the degree to which you employ the concept of

abstraction and to get away from using a term to define itself, an abstraction is where you generalize

concrete concepts. In code, a classic example of this is the extraction of an interface. An interface is a

general concept whereas an implementation of that interface supplies the specifics. Another way to

think of abstraction is to think of having facades of sorts that hide implementation details and

specifics. The NDepend measure of assembly abstractness is the ratio of interfaces and abstract

classes to the total number of types in the assembly. In other words, it's the proportion of types that cannot be

instantiated. Unlike other metrics, there is no more-is-better or less-is-better rule that

can easily be applied to abstractness. It's generally going to be good not to be on either extreme, since

total abstractness wouldn't accomplish anything and no abstractness makes a design quite

inflexible. But mileage will vary in the middle depending on the situation. Let's take a quick look at how

to view the abstractness of an assembly. I'm going to go into the now familiar Tree Map view and what I

want to do is because this is an assembly only metric I'm going to select the assembly and then from

this drop down, I'm going to select Abstractness and what I see here is there's only one rectangle
because we only have the one assembly in question here for CodeMetrics; the codemetrics.test assembly isn't

showing up, since it has no abstract types and what it's telling me is that the CodeMetrics assembly

has .16 units, which means roughly 1 in every 6 classes in this code base is either an interface or an

abstract base class.
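As a sketch, abstractness can also be pulled per assembly in a CQLinq query (assuming an Abstractness property is exposed on assemblies in your version, which I believe it is for the abstractness-vs-instability report):

```csharp
// Abstractness = (interfaces + abstract classes) / total types, per assembly.
from a in Application.Assemblies
select new { a, a.Abstractness }
```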

Test Coverage

Test coverage is a metric that you've probably heard of at some point or another. It is, quite simply, the

percentage of the lines of code in your code base that are executed by a particular unit test suite. Unit

test coverage is not a measure of the quality of your unit test suite, by any means, there is no guarantee

that the unit tests covering the code are well written or even that they will assert or test anything. All

coverage means is that the lines in question are executed at some point during the test suite's

execution. Nevertheless, most programmers consider high code coverage percentages to generally be

better, all else being equal. After all, having your code executed somewhere regularly, at least ensures

that this particular code is not dead or crashing, which is better than nothing. The contingent of the

development community that does a lot of unit testing or that practices TDD is generally in favor of this

metric even more staunchly and will say that the vast majority of your code base should be covered,

meaning percentages in the 70s, 80s, or even 90s. Of course, it is always possible to game this

metric with shoddy or nonfunctional tests, but you only hurt yourself and your team by doing things like

that. In order to get code coverage information into NDepend, the first thing I'm going to need to do here

is generate a coverage file from the unit test runner that I'm using. In my case here, I'm using the Visual

Studio unit test runner, so I'm going to go up here and I'm going to tell it to Analyze Code Coverage and

while that's going, I'll explain that basically, if you look at the right, I've got one unit test class here and that

test, --- in that test class has only one unit test in it. So coverage, as you can see with the window that

pops up there at 16.3%, isn't particularly high in this code base. Now that this has generated, I'm going

to click on this button to Export the Results and I actually, in this folder already, have a file I've

generated here, I'll just overwrite it so that it's easy to keep track of. Now, with that generated, it's time
to go and tell NDepend about what we're doing here. So I'm going to say that I want to Define the

Project Coverage Files and now I'm going to Browse to the file that I just created and I'm going to have

to select down here, that I want the Visual Studio format and once that's in place, I'm going to grab this

file and I'm going to say OK and now I have my code coverage file defined and I'm going to Save. And at

this point, I'm going to be able to check some metrics about code coverage. So, the one I'm going to go

in here and check to see is, I have a whole suite of them here and speaking of which, this is why you

might have been wondering earlier why should we go and actually define this in NDepend, when Visual

Studio or whatever test runner is actually already going to tell us what the code coverage is, but as you

can see here, you get a lot more out of the box with NDepend than simple code coverage statistics. So

rather than going with any of those, I'm actually just going to do something kind of simple here and say

let's Review Code Coverage by Tests and this is a little bit crowded here, but it pops up a window

where we see the Tree Map view and the Search Results view and actually you see in descending

order, methods and how many Lines of Code and what Percent Coverage they have. So I have this

Sandwich A method, which as it turns out, is the only thing covered by a unit test. If I scroll down here, you

also see coverage as a function of the unit tests, and unit test methods are always going to be 100%

covered. So, here is the unit test method and it is in fact calling this Sandwich A method and we can

see, up at the top here, that this is where we get all of our unit coverage, in this assembly. Now, given

what I had showed you earlier with all the menu options and then the full querying potential of

NDepend, you can do some pretty amazing things here when it comes to querying and making

statements about code coverage and unit tests, in general.
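Once the coverage file is defined, coverage becomes queryable like any other metric; for example, a hedged sketch flagging large, incompletely covered methods (the thresholds are mine):

```csharp
// Large methods with incomplete test coverage, biggest first.
warnif count > 0
from m in Application.Methods
where m.PercentageCoverage < 100 && m.NbLinesOfCode > 10
orderby m.NbLinesOfCode descending
select new { m, m.NbLinesOfCode, m.PercentageCoverage }
```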

Detailed Demo: Define Your Own Metric

Now that I've shown you all sorts of out of the box metrics with NDepend, what I'd like to go through and

do is show you what it's like to actually create your own metric, and I mean a real, useful kind of metric, at

least for me, as I go along, something like this, and probably what you would find yourself doing

as an architect on a solution is feeling your way through and iterating until you have progressively more
and more useful metrics. So what I'm going to go in here and do is I'm going to go to Rule and then I'm

going to say let's Create a New Code Rule and I have a pretty good idea for what I want my metric to

be. I have this notion that when I'm writing unit tests, if the unit tests start to have too many lines of code

in them, I start to consider that something of a code smell, specifically a test smell meaning that the

setup is getting a little bit too complicated and I think I probably want to look at my design and wonder

why the setup of the Unit Test is so complicated. So, I want to figure out something to do here where I

can effectively give myself a warning along those lines. So the first thing I want to do is --- I have to

go about finding the methods in question, that are a part of the test. So there are all sorts of ways that

one might do that, but what I'm going to do is I'm going to say I want to go get Methods where the

Method has a ParentAssembly and that ParentAssembly has a Name like Test and then from there, I

want to Select the actual Method and I think this is a good start, this is going to give me, well this gives

me basically all of my test methods. So the next thing probably to do from here is to say, well, we don't

want all of the Test methods, we maybe want the ones that have too many lines of code and let's

just arbitrarily say we're going to go with LinesOfCode > 7. Well, that narrows the field quite a

bit; we're down to one single method here that meets this criterion, but this is going to be good; we're

going to establish a pattern for later because if we inadvertently do this in the future, then we'll be able

to see this as it happens in our build report. Now this is all well and good, but I'm thinking that what I'm

really interested in is not so much that I want to see the individual test methods because that might not

tell me enough; at a glance, I don't know what it's for or where it comes from. I think what

I'd really be interested in seeing is the ParentType of this method, so I'd like to redefine my metric, and I

think I'm going to call it Test classes with test smells. So that's a better title, it's more descriptive of what

I want to do and now we have to set about making this a reality. The interesting thing to note here is

that I have a format for tests wherein, within the actual test class, I define a nested class that

contains the name of the method that I'm testing, in this case it's the Constructor, and then I have a

method within that nested class that actually contains the contents of a unit test. So, bearing that in mind,

that's how I'm going to have to go about picking out what I want to get for the ParentType

because otherwise I'll just get that sub or that nested class in there and that's not what I'm looking
for. So what I'm going to do is instead of selecting the method itself, I'm going to Select its ParentType

and then I'm going to Select that guy's ParentType. And this is more what I'm looking for. I see now that

I have a type called ResultSetTest and that's what's truly interesting to me. Now another thing I could do is

say maybe I want the metric to actually contain both pieces of information, and I could

say, okay, so give me Method and then also give me ParentType and I could reverse the order of those

and so on and so forth, but now I basically created my metric here and maybe I'll cobble together a

few more things on this or add to it over the course of time, but I'd like to Save it and I think in this

case, maybe I'll even consider this to be a Critical rule violation, so I'll flag that and then I will Save

that. Now at this point, I'm going to call it a day on this particular editing, and you're probably wondering

how do I go about getting back at that rule, where did it go? Well, one easy way to see it, it may be a bit

of cheating, but since we knew that there was a critical rule being violated, here it is Test classes with

test smells, and there we go again. Also, I just happen to know that creating it that way will stick it in the first

category, by default, and you have the ability to change the categories of the queries as you desire, but

that's beyond the scope of what I'm doing here. The point is, it's very easy to create metrics that you

can put wherever you want, and you can define however you want, and you can customize, so you can

really go with the out of the box NDepend metrics or you can get pretty elaborate and creative with your

own metrics to do as you need within your code base and the context of your architecture.
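Putting the whole demo together, the finished rule might look roughly like this (the Name filter, the 7-line threshold, and the two ParentType hops all follow the narration above; treat the exact property names and chain as an approximation):

```csharp
// "Test classes with test smells": test methods with too many lines of code,
// reported with the outer test class (two ParentType hops, because each unit
// test lives in a nested class named after the method under test).
warnif count > 0
from m in Methods
where m.ParentAssembly.Name.Contains("Test") && m.NbLinesOfCode > 7
select new { m, m.ParentType.ParentType }
```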

The Significance of Having Access to This Information

So in this module, we've covered a lot of ground with Metrics and seen that some really cool and

interesting things are possible. But what does this matter? Is it more than just a curiosity? The answer is

clearly yes. In the last module I talked about how important it is to be able to query your code base and

to ask and answer questions. This helps discussions move from feeling-oriented arguments to

more scientific discussions backed with facts and figures. Metrics take this even further and define

numbers by which you can measure and compare code bases succinctly. For instance, it's one thing to

say there are 80 methods in my code base executed by unit tests, and quite another to say we have 80%
coverage. The former is an answer to a question that nobody really cares about, while the latter is

something that programmers all over the world will understand immediately, by way of

comparison. Metrics characterize your code base in meaningful ways, in the same way that design

patterns and algorithms allow software developers to discuss approaches and procedures without laying

all the groundwork for discussion from scratch. Taken in aggregate, these discussions and ratings of

code bases guide the industry as a whole to a consensus about what constitutes good software. Of

course, there are always going to be exceptions and unique cases, but in general, metrics grease the

skids for knowledge and experience sharing.

Conclusion/Recap

In this module, I discussed Metrics in a lot more detail. I started off, by describing NDepend's

implementation of Code Metrics and then I talked about the importance of Simple Metrics. Next up, I

covered Stylistic and Cosmetic Metrics and how you can automate code consistency. From there, I

started getting into more nuts and bolts code metrics by talking about inheritance depth and Cyclomatic

Complexity. I also talked about Coupling, Method and Type Rank, Nesting Depth, and

Abstractness. I finished up, talking about Standard Metrics with unit test coverage of code. Then I

showed a more detailed demo of how to define a Custom Metric and I rounded out the module by

discussing the philosophical importance of having this information about your code, at your disposal.

Managing Dependencies Visually

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This fifth module is dedicated to reviewing the visual aspects of code analysis, with

NDepend. In this module, focusing on the visual features of NDepend, I will start by talking about the

value of visualizing architecture. Then I'll demonstrate the graphs that NDepend provides. Next up is the iconic NDepend Zone of Uselessness / Zone of Pain graphic for your project, followed by a demonstration of dependency and type matrices. After that, I'll briefly review the metric view's Tree Map structure that we saw in the last module, and then I'll talk about searching your code with

NDepend. Then I'll dive into CQLinq for Exploring dependencies. Finally, I'll briefly show you how to

tweak some of the options for NDepend's visual displays and then I'll round out with help and further

reading.

A Picture (Or Diagram) Is Worth A Thousand Words

So far, most of what I've covered has involved a lot of writing queries and looking at numerical

results. But NDepend offers more than just that, and that's what I'm going to be discussing throughout this module. It's often said that a picture is worth a thousand words, and that's as true of code and architecture as it is anywhere else in life. To understand what I mean, think about what you do when you

and some of your peers are collaborating on software designs. Do you say to yourself, you know we

should probably design an assembly containing types with low coupling and high cohesion

scores? Somehow, I doubt it. Metrics like this are designed to assess and quantify your code base in great

detail. I imagine what you probably do instead is head to someone's whiteboard and start drawing

squares, circles, and arrows. In fact, I bet you draw something that looks a lot like the picture, at the

bottom of this slide. But no one drew this picture on any whiteboard; NDepend drew it for you, and this is one of the truly powerful differentiators of NDepend as a tool. It helps you visualize your architecture, so

that you can communicate it effectively and have intelligent discussions about the domain concepts

involved. The rest of this module will be focused on showing you how to leverage these visualization

techniques.

Exploring Your Architecture Visually with Graphs

In the last slide, you saw an example of an NDepend graph. NDepend graphs are all about helping you understand the dependency structure of your application. A dependency is simply a relationship in which the dependent item makes use of another item. If class Car knows about, and performs operations on, class Engine, then Car is said to have a dependency upon Engine. Dependency relationships can, and arguably should, be unidirectional: Car can depend on Engine without Engine depending on Car. NDepend allows you to view code components and their dependencies on these

graphs. From the main graph menu, you can view dependencies at the assembly or namespace

level, but you have many other ways of generating graphs that are built into the general flow of

NDepend, through context menus. It's possible to view interesting information, such as the

dependencies of methods within a type, which methods call which other methods. In the actual graph

view, you then have many options for what you'd like to do with the visual representation. You can do

obvious things like zoom and change the orientation, but you have other neat features as well, such as the ability to change what the size of the nodes and the thickness of the edges mean, and to create a matrix from the

graph. You can also export screen shots for future use and for reports. To demonstrate NDepend

graphs, I've opened the AutotaskQueryExplorer project back up and I'm going to use that, since it has

interesting information of being an actual real project, rather than just something I created for a few

simple demonstrations of individual metrics. So to see what the graph looks like, we go to the Graph

menu here, and the first thing that you see at the top is the ability to View the Dependency Graph; Alt+G is the keyboard shortcut for it. Now, if you do that, it just brings up whatever was last shown in it. In this case, this will be familiar to you, because it's the screenshot from the slide. But if we want to actually view, let's say, Assemblies, there's an option for that here. We can view all the assemblies in this application and you'll see them fade in. Now, if we zoom a bit here, you'll notice that there are a number of my assemblies, but also things here like System.Xml.Linq, and that's why there's also an option out of the box to say, well, I'd like to view only the assemblies in this application, and that's where you get

this screen that was showing before. This is a view of the assemblies inside of my solution and their

dependency relationships to one another. The AutotaskQueryService at the bottom is depended upon by everything, whereas some of these guys at the top only depend on other things. And you'll

also notice, one of the cool things, is this Context Sensitive Help that pops up, you have the option to

close this if you'd like, but for new users this is terrific because it gives you a lot of insight into what the

different things mean. For instance, this is saying that the green boxes are things that depend on what you're focused on with the mouse, and it also has links where you can go and view more information as well. Another compelling feature of this view is the series of options you get when you right-click on a vertex in the graph. So, let's take a look and see what happens when you do that. You get the option to Build a Graph of the Elements involved, you can also open it on the Matrix view, and you have some options for Code Query as well. I'll briefly

show you what it looks like when you build a graph. So what pops up is a whole separate graph, where

you're seeing things within this vertex, at the method level and this graph is going to look nicer, most

likely, if we view it from left to right, and you start to see which methods of the depended-upon assembly are called by which methods of the depending assembly. So, this is a pretty neat way to navigate through

your code as well. You also have the option to go back and get back to where you started if you choose

that particular view then. Next up with the graphs is the next out-of-the-box option, which is to view namespaces, and this is a conceptually similar thing. You'll notice that this is quite a large graph and if I

use Control and the cursor wheel to Zoom in, you're going to see a lot of things going on here. It gets to

be actually fairly hard to tell all that's going on at a quick glance. So what we might do is the same thing as above and say, well, let's forget about all the different namespaces that things in this code base depend on, both internal and external, and let's focus instead on application namespaces only. This is going to be a much more manageable graph to take a look at. So conceptually, the

namespace graphs are pretty much, identical to the assembly graphs, it's just the level of granularity at

which you do things. Now with this graph open, let's take a look at a few of the options that we have

within this screen. Up here at the top left, this is the same menu that you see when you go from

above and you want to select the Graph menu. You're seeing most of the same options there. This is

the option to turn the graph into a matrix, which is an alternative way to view the information here that I'll

cover in a later slide. This is the option to Save it to a PNG file, so that you can put it into a report or do

whatever you'd like with it. You have the option to navigate back and forth between previous graphs that

you've viewed, if you so choose, and here are some options for undo and redoing changes, as you go

along here. Zoom you can do here explicitly, instead of using the mouse wheel and then you have the

option to do some fitting of the graph within the window. And you
also have other sizing and selection options here as well. The final sort of cosmetic layout option that I'll

mention is that you can view the dependences this way top to bottom if you'd like or you can

switch them and say I want to view them from left to right. My personal preference is top to bottom, but

your mileage may vary. Two of the controls that I haven't mentioned here so far are box size and edge

thickness, and these give you some pretty dynamic abilities within this window. Box size is basically a function of what you want the size of these boxes in the graph to vary based upon. So if we go up to Constant Font, that means that it's going to vary only inasmuch as it needs to fit the actual words of the namespace within the box. But if I choose something else, Constant Area, well, that makes all the boxes the same size. But let's do something more interesting, let's see which namespaces have the most code in them. We can see that AutotaskShell.AutotaskService here is very large in terms of

lines of code, whereas this test assembly is not so large for instance. And there's a similar idea kind of

on down the way with these metrics. Cyclomatic Complexity is what we had open before, and it's not a surprise that the namespaces with the most code have a lot of Cyclomatic Complexity, and you can do things in terms of rank and coupling and so on and so forth. Edge thickness determines what the thickness of the edges is based on, and that's a function of dependencies. So, this

is going to be how many different types do we have that are related between these or methods or what

have you. The only thing at this point that I haven't covered now are these last two items here and this

one that I'm hovering the mouse over is going to show you options, if you so choose and I'll go over that

in a bit more detail later and this question mark is obviously Help, where you can go and see some

documentation online on the NDepend website. Now, I'll close this out and I'm going to cover some

context dependent graphs that you can generate as well, if you want. So, let's go take a look at, say, Rules Violated, and bring up and center the actual Queries and Rules Explorer window to take a look at some stuff that's going on here. So, let's take a look at potentially dead methods in this application, and we can see here that there is a constructor in one of the types that is potentially not used anywhere. You

actually have the option to go up here and you can do this from the Context menus as well, but you can

say let's Export the Results of the currently open Query to a graph and take a look at it, and what you would expect to see in this graph is about what occurs: since this is dead code, you see only this one type in here. But this demonstrates what this context-based option is, which is that you can pick

the results of any query and also see them within this graph view and that means types, methods,

etc. Now, to see something in the way of a graph that's a little bit more interesting, I'll close all these

things out and show the Context menu and how that works. I'm going to open this class called

Gatekeeper that has a few things going on, there's a few different methods some of which are public

and private and they refer to one another. I'm going to right-click, then go to this Context Dependent option, and I'm going to say I want to view this type on the graph. And what fades in is basically all of the different methods within this type and the Tree structure of methods as they call one another. So TryLoginUnitSuccess calls TryLogin, which in turn calls Login and so on and so

forth. This is a really powerful way to understand your types and what they're doing. You can pull up

any type in your application and see what the Tree structure of the calling of one method to another

looks like. This is particularly helpful for large convoluted types that might exceed 1,000 lines or

more. You can kind of see a visual representation of what they do and that might help you in the context

of let's say trying to extract several classes from one that's too large. You can group them into these

Tree structures and see which method Tree structures are sort of independent of one another and that's

a really powerful thing to be able to do. There are many other graphs that you can generate in

general, but the last one that I'm going to demonstrate is how to see usage information about a type. So I'm going to go

back to this Context menu and I'm going to say, let's select types that I am using directly here. Now

that's going to bring up this code query window and what I'll do from here is go back to this option that

says, Export the Query Results to a Graph and there I have a graph. So I can see what I'm using here

in terms of both interfaces and types, and that's a pretty neat thing to be able to do. There is no shortage of other things that you can do. NDepend is particularly nice for looking for namespace cycles, and you can see inheritance structures and all manner of other things you might want to visualize, using the Context menus, the Queries and Rules Explorer, the Query window itself, and pretty much anything that

you're doing in your application there's a right-click Context Dependent menu, where you can look at

graphs about the structures involved.
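The context-menu options demonstrated above ultimately generate CQLinq queries under the hood, and you can write the same sort of thing by hand. As a rough sketch of two such queries (the property and method names here follow the CQLinq documentation as best I recall, and the Gatekeeper type name is just the one from this demo project, so verify both in your own query editor, which offers code completion):

```csharp
// Types that a given type uses directly, suitable for
// "Export the Query Results to a Graph".
from t in Types
where t.IsUsedBy("AutotaskQueryExplorer.Gatekeeper")
select t

// A simplified take on the idea behind the "Potentially dead Methods"
// rule: non-public methods in my code that nothing calls.
from m in JustMyCode.Methods
where m.MethodsCallingMe.Count() == 0 && !m.IsPublic
select m
```

Either result set can then be sent to the graph or matrix view, just like the queries generated from the context menus.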


Zone of Uselessness And Zone of Pain

This graphic might be the most iconic feature of NDepend. If you Google either of these terms, you get all manner of results about NDepend, and you'll find no shortage of developers who use these terms ruefully to describe projects they've worked on. This graphic, at a glance, puts your assemblies

onto a chart and characterizes them on two scales. The first is abstractness and the second is

instability. Abstractness is simply the metric that I talked about in the last module, the ratio of abstract types in the assembly to total types. In this context, it measures how likely it is that an assembly can be extended without recompiling, say, by implementing a public interface from a dependent assembly. Instability measures how likely the assembly is to change; it is low when many other assemblies use the public API, because heavily depended-upon code is costly to change, and high when the assembly mostly depends on others. The graph is based on a paper written by Bob Martin about abstractness versus stability, and it represents fundamental tradeoffs in software design. Extremely concrete designs are rigid and hard to change, but they don't have much overhead or complexity in terms of the number of types, whereas flexible designs have a lot of that type of overhead. At two of the corners of the graph, you will find bad places to be, which are responsible for the famous naming. The Zone of Uselessness is an area

where assemblies are very abstract, but the public API isn't really used, meaning that there is a good bit

of pointless abstraction. The Zone of Pain is where you'll find heavily used public APIs, with little or no

abstraction behind them. In the Zone of Pain, pretty much everything you do is going to be a breaking change, while in the Zone of Uselessness, nothing you do in the assembly really matters all that

much. Here's a brief demonstration of how to generate the Zone of Pain and Zone of Uselessness

Graph. Really, all you need to do is run an analysis including a report. So I'm going to say Run Full Analysis and Build Report, and then I'm going to let NDepend do its thing, and it's going to wind up popping up the actual report that I covered a few modules ago. So I'm going to skip this screen and then take a look at the report, which popped up off screen, and here it is. And here at the right is Abstractness vs. Instability, and I'm going to pull this up, and there we have the iconic

graph. So really, that's all there is to it, it's part of the report. It's just that this is so famous and so well
known, and it provides such valuable information about your project at a glance, that it's certainly worth a slide of its own to cover and understand.
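The math behind the chart is simple and comes from Martin's metrics. You can even compute each assembly's distance from the ideal "main sequence" diagonal yourself with a CQLinq sketch like the following (the Abstractness and Instability property names are what I recall NDepend exposing; check them in your query editor before relying on this):

```csharp
// Abstractness A = abstract types / total types, in [0, 1].
// Instability I = Ce / (Ce + Ca), where Ce is outgoing (efferent) and
// Ca is incoming (afferent) coupling, also in [0, 1].
// Distance from the main sequence D = |A + I - 1|: assemblies near
// (A=0, I=0) fall into the Zone of Pain, and assemblies near
// (A=1, I=1) fall into the Zone of Uselessness.
from a in Application.Assemblies
orderby Math.Abs(a.Abstractness + a.Instability - 1) descending
select new { a, a.Abstractness, a.Instability,
             Distance = Math.Abs(a.Abstractness + a.Instability - 1) }
```

Sorting by the distance puts the assemblies most in need of attention at the top of the result list.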

Dependency And Type Matrices

In addition to allowing you to visualize dependencies with graphs, NDepend can also show them to you using a matrix, as pictured at a tiny zoom on this slide. The reason I picked such a busy screenshot is that the Matrix view is really designed to be used when the graph view is simply overwhelming because of how much is pictured. Whereas the graph view doesn't scale very well unless you have a computer monitor the size of a wall, the Matrix view scales impressively. The Matrix view offers the same information, but in a spreadsheet-style, two-dimensional layout. Instead of nodes, it has rows and columns, and instead of edges, it has cells with information where the rows and columns intersect. For those that have worked with node-and-edge graphs, like state machines, the Matrix view is just a grid representation of

such a graph. In Matrix view, there are conceptually similar formatting options. You can swap the

header rows and columns to alter orientation, zoom in and out, and choose to view the Matrix contents

in a graph. You also have screenshot export, help, and settings as well, and like the graph view, you can

alter what is represented in the intersection of rows and columns. To demonstrate the Matrix view and

how it's a natural extension of the graph view, the first thing I'm going to do is actually go back into

the Graph screen here and say let's View Assemblies and this brings up the screen that we saw

before, except what I'm going to do here is go to the option that I had briefly mentioned before, which is

that we can send this to a Matrix view and I'm going to actually go ahead and do that. So now what

we're seeing here is our assemblies on the left and then our assemblies again at the top, and on the left, in addition to our assemblies, there are also the ones that are part of the System namespace and third-party dependencies in general. So, rather than confuse the issue, I kind of want to get rid of these and say let's view our assemblies only, and so there we have a view with just the assemblies in the solution. So now, let's figure out what we're looking at

here. On the left we have the assemblies and then again at the top we have the assemblies and then in
the cells, where they meet, there's some numbers. So what do those numbers mean? Let's first take a look at this and see that some Context Sensitive Help pops up, and it's telling us why this is green and what the numbers mean, which is a really nice thing for when you're first getting started. So, the reason that this cell is green is that the assembly in the column is used by the assembly in the row, and that makes a lot of sense because the assembly in the row is AutotaskQueryExplorerTest and the assembly in the column is AutotaskQueryExplorer. So, it stands to reason that AutotaskQueryExplorerTest would do a lot

using of AutotaskQueryExplorer, since that's what test assemblies typically do. And furthermore, the

number, it says, is the number of methods that are involved in the coupling of these two assemblies

and again, it makes sense that this count would be very high, given that the AutotaskQueryExplorerTest

project is going to invoke pretty much all of the public facing methods in the AutotaskQueryExplorer

itself, assuming that you have pretty high test coverage. So, the reason that it's green and the reason that it has the number it does become obvious in this Help view, and what is also helpful is that it says, on the right of the Help view, here are some things that you can look for in the Matrix, here's how to spot problems in your code. So there's already, right off the bat, some pretty useful information

here. Another really nice feature about this view that's worth pointing out is that if I hover over this, I can

go down here to this Show description of the dependency, and now you're going to see an extra window

next time I hover over, which is this window up here at the right, this Show Info in Window and this is

going to give you information of the form X methods of the assembly are using Y methods of this

other assembly and this is a really nice thing to kind of explain what the numbers mean at a glance. So

we see this 19 here and it says the 19 methods of this assembly are used by 41 methods of this other

assembly, so that's something that's very handy to bear in mind and use as you go along. Now, let's

take a look at this weight on cells control. This is somewhat familiar from the graph view, but what we have here is basically what's showing up inside of these cells. So, in the dependency that we explored before, there were 41 members of AutotaskQueryExplorerTest depending on AutotaskQueryExplorer, but if we go in here and we say we want the # of types, that number gets

smaller because it's not members now, but the granularity of type. If we do namespace, that gets even

smaller, only 2 namespaces there. If we click this button, it preselects something for us, and that shows us the depth of direct and indirect use, meaning that if we had a richer and larger assembly

structure, if assembly A were using assembly B and assembly B were using assembly C, you'd see

numbers of 2 here. Another property of this Matrix is that if you so choose, you can start drilling into the

assemblies and you can look at things in terms of the actual namespaces and types and so on and so

forth. So that tends to be a pretty useful feature as well. Let's look now at some of the other controls in

this screen, very quickly. Here we have, similar to the graph view, some of the menu options that are available from the menu at the top. Now over here, we have an interesting feature where you actually have the ability to isolate and find namespace dependency cycles or assembly dependency cycles. There don't happen to be any in this solution that I've designed, it's not very large, but in larger code bases where you might run into something like that, this can be very useful. Assembly and namespace cycles, that sort of thing, is a real problem, it's a huge code smell, so isolating and finding and doing something about that tends to be important. Here you have the ability to remove different things from the Matrix, if they're there. This is what we

had seen before, which is that we can generate a graph based on this Matrix, and here's the screenshot, and

then we've got similar Zoom and Back and Forth navigation buttons here, and then also down

here, there's similar settings and help. And we've got a couple of icons here that have to do with how

we're going to bind and arrange the column and row headers. The last thing I'd like to demonstrate

when it comes to the Matrices is what we did before with the Context menu in this Gatekeeper class. So

instead of saying that we want to go and view the graph view, we can go in here and say that we want

to view on the dependency matrix, and this is going to show us a similar kind of thing, where we have methods here on the left and then methods along the top, and you can see what relies on what. In this

case, of a relatively small type, this tends to look better on the graph view, but if this were a very large

type or assembly or namespace or something of that nature, that's when it typically looks better on the

matrix view. But, you can view the matrix view for anything you view the graph view with and vice versa.
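Since assembly and namespace cycles are called out above as a major code smell, it's worth noting that you can hunt for them with a code query as well as with the matrix's cycle-detection buttons. Here is a minimal sketch, assuming the NamespacesUsed and NamespacesUsingMe collections exist as I recall them from the CQLinq documentation:

```csharp
// Namespaces involved in a direct two-way dependency, the simplest
// form of the cycles the matrix's cycle-detection feature isolates.
from n in Application.Namespaces
where n.NamespacesUsed.Intersect(n.NamespacesUsingMe).Any()
select n
```

Longer cycles (A uses B, B uses C, C uses A) require the indirect-usage variants of these properties, but the direct two-way case covers the most common offenders.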

Metric View (Tree Map)


At this point, I want to briefly revisit the metric view. I realize that I covered this extensively in the

last module and if you want to see more about it, by all means go and review that module, where that

screen, that whole menu in fact were covered in great detail, but one thing I'd like to point out in the

context of this module, about visualizing your code base, is that we've been looking through a lot of

different ways to use shell integration and context menus and all that, to see different windows and you

can actually do the same thing here when it comes to NDepend's metric view. So for instance, what I

can do is go here for this type and I want to say let's view on Metric View the type Gatekeeper.cs, and it actually pops open the Metrics view with Gatekeeper.cs highlighted, and that's a pretty nice little

feature that I just wanted to sneak in there, briefly to mention, in the context of this module.

Search Your Code Like Never Before

The search window has made a handful of appearances so far in this course, but I'm now going to focus

on it. While it isn't specifically visual, as the other things I've discussed so far in this module are, it is integrated heavily enough with the visual features that I'm going to talk about it here, and boy is it a powerful tool. It leaves the standard search mechanism of Visual Studio, the Find in Files dialog, completely in the dust, by giving you a code-based rather than text-based way to search. First of all, the search

orients around code elements rather than basic text, so you have search semantics of search for

methods named X or search for types named Y. Like Visual Studio search, it supports case sensitivity

and partial or full matches, but it's also clever enough to search through just your code or to include

third party code and it can even match containing code elements. So you can, for instance, find all methods that have the word customer in their names, the names of their containing types, or the names of their containing namespaces. Oh, and by the way, it is incredibly fast, and that's just the basic search. You can also search by other criteria such as size, complexity, popularity, coupling, etc. This is

what you saw in the detailed coverage of the metric Tree Map view, in the last module. And if you don't

like the out of the box queries that the search screen generates, you can actually edit the query to be customized exactly as you like, and when you have your results, you get a lot of options for how to group
them as well including things like namespace or assembly, but also really cool things like source code

file or even directory on disk. You will never go back to searching your code any other way, after

this. Let's take a look at Search in action. For certain types of search, you can actually go through the menu, and for a couple of the most common ones you can use a keyboard shortcut. Here you can see that they are Search Type by Name and Search Method by Name, those are going to be your most common searches, so Alt+C and Alt+M let you do that, but I wanted to actually show you here. So, what I'm going to do is say let's search for, say, Method by Name. This project is called AutotaskQueryExplorer, so it's a safe bet that there will be something with the text auto in the name

and as I type that, the first thing to notice here is the speed. Look at how incredibly fast this changes as

you go, that's really powerful and really nice. You'll notice that there are options here with the Search as

well, we can select that we want to Match case and you'll notice that all the non-case matches are

filtered out. Next, we can choose Third party and that doesn't seem to have any effect, but if we were to

type in something like ToArray from the framework, you can see that, that actually appears, but if we

uncheck Third party, there are no matches for this, within the AutotaskQueryExplorer code base. Full

name is another interesting one. If we type in Autotask, that's going to show us our methods once again, but if we select Full name, you'll notice a lot of other things start to appear, and the reason for that

is because it's matching anything that appears not only in a method name, but also anything that appears in a type name or a namespace name; that is, the fully qualified name of the match contains the word Autotask. And then Exact name does exactly what you would expect: it only matches on the

exact name of the thing that you type into the search. So far, we've searched only by method, but there are other options available here: Field, Type, Namespace, and Assembly. If I search by Field, now I'm going to get only fields with this particular name. But you'll also notice that over here you have visual indications, so if I search by namespace, it populates the namespace drop down, and this tends to be a handy way to reason about your searches. Now, I'm going to show you an absolutely killer feature of

search. I'm going to go back to Method and I'm going to say, you know I think I remember that I had

some method about setting clauses, but I'm not really sure, so I'm going to type in set and clause and

see what matches and you can see that what's actually matching is that it's searching for both of these
tokens within the string, and that is a really cool search feature. It's a really nice way to narrow down your code in a way that you can't with Visual Studio's Find or Find and Replace dialogs, where you just have no ability to do something like that except by using wildcards. Another handy feature is that if I

don't particularly like the way this query is, I can pop up a Queries and Rules Edit window, and maybe I

want to say I want to see something with set or clause in it and I can go ahead and see what matches

there. So, you can get pretty much everywhere you want to go with this search screen and if it's not

quite far enough you can just pop out this Rules Editor and take it to where you need it be. Now, I'm

going to back out of here, and I'm going to show you what you can search by as well. So far, it's just been a matter of searching by name, but you can search by size: you can say we want methods with more than this number of lines, but you can also obviously do it by types, and there are going to be a lot of types with more than 4 lines. You can search in terms of Complexity and you can do the same there with scaling up or

down, just as we saw earlier in the Tree Map view and there are a number of other things here too, I'm

not going to show all of them, but one that's pretty cool is that you can search by visibility: you can look for types that are private, methods that are private, and so on and so forth. So there's a lot of really

interesting ways to Search your code. This is an incredibly powerful way to do it and I doubt you'll ever

go back once you start getting used to this style of search.
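Under the hood, each of these searches is just a generated CQLinq query, which is why the pop-out to the Rules Editor works so seamlessly. The following is a sketch of roughly what the searches above produce; the exact generated queries vary by NDepend version, and NameLike, NbLinesOfCode, and IsPrivate are the names I recall from the CQLinq documentation, so confirm them in your own editor:

```csharp
// Roughly the query behind a method-name search for both tokens
// "set" and "clause".
from m in Application.Methods
where m.NameLike("set") && m.NameLike("clause")
select m

// Roughly a search by size plus visibility: private methods longer
// than 20 lines of code (the threshold here is arbitrary).
from m in Application.Methods
where m.NbLinesOfCode > 20 && m.IsPrivate
select m
```

Starting from a generated query like these and editing it by hand is often the fastest way to learn the query language itself.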

CQLinq for Dependency Exploration

In the context of code visualization, there is one last, kind of advanced, feature that I'd like to show off, which is something you can take and run with in your own code base, and that is the idea of exploring dependencies using CQLinq. So far, we've seen a lot about using code queries in

general and then about the dependency graphs and matrices and what not, but if we go in here and we

look at the dependency matrix, you actually have the ability to then go and generate code queries

based on things of this nature. Like maybe we drill down from here and pick something within this AutotaskQueryService, then here's a test class, and we go on further and further down, and maybe
we go to one of these particular types and we say Select Namespaces that are using me directly, for instance. And up pops a Code Query window, and so this is kind of an entry point where you can actually use the Code Query window to explore your dependencies. There are some really nice features in the code query language that let you do this: IsUsing, IsUsedBy, and so on and so forth. So I'm not

going to go into a ton of detail here and now about this, but you can really drill into the dependencies in

your code base, beyond just the visualization of the graph and of the matrix views. You can use those

views as a staging point to go and explore in greater detail what you want to see about your code and

how it depends on other pieces of code.
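As a sketch of what those generated queries tend to look like, here are two hedged CQLinq examples using the IsUsing and IsUsedBy facilities just mentioned. The namespace and type names are hypothetical placeholders, and the exact method signatures should be checked against the CQLinq documentation.

```csharp
// Namespaces that directly use a given type (the name is illustrative)
from n in Namespaces
where n.IsUsing("AutotaskQueryService.SomeType")
select n

// Methods that are used by a given type (again, an illustrative name)
from m in Methods
where m.IsUsedBy("AutotaskShell.SomeOtherType")
select m
```

Queries of this shape are exactly what the right-click menu items in the matrix and graph views generate for you, so the views act as a starting point and the query editor lets you refine from there.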

Tweaking The Options

So having been through all the different visualization windows, let's take a look now at some of the

Options that we have. If you go to the NDepend Option Screen, there's a lot of stuff that you can do

here, but I'm specifically focusing now on these Visualization Options so here are some options related

to the Matrix View, here are some related to Graph View, and there are some also related to Metric

View. So I'm going to get out here and then launch the particular window in question. So first let's take a

look at the Dependency Graph and then we'll go up here to get the options specific to this. It comes up

highlighted with the Graph View options. We have the ability here to select if we want to display

namespace and member name or for type and member name. So if we do that, you see now that the

box has included the entire namespace. You also have the ability to select some additional layout

directions beyond what's displayed at the top of Dependency Graph and finally, you can select a

different font, if you're so inclined. Next up, let's look at the Matrix view. So I'll go back here and pull up

the Dependency Matrix and there are some options up here from the options screen as well, where it

comes up highlighted. So basically, you can tell it only to display the Blue ones if having both sets

of dependencies that mirror one another is too confusing or busy for you. And you also have the

option to control what the balloons look like when you hover over things, whether they appear at all or not

and maybe you want them to be transparent. Now, let's take a look at the options for the Tree Map
view. So if we go in here to Metric and then select Code Metrics View, we get a set of options specific to

this screen too. So, this is basically just a matter of what you want to view, you can take away the

borders if you want a more seamless kind of view. You can opt to hide the code elements name, so you

get a really kind of Spartan view unless you mouse over something and then finally, you can take away

the Seeded Code Elements Names in Red and you get almost nothing here. And finally, I'm going to

put these options back here because I prefer them to be enabled and, while we're in here, you can take

a look at the sorts of other options that are available here. This is the screen that was on

install, where you could install it for different versions of the IDE, and then there are options about the

Analysis, Reflector Addin, and Namespaces Hierarchy here, where you can select what you want the

namespace hierarchy view to look like. There are different options for selecting your default editor and

how you want to compare things. And there are various options for code query also. And finally down

here, for Analysis settings and Trend Metrics you have some options to play with as well, and NDepend has

a general skinning option where you can pick different looks and feels for it, if you'd like. And finally,

there's a handful of options related to Export/Import and Reset. Mostly I just wanted to cover the ones

related to the visual tools here, but also just to make you aware of some of the other options that

you can tweak and play with here. Going through all of them in exact detail is a little beyond the

scope of what I want to do.

Help And Further Reading

For help and for further reading, there's really no need to go much beyond the NDepend website. You

can search the main site itself and kind of poke around there. There's a lot of extensive and detailed

documentation. For the subjects covered in this particular module relating to the Visualization Features

of NDepend, here are some specific links for the Dependency Graph, for the Dependency Matrix, for

Metrics and Tree Map, and for Search as well. In addition, you might go to stack overflow or look at

some relatively recent blogs, although as of the time of this recording, version 5 of NDepend is brand
new, so it may take a while for the blogosphere and for other sources to catch up, but you can definitely find

some great information at the NDepend website.

Summary

In this module, I started off talking about how being able to visualize your application's architecture and

dependency structure is invaluable. I then gave a detailed demonstration of the Graph View in

NDepend. Next up, was a foray into the famous Zone of Uselessness, Zone of Pain view, followed by a

look at the Matrix view of your application's dependencies. From there, I revisited the Tree Map view

that I used extensively in the last module, to showcase metrics and then I talked about the powerful

NDepend Search functionality. The last demonstration I did was highlight the fact that CQLinq is a good

tool not just for properties of code, but also for examining dependencies and relationships between

code elements. Finally, I did a brief overview of the NDepend options screen and then had a section

about help and further reading.

Additional NDepend Features Beyond The IDE

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This sixth module addresses the components of NDepend that extend beyond the Visual

Studio IDE. In this module, I will show you features beyond just using NDepend within Visual

Studio. First up, we'll take a look at Visual NDepend. Next, I'll show you the NDepend Console. After

that, you'll get to see the NDepend Power Tools and then the NDepend API, including a demonstration

of a sample usage of it.

Visual NDepend
The first additional tool that I'll talk about is Visual NDepend. Back in module 2, when I was describing

NDepend installation, I talked about Visual NDepend, but mentioned that I'd be using the Visual

Studio plug-in for the remainder of the course. I chose this route because I believe that the Visual

Studio plug-in is the more common use case. Visual NDepend is a standalone version of all of

the functionality that we've seen throughout the course. Rather than just being another cog in the Visual

Studio machine however, Visual NDepend allows you to use and focus exclusively on code analysis

functionality. The look and feel is similar to Visual Studio, so Visual Studio users will feel right at home

here. The NDepend functionality however is more spread out, showing a lot more icons and menu

options at a glance than you would see within Visual Studio. So why would you want to use Visual

NDepend instead of just using the functionality within Visual Studio? Well I've already mentioned a

powerful motivation, which is the fact that the NDepend functionality is the center piece of this

tool, meaning that everything is more spread out and accessible and visible. You might also use Visual

NDepend if you're not interested in diving into and working directly with the code, but you just want to

obtain information about it, look at trends, or maybe generate a report. Visual NDepend is also much

less resource intensive than Visual Studio, so you may find yourself using it for the reduced amount of

overhead, during the course of usage. Let's take a look. So here I have opened my NDepend5 folder

and I'm going to click on the VisualNDepend.exe icon here, which is going to launch Visual NDepend

and then you get a splash screen and the tool will come up after a moment. When it comes up, it will be

opened to the start screen where you can pick whatever project you'd like to launch so I'm going to

launch the AutotaskQueryExplorer here by double clicking on it; this is the one that you've seen

throughout this course. So if you look up at the top, you'll notice that the same menu we

see within Visual Studio, all from the NDepend drop down itself, is now expanded out at large

here, across the top. So, instead of clicking on NDepend and then being able to choose the

DASHBOARD, or RULE, or GRAPH, or MATRIX, and so on and so forth and then have those be

submenus, those are all your main menus here at the top and then, there are also a number of icons at

the top left there, right below the menus and those are going to be quick things, you know opening

projects, running analysis and so on and so forth. So things that you would accomplish within Visual
Studio in the context of keyboard shortcuts or context menus or other things like that, they're kind of more

spread out and easy to see. Down here at the bottom, you can alternate between some windows that

are opened within Visual NDepend by default, and you see that you've got kind of a nice dashboard, a

nice cockpit here for running all of your code analysis functionality.

NDepend Console

The NDepend Console is the engine that drives NDepend. Everything I've shown you so far, depends

on this functionality in order to work. It is the NDepend Console that executes the analysis and

everything else simply builds on it from a functionality perspective. In this sense, NDepend Console is a

lot like MS Build, where Visual Studio's functionality and project and solution structure ultimately rely on

it to actually do the core work. NDepend's Console is used from the command line and it comes with a

handful of command line options. You have a few options related to report generation such as a flag to

show the report and an option to supply an XSL file to have the NDepend report display with the look

and feel that you define therein. You can affect the console behavior by opting to hide the console or to

squelch feedback by using silent mode. As for the actual execution, you can opt to parallelize execution

if you'd like and you can also manually alter the directories of the analysis input and output files. Other

miscellaneous options include the ability to force the logging of trend metrics and to emit an XML file

containing a representation of your application's dependencies. And of course, there is the help

command line flag as well.
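By way of illustration, a console invocation combining several of the options described above might look something like the following. The flag names here are approximations from memory rather than an authoritative reference; run NDepend.Console.exe with its help flag to see the exact switches your version supports.

```shell
REM Hypothetical example; verify flag names against the built-in help.
NDepend.Console.exe "C:\Projects\MyApp\MyApp.ndproj" ^
    /ViewReport ^
    /Silent ^
    /OutDir "C:\Projects\MyApp\NDependOut"
```

Because it is just a command-line executable, this is also the piece you would wire into a build server to run analysis on every check-in.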

NDepend Power Tools

NDepend exposes a powerful API that you can use to truly customize and automate your code analysis

solutions. You've seen me write CQLinq queries in the Query Explorer window to help define my own

custom rules, but the API actually lets you leverage NDepend's functionality in your .NET code. I will

cover the API shortly, but understanding what it is helps you understand what the power tools are. The

NDepend power tools are a series of utilities that make use of the NDepend API to extend the
functionality of NDepend. They're open source and the source code even ships with NDepend, so you

have access to it right away. Not only do they provide useful functionality to you, but you can actually

open the power tools solution in Visual Studio and use it as an example of how to use the NDepend API

for your own purposes. You interact with the power tools by launching the executable that comes with

NDepend, which is called NDepend.PowerTools.exe. It's a console executable that you interact with

via key based menu options. It will launch windows for input as needed for some of its tools, but the

basic interaction is console based. I won't list all of the functionality here, but some interesting highlights

include search for duplicate code, search for dead code, code review of changed methods since a

previous analysis, and converting old CQL NDepend queries to CQLinq. First, I'm going to show you

how to actually use the NDepend Power Tools, it's pretty straightforward. I have my application folder

up, once again, and I'm going to click on the NDependPowerTools executable and I'm going to tell it,

not to ask me about this anymore, and then I'm going to Run it, and off screen it popped up the

NDepend Power Tools. I'll Maximize it here on screen and this is the basis for your interaction with the

power tools. There are a lot of options here that we can choose from, but the one I'm going to pick for

the sake of simplicity of demonstration, is I want to say let's see if in the codebase in question there

are any examples of dead code or potentially dead code. This is option e, so I'm going to hit e at the

command line and it's not e enter, you're just going to hit e and you're going to see that a series of

options comes up. This is fairly common throughout the power tools, you'll get submenus expressed in

this way, and what I'm going to say is I want to open an existing NDepend project, and what this is

going to do is it's actually going to prompt me with a window to pick the project. And this is what I was

referring to earlier, when I talked about the power tools console will pop-up windows that it needs, as it

needs them, for you to interact with it, but the basis of the interaction is generally console based. So, I'm

going to pick the AutotaskQueryExplorer and tell it to look for dead code there, and here is what it tells

me, that I have 2 unused types and an unused method. So, we can see that it is doing as

advertised, let's maybe see what the unused types are by selecting a and there, sure enough, it says

these Properties.Resources and Properties.Settings types are not actually in use. Now, within the

console here, I actually have the option, if I want, to use the arrow keys to go up and down and then if I
were to hit Enter, this would actually take me to Visual Studio where I could examine this type, but I'm

not really interested in doing that right now, so I'm going to exercise my Any key option and hit Esc to

get out. Then I'm going to hit F to get out of this menu and then finally, I'm going to hit P to get out of the

tool altogether and go back to where I was before. Now, I want to show you how you can actually go in

here and take a look at the source code, for what we were just looking at. So predictably, the source

code for the power tools is in this folder called NDepend.PowerTools.SourceCode, so I'm going to

navigate into here and then expand this to get a little more visibility as to what I'm doing and there I find

my solution file. So I'm going to open this guy up and wait for Visual Studio to launch. And it's a pretty

straightforward structure in here. There are all of these different power tools that you can take a look at,

and they're organized by namespace, with the basic namespace, the default namespace, being the main

entry point for the application, from which you can invoke the various power tools and there's a

code structure in there for the continuous loop of the menu, as well as a delegate structure for invoking

the actual power tools. And as you can see, I've been in here once or twice before. I'm going to close

these and then expand the structure here, just to give you a brief overview. So, here's Program.cs; this

is in fact a console application, as you might expect. There's the Program.cs entry point and there are

some static helper utilities here and then all of these different power tools. This is the DeadCode one

right here that corresponds to what we were looking at just now, in terms of functionality. If we go into

the tool, we'll see that it implements an interface, an IPowerTool interface, and that is going to be the basis

if you were to implement a power tool of your own. You'll see that they all implement this interface and

then they all have this Run method, and that is essentially going to be the entry point for the power

tool itself, as opposed to the entire application. So, if you have NDepend installed and you are so

inclined, you can absolutely go in here and poke around. You can change things and tweak it and get a

better understanding of how everything within the power tools works and this lets you nicely dive into

the API as well.

NDepend.API
Now that you have had a look at the power tools and source code, you've seen the NDepend API in

action, but let's talk about it in more detail. The NDepend API is a DLL that comes in the lib folder of the

NDepend install. If you add it into your solution, you can leverage its functionality to build any kind

of static analysis tool that you can imagine. The NDepend API is well documented on the NDepend

website and if you intend to do much with the API, you should definitely start there. The API is involved

enough that one could probably make an entire Pluralsight course to cover it, so I won't go into much

detail about it. Instead, I'll just describe some of the namespaces in it at a higher level and then

demonstrate adding a power tool to the power tools application using the API. The default NDepend

namespace contains a service provider class that you use to get instances of the key service classes

that you need in order to interact with the NDepend functionality. These include the Analysis Manager,

.NET Manager, Project Manager, and the Visual Studio Manager. The NDepend.Analysis namespace

contains the functionality for driving the NDepend analyze functionality. The CodeModel namespace

contains types like IType, INamespace, etc., and these are critical types for modeling your application's

source code. The CodeQuery namespace contains the CQLinq functionality that you use for retrieving

information about your code. The NDepend.DotNet and the NDepend.DotNet.VisualStudio

namespaces contain types for modeling the framework and the IDE respectively. For a full list of the

namespaces, take a look at the documentation online. Now, by way of example, let's create that

additional power tool within the power tools project.
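Before diving into the demonstration, here is a minimal sketch of how bootstrapping the API from those namespaces might look. The NDependServicesProvider class name comes from the NDepend documentation, but the namespace imports and property names shown here are assumptions to verify against the docs for your version.

```csharp
using NDepend;            // root namespace: the services provider lives here
using NDepend.Analysis;   // analysis functionality
using NDepend.CodeModel;  // IType, INamespace, and friends

class ApiBootstrap
{
    static void Main()
    {
        // The services provider hands out the key manager instances
        // (Project Manager, Visual Studio Manager, and so on).
        var services = new NDependServicesProvider();
        var projectManager = services.ProjectManager;
        // From a project you can trigger an analysis and then query
        // the resulting code model.
    }
}
```

In the power tools, this plumbing is wrapped up for you by helper classes, which is why the demonstration that follows never touches the provider directly.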

NDepend.API Demonstration

So here we are back in the NDepend Power Tools project, and this time I'm not just taking a look, I'm

actually going to Add a power tool. The first thing that I'm going to do in here is I'm going to define a

namespace for my power tool, just kind of following suit with what's been done here before. So I'm

going to Create a New Folder and I'm going to call it TestSmells because what I want to do is recreate

the same demonstration that I did in an earlier module, by way of creating a custom rule. So I want to

do the same kind of logic to say let's find types that have test methods in them containing more than
seven lines and I'm going to call those test classes that may have test smells. So, the first thing I've

done is created the folder, as I mentioned, and then I'm going to look for inspiration to the way these go

in other folders, so here's the DeadCodePowerTool.cs file and I'm going to create one of those of my

own and call it TestSmellsPowerTool.cs, and that's going to create the class for me. I'm going to make

this a public class and then I'm going to tell it to implement IPowerTool because all of the power tools

have to do that, as I mentioned earlier. Now, at this point, this guy is not going to compile, so I'm going

to consider it a good first step to say let's go ahead and implement the interface, and see what I have to

do here in order to get this working. Now obviously, we don't want our power tool to throw not

implemented exceptions when we go to run it, so instead what I'm going to do is say, return Test

Smells and for the description I'm going to say that I want to return a description where I say Types that

may contain test methods that are too large. So I'm going to do that and then what I'm going to do is just

have the Run method be a no-op and I want to confirm that when I go in here into the Program class

and add my power tool, when I run it I actually get something happening at the main menu. So following

the pattern that exists here before me, this is MainSub and this is where you're going to see all the

different menu options, so I'm just going to add my power tool into the mix. New

TestSmells. TestSmellsPowerTool and I expect that if I Run this, what should happen is that the Power

Tools engine is going to pick up based on the polymorphism of all the power tools here implementing

that interface, it should pick up mine and execute that Run method where appropriate. So I'm going to

have a Build and then Run and then I'm going to verify that we're actually going to see the appropriate

text on the screen to represent the PowerTool that I just created. So, there you can see that now

instead of being Quit, option P is TestSmells and now, appropriately, Q is Exit. So if I actually execute the

TestSmells method, nothing happens and then I hit Enter to take myself back, and then I'm going to hit

Exit and get out of here. So that's a good start. We have a Power Tool now that doesn't do anything, but

at least it's a working power tool. Now, I'm going to go back to the PowerTool class that I've created and

it's time to start actually doing something here in the Run method. And the way this works is there's a

class in the PowerTools solution called ProjectAnalysisUtils, so what I'm going to do is I'm going to say

that I'm going to go get a codebase from this class called ProjectAnalysisUtils and I'm going to use
Ctrl+. to go find that for me and I want to say ChooseAnalysisResult and then from there, I want to get

the codebase that results. ChooseAnalysisResult is going to return an IAnalysis result and that has this

codebase property and what this codebase represents is your actual codebase. So what this is doing

here is it's abstracted functionality away for the sake of all of these Power Tools, that goes out and pops

that dialogue asking you to choose what NDepend project you want to use to analyze your code and it'll

bring up that menu, so that there are a few different ways you can go about choosing it, choosing it

from the solution, choosing the NDepend project etc. So this, if we dive into it, I'm going to hit F12 and it

will take us here and this is where you start to get into some of the nitty gritty of the API, in terms of

actually going out and figuring out what project to analyze. You can see some of the mechanics of

getting the appropriate instances here, that you need, if you drill into these methods; and you can see

how the analysis interoperates with some of the other more nitty gritty details of the API. I'm going to

back out of here and go back to where we were before because I'm not looking to get down to that

level, I'm just going to do a quick overview of the API here, where we start to use some of the concepts

like Type, and Method, and so on and so forth. Similar kinds of things that you've seen before with the

CQLinq through Visual Studio, but what I want to demonstrate here, that's most important, is that we

can actually go in and do this in C#. And then later, you can get into hitting the analysis in certain ways

and configuring a lot of the more detailed stuff. There's certainly all manner of advanced use cases you

can have in here, but I just want to cover the basics. So now, codebase in hand, what I'd like to do is

kind of confirm the executing some code and Running it, that I'm actually getting meaningful results

here. So instead of just kind of trying to code this all up and assuming that it's going to work as I

planned, I'm going to do a tentative query of this codebase and I'm going to say, let's say that I want to

just output all of the assemblies in here, how might I do that? So I'm going to say foreach assembly in

codebase.Assemblies, I just want to write to the console the assembly, say by name, and then I'm going

to do a Console.ReadLine here, so that it pauses when I'm done, and now I'm actually going to execute

this and we'll see what happens. Hopefully I should get a list of my assemblies in the solution that I

choose as output. So I'm going to choose my TestSmellsPowerTool here and it tells me what it

is, there's the description, that it pops up in addition to the name and I'm going to choose option A here
and just go with an existing NDepend project and I'm going to pick my AutotaskQueryExplorer and

there, lo and behold, you see a bunch of system libraries, but also at the bottom,

AutotaskQueryService, AutotaskShell etc, so this is obviously working. It is successfully going out and

getting my code and I am reading the actual names of assemblies through the NDepend API. So

now, I'm going to get out of here and you'll notice that you're seeing kind of a funny display here; that's really a

function of the recording resolution that I'm at. In higher resolutions this actually looks normal, so

unless you're using this at a pretty low resolution, you're not going to see it look like that. Now I'm not

actually interested in the assemblies in this codebase, what I'm really looking for here is test methods

that have more than seven lines of code and, based on those methods, I want to output the

name of their Parent types. So I think the thing to do at this point is to figure out, how do we identify a

test method and I think the best way to do that, to get right to the point, is to look for things decorated

with the test method attribute. So now, given our codebase, how do we go about doing that? That's the

interesting question. Well, as it turns out, based on what we saw before, where it outputs assemblies,

not only in our code, but in the system namespace and in the third party dependencies in

general, presumably, we're going to be able to get at the various attribute types, the things that

are third party dependencies that we're using, so TestMethod, TestClass, that sort of stuff. So, I think

what I'll do is I want to go and I want to find Types in the codebase and I'm going to say that I want to

find Types where the Type Name Contains Attribute. So, this will just kind of verify that I'm on the right

track here. I'm going to go look for these and I'm going to say that foreach of these I'm going to call this

for each attribute in the codebase, let's print out the name of the attribute and this is going to tell me if

I'm barking up the right tree here or not. So now, with these coded as is, I'm going to Run and I'm going

to expect to see probably a whole lot of types comes back because any attribute that's in the entire set

of third party libraries, that I'm using, should appear, but let's just see what this looks like. So I'm going

to go to Test Smells and then I'm going to choose an existing NDepend project as I did before and

I'm going to use AutotaskQueryExplorer. And actually, that isn't too bad, and if you look through

here, there's a number of attributes you're probably used to seeing including here at the bottom we do

in fact have TestMethodAttribute, TestClass, TestInitialize, and so on and so forth. So that's a positive sign, I
think we're definitely on the right track here and I'm going to, based on that, sort of start to triangulate

what we want to get at here. Now that I've got a kind of nice proof of concept here, it's time to start thinking

about what I actually want to do. So first of all, I don't want to iterate through all the things that contain

the name attribute, I'm only looking for one of them, so I'm going to get rid of this loop and I'm going to

get rid of this write line. So, what I've got here is an attribute, but this is actually going to pick any number

of attributes. This will return an IEnumerable, and what I want is specifically the TestMethodAttribute and

I don't want all of them; I want the first one where the name is not containing, but is equal to

TestMethodAttribute. So now, what I'm expecting is that this is going to actually give me

the TestMethodAttribute and what I'm going to do is I'm going to use that in the next thing I'm looking

for, which is the collection of methods that meet the description. So I'm going to call this MethodsWithTestSmells

and I'm going to say that these are equal to our codebase's Methods, where the method has an attribute,

HasAttribute testMethodAttribute. Now this is getting a little bit more interesting, but I'm going to go back and

maybe confirm here that I'm actually getting what I want. So as an intermediate step, I'm going to iterate

through the methods. Say that foreach method in the MethodsWithTestSmells, I'm going to write the

name of the method. And let's test that and see what it looks like. What I'm expecting to see here is

basically, every test method in the codebase. So it's not MethodsWithSmells quite yet, we'll get

there, but if things are going well here what we're going to see is every test method in the codebase. So

I'm going to pick the project and we'll see what happens. And that definitely looks promising, I know my

own code well enough to see that this is the kind of method naming convention I tend to use for test

methods. So, that looks good to me and I'm going to get out of here and go back here and now start

refining. So in order to refine, now I have to have an understanding of what it is I'm looking for. What I'm

not looking for is every test method in my codebase. What I am looking for is the methods that meet

certain criteria, specifically as defined in the earlier module, this is going to be test methods where

there are more than seven lines of code. So I'm going to add a clause here to filter and I'm going to say

that not only do we want it to have the TestMethodAttribute, but we also want the Number of

LinesOfCode to be > 7, and just to note, if I were doing this for long term maintainability, I wouldn't have

the magic number 7 in here, nor would I have the magic string TestMethodAttribute, I would factor those
out or perhaps even parameterize them, in order to be a little more flexible and clear about what I'm

doing, but this is just for example purposes, so please bear with me. Now, what I should expect to see

here is when we launch this and I'm going to go and choose the TestSmellsPowerTool once again, that

it should match what happened earlier, which is that there was only one method that matched. So I'm

going to select Test Smells and then I'm going to choose the same option that I've been choosing all

along and there, sure enough, is Supports_For_Each, which is the only method that matched this criteria

earlier. We're seeing the same result here, which is certainly promising. The only problem now is that

instead of returning what we promise in the name and description here, we are returning methods

rather than the types containing those methods. So, what I actually want to do is rename this guy to

be TypesWithTestSmells and then based on what I'm doing there, I should actually rename this to be

Type as well and I'm kind of lying for the minute, but I won't be in a minute here. So I'm going to say that

I want to Select and from the method, I want to Select the ParentType and now, if we mouse over the

var here, I'm expecting this to be returning; I'm going to make it explicit with CodeRush, so I can see

what we're actually getting back and confirm that this is in fact returning a collection of types, rather

than methods. So, there we have it. The IEnumerable is one of IType rather than method because

of the select and so what I'm expecting is this is actually going to give us what we're looking for, which is

the as advertised property of returning the collection of types that contain test methods that might

have the test smell of being too complicated to setup. So once again, I'm going to choose the same

project and I'm going to see what we get back here and what I see is that we're getting back the

TypeResultSetTest.Enumeration, which according to my convention is the ParentType, but I wanted the

broader class to match what we were doing before. So there's one more piece of work to be done. I'm

going to get out of here and then get out of here, and then what we actually want to select is the

ParentType's ParentType. Now, I should note here that if you're not following the convention that I do

here in my unit tests of nesting the classes, the way I described earlier, then what's going to happen is

you're going to get exceptions. I'm not being particularly robust in what I'm checking for here, but I'm just

out to demonstrate the API. So let's take a look now at what's going on, when we choose this and there

is ResultSetTest, which matches what happened earlier in the


CQLinq Query Explorer. So now, this is a demonstration that you can pretty easily use the NDepend API

to do all of the same kind of things that we've been doing throughout the duration of this

course: manipulating code query language in order to get what we want. You get a lot more room to

move around when it comes to doing this in C#, since you're able to do things like foreach and

so on. Rather than staying purely declarative, you can actually encapsulate some state and get a little

more complicated with the things you want to do, and that's really what the NDepend API is

for, to let you get extremely creative and customize with all of the code analysis stuff that you might

want to do.
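To give a concrete feel for what was built in this demo, here is a minimal sketch of that kind of query against the NDepend.API code model. Treat the member names (ICodeBase, IType, ParentType) as approximations of the API shown in the demo, the smell criterion (long test methods) as a stand-in for "too complicated to set up," and the double ParentType hop as reflecting my convention of nesting test classes one level inside an outer type:

```csharp
// Sketch: find the types that contain "smelly" test methods, via the
// NDepend API's code model. Assumptions are noted in the lead-in above.
using System.Collections.Generic;
using System.Linq;
using NDepend.CodeModel;

static class TestSmellQueries
{
    public static IEnumerable<IType> TypesWithTestSmells(ICodeBase codeBase)
    {
        return codeBase.Application.Methods
            .Where(m => m.NbLinesOfCode > 20)       // stand-in smell criterion
            .Select(m => m.ParentType.ParentType)   // nested test class -> outer type
            .Where(t => t != null)
            .Distinct();
    }
}
```

Because this is ordinary C#, you could just as easily add foreach loops, accumulate state, or compose several queries before returning a result, which is exactly the extra room to maneuver discussed above.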

Summary

In this module, we took a tour through the various pieces of NDepend functionality, beyond the Visual

Studio plug-in. First, we looked at Visual NDepend and how it differs from the Visual Studio plug-

in. Next up was an overview of the NDepend Console. From there, we saw how to use the NDepend

Power Tools and where to find the source code for them and finally, we talked about the NDepend API

including a detailed demonstration of how to use some of the basic features of it for querying your

codebase, using C#.

High Quality Projects: The Full Value of NDepend

Introduction

Hi, this is Erik Dietrich presenting on behalf of Pluralsight. Welcome to this course entitled Practical

NDepend. This seventh module talks about incorporating NDepend into the life cycle of your

application. In this final module, I'll talk about how you can use NDepend to add value to and improve

the quality of your projects, by going beyond just asking questions about your code. First, I'll talk about

the idea of defining rules and quality "From Now." Next, I'll go over the idea of Code Diff and Code Base

Snapshots. From there, I'll generalize to looking at Trends in Your code, over the course of time. I'll talk

about how you can know immediately when you might be introducing Breaking Changes to your code
and then I'll cover Advanced Analysis of Code Differences. From there I'll move on to talk about

Generating NDepend Reports and then I'll talk about some Practical ways to Improve Your Build

process. I'll then discuss philosophically how NDepend can Improve your Code and even your team

and then I'll provide some Additional Resources and wrap up with a Summary.

Defining Rules From Now

Throughout this course, we've seen a whole lot of ways to look at and to analyze every possible nook

and cranny of your code base, but here's one dimension that has been conspicuously absent

through all of this discussion and that dimension is time. How do you leverage all of this information to

look at your code over the course of time? Well, I'm going to address that theme a lot in the next few

slides. To start off with, I'll introduce the concept of a base line and talk about examining your code from

here forward, so to speak. What I mean is that I'll show you how to take a snapshot of your code, as it is

right now and focus on making sure that you're not increasing the number of rule violations in your

codebase, going forward. Why would you want to do things this way? Well, if you've never had the

experience of running some kind of quality analysis tool on a large legacy codebase, let me fill you in on

what typically happens. You turn on a tool like FxCop or NDepend and you're lucky if there aren't so

many problems that it crashes right out of the gate. And by the way, here's a pro tip: if an analysis

tool crashes while running against your codebase, there's something seriously wrong with your

codebase. But assuming the tool doesn't crash and chugs through, you'll wind up with hundreds if not

thousands or tens of thousands of violations and seeing numbers like that is daunting. It makes you

want to give up and say, ah, forget it. But NDepend allows you to define and check a set of rules that

embody the concept of from now. In the new version, these rules fall under the group called Code

Quality Regression. The rules define, in CQLinq, a concept of old version versus new version, and they

let you compare properties of the two. This is really nice because you can turn off all of the current

rules, if you want, and use the ones in this group and/or equivalents of the ones you've turned

off. Doing this is important because of the broken window theory. If you get used to seeing dozens of
warnings whenever you run your analysis, you're not going to pay attention because what difference

does one more make? But if, in your legacy codebase, you don't have the weeks and months to spend

correcting all of the rule violations, you can effectively

say, ignore the existing ones and let's focus on avoiding them going forward. Let me show you how the

regression rules work. Right now, I have open the Code Metrics Solution that you've seen throughout

this course, but in this copy there's no NDepend project that's attached to it. Right from the

beginning, I'm going to show you how to go about attaching an NDepend project and then defining a

baseline for comparisons. So the first thing to do is to attach a New NDepend Project and I'm going to

select that I want to attach the assemblies from this solution and I'm going to say go and Analyze them

and also generate the build report, which means it's going to go ahead here and go with the full

analysis and everything and that's going to give me a baseline, that I can use for comparison. I'll just

then, when it's done, have to go and define that baseline. So, it's in the process of running the

analysis. I'm going to bring the output window in here and now that it's all set, I'm going to go ahead

with the definition of the baseline, not set quite yet, and there's the report and once the NDepend menu

option comes back, then we'll truly be all set. And there we go. So now, with that in place, what I'm

going to do is I'm going to go in here to the actual Project Properties, I'm going to close the Warnings

window, and this is where I can go to the Analysis section and down here is Baseline for

Comparison. So far, there is no baseline defined for comparison, but I'm going to go over here to

Settings and I'm going to define one. So, I want to say, let's compare with the Current NDepend Project

and then we're going to compare with a particular build and we're going to do the one that I just did,

so this is defining a baseline. I'm saying that I want it to use the Initial Analysis that was done and

that's what it's going to compare to and each subsequent analysis, which will be the current NDepend

project, is going to be compared with this particular build. You also, as you can see here, have the

option to do a comparison with the most recent analysis or one done some set duration of time ago. So,

you do have a few options depending on what you're looking to see, as you go along. So with the

baseline in place, I'm going to say Yes, let's go ahead with this and then I'm going to go the Code Query

or Code Rules Section and take a look at the Rules Violated. I'm going to center this and what we can
see here is there's a section of rules here called Code Quality Regression and this is what I meant

about from now in the slide. As you can see, "From Now", all methods added or refactored, From now,

all types added or refactored, and so on and so forth. That was kind of the quintessential definition of

from now and that's what we're looking to demonstrate here and to kind of see how this works. So what

I'm going to do, from this point, is I'm actually going to change the codebase and do a different analysis

and I'm going to run afoul of one of these rules and show you exactly what that looks like. I know that

one good way to run afoul of these rules is this one up here, From now, all methods added or refactored

should respect basic quality principles, and it's saying here are some properties about a method, that in

order for us to add methods to the codebase, this defines basic quality principles. There's some stuff in

here about nesting depth, number of lines of code, and so on and so forth. So I'm going to close this

and I'm going to set about putting a method in place that it's definitely not going to like. So I'll pick a

class at random here and I'll define a new method and what I'll do is when it opens I'm going to define a

fairly ugly method. So let's pick one of these classes and just add a method here that nobody would

like, I'm going to call it Beast. I'm going to make it public and then I'm going to tell it I want to do a

Console.WriteLine and do something simple here and what we'll do is do this simple thing over and

over again. So I'm just going to hit Enter a whole lot of times and let this scroll down into being a

gigantic method. It doesn't have a lot of cyclomatic complexity, but I know that when I get done here the

various code rules are not going to like how many lines of code this has and that's why it's called

Beast. So, you can see Visual Studio itself struggling to keep up with all the times I just hit Enter to repeat

this line and sooner or later it's going to be done, and when it's done, I'm going to perform another code

analysis through NDepend and we're going to take a look and see what it says in the regression

sections. So first, I'm going to actually Build the code and then after I do that, I'm going to come in and

I'm going to run an NDepend analysis. So let's see what NDepend thinks. I'm going to hit Alt+F5 to

actually run the analysis and you'll see it going through here, you get a couple of warnings, things I

haven't cleaned up about the assembly such as telling it where to find the unit test framework and what

not, but the analysis is done and now I'm going to go back to my Rules and I'm going to see what Rules

have been violated and now you'll notice here, we actually have a regression that's happened and as
I'd predicted we're running afoul of this first rule, it does not like the method Beast. So what's happened

is that I've defined a new method in the codebase, that has too many lines of code to pass this metric

here that says methods should be of a basic quality. So this is an example of the from now in action, the

idea that we can say look, this codebase might have all kinds of violations. It might be a relatively

unpleasant legacy codebase, but we can sort of turn some of those off and just focus on going

forward, getting things right from now and what it's telling us here is that this new method that we've

added is not doing that. We're violating going forward from that baseline.
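The regression rules that caught Beast are ordinary CQLinq. A simplified sketch of that kind of "from now" rule follows; the thresholds here are illustrative, not NDepend's shipped defaults, so treat the exact numbers and the rule body as assumptions:

```csharp
// Sketch of a "from now" regression rule: flag methods that are new or
// refactored since the baseline and that break basic quality thresholds.
// The thresholds are illustrative stand-ins, not NDepend's defaults.
warnif count > 0
from m in Application.Methods
where (m.WasAdded() || m.CodeWasChanged())
   && (m.NbLinesOfCode > 30 || m.CyclomaticComplexity > 20)
select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }
```

The key point is the shape: the where clause only fires on elements that are new or changed relative to the baseline, so a legacy codebase full of old violations stays quiet while new transgressions like Beast light up.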

Code Diff: Compare Snapshots of Your Code Base

You've seen that it's possible to orient code rules around differences in your code between two points

in time; doing this lets you keep track of whether or not you're adding problems to the codebase, but

what about comparing beyond just the rules? What if you want to take a deeper look at what's been

changing? NDepend allows you to do just that and it allows you to do it in a code-centric way. Source

control systems are great for maintaining historical information with the files as the basic unit of

storage, but source control systems are agnostic about what those files represent. In other words, you

can see lines in the file that were added, removed, or changed, but it's harder to reason about what

happened to a particular method or to a type because source control schemes that keep track of

files, don't understand those code concepts. NDepend does, and that is the basis of code diff. You

can ask and get more direct answers to questions like: Which classes have changed? Or

what fields have been removed? It understands code. Let's take a look. I'm now going to showcase the

code diff feature, but first I'm going to change a few things about the settings. I'm going to go to the

Project Properties and what I'm going to do is tell NDepend to Keep Historical Analysis Results

because over the course of this module, I'm going to be doing a number of different analyses, so I'd like

to have as many around as possible. The next thing I'm going to do is go into the Options and I'm going

to look to say I want to go into the Analysis Settings and change a couple of things. So I'm going to tell it

to Always Run the Full Analysis, so that I get all of the different information that I want and I don't
actually have to generate the report every time to get the full analysis and I'm also going to tell it that I

want to Store results up to every hour. I want to be able to do this as frequently as possible for the sake

of this demonstration. You probably don't want to actually do that in your codebase, especially not just

on your own machine but on the build server, but that's what I'm going to do for the purposes of demonstration

and I wanted to showcase a little bit, these options that you have. So with these options set and

saved, I'm going to exit out of here and I'm going to run another full analysis of the code, after I do a little

bit of playing. What I want to do, is we have added this Beast method, but I'm going to come back in

here and I'm going to Add some more stuff, so I'm actually going to add a Type. I'm going to call it

NewType. I'm going to add a Field called A and then I'm going to Add a method called SomeMethod

and I'll just have it do another Console.WriteLine, or Write as it were, and I'm going to Save. I'm going to

do a Build in Visual Studio and then I'm going to Alt+F5 and run an NDepend analysis. We're going to

take a look at how code diff works, after doing this. With the analysis done, I'm going to go into a New

Menu here that we haven't seen before, the Diff Menu and I'm going to select the top option, which is

Define the Two Snapshots to Diff. Now here comes a menu saying that I can choose Assemblies or

Analysis Result to Compare, or I can go back through what we already did, which is Defining a Baseline, or I

can Define a Temporary Baseline. What I'm going to do is choose the top one. I'm just going to choose

Two different Snapshots of the Code that I want to compare and I'm going to say let's Compare the one

I just did with this older one that is from before and what I'm going to expect to see is all sorts of stuff

popping up about the field, method, and type that we added so I'm going to select those and now, this is

stored and going forward, I'm going to be able to diff these two snapshots that I've selected

to compare. So, the first thing I'm going to do is go into this menu and it has all sorts of things that you

can look at for Code Diff. The list is too exhaustive to go through right now, but here's one thing that I'm

expecting to see, let's take a look and see that we've added a NewType and sure enough, there is the

NewType that I so appropriately called NewType. So now, let's get out here and go see about the new

method and the field that we just added and take a look and verify that those are in fact here. So I'm

going to go back to the same menu and let's check out the new field. And now, what I see is

interestingly absolutely nothing. What I was expecting to see might have been that this field we added
was in fact shown here. But now if you actually look at the query in question, it's saying let's take a look

at fields where the field was added, but also where the field's ParentType was not added, and looking in

more detail, we can see that this doesn't meet the criteria and I actually did do this with a purpose, I

wanted to go and point out what this would do. Basically, it means fields that were added to existing

types or methods that were added to existing types. So it turns out, we're not expecting to see this

here. If I were to get rid of this line, now we would expect to see the new field that was added and there

it is. In the NewType, we are now seeing the field that was added. So this is CQLinq, just like anything

else, we can mix and match a little here and I'll get to that in a little bit, but I just did want to point out

that behavior. Now let's take a look at the Diff Menu, once again, to see some of the other options that

are here. There's this option of API Breaking Changes, and I'm going to cover that in a future slide, and

then there's this Code Quality Regression, which is a menu version of what we had seen before in the

rules itself, so you have the option to access that here as well. What I want to go to now is this Review

Diff and that's going to bring up the Tree Map view. So let's take a look. And Review Diff gives you this

cool ability with some buttons that are right here and very easy to access to see some things that have

changed or that are different or what have you, between these two snapshots of the code. So this gives

you a kind of more visual sense, the way we'd seen before with the Tree Map. So, what I'm interested

in seeing here is: let's look at some things that were Added. And there's a tiny blue line there and you

can see in the actual Search window below here, that we're seeing what was added was this NewType

or new method, and that's the level of granularity that you're going to see here, given the GroupBy

and what not, but I did want to point that out that you get this nice Diff view in the Tree Map view as

well. So like a lot of the other functionality, you have different ways of seeing this. We can see it in the

code query window. We can also see it in the Tree Map view window as well. And now, I'll briefly talk

about a couple of the other menu options that are here under the Diff Menu. You can Review how the

Diff is Covered by Tests. I haven't enabled code coverage yet in this particular project, so that won't

show anything. But when you add code and you have Test Coverage setup with NDepend, you can see

how the different things that you've added, or done, or changed, or what have you, are covered by unit

tests. There's also the Browse Dependencies Diff, and what that does is show you the Matrix view in
contrast to the Tree Map view. So you really get the whole suite here when it comes to the diff. There's

also, below that, the ability to define in the Settings a Source File Comparison Tool; if you have something

like Beyond Compare or KDiff or a tool of that nature, you can set that up to work here, and then there's

the Online Documentation for further help and reading, if you so desire.
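Each of those Diff menu items is backed by a CQLinq query you can open and edit. The one dissected in the demo (new fields, excluding fields of brand-new types) comes out to roughly this; treat it as a reconstruction of the shipped query, not a verbatim copy:

```csharp
// Diff query from the demo: fields added since the baseline, excluding
// fields whose parent type is itself new. Dropping the second clause
// makes fields of brand-new types (like NewType.A) show up too.
from f in Application.Fields
where f.WasAdded() && !f.ParentType.WasAdded()
select f
```

Because it's just CQLinq, you can mix and match as shown in the demo: deleting the `!f.ParentType.WasAdded()` line is exactly the edit that made the new field A appear in the results.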

Examining Trends in Your Code

Now that we have a full-on code comparison tool in our back pocket for comparing codebases in detail

over a period of time, it's time to move beyond simple comparisons. Knowing what's been added,

removed, and changed is clearly valuable, but the value runs deeper. As I demonstrated earlier in the

course, you can keep track of trends in your metrics over the course of time. This includes both simple

metrics, like the number of lines of code in the codebase, and also more nuanced metrics such as rules

compliance or third-party code usage. Using NDepend's trend features, you can view your trends from

the dashboard and you can also view the queries that generate the trends. Beyond that, you can

actually define your own trend metrics to use in the queries and charting as well. The customization

allows you to leverage your own specific concepts of targeted areas for improvement or warning

signs, to see that you're on the right track. If for instance, you have a highly procedural codebase with

lots of stateful static methods, you might define a metric for the percentage of static methods in

your code and chart it to see if it decreases over time. Let's take a look at trends in NDepend. In order to

demonstrate this, I'm going to tweak a few settings here first. So the settings for the Diff are still the

same as from the last demonstration, which means that right now, there are two selected snapshots

that are currently being diffed. So I'm going to say that we want to Stop comparing and keep only the

current Analysis Result. This is going to allow us to actually take a look at the trends. Now let's launch the

Dashboard and take a look around. So what you'll notice is that we have two different analysis

snapshots of the codebase and those are being incorporated here. If you look at Lines of Code that's up

because we added that Beast method and the type and the other method as well, although those

wouldn't have nearly as much as an effect. If you look at Method Complexity it actually went
down, which is interesting, but then you stop to think: this is talking about cyclomatic complexity of

the methods and even though we added a very long method, it didn't have any cyclomatic complexity to

contribute, so that means the score goes down. Number of Types goes up and these are basically

the things that we would expect to see here, given the only two analyses of the codebase that we've done

thus far. Over the course of time, this gets a lot more interesting, as you have a lot of different

snapshots of the code being added into the trend metrics area, so the Dashboard becomes a real true

Dashboard of your codebase showing you all sorts of different things about what's going on as a

function of time. You can also go to this menu under Trend and you can see that you have the ability to

look at actual Trend Metric Queries and the Trend Metrics themselves. So for instance, under Code

Size, we have a lot of different trends that we can look at. We can say, here's the number of Lines of

Code for instance or the Number of Types and so let's take a look at this specifically. See here, we

have a count of 27 Types. What this shows is the different trend metrics and the queries associated with

them; it's the individual queries that are associated with all these things that show up in the

dashboard, all the things you can take a look at, so counts of things is usually what you're going to

see because that's the nature of the metrics. So you have the option to go and again view all of these

individually, as you wish about the codebase. Now I'd like to show something pretty cool, which is how

you can actually go in and create your own trend metrics. So, what I'm going to do is go in here and

Create Trend Metric and that's going to pop up a window and I'll center it here, which is a Code Query

window, but it's got a bunch of stuff in it that tells you how to go about actually setting up your trend

metric. I'm going to skip that for now because it's not really important to the demo at the

moment, though when you're doing this yourself you might want to read through that because it will be

helpful. And what I'm going to do is I'm going to create a metric based on what I talked about before,

which is to say, let's take a look at how many static methods are in the codebase as a function of the

total methods, so it's the percentage of static methods in this codebase. The way I'm going to do that is

I'm going to say that from all the methods, or I should say, from all the assemblies in

JustMyCode.Assemblies, I want to let a couple of things be true here. I want to let statics be the

count of methods in assembly.ChildMethods where the method IsStatic, and I'm going to let
total = assembly.ChildMethods.Count and

then what I'm going to do is select the decimal 100 * statics / total, and I need some aggregator on

that, so I'm just going to take Sum even though there's only one thing that it's returning. So, this is giving

me the percentage of methods in the codebase right now that are static. Alright, so how do I know that

this is actually working? Well, what I've done is I've created a method in the codebase here, this Asdf

method and that is the only static method that I have right now. So let me comment that out and then I

want to build the actual Visual Studio solution and following that I'm going to do an NDepend analysis

using Alt+F5. And once I'm done with that, what I'd expect is when I go back to the Query Rules edit

screen, that's going to be showing me that I have a value of 0 now, once it's refreshed. I won't have any

static methods at all in the codebase. So, let's take a look and see what's going on here and there you

can see 0 TODO Unit actually and while we're in here, that's another thing that I want to fix. You want to

give this a nice name while you're creating it so I might call this Percent Statics and then the Unit is

actually what you want it to say next to that 0 right there, which you could do % and you would see it's

0% or you could spell out Percent which is what I'm going to choose to do here. So I'm going to Save

that and then if I go back to this class and I uncomment this method, once again, and do the whole thing

all over, I Build and then once complete I'll Run the NDepend analysis with Alt+F5. What I'm expecting

is that that goes back to that 2.33 figure that I saw before, so let's go over and take a look and once it

finishes analyzing we should see where we're at with a Refresh and then it'll say 2.33 and there you

have it. Now that I've created this, I'd like to do a demonstration of how we can actually use it in a

chart. If you look at the Dashboard here, there are all these charts, as you scroll down, where you can

see visual information about the codebase which is pretty neat. So what I'm going to do is actually show

you how to use this newly created metric in a chart. But first, I'm going to go to the Options and I'm

going to set something for this demonstration, which is the Trend

Metrics Log. This is where in the Options you can set how often you're recording metrics information in

order to actually make this work, since I'm doing the demonstration all kind of in a short span of time

here. I don't want to Log Metrics At most once a day, I want to actually say let's Log the metrics every

time I run an analysis because that's what's actually going to let us record information here for
demonstration purposes. In your actual project, you probably do not want to have this set

to Always, if you're doing a lot of builds or a lot of analyses. This is going to be how often do you want to

see the trending data and you probably don't want dozens of points a day or whatever it might be, your

mileage may vary here, but that's just something to be aware of. So with that set in place, I am going to

now say let's Define a Trend Chart. And I get a screen here that says let's take a look at creating a

Trend Chart. What is the Chart Name going to be? And I'm going to call this Percentage Statics, just to

be consistent, and you have an option here as to whether or not you want to show this Trend in the

actual NDepend report that gets generated. So the next thing to do here is to pick out what you

actually want to show. And what we want to show is the newly created Percentage Statics and that is

appearing down here. So, now we can see that the Percentage Statics here, it has picked up the last

analysis and that happened to be because I went over the midnight barrier and technically into another

day. If I want to create subsequent ones, I need that setting set to Always. But you can see that we get

a preview of what the chart's actually going to look like and I can show Area under the curve or I can

show a line. Similarly, you can change the Color and you can change the Scale of this if you are so

inclined. So, this is a really nice thing that you can do here and you can Save it and here it appears at

the bottom of the Dashboard.
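For reference, the Percent Statics trend metric dictated above comes out to roughly the following CQLinq. The name and unit are set through the template comments the Create Trend Metric window generates, so only the query body is shown here; exact surface syntax may vary by NDepend version:

```csharp
// Percentage of static methods, per the demo's dictation. Note that
// summing per-assembly percentages only equals the overall percentage
// when there is effectively one application assembly, as in the demo;
// Sum() here just collapses the one-element sequence into the single
// number a trend metric requires.
(from assembly in JustMyCode.Assemblies
 let statics = assembly.ChildMethods.Count(m => m.IsStatic)
 let total = assembly.ChildMethods.Count()
 select 100m * statics / total).Sum()
```

With multiple assemblies you would instead count statics and totals across all of JustMyCode's methods before dividing, so the metric stays a true overall percentage.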

Know Immediately about Breaking Changes

You might have noticed that I skipped over a menu item when talking about Code Diff a couple of slides

ago. The reason I did this was because I thought it bore mentioning all on its own. Using the NDepend

Code Diff feature, you actually have the ability to pinpoint and be alerted to breaking changes in your

public API. A lot of breaking changes can slip through the cracks pretty easily. A breaking change

happens when you alter the public interface to your codebase. Any code relying on the old public

interface is now at risk for not complying or running properly with the updated version of your

code. Countless hours have been lost to this kind of thing. A developer checks in a change where a

parameter is added to or removed from a method, and when the code is delivered much later, nobody
remembers that this happened, which causes a downstream break. Using NDepend, you can now

investigate code elements to see if your API has broken. This includes methods, fields, types, and

abstract types, as well as more subtle considerations like changes to enumerations, serializable types,

and type mutability changes. Let's take a brief look at how that works. From the NDEPEND menu, I can

go into Diff and here is the menu item, which was skipped over, which is API Breaking Changes. So for

instance, let's see if there's been any methods that have had Breaking Changes to them and the

answer is no that that hasn't happened. And what I'd like to show is how to actually make that happen

or more accurately, what triggers that to happen because you generally don't want to make that happen

in your codebase, if you can avoid it. So in order to institute a Breaking Change, what I have to do is

come in and I have to change the public interface of some existing method. So what I might do is I

might add an integer called X to this method and maybe I'll come in here and for good measure, I will

remove this method called Beast from Class A, and what we'd expect then is that if there were something

that had called the method up here called SomeMethod, that would now be broken, just like if there

were something that were using PublicType A, now that Beast method has been removed, if that

method were being called, that calling code would also now be broken. So what I'm going to do is

execute a Build and then I'm going to execute an NDepend Analysis using Alt+F5. Now, let's take a

look to see what we have in the way of Breaking Changes. If I go to Diff and back to API Breaking

Changes, what we've been looking at here is Methods so let's take a look at the Method Level and it

says that there are three methods that have matched the Breaking Changes criteria:

SomeMethod, which is gone because we've replaced it with an overload that has an integer. The

method Beast and then the method Asdf, which kind of came along for the ride from the

previous demonstration having been commented out. So you can see that when you go in and you

change the public facing parts of your code base, you can be alerted to it here and in fact, you can

configure this to setup warnings or even critical warnings, if you so choose, so that you have a scheme

where your Build alerts you to things where you might be breaking code that depends upon you.
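Under the hood, those API Breaking Changes menu items are CQLinq queries as well. A simplified sketch of the method-level check follows; it approximates, rather than reproduces, NDepend's shipped rule:

```csharp
// Sketch of a method-level breaking-change check: publicly visible
// methods that existed in the baseline but are gone in the newer
// snapshot. A signature change (like adding the X parameter) reads
// as the old overload removed plus a new one added, so it matches too.
from m in codeBase.OlderVersion().Application.Methods
where m.IsPubliclyVisible && m.WasRemoved()
select m
```

This explains the three hits in the demo: SomeMethod's old signature, the deleted Beast, and the commented-out Asdf all satisfy "publicly visible in the old version, missing in the new."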

Advanced Analysis of Code Differences


So far, I've shown a variety of ways to look at codebase changes and you can certainly get tons of

valuable information this way, but if you want to define your own custom differences in code, you

certainly have the ability to do so. Using CQLinq, you can check out whatever differences in your code

you can think to query. It allows you to completely customize what kinds of differences you want to

see. CQLinq is just as useful for evaluating code in terms of time, as it is for just taking a look at your

codebase in general. Using a handful of extension methods having to do with change in your

codebase, you can see what has been altered. We used this in the previous example. Some examples

include WasChanged, which allows you to see if a code element was changed since an earlier version

of the two codebases being compared. WasAdded and WasRemoved tell you whether the

elements were added or removed respectively. CodeWasChanged tells you whether code belonging to

this element was changed at all and VisibilityWasChanged indicates whether the visibility level assigned to the

element was altered. OlderVersion and NewerVersion allow you to conceptually iterate over time, to

different versions of the code element to which you're referring. This is not an exhaustive list, but it

definitely gives you a feel for the options available to you in CQLinq for doing advanced queries about

changes to the code beyond just what comes out of the box.
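
Pulling a few of these together, a diff query might be sketched like the following. This is a minimal illustration, not a rule from the course; `Application.Methods`, `WasAdded`, `CodeWasChanged`, and `NbLinesOfCode` are the CQLinq names as I understand them:

```csharp
// Sketch of a CQLinq diff query: list methods that are new, or whose
// code was changed, relative to the baseline snapshot being compared.
from m in Application.Methods
where m.WasAdded() || m.CodeWasChanged()
select new { m, m.NbLinesOfCode }
```

Run inside the NDepend Queries and Rules editor, a query like this would give you a live list of everything that moved between the two builds.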

Generating Reports

Earlier, I took you on a tour of the NDepend Report. I did this by using the NDepend Visual Studio plug-

in to run a report, which popped up in the browser as a small local website, confined to a folder on my

machine. If you think about what this means, there are some very interesting ramifications. One of the

greatest difficulties, when it comes to taking initiative to improve a codebase, is that there is a certain

degree of inertia in place, in any team. It may be that you champion clean code ideas and beg people to

write unit tests, but you only get halfhearted cooperation, when people think of it. It's not that people

are lazy, they're usually just busy and when people are busy they tend to take shortcuts and go with

what they already know. If there is a mechanism to shine a spotlight on undesirable behaviors

however, change usually starts to take place. Think of the effect of regular source control check-ins to
the same codebase. In scenarios like that, people rarely check in non-compiling code because

teammates will get on them about slowing everyone down. But with code quality issues like gigantic

methods, poor test coverage, etc., there is no immediate feedback about them. If you make that 16,000-line

singleton a little bigger and check in the differences, unless people are conducting detailed code

reviews, nobody is going to notice or care except the next time they're in there a month later when they're

muttering, how does this thing just keep growing? NDepend can easily change this culture with its

reporting feature. You've seen me generate the report, but how hard would it be to write a simple script

that pushes the report onto a web server for everyone to access? How about adding an email to that

script, to email everyone the report? Or perhaps, NDepend executes the analysis and report on your

codebase for every check-in and emails people with the report. When this is happening the entire team

is confronted with the information that you just piled onto that monstrous singleton and you wind up with

the same kind of behavior-altering feedback that causes people not to check in code that won't

compile. The simple nature of NDepend's report output makes all of this possible. You can email it to

people, share it, publish it, or anything you like because it comes in a format so conducive to cross-

platform compatibility and general portability and it's trivial to integrate NDepend into your build or

continuous integration process because of its command line nature. I'm going to show you how easy it

is to integrate NDepend's execution into your build. And the way I'm going to do this is to go to the

Project Properties and I'm going to add a Post-build event. Now normally, this isn't how you would

integrate something into your build, Post and Pre-build events are sort of a little bit hackish and it's not

something I would normally resort to. So if you're using something like TeamCity or TFS or Final

Builder, there are more sophisticated steps that you can take. You can actually create a build target for

MSBuild using NDepend and that's the way I would go about it, but just for the demonstration

purposes here I want to show you how this might work. So here in the paste buffer, I have a simple

command line. So first, the actual executable here that's going to be executed is

NDepend.Console.exe, which I've talked about earlier, and I'm going to apply that to the Solution

Directory. In quotes because there are spaces in that directory and I'm going to then apply it specifically

to the code metrics NDepend project that I've created for this demo and the /ViewReport is so we can
actually see what's happening when we run the build. So I'm going to Save this and then I'm going to kick

off a build. And what's going to happen is that the project will build normally and when it's all said and

done NDepend will run its analysis now on every build and what's more is we will actually see the

browser pop-up with the NDepend report. Now again, this is probably not something you want to do in

production, and I just want to caution you right now that it's not something you want to do in your day-to-day

code process, but it is nice to see that this can happen and there in fact pops up the NDepend report,

just as instructed in the post-build event. So, this is an actual demonstration that it is possible, right into

your build, on your local machine, to hook in NDepend and actually be getting this in your full build

cycle. Now before I wrap up here, I'm actually going to Delete this setting and I would like to point out

that this was done for demonstration purposes and it is actually possible, through the NDepend

options, through a couple of settings to make the same thing happen, at least on the local machine, this

wouldn't apply to your build machine, obviously unless you configured NDepend to be installed and to

do it there, but you can go in here, into the Analysis Setting Options and have Build Report

checked, which it currently is and then the other thing you would do is you would go up to this option

for Refreshing Analysis in Visual Studio and right now there's something checked that says Not more

often than every hour. You could just uncheck this and have the automatic refresh triggered all the time

and in this case, you would have a full report generated after every local build.
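
For reference, the post-build event I pasted in might look roughly like the sketch below. The install path and the .ndproj file name are placeholders for your own setup, not values from the course; only the /ViewReport switch and the console executable come from the demonstration:

```
REM Post-build event sketch: run the NDepend analysis after each build
REM and pop the report in a browser. Paths here are placeholders.
"C:\Program Files\NDepend\NDepend.Console.exe" "$(SolutionDir)CodeMetrics.ndproj" /ViewReport
```

Visual Studio expands macros like $(SolutionDir) before running the event, and the quotes matter because the paths contain spaces.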

Practical Examples To Improve Your Build

Let me start by saying that the possibilities for integrating NDepend with your build are basically limited

only by your imagination. I know that sounds a little cheesy, but it's true. You have an entire declarative

language for asking detailed code oriented questions about your codebase and you have a way to

hook the answers to those questions into your build. It also doesn't hurt that you have at your disposal a whole

lot of already built tools for assisting in visual displays as well. So here's a list of concrete ideas for

things that you can do to improve your team's build process, but really you can extend and customize

as much as you want. The easiest step is to add the analysis and generation of the NDepend report to
your build process and then to have the report deposited somewhere that the team can view it. I just

demonstrated a crude version of this where the report would generate for anyone building on their local

machine. Given that the NDepend command line tool kicks back output indicating that critical rules have

been violated, it's also easy to have your build fail when critical rules are violated. With NDepend's

customization mechanisms, you can define a critical rule that test coverage cannot be below a certain

percent. If you have already set up your build to fail when critical rules are violated, it will now fail for the

violation of this new critical rule. I talked earlier in this module about NDepend being aware of breaking

changes to your public API, you can certainly design a utility that examines NDepend's output for

the presence of these violations and triggers an email to go out to your team and even to work this

email into the build. If you have set up a gated check-in through TFS, by having the build fail on critical

rule violations, you can actually prevent code with such violations from even being checked in. If you

write a custom mechanism to examine NDepend's output, extending it to publish or notify about the things

in your codebase most in need of refactoring will not be a difficult addition. Nor would it be difficult to

extend this to include a list of the most risky types to change. Nor would it be difficult to push the PNG

and the report containing your assemblies' dependency graph to anywhere you wanted it to go. And

finally, you could also examine NDepend's output files to keep track of metrics and alert the appropriate

people when things get too bad. As I've said, this is all just a drop in the bucket of what's possible, but

these are some ideas that would be fairly straightforward and would more than likely improve your build,

perhaps even drastically.
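
As one concrete sketch of the coverage idea above, a CQLinq rule enforcing a coverage floor might look like the following. The 80 percent threshold and the rule name are my own illustration, not from the course, and you would mark the rule as critical through the NDepend UI:

```csharp
// <Name>Methods should have at least 80% test coverage (sketch)</Name>
// Flags any method in your own code whose coverage falls below the floor.
warnif count > 0
from m in JustMyCode.Methods
where m.PercentageCoverage < 80
select new { m, m.PercentageCoverage }
```

Once a rule like this is marked critical, any build configured to fail on critical rule violations will fail whenever coverage slips below the bar.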

How NDepend Improves Quality And Efficiency

To wrap up the NDepend course, I'd like to take a look at how NDepend can improve the quality

and efficiency of your codebase and your projects. All of the different features and techniques you've

seen through the course, come together to have a broader effect on the quality of your work product

and even your team. First of all, NDepend shines a light on your code in a way that it hasn't been seen

before. For example, there's a good chance that nobody has ever asked how cohesive the methods are
on average or whether fields that could be read-only have been marked as such. Just getting answers

to those questions and then realizing that they're worth asking is valuable in and of itself. One of the

reasons that understanding which questions to ask is valuable is that it starts to teach your team what

the industry at large considers to be good practices and bad practices. While "everyone else is doing it" is

obviously not a reason to blindly adopt something, understanding what the broad consensus about

practices is at least gives you more information than if you had just developed in a vacuum with no

outside input. One of the key benefits of NDepend is the speed with which you get feedback about

nuanced issues. When it's not laborious to figure out how many large methods and types there are in

your codebase or what the percentage coverage is in the methods that you've just added, you're far more

likely to bother getting the answers to those questions and then to act accordingly. NDepend removes

barriers to useful information. It almost goes without saying that you're going to be more productive if

you can search and compare your codebase more effectively and NDepend allows you to do exactly

that. The less time you spend flailing around your solution looking for something, the more time you

spend actually writing code. As the common saying goes, a picture is worth a thousand

words. Having access to NDepend is like having access to a giant whiteboard on which your application

architecture is drawn. However, unlike a giant whiteboard, NDepend is always accurate and is updated

automatically. NDepend allows you to define concretely what you want from your codebase in terms of

properties of the code and quality. For instance, you can decide that you don't want any classes with

more than 200 lines of code or that you don't want any extremely non-cohesive types. This allows you

to set metrics that can help guide design and code quality in general. And finally, and perhaps at the

broadest level, it does what I mentioned in the previous slide, which is to promote accountability on your

team regarding the code. All of the features add up to the fact that you can have discussions about the

properties of good architecture and then codify those discussions and check your progress against

them. The feedback is fast and detailed and there's nowhere and no way to hide bad code.

Additional References And Resources


Here are some references to using NDepend, within your projects. The first link takes you to the first

entry in the NDepend documentation under Build Process Integration where a lot of what I've covered in

this module is discussed. The second link is a project that integrates NDepend with TFS 2010 and is

recommended on the NDepend site. While there is no similar project for TFS 2012, you can download this

project and use it as a template for your own integration with the latest and greatest. The third link is a

primer on Build Targets, using MS Build, which can help provide you the hook you need for

adding NDepend to your actual build process. The fourth link describes the anatomy of the NDepend

Build Report and the fifth link is a description of trend monitoring in NDepend, while the sixth goes into

detail about Code Diff.

Summary

In this module, I covered the general subject of how you can go beyond querying your codebase to

realize the full value of NDepend. First, I talked about defining code rules From Now and then

I generalized that to talk about the idea of doing diffs between snapshots of your code. Next up, I

generalized further to talk about examining Trends in your codebase, over the course of time. I

discussed how you can be informed Immediately of potentially Breaking Changes and then I discussed

Code Diff in more detail going into using CQLinq to analyze customized differences. From there, I talked

about the NDepend Report and then I cited some practical examples of how NDepend can improve

your build process. Then I talked philosophically about how NDepend can improve the quality and

efficiency of your team and its work and I wrapped up with some additional references and resources.
