Supplement To Inside The Microsoft Build Engine - Using MSBuild and Team Foundation Build (PDFDrive)
William Bartholomew
Foreword
Ah, the thankless life of the Build Master. If they do their job well, you’ll never
know they exist! If they don’t, well, everyone knows it’s the Build Master’s
fault, right?
I’ve been a builder in one form or another since my first foray into managing the
build. Nearly 15 years ago now, I worked on an extremely large system with a
team of hundreds. When it came time to build, we used Fred’s machine. Yes, I
learned that day that we built and shipped large systems on Fred’s laptop.
This is also how I came to find that learning a new build system is similar to
boiling a frog. If you throw a frog into hot water, it jumps out. But if you turn the
water up slowly, the frog doesn’t realize it’s getting hot, so it stays in the pot and
gets boiled. The team didn’t realize how big the system had become and how
complex the build was getting.
I realized immediately, somewhat intuitively, that we needed a build box. Fast-forward some years, and now every group I work with uses Continuous
Integration. Groups I work with have build farms, one with a “Siren of Shame,”
a flashing light to effectively shame the build-breaker. We have build artifacts as
complex and elegant as actual preconfigured virtual machines that pop out the
end of our build.
All this was made possible by the power of automation and the surprising
flexibility of MSBuild. Sayed and William have written what amounts to the
“missing manual” for MSBuild. MSBuild, and its enterprise support counterpart
Team Foundation Build, are almost unapologetically powerful. However, they
need to be.
Today’s software systems are multilayered, multitiered, and iterate at a speed
previously unheard of. All our software development practices and team
building come together at one pinch point: the build.
This essential reference to MSBuild gives us not only the knowledge of how to
create an adaptable and vigorous build system, but also valuable insights into the
“why” of the product. William is a senior development lead on engineering
systems within the Developer division at Microsoft, while Sayed is a program
manager overseeing build and pushing for the Microsoft Azure Cloud and Web
Tools. I could think of no better people to help me understand a large build
system than the folks building large systems themselves.
Sure, we’ve all started with “Build.bat” and called it our build system. Perhaps
we’ve put together a little schedule and called it an automated build. But these
simple constructs don’t scale across a large team or a large product. This book is
what the documentation should have been—a guide that takes us through the
humble beginnings of MSBuild as a supporting and unseen player in the .NET
ecosystem to complete and sophisticated team build solutions.
More importantly, Sayed and Bill dig into the corners and edge cases that we all
find ourselves bumping up against. They elaborate on the deceptively deep
extensibility model that underlies MSBuild and give us the tools to bring both
stock and custom components together into a complete team workflow.
MSBuild continues to evolve from version 2, to 3.5, and now to version 4 and
beyond. This updated supplemental edition builds (ahem) on the good work of
the previous editions and includes new sections on the updates to the MSBuild
core, changes in Team Build, and even updates to Web Publishing in Microsoft
Visual Studio 2012.
I’m glad that this book exists and that people who care about the build like
Sayed and William exist to light the way. Now, if I can just find out what I did
just now that broke my build . . .
— Scott Hanselman
Teacher, coder, blogger, podcaster
hanselman.com
Introduction
Build has historically been kind of like a black art, in the sense that there are just
a few people who know and understand it and are passionate about it. But in
today’s evolving environment, that is changing. Now more and more people are
becoming interested in build and making it a part of their routine development
activities. Today’s applications are different from those that we were building 5
to 10 years ago. Along with that, the process that we use to write software is
different as well. Nowadays, it is not uncommon for a project to have
sophisticated build processes that include such things as code generation, code
analysis, unit testing, automated deployment, and so on. To deal with these
changes, developers are no longer shielded from the build process. Developers
have to understand the build process so that they can employ it to meet their
needs.
Back in 2005, Microsoft released MSBuild, which is the build engine used to
build most Microsoft Visual Studio projects. That release was MSBuild 2.0.
Since that release, Microsoft has released three major versions of MSBuild—
MSBuild 3.5, MSBuild 4.0, and now MSBuild 4.5. Along with the updates
included in MSBuild 4.5, there are many build-related updates in related
technologies. For example, with Visual Studio 2012, you now have the ability to
share projects with Visual Studio 2010. Another great example is the usage of
NuGet. In many ways, NuGet has changed how we develop and build
applications. In this book, we will look at the updates included in MSBuild 4.5,
as well as other related technologies.
Team Foundation Build (or Team Build as it is more commonly known) is now
in its fourth version. Team Build 2005 and Team Build 2008 were entirely based
on MSBuild, using it for both build orchestration and the build process itself.
Team Build 2010 moved build orchestration to Microsoft Windows Workflow
Foundation and continues to use MSBuild for the low-level build processes.
Team Build 2012 continues this architecture but now supports building in the
cloud using the Team Foundation Service, an updated task-focused user
interface, gated check-in improvements to improve throughput, and better
support for unattended installation.
When developing automated build processes, the next step in many cases is to
automate the publish process. In Visual Studio 2010, the initial support for the
Web Deploy tool was added. In Visual Studio 2012, there have been a lot of
updates to how web projects are published, including first-class support for
publish profiles from the command line, sharing of publish profiles with team
members, database publishing, and more. In this update, we will describe
these updates and show you some real-world examples as well. You’ll see how
the process used in Visual Studio 2012 is much more straightforward than what
was provided in Visual Studio 2010.
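As a concrete taste of that command-line publishing support, a Visual Studio 2012 web project can typically be published using a saved publish profile directly from MSBuild. The project and profile names below are hypothetical:

```
msbuild MyWeb.csproj /p:DeployOnBuild=true /p:PublishProfile=MyProfile /p:VisualStudioVersion=11.0
```

Here, DeployOnBuild triggers the publish pipeline after a successful build, and PublishProfile selects one of the .pubxml profiles that Visual Studio stores under Properties\PublishProfiles.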
Assumptions
To get the most from this supplement, you should meet the following profile:
You’re familiar with MSBuild 4.0 and Team Foundation Build 2010.
You should have experience with the technologies you are interested in
building.
Text that you type (apart from code blocks) appears in bold. In code blocks,
code in bold indicates code added since the previous example.
System requirements
You will need the following hardware and software to complete the practice
exercises in this book:
One of Windows 7 (x86 or x64), Windows 8 (x86 or x64), Windows Server
2008 R2 (x64), or Windows Server 2012 (x64).
Visual Studio 2012, any edition (multiple downloads may be required if using
Express edition products)
Code samples
Most of the chapters in this book include exercises that let you interactively try
out new material learned in the main text. All sample projects, in both their pre-exercise and post-exercise formats, can be downloaded from the following page:
http://aka.ms/MSBuild2ESupp/files
Follow the instructions to download the MSBuild2ESupp_678163_Companion
Content.zip file.
NOTE
In addition to the code samples, your system should meet the System Requirements listed
previously.
NOTE
If the license agreement doesn’t appear, you can access it from the same webpage from which
you downloaded the MSBuild2ESupp_678163_CompanionContent.zip file.
Acknowledgments
The authors are happy to share the following acknowledgments.
Stay in touch
Let’s keep the conversation going! We’re on Twitter:
http://twitter.com/MicrosoftPress.
Chapter 1. What’s new in MSBuild 4.5
The latest version of MSBuild is 4.5, which was released along with Microsoft
.NET Framework 4.5 and Microsoft Visual Studio 2012. When people think of
MSBuild, they often include items that are not technically a part of it. For
example, it’s very common to count updates to the build process, or to the
web build and publish process for Visual Studio projects, as part of MSBuild.
In reality, this support is built on top of MSBuild. With the release of .NET
Framework 4.5 and Visual Studio 2012, you’ll find many updates to pieces
surrounding MSBuild, but only a few updates to the core MSBuild technology
itself. In this chapter, we’ll cover updates to MSBuild, as well as related
technologies that you might already categorize as being part of MSBuild.
When you open an existing solution in Visual Studio 2012, each project falls into one of three categories:

The project can be opened in both Visual Studio 2010 SP1 and 2012 without changes.
One or more changes are required to make the project compatible with both Visual Studio 2010 SP1 and 2012.
The project is no longer supported and cannot be opened in Visual Studio 2012.

For the most part, your solutions should fall into the first category. The
difference between the first and second categories is that, in the second, changes
are required to one or more projects before they can be loaded by both versions.
Web projects are a good example. If you have a web project created with Visual
Studio 2010, it will be modified slightly when it’s first opened in Visual Studio
2012. The project will be modified to use a property for the location of the
related .targets file instead of a hard-coded value. For the third case, there are
some projects that 2012 no longer supports, so you will not be able to load those.
For example, 2012 no longer supports Setup and Deployment projects,
Extensibility projects for 2010, Web Deployment projects, and a few others. A
new property, VisualStudioVersion, was introduced to assist in scenarios where
multiple versions of Visual Studio may be used for a given project. Let’s discuss
this new property now.
VisualStudioVersion property
One of the enablers of sharing projects between Visual Studio 2010 and 2012
was the introduction of a new MSBuild property, VisualStudioVersion. When a
project is built in Visual Studio, or from a developer command prompt, this
property will be set to 11.0 or 10.0 for 2012 and 2010 SP1, respectively. One
example of how this property is used can be found in the web project file. If you
open the .csproj/.vbproj file of a web project, you will see these elements:
<PropertyGroup>
  <VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">10.0</VisualStudioVersion>
  <VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
</PropertyGroup>
<Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" />
<Import Project="$(VSToolsPath)\WebApplications\Microsoft.WebApplication.targets"
        Condition="'$(VSToolsPath)' != ''" />
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets"
        Condition="false" />
Here, you can see that the VisualStudioVersion property is initialized to 10.0 if it
is not defined already. That property is used to determine the path to the
Microsoft.WebApplication.targets file that is imported. Web projects created
with 2010 previously hard-coded this value. By using the VisualStudioVersion
property, these projects can be opened in either Visual Studio 2010 or 2012.
NOTE
You might have noticed the last import, which references the v10.0 targets file with a hard-coded
Condition="false". Believe it or not, this is by design. Without this import, Visual
Studio 2010 would treat the project as if it were out of date and “update” it by reinserting the
import for the v10.0 targets file. To keep this from happening, the import cannot be removed.
When you are building solutions or projects from the command line, you should
be aware of this property and know how it might affect your builds. First, let’s
cover how this property is being set, as it is not a reserved property. This
property is set in the following way:
1. If VisualStudioVersion is defined as an environment variable or global
MSBuild property, that is used as the value of this property.
2. If building an .sln file, the value used will equal Solution File Format
Version - 1.
3. Choose a default: 10.0 if Visual Studio 2010 is installed, or else the value
will be the highest version of the sub-toolset that is installed.
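To see which of these rules applied in a given invocation, a small diagnostic project can print the resolved value. This file is illustrative only and is not part of the book’s samples:

```xml
<!-- showversion.proj (hypothetical): prints the resolved VisualStudioVersion -->
<Project ToolsVersion="4.0" DefaultTargets="ShowVersion"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="ShowVersion">
    <Message Text="VisualStudioVersion: $(VisualStudioVersion)" Importance="high" />
  </Target>
</Project>
```

Running msbuild showversion.proj /p:VisualStudioVersion=11.0 exercises the first rule (a global property wins), while running it with no arguments falls through to the defaults.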
Out-of-process tasks
In MSBuild 4.0, it was difficult to invoke a task under a different context than
the one that the build was executing in. For example, suppose you have a .NET
task that needs to execute under a specific CPU architecture. In the past, you
would have to be a bit creative to ensure that it was executed in the correct
context. Otherwise, the task would simply execute in the same context as the Msbuild.exe process.
There are two different versions of Msbuild.exe: a 32-bit version and a 64-bit
version. If you executed your build using the 32-bit version, then your task
would be loaded in a 32-bit context, and the same goes for 64-bit.
In MSBuild 4.5, it’s a lot easier to ensure that your tasks are loaded in the correct
context. Two updates in MSBuild 4.5 enable this: new parameters for the
UsingTask element and Phantom Task parameters. Let’s start with the updates to
the UsingTask declaration.
UsingTask updates
In the previous edition of this book, we showed many different examples of
using tasks inside MSBuild project files, and we detailed the attributes of
UsingTask in Chapter 4, “Custom tasks.” The two new attributes that have been
added in MSBuild 4.5 are listed in Table 1-1. When these attributes are present,
they affect every invocation of the registered task.
Attribute      Description
Architecture   Sets the platform architecture (bitness) that the task runs under. The allowed values are x86, x64, CurrentArchitecture, or * for any of them.
Runtime        Sets the Common Language Runtime (CLR) version for the task context. The allowed values are CLR2, CLR4, CurrentRuntime, or * for any of them.
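Both attributes can be combined on a single registration. A sketch, in which the task name and assembly path are hypothetical:

```xml
<!-- Hypothetical registration pinning a task to CLR 4 in a 64-bit process -->
<UsingTask TaskName="MyTask"
           Runtime="CLR4"
           Architecture="x64"
           AssemblyFile="$(MSBuildThisFileDirectory)MyTasks.dll" />
```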
For example, if you have a task that requires that it always be executed under a
64-bit architecture, you would add the attribute Architecture=“x64” to the
UsingTask declaration. Without it, the build would succeed when executed with
the 64-bit version of Msbuild.exe (which can be found under
%Windir%\Microsoft.NET\Framework64\), but you would encounter errors when it was
executed with the 32-bit version (which can be found under
%Windir%\Microsoft.NET\Framework\). Let’s see this in action.
Let’s create a task that we can use as an example. We won’t go over the details
of how to create a task here. If you need a refresher, visit Chapter 4 in the
previous edition. In the following snippet, you’ll see the definition of the
PrintInfo task:
using System;
using Microsoft.Build.Utilities;

public class PrintInfo : Task
{
    public override bool Execute()
    {
        // Log the CLR version and the bitness of the current process
        Log.LogMessage("CLR: {0}; process: {1}-bit",
                       Environment.Version, IntPtr.Size * 8);
        return true;
    }
}
You can find this task in the samples that accompany this book. First, let’s look
at the default behavior when invoking this. The next code fragment shows the
contents of Printinfo-01.proj:
<Project ToolsVersion="4.0" DefaultTargets="Demo"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="PrintInfo"
             AssemblyFile="$(MSBuildThisFileDirectory)\BuildOutput\Samples.Ch01.dll" />
  <Target Name="Demo">
    <PrintInfo />
  </Target>
</Project>
Now let’s take a look at the result when this task is executed. From a Visual
Studio 2012 Developer command prompt, execute the command msbuild
printinfo-01.proj. The result is shown in Figure 1-1.
Figure 1-1. The default result when building printinfo-01.proj.
Here, you can see that the task is running under CLR 4.0, in a 32-bit context.
You may be wondering why it’s not running in a 64-bit context. Because we did
not specify the architecture on the UsingTask declaration, the task is loaded in
the context in which the build is executing. So it inherits the architecture of
Msbuild.exe. In this case, we have invoked the 32-bit Msbuild.exe. The Visual
Studio 2012 Developer command prompt will use the 32-bit version of
Msbuild.exe by default. Now let’s invoke it with the 64-bit version of
Msbuild.exe and see the result. To make this simple, in the Visual Studio 2012
Developer command prompt, I created an alias to this executable using doskey
msbuild64=%windir%\Microsoft.NET\Framework64\v4.0.30319\msbuild.exe
$*. In Figure 1-2, you can see the result of running msbuild64 printinfo-01.proj.
Figure 1-2. The result when building printinfo-01.proj with the 64-bit version of Msbuild.exe.
In Figure 1-2, you can see that the PrintInfo task is now executed in a 64-bit
context. If you need a task to load with a specific architecture or CLR run time,
you can tweak the UsingTask element to indicate this. In the following code
snippet, you will see the contents of Printinfo-02.proj, which is very similar to
the previous sample:
<Project ToolsVersion="4.0" DefaultTargets="Demo"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <PrintInfoArch Condition="'$(PrintInfoArch)'==''">x86</PrintInfoArch>
  </PropertyGroup>
  <UsingTask TaskName="PrintInfo"
             Architecture="$(PrintInfoArch)"
             AssemblyFile="$(MSBuildThisFileDirectory)\BuildOutput\Samples.Ch01.dll" />
  <Target Name="Demo">
    <Message Text="PrintInfoArch: $(PrintInfoArch)" />
    <PrintInfo />
  </Target>
</Project>
Here, you can see that a new property, PrintInfoArch, has been added. The
default value for this property is x86. The value of this property is passed in as
the value for the Architecture parameter on the UsingTask element. This will
ensure that the task is always loaded with the specified architecture. Let’s take a
look at the result. In Figure 1-3, you will see the result of executing
msbuild.exe printinfo-02.proj /p:PrintInfoArch=x64.
Figure 1-3. The result when building Printinfo-02.proj and specifying an x64 architecture.
Even though we are using the 32-bit version of Msbuild.exe, the task is loaded
under a 64-bit context. You can use the Runtime attribute on UsingTask to load a
task specifically under CLR 2.0 or CLR 4.0.
In the samples, you will find a few different varieties of the PrintInfo task. Each
class contains the same code, but the containing project targets a different
runtime/architecture. The sample shown here, taken from Printinfo-03-v2.proj,
shows how we can ensure that the PrintInfo task is loaded with CLR 2:
<Project ToolsVersion="4.0" DefaultTargets="Demo"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="PrintInfo"
             Runtime="CLR2"
             AssemblyFile="$(MSBuildThisFileDirectory)\BuildOutput\Samples.Ch01.v2.dll" />
  <Target Name="Demo">
    <PrintInfo />
  </Target>
</Project>
Here, we pass CLR2 as the Runtime attribute value. Similarly, we can ensure that
it’s loaded under CLR 4.0 with the following code from Printinfo-03-v4.proj:
<Project ToolsVersion="4.0" DefaultTargets="Demo"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="PrintInfo"
             Runtime="CLR4"
             AssemblyFile="$(MSBuildThisFileDirectory)\BuildOutput\Samples.Ch01.v4.dll" />
  <Target Name="Demo">
    <PrintInfo />
  </Target>
</Project>
The results of building all these project files are shown in Figure 1-4.
Figure 1-4. The result showing the PrintInfo task loaded under CLR 2.0 and then CLR 4.0.
Figure 1-4 demonstrates that we were able to load a task successfully under a
specific .NET CLR version. We have seen how to control the context in which a
task gets loaded by modifying attributes on the UsingTask element. What may not be entirely
obvious at this point is that you can actually load different versions of the same
task. You can use a couple of new parameters on the task invocation itself,
MSBuildRuntime and MSBuildArchitecture, to indicate which version
should be picked up automatically. Let’s see how this works.
using System;
using Microsoft.Build.Utilities;

public class SayHello : Task
{
    public override bool Execute()
    {
        // Report which CLR version this instance of the task was loaded under
        Log.LogMessage("Hello from CLR {0}", Environment.Version);
        return true;
    }
}

The other implementations of the task are similar. In the code snippet that
follows, you will find the contents of the file Say-hello-01.proj:
<Project ToolsVersion="4.0" DefaultTargets="Demo"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="SayHello"
             Runtime="CLR2"
             AssemblyFile="$(MSBuildThisFileDirectory)BuildOutput\Samples.Ch01.v2.dll" />
  <UsingTask TaskName="SayHello"
             Runtime="CLR4"
             AssemblyFile="$(MSBuildThisFileDirectory)BuildOutput\Samples.Ch01.v4.dll" />
  <Target Name="Demo">
    <SayHello MSBuildRuntime="CLR2" />
    <SayHello MSBuildRuntime="CLR4" />
    <SayHello />
  </Target>
</Project>
In this sample, you can see that we are loading two versions of the SayHello
task. One is built targeting .NET CLR 2.0, and the other targeting .NET CLR 4.0.
In the Demo target, we can see three invocations of the SayHello task. The last
one does not pass any parameters, and the first two specify the value for
MSBuildRuntime. First, let’s take a look at the result and then delve into the
details. You can see the result of building this project in Figure 1-5.
Here, you can see that, as expected, the correct version of the task was invoked
based on the MSBuildRuntime value for the first two invocations. From the
message displayed from the third invocation, it is clear that the CLR 2.0 version
of the task is invoked. When MSBuild encounters a task invocation inside a
target, it will use the following information to find the corresponding task: the
task name, MSBuildRuntime value, and MSBuildArchitecture value. It will
search through all the available tasks (those that have been registered with
UsingTask) and find the first task that meets the criteria. In the previous sample,
when <SayHello MSBuildRuntime="CLR2" /> is encountered, MSBuild will
return the task declared in the first UsingTask, which is Runtime="CLR2". If
there is no UsingTask declaration with Runtime="CLR2", then the first SayHello
task is returned. In our case, the first UsingTask declaration indicated
Runtime="CLR2", so that one was loaded when <SayHello /> is encountered. If
we had defined the Runtime="CLR4" UsingTask declaration first, then that
version would have been loaded instead. In the samples, you can find Say-hello-02.proj, which demonstrates this case. Now let’s take a look at something new: a package manager called NuGet.
NuGet
When you are developing an application, the odds are that you will be reusing
components created by others. For example, suppose you would like to add logging to
your application. Instead of writing your own logging framework, it would be
much easier to use one of the existing ones out there, such as log4net or NLog.
In the past, you would typically have integrated external dependencies in your
application by following these basic steps:
1. Find the developer’s website and download the binaries.
2. Add references to the downloaded binaries in your project.
3. Copy and paste sample code from the website into your project.
After that, you would have to update those references manually. NuGet makes
this process much simpler.
NuGet is available from its home page, nuget.org, as well as the Visual Studio
gallery. On the NuGet extension page in the Visual Studio gallery, NuGet is
described as “A collection of tools to automate the process of downloading,
installing, upgrading, configuring, and removing packages from a VS Project.”
Essentially, NuGet is a package manager for Visual Studio projects. A package is
a self-contained unit that can be installed into a project. NuGet packages can do
all sorts of things, including adding references, adding code files, and modifying
Web.config. NuGet is integrated with Visual Studio 2012, but it is available for
Visual Studio 2010 as well.
Figure 1-7. The search results for “log4net” in the Manage NuGet Packages dialog box.
To install the selected package, you simply click Install. Then the package will
be downloaded and installed into your project automatically. NuGet packages
can depend on other packages. If the package being installed depends on other
packages, they will be installed automatically as well.
When a package is installed for the first time into a project, a few things happen:
A Packages folder is created in the solution directory.
A packages.config file, which lists the installed packages, is added to the project.
References to the assemblies contained in the package are added to the project.
Figure 1-8. The Updates tab of the Manage NuGet Packages dialog box.
To update a selected package, you simply click Update. You can also use the
Installed Packages tab to uninstall packages. Simply click the Uninstall button on
the selected package itself. Earlier, we mentioned that you can also manage
NuGet packages in the Package Manager Console. Let’s take a look at that
experience.
In the Package Manager Console, packages are managed with commands such as
Install-Package, Uninstall-Package, and Update-Package.
Package Restore
Because it’s recommended that you do not check in the Packages folder (after
all, you don’t want a bunch of binaries clogging your repository), there is an
additional step that needs to be taken before checking in a project using NuGet.
You need to enable Package Restore. After this is enabled, when the project is
built it will automatically download the packages as needed. To enable Package
Restore, right-click the solution in Solution Explorer and select Enable NuGet
Package Restore.
NOTE
Alternatively, you can install nuget.exe in a well-known location on your build server and use
that for Package Restore. This would prevent you from checking in several different copies of
nuget.exe for each solution. We won’t cover that here, though.
When you enable package restore on a given solution, a few things happen:
A .nuget folder is created with the required files, including Nuget.targets.
Each project with NuGet packages is updated to import the Nuget.targets file.
After package restore has been enabled, the missing NuGet packages will be
downloaded automatically each time the solution or project is built. Enabling
package restore is essentially a requirement for team scenarios. Because package
restore is implemented using MSBuild, the package restore functionality will be
invoked automatically when your project is built from Visual Studio, the
command line, or a build server. Let’s take a closer look at the package restore
process.
After enabling package restore, if you open the .csproj/.vbproj file, you will find
the Import statement <Import
Project="$(SolutionDir)\.nuget\nuget.targets" />. This MSBuild
.targets file defines the RestorePackages target. This target is injected into the
build process using the following property declaration:
<!-- We need to ensure packages are restored prior to assembly resolve -->
<PropertyGroup>
  <ResolveReferencesDependsOn Condition="$(RestorePackages) == 'true'">
    RestorePackages;
    $(ResolveReferencesDependsOn);
  </ResolveReferencesDependsOn>
</PropertyGroup>

<Target Name="RestorePackages">
  <Exec Command="$(RestoreCommand)"
        LogStandardErrorAsError="true"
        Condition="'$(OS)' == 'Windows_NT' And Exists('$(PackagesConfig)')" />
</Target>
After the execution of this target, all the required assembly references will have
been downloaded into the Packages folder, and your build happily continues. For
more details on the package restore process, take a look at the NuGet.targets file.
Now that we have covered NuGet, we will take a look at another useful Visual
Studio extension.
After this, each time you build your application, the App.config file will be
transformed with the appropriate App.xxx.config transform. This is true whether you
are building in Visual Studio or from the command line. Let’s see how this
works.
In the samples accompanying this book, you will find a console project file,
TransformSample.Console.csproj, which uses SlowCheetah. You will need to
have installed SlowCheetah for this sample to work correctly. This project will
read the application’s configuration file and output the application settings to the
console. Since the App.config file will be updated during build (or F5 in Visual
Studio), this application should output different values when the build
configuration is switched. Let’s take a look at the App.config file for this project,
shown in Example 1-1.
Example 1-1. App.config file contents
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="appName" value="console-default" />
    <add key="url" value="http://localhost:8080/" />
    <add key="email" value="default@localhost.com" />
  </appSettings>
  <connectionStrings configSource="connectionStrings.config" />
</configuration>
The App.config file here is very basic; it just includes a few application settings.
The connection strings are loaded from another file. These are the values that the
application would normally be loaded with. Now let’s take a look at the
transforms. Example 1-2 shows the contents of the App.Debug.config transform.
Example 1-2. App.Debug.config contents
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="appName" value="Demo-Debug"
         xdt:Transform="Replace" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
From the previous listing, you can see that XDT is being used to update the three
appSettings entries when the project is built using Debug mode. Similarly, the
App.Release.config file updates these entries as well, as shown in Example 1-3.
Example 1-3. App.Release.config contents
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="appName" value="Demo-Release"
         xdt:Transform="Replace" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
Let’s see what happens when we run this application in Debug and Release
mode. In Figure 1-11, you can see the result when the application is run in
Debug mode, and Figure 1-12 has the results for Release mode.
Figure 1-11. The TransformSample result when running in Debug mode.
In these two images, we can see that the three application settings (appName,
url, and email) are updated automatically when we hit F5 in Visual Studio. You
can see that the connection strings are updated as well. Because the
connectionStrings element includes
configSource="connectionStrings.config", the values for connection
strings will be taken from ConnectionStrings.config. This file is also transformed
with SlowCheetah. When using SlowCheetah, you are not limited to
transforming App.config; it can transform any XML file. The extension of the
file does not have to be .config. We will not discuss the transformation for
ConnectionStrings.config here, but you can find it in the samples. Now that we
are familiar with the transformations, let’s take a look at how we can invoke
them on a build server.
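Before moving on, here is a sketch of what a transform for an arbitrary XML file can look like. The file names and values are hypothetical, not taken from the book’s samples:

```xml
<!-- Settings.xml (hypothetical source file) -->
<settings>
  <endpoint name="api" url="http://localhost:9000/" />
</settings>

<!-- Settings.Debug.xml (hypothetical transform applied for Debug builds) -->
<settings xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <endpoint name="api" url="http://localhost:9000/debug/"
            xdt:Transform="SetAttributes(url)" xdt:Locator="Match(name)" />
</settings>
```

The SetAttributes transform rewrites only the listed attribute on the element matched by the Locator, leaving the rest of the file untouched.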
<PropertyGroup Label="SlowCheetah">
  <SlowCheetah_EnableImportFromNuGet
      Condition=" '$(SlowCheetah_EnableImportFromNuGet)'=='' ">true</SlowCheetah_EnableImportFromNuGet>
  <SlowCheetah_NuGetImportPath
      Condition=" '$(SlowCheetah_NuGetImportPath)'=='' ">$([System.IO.Path]::GetFullPath( $(MSBuildProjectDirectory)\..\packages\SlowCheetah.2.5.5.1\tools\SlowCheetah.Transforms.targets ))</SlowCheetah_NuGetImportPath>
  <SlowCheetahTargets
      Condition=" '$(SlowCheetah_EnableImportFromNuGet)'=='true' and Exists('$(SlowCheetah_NuGetImportPath)') ">$(SlowCheetah_NuGetImportPath)</SlowCheetahTargets>
</PropertyGroup>
Your project is updated to load the .targets file from the Packages folder.
Now when you build your project, the .targets file will be imported from the
Packages folder. As covered in the Package Restore section earlier in this
chapter, a NuGet package that extends the build process needs to be restored
before the build for your solution/project starts. This is where the
PackageRestore.proj file comes into play. If you open the PackageRestore.proj
file, it will look like the following code block:
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <SolutionDir
        Condition="$(SolutionDir) == '' Or $(SolutionDir) == 'Undefined'"
        Label="SlowCheetahSolutionDir">..\</SolutionDir>
  </PropertyGroup>
  <Import Project="$(SolutionDir)\.nuget\nuget.targets" />
</Project>
NOTE
If you did not have Package Restore enabled when the package was installed, you should
enable Package Restore and then manually uninstall and reinstall the SlowCheetah package.
This is a very basic MSBuild file. All it does is define the SolutionDir property
and then import the Nuget.targets file. If this looks familiar, it’s because these are
the same edits that are made to your project file when you enable
package restore. The PackageRestore.proj file itself does not do the work of
restoring the packages; this is left to Nuget.targets. In order to restore your
packages, you can execute the command msbuild PackageRestore.proj. This
will restore all the packages that your project utilizes. Let’s see how we can
configure Team Build for this scenario.
If you are using Team Build, it is very easy to build the PackageRestore.proj file
before your solution/project is built. When configuring your build definition on
the Process tab, you can select the items that will be built from the Items To
Build list. You can see these items highlighted in Figure 1-13.
When you edit the Items To Build list, you should add the PackageRestore.proj
file to the list and make sure that it is at the top of the list (see Figure 1-14).
Figure 1-14. The Items To Build dialog box.
Because the PackageRestore.proj file is at the very top of the list, it will be built
before the other items. Once this project is built, it will restore the necessary
packages, and the transforms will be executed as expected on your build servers.
That is all you need to enable the transformations to be executed on your build
servers. Now that we have covered SlowCheetah, let’s move on to the next
section where we will do some hands-on experimentation.
Cookbook
This section presents instructions on how to implement and customize some of
the new features in MSBuild 4.5.
How to extend the solution build
One of the most commonly asked questions is “How can I extend the build
process for my solution?” It’s easy to extend the build process for a project, but
extending the solution build is an entirely different thing. It is possible to extend
the solution build process from the command line, but not from within Visual
Studio. The content in this answer relates to command-line builds only.
The .sln file is not an MSBuild file. MSBuild is able to build an .sln file, though.
Because of this, we cannot use the techniques that we’ve already learned to
execute additional targets. Before we get to the specific implementation, let’s
discuss what happens when Msbuild.exe builds an .sln file.
When MSBuild attempts to build an .sln file, it is first converted to an MSBuild
file in memory. That is how MSBuild consumes the solution file. When building
an .sln file at the command prompt, you can have MSBuild write out the
MSBuild version of the .sln file. To do this, create the environment variable
MSBuildEmitSolution and set it to 1. If you set this using the Environment
Variables dialog box, you will need to reopen your command prompt before this
change can take effect. With this environment variable present when you build
an .sln file, two files will be generated in the same folder as the .sln file. Those
two files are named {SolutionName}.sln.metaproj and
{SolutionName}.sln.metaproj.tmp, and they can provide some insight into the
process.
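To try this from a command prompt, you can set the variable for the current session and then build; the solution name here is just a placeholder:

```
set MSBuildEmitSolution=1
msbuild MySolution.sln
```

After the build completes, MySolution.sln.metaproj and MySolution.sln.metaproj.tmp will appear next to the .sln file.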
If you open the .metaproj.tmp file, you will see that the contents are roughly as
shown in the next code segment:
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         InitialTargets="ValidateSolutionConfiguration;ValidateToolsVersions;ValidateProjects">
  <Import
      Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\SolutionFile\ImportBefore\*"
      Condition="'$(ImportByWildcardBeforeSolution)' != 'false' and
                 exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\SolutionFile\ImportBefore')" />
  <Import
      Project="C:\InsideMSBuild\ch01\ExtendSlnBuild\before.ExtendSlnBuild.sln.targets"
      Condition="exists('C:\InsideMSBuild\ch01\ExtendSlnBuild\before.ExtendSlnBuild.sln.targets')" />
  <!-- ... -->
  <Import
      Project="$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\SolutionFile\ImportAfter\*"
      Condition="'$(ImportByWildcardAfterSolution)' != 'false' and
                 exists('$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\SolutionFile\ImportAfter')" />
  <Import
      Project="C:\InsideMSBuild\ch01\ExtendSlnBuild\after.ExtendSlnBuild.sln.targets"
      Condition="exists('C:\InsideMSBuild\ch01\ExtendSlnBuild\after.ExtendSlnBuild.sln.targets')" />
Some irrelevant elements were removed from this code for the sake of space. In
the .metaproj.tmp file, there are two pairs of Import statements: one pair at the
top of the file, which imports MSBuild files before the content in the
.metaproj.tmp file, and another pair at the bottom, which imports MSBuild files
after the content. The difference between importing your MSBuild files before or
after the content is subtle but important. When your file is imported before the
.metaproj.tmp file’s content, that means you will not be able to use any
properties/items that are declared in the .metaproj.tmp file. The advantage of this
approach is that it gives you a chance to set the properties/items first. Typically, I
default to importing my files by using one of the after Import statements. This
way, I can use all the properties and items if needed.
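As a sketch of the before-import option (the file and property names here are hypothetical, not from the samples), a before.{SolutionName}.sln.targets file could pre-set a property that the metaproj content and any after-imports can then consume:

```xml
<!-- before.MySolution.sln.targets: imported before the metaproj content -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Set early so everything evaluated later sees this value -->
    <MyOutputRoot Condition="'$(MyOutputRoot)' == ''">$(MSBuildThisFileDirectory)BuildOutput\</MyOutputRoot>
  </PropertyGroup>
</Project>
```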
In each pair of Import statements, the first Import uses the
$(MSBuildExtensionsPath) property. The code snippet above the Import
declaration in one case is
$(MSBuildExtensionsPath)\$(MSBuildToolsVersion)\SolutionFile\ImportAfter\*
On a 64-bit machine, this evaluates to C:\Program Files
(x86)\MSBuild\4.0\SolutionFile\ImportAfter\* by default. If there are
any files in this folder, they will be imported into the build process for every .sln
file that is built from the command line using Msbuild.exe. This is a good
solution for build servers, but it’s also a bit heavy-handed. It affects every single
build on that machine. Typically, it’s preferred to have a more targeted solution.
That is where the second Import element is used.
In the previous sample, the Import declaration from the bottom pair imports
C:\InsideMSBuild\ch01\ExtendSlnBuild\after.ExtendSlnBuild.sln.targets
The pattern here is that the .targets file is placed in the same folder as the .sln
file and named after.{SolutionName}.sln.targets, where {SolutionName} is the
name of the solution. This is the preferred method to extend the solution build.
Included with the samples is a solution named ExtendSlnBuild.sln. This sample
contains a single project, ClassLibrary.csproj. In the same folder as
ExtendSlnBuild.sln, there is a file called After.ExtendSlnBuild.sln.targets, which
will be picked up and imported into the build process automatically. Let’s take a
look at the contents of that file and then discuss the details. The contents of
After.ExtendSlnBuild.sln.targets are shown in the following code fragment:
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="GenerateCode" BeforeTargets="Build;Rebuild">
    <!-- Placeholder for work that runs before the solution build -->
    <Message Text="GenerateCode executed" Importance="high" />
  </Target>
  <Target Name="AnalyzeCode" AfterTargets="Build;Rebuild">
    <!-- Placeholder for work that runs after the solution build -->
    <Message Text="AnalyzeCode executed" Importance="high" />
  </Target>
</Project>
In this MSBuild file, there are two targets: GenerateCode and AnalyzeCode.
They will be executed before and after the solution build process, respectively.
These targets are injected into the build process using the BeforeTargets and
AfterTargets attributes. A solution file will always have the following four
targets:
Build
Rebuild
Clean
Publish
For this example, we want to execute our targets whenever a build is occurring.
You might have noticed in the sample that in the BeforeTargets/AfterTargets, the
value was defined as “Build;Rebuild” instead of simply Build. The solution file
does not attempt to interpret Build and Rebuild; it simply invokes the
appropriate target on each project file. Because of this, when calling Rebuild on
the solution, the Build target is not invoked on it. This is different from the
typical case for project files. When you build the ExtendSlnBuild.sln file from
the command prompt, the result will be similar to Figure 1-15.
<ItemGroup>
  <FilesToCopy Include="$(MSBuildProjectDirectory)\*.proj" />
</ItemGroup>
<Target Name="Clean">
  <ItemGroup>
    <_FilesToDelete Include="$(OutputFolder)*.proj" />
  </ItemGroup>
  <Delete Files="@(_FilesToDelete)" />
</Target>
</Project>
In this project file, we have just three targets defined. The CopyFiles target will
copy a set of .proj files to the BuildOutput\IncBuild folder. The files to copy are
placed in the FilesToCopy item list. The CopyFiles target has its inputs and
outputs set up so that the target will be skipped if all the files are up to date. The
AfterCopyFiles target will be executed after the CopyFiles. Let’s take a look at
the behavior when the BuildOutput\IncBuild folder is clean. The result of
invoking Inc-build-01.proj in MSBuild is shown in Figure 1-16.
Figure 1-16. The result of building Inc-build-01.proj when the BuildOutput\IncBuild folder is empty.
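For reference, the CopyFiles and AfterCopyFiles targets discussed here (not shown in the excerpt above) might look roughly like the following sketch; the OutputFolder property, the Copy metadata transform, and the message text are assumptions, not taken from the samples:

```xml
<Target Name="CopyFiles"
        Inputs="@(FilesToCopy)"
        Outputs="@(FilesToCopy->'$(OutputFolder)%(Filename)%(Extension)')">
  <!-- Skipped entirely when every destination file is up to date -->
  <Copy SourceFiles="@(FilesToCopy)"
        DestinationFiles="@(FilesToCopy->'$(OutputFolder)%(Filename)%(Extension)')" />
</Target>
<Target Name="AfterCopyFiles" AfterTargets="CopyFiles">
  <Message Text="AfterCopyFiles executed" Importance="high" />
</Target>
```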
Because the files in the BuildOutput\IncBuild directory do not exist, all the
source files will be overwritten. You can see from the result in Figure 1-16 that
the AfterCopyFiles target was successfully executed as expected. Now let’s see
what the result is when we build Inc-build-01.proj a second time. The result is
shown in Figure 1-17.
In the result, you can see that the CopyFiles target was completely skipped, but
the AfterCopyFiles target was not skipped at all. Because the AfterCopyFiles
target does not have any inputs or outputs, it will never be skipped. So how can
we execute AfterCopyFiles only if the CopyFiles target is executed? You might
think that you could copy the inputs and outputs from CopyFiles and paste them
on the AfterCopyFiles target. This doesn’t work because the CopyFiles target
executes before AfterCopyFiles, so all the outputs will be up to date every time
AfterCopyFiles target is ready to execute. Because of that, the target will always
be skipped.
In short, there is no straightforward way to ensure that the AfterCopyFiles target
gets executed only when CopyFiles is. You could manage the inputs and outputs
of the AfterCopyFiles target to achieve this, but there is no built-in support that
we can use for the general case.
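One way to manage those inputs and outputs yourself is to gate AfterCopyFiles on a stamp file that the target itself writes. This is a sketch, not something the samples include; the stamp file name is an assumption:

```xml
<Target Name="AfterCopyFiles" AfterTargets="CopyFiles"
        Inputs="@(FilesToCopy)"
        Outputs="$(OutputFolder)AfterCopyFiles.stamp">
  <Message Text="AfterCopyFiles executed" Importance="high" />
  <!-- Touch the stamp so this target is skipped until the inputs change again -->
  <Touch Files="$(OutputFolder)AfterCopyFiles.stamp" AlwaysCreate="true" />
</Target>
```

Because the stamp is updated only when AfterCopyFiles actually runs, the target stays up to date exactly when the source files have not changed, which mirrors the skip behavior of CopyFiles.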
So now let’s go back to the original question, which relates specifically to how
Microsoft Visual Basic and C# projects are built. The target that calls the C# or
Visual Basic compiler is CoreCompile, which you can find in the
Microsoft.CSharp.targets file for C# and Microsoft.VisualBasic.targets file for
Visual Basic. If you look at that target carefully, you might notice the following
CallTarget invocation at the end:
<CallTarget Targets="$(TargetsTriggeredByCompilation)"
Condition="'$(TargetsTriggeredByCompilation)' != ''"/>
In this case, CallTarget is being used to invoke the targets in the property
TargetsTriggeredByCompilation. In order to have your target executed when your
project artifacts are built, all you need to do is append your target to this list.
You can find an example of how to accomplish this in
CoreCompileExtension.csproj. The elements added to this file are shown in the
following code fragment:
<PropertyGroup>
  <TargetsTriggeredByCompilation>
    $(TargetsTriggeredByCompilation);
    CustomAfterCompile
  </TargetsTriggeredByCompilation>
</PropertyGroup>
<Target Name="CustomAfterCompile">
  <Message Text="********* CustomAfterCompile executed"
           Importance="high" />
</Target>
Here, you can see that the CustomAfterCompile target is appended to the
TargetsTriggeredByCompilation list. You append to this list instead of simply
overwriting it because other targets may be using this same feature. By using this
technique, if the CoreCompile target is skipped, your target will not be executed
either. Now let’s move on to another sample, which shows how we can inject a
new target into the build process of a project.
<Import Project="$(CustomBeforeMicrosoftCommonTargets)"
Condition="'$(CustomBeforeMicrosoftCommonTargets)' != '' and
Exists('$(CustomBeforeMicrosoftCommonTargets)')"/>
And similarly, at the bottom of the file, you will find the next Import statement:
<Import Project="$(CustomAfterMicrosoftCommonTargets)"
Condition="'$(CustomAfterMicrosoftCommonTargets)' != '' and
Exists('$(CustomAfterMicrosoftCommonTargets)')"/>
In the sample from Chapter 8, we simply placed the file in the default location
for these files. Instead, we can specify the file path for either
CustomBeforeMicrosoftCommonTargets or
CustomAfterMicrosoftCommonTargets. By default, I use
CustomAfterMicrosoftCommonTargets to ensure that all the properties/items of
the project itself are made available to my build script. There may be some cases
where you will need to use the alternate property. Now let’s show how to use this
technique.
In the samples, you will find Extend-build-01.proj, the contents of which are
shown in the following code block:
<ItemGroup>
  <ProjectsToBuild Include="Samples.Ch01\Samples.Ch01.csproj" />
  <ProjectsToBuild Include="Samples.Ch01.v2\Samples.Ch01.v2.csproj" />
  <ProjectsToBuild Include="Samples.Ch01.v4\Samples.Ch01.v4.csproj" />
</ItemGroup>
<Target Name="Demo">
  <MSBuild Projects="@(ProjectsToBuild)"
           Properties="CustomAfterMicrosoftCommonTargets=$(MSBuildThisFileDirectory)extend-build-01-After.proj">
    <Output ItemName="ProjOutputs" TaskParameter="TargetOutputs" />
  </MSBuild>
</Target>
In this build script, you can see that we have defined an item called
ProjectsToBuild. This contains a list of C# projects that will be built. We pass
this item list to the MSBuild task so that it can be built. When we do so, we pass
the additional property
CustomAfterMicrosoftCommonTargets=$(MSBuildThisFileDirectory)extend-
build-01-After.proj. Because of this, the file Extend-build-01-After.proj will be
imported automatically into the build process for each of the C# projects. Let’s
look at the contents of that file, shown in the next code snippet:
<Project ToolsVersion="4.0"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="RunCustomTool">
    <Message Text="%(Compile.FullPath)" />
  </Target>
  <PropertyGroup>
    <BuildDependsOn>
      RunCustomTool;
      $(BuildDependsOn);
    </BuildDependsOn>
  </PropertyGroup>
</Project>
This project file is pretty simple. It contains one target declaration, as well as a
PropertyGroup. The target, RunCustomTool, will run the custom tool on all the
source files. In this sample, we just output the full path of the files that will be
compiled. This target is injected into the list of targets to be executed by
prepending it to the BuildDependsOn property. Let's look at the output when we
execute the command msbuild.exe extend-build-01.proj (see Figure 1-18).
In Figure 1-18, you can see that when each project is built, the RunCustomTool
target is executed before the build process for each individual project. This is
actually really fascinating, and it also can be very useful for build lab scenarios. I
like to call this target injection, as we can literally inject targets (and other
elements) into the build process for a given project without even changing it.
The approach shown here is good, but it has one drawback. It requires that you
create and maintain two different MSBuild files. It would be better if we could
achieve the same thing with a single file. As it turns out, we can; let’s see how to
do that.
In the previous example, we have two different MSBuild files being used:
Extend-build-01.proj, which is the file driving the build process, and Extend-
build-01-after.proj, which is the file containing the elements being injected. We
can basically combine both of these files. Take a look at the contents of the file
Extend-build-02.proj:
<Project ToolsVersion="4.0"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <ProjectsToBuild Include="Samples.Ch01\Samples.Ch01.csproj" />
    <ProjectsToBuild Include="Samples.Ch01.v2\Samples.Ch01.v2.csproj" />
    <ProjectsToBuild Include="Samples.Ch01.v4\Samples.Ch01.v4.csproj" />
  </ItemGroup>
  <Target Name="Demo">
    <MSBuild Projects="@(ProjectsToBuild)"
             Properties="CustomAfterMicrosoftCommonTargets=$(MSBuildThisFileFullPath)">
      <Output ItemName="ProjOutputs" TaskParameter="TargetOutputs" />
    </MSBuild>
  </Target>
  <Target Name="RunCustomTool">
    <Message Text="%(Compile.FullPath)" />
  </Target>
  <PropertyGroup>
    <BuildDependsOn>
      RunCustomTool;
      $(BuildDependsOn);
    </BuildDependsOn>
  </PropertyGroup>
</Project>
This project file contains all the elements from both files, and the invocation of
the MSBuild task has been updated to pass in
CustomAfterMicrosoftCommonTargets=$(MSBuildThisFileFullPath). If
you build this file, you will see that the result is the same as building the Extend-
build-01.proj file. The drawback of having everything in a single file is that
when your target projects get built, you will be importing elements that may be
irrelevant to that build process. For example, the Demo target is imported for
each C# project being built, and it is not always needed. In many cases, this will
not be an issue, but if there is a property/item name collision, that could cause
some conflicts.
In this chapter, we have covered what’s new with MSBuild 4.5 as well as
updates to related technologies. There may not have been a lot of core updates to
the MSBuild engine, but there have been significant updates to how projects are
built. For example, NuGet and Project Compatibility did not exist when Visual
Studio 2010 was released. Both of
these items have a significant impact on the build process for Visual Studio
solutions. Now that we have covered what’s new in and around MSBuild, let’s
move on to look at the Team Build updates.
Chapter 2. What’s new in Team
Foundation Build 2012
In the first part of this chapter, we’ll look at the new features that are available in
Team Foundation Build 2012, as well as the improvements to Microsoft
Windows Workflow Foundation 4.5 that you can use when customizing or
creating build process templates. In the second part of this chapter, we’ll look at
how to use some of these new features, step by step.
Installation
The installation process for Team Foundation Build 2012 is largely unchanged
from Team Foundation Build 2010, but there have been some changes to system
requirements as well as improved support for unattended installation, which
we’ll cover in this section.
System requirements
The system requirements for Team Foundation Build 2012 have not changed
significantly. The hardware requirements are the same, but Team Foundation
Build 2012 now supports the following operating systems:
64-bit version of Windows Server 2008 with SP2 (Standard or Enterprise
edition)
This means that Team Foundation Build 2012 is not supported on Windows XP,
Windows Vista, Windows Server 2003, or 32-bit versions of Windows Server
2008.
Unattended installation
Team Foundation Build 2012 also supports unattended installation by allowing
configuration to be done unattended using the Tfsconfig command-line tool.
There are three steps to performing an unattended installation of Team
Foundation Build 2012:
1. Create an unattended configuration file. A stub configuration file can be
created by running tfsconfig unattend /create /type:build
/unattendfile:unattend.ini.
TIP
The Tfsconfig.exe file is located in %ProgramFiles%\Microsoft Team Foundation Server
11.0\Tools once Team Foundation Build 2012 has been installed.
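Putting the steps together at an elevated command prompt might look like the following sketch; verify the exact options against the documentation for your TFS version:

```
cd /d "%ProgramFiles%\Microsoft Team Foundation Server 11.0\Tools"

rem Step 1: create a stub unattend file for the build configuration
tfsconfig unattend /create /type:build /unattendfile:C:\unattend.ini

rem Step 2: edit C:\unattend.ini to suit your environment, then
rem Step 3: apply the configuration
tfsconfig unattend /configure /unattendfile:C:\unattend.ini
```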
In the Team Foundation Build 2012 cookbook section later in this chapter, we’ll
look at how to implement a common unattended installation scenario in more
depth.
Team Foundation Service
Perhaps the biggest change in Team Foundation Build 2012 is one that isn’t
technically part of Microsoft Visual Studio 2012 or Team Foundation Server
2012—the introduction of Team Foundation Service.
The Team Foundation Service is a cloud-based version of Team Foundation
Server, which allows you to use Team Foundation Server in a matter of minutes,
without having to set up your own infrastructure. A Free Plan is available, which
supports up to five users with an unlimited number of team projects. It also
supports a wide variety of the features available in the on-premise Team
Foundation Server, including the following:
Version control (including Git support)
Currently, the Build service is in “preview,” which means it can be used without
charge. According to the service’s pricing page, when the feature exits this
“preview” mode, a certain number of builds will be available for free each
month (if you’re on the Free Plan), and builds beyond that will incur an
additional charge.
The Team Foundation Service supports both on-premise build controllers and
agents (where they’re hosted on your hardware and network) and a single build
controller/agent (per account) hosted in the cloud. To use the hosted build
controller/agent, simply select Hosted Build Controller from the Build Defaults
tab of the New Build Definition dialog box, as shown in Figure 2-1.
Figure 2-1. Selecting the Hosted Build Controller.
On this tab, you may also notice a new option for the drop location (now called
Staging Location) that allows you to put the build outputs into version control.
This option is available only for build definitions on the Team Foundation
Service and is very important because the Team Foundation Service does not
have access to a Universal Naming Convention (UNC) path to drop the build
outputs. This path needs to be placed under the drop’s folder at the root of each
team project, and a subdirectory will be created automatically under the selected
path based on the build definition’s name. If you choose to delete drops as part
of your retention policy, the drop’s folder in version control will be destroyed so
that that space can be reclaimed by the database.
Each time you build using the Hosted Build Controller, you’ll notice a short
delay before the build begins. This delay is because the build controller/agent is
reimaged automatically before each build.
The Hosted Build Controller is a single machine per each account, so it will be
shared across all the team projects and build definitions in that account. It is the
quickest and easiest way to start using builds in the Team Foundation Service,
but it does have a number of limitations that may affect its suitability for your
purposes. This machine will run only a single build at a time, which may be a
problem if you have a large number of build definitions or need to process a
large number of builds.
If you determine that the Hosted Build Controller won’t meet your needs, then
you can use on-premise build controllers/agents, which is discussed in the Team
Foundation Build 2012 cookbook section of this chapter. Some of the limitations
of the Hosted Build Controller are
The build process does not run with administrative privileges on the build
controller/agent.
The build controller/agent may not have the software installed that your build
process requires. A list of the software installed on the build controller/agent
is available at http://tfs.visualstudio.com/en-us/learn/build/hosted-build-
controller-in-vs/#software.
The most significant limitation is that the Hosted Build Controller cannot
build Windows Store applications.
Because the build process does not run with administrative privileges, you
can’t install software as part of the build process template to work around this
limitation. However, you can use software that can be XCopy-deployed
(including NuGet packages).
The build controller/agent does not run interactively, so it cannot run tests
that require an interactive session (such as Coded UI tests).
You can still customize build process templates in the Team Foundation Service,
just as you can for Team Foundation Server. However, you need to keep these
limitations in mind to ensure that your customizations work in the hosted
environment. You can detect programmatically whether you’re running in the
Team Foundation Service by using the IsVirtual property of IBuildServiceHost,
which is accessible using the ServiceHost property from either IBuildController
or IBuildAgent.
Team Explorer
Visual Studio 2012 includes a significant update of the Team Explorer user
interface, which is now streamlined to make common tasks easier to complete
and to provide “at-a-glance” information within Team Explorer itself. The
hierarchical tree that had been in previous versions is now replaced with a series
of pages that focus on specific tasks. Figure 2-2 shows the Builds page of Team
Explorer, which replaces the Builds tree node from Visual Studio 2010.
My Builds
The My Builds section (see Figure 2-3) automatically shows your six most
recent builds, which allows you to see at a glance the outcome of recent builds
that you’ve triggered. Double-clicking a build takes you to that build’s details,
and right-clicking it allows you to take actions quickly, such as retaining a build,
retrying a failed build, or reconciling your workspace after a gated checkin. You
can also click Actions and My Builds to open Build Explorer in a form that is
filtered based on builds you queued.
Each build definition now has an icon that provides two pieces of information
about it: what type of trigger it uses, and an overlay that indicates whether it is
paused or disabled. Figure 2-5 shows a build definition of each trigger type,
along with its associated icon and information on its status (Batched, Paused,
Disabled, and so on).
One feature that at first appears to be missing from Visual Studio 2012 is the
ability to double-click All Definitions to view the whole build queue. However,
you can still get to this view by clicking Actions and then Manage Queue.
Favorites
You can mark a build definition as a favorite by right-clicking it in Team
Explorer and choosing Add To Favorites. You’ll then see the build definition
listed in the My Favorite Build Definitions section, as shown in Figure 2-6, with
a summary of the most recent build that completed and a histogram showing the
last nine builds for that build definition. Pausing over the definition will give you
a summary of the definition, including what trigger it uses, whether it’s enabled
or not, and information about the most recent build. It is also possible to make a
build definition a Team Favorite, but this can be done only from Web Access.
Extensibility
Visual Studio 2012 supports extending Team Explorer by adding new pages and
adding sections to existing pages. In the Team Foundation Build 2012 cookbook
section of this chapter, we’ll show you an example of extending the Builds page
with new functionality.
Queue details
In Visual Studio 2010, you couldn’t double-click a queued build to view details
about it; but in Visual Studio 2012, when you double-click a queued build
(technically a build request), you see the Build Request window, shown in
Figure 2-7. This window provides information about how many requests are
queued for the build controller, as well as the build definition that this build
request is for, the position of this specific request in the queue, and the average
wait time in the queue and build time based on previous builds. In addition, the
Build Request window warns you if the build definition has been paused.
Web Access
One of the side effects of the introduction of the Team Foundation Service is that
web access in Team Foundation Server 2012 has become more feature-rich. The
home page for a team (see Figure 2-8) shows a tile for each build definition that
has been marked as a Team Favorite. This tile shows the same histogram that is
shown in Team Explorer, and although pausing over the bars of the histogram
will show you information about each of the builds, clicking an individual bar
won’t take you to that build’s details as it does in Team Explorer. Rather,
clicking anywhere inside the time will take you to Build Explorer and show you
recently completed builds for that build definition.
Figure 2-8. Team Favorites in Web Access.
Now you can also queue builds from within Web Access by clicking the Queue
Build link on the Build Explorer page. You can specify basic settings when
queuing the build, as shown in Figure 2-9, but you can’t specify values for any
custom parameters, which may limit the usefulness of this feature.
Figure 2-9. The Queue Build dialog box in Web Access.
2. Check the unit test framework’s assemblies into your build controller’s
custom assemblies location to allow the tests to be run automatically as
part of the build.
3. Modify the build definition to use the Visual Studio Test Runner, as shown
in Figure 2-10. To get to this dialog box, click the Process tab in the build
definition, click the Automated Tests parameter in the Basic category, and
then click the ellipsis button that appears.
Figure 2-10. Selecting the Visual Studio Test Runner.
NOTE
When upgrading from Team Foundation Server 2010 to Team Foundation Server 2012, the
build definitions will default to using the MSTest.exe Test Runner or the MSTest Test
Metadata File Test Runner, each of which provides backward compatibility with Team
Foundation Server 2010.
3. Right-click the build for the integration and choose Start Now.
4. If the build succeeds, enable the build definition, which will allow the
other queued builds to be processed.
5. If the build doesn’t succeed, submit checkins to fix the issues and force
them through using Start Now. Once all the issues have been resolved,
unpause the build definition.
Batching
For teams with a high volume of checkins or long running builds, adopting gated
checkin can result in throughput that doesn’t keep up with demand. Team Build
2012 introduces the concept of batching, which will group multiple gated
checkins into a single build in an attempt to build them together. If this succeeds,
all the changesets will be checked in; otherwise, the checkins will be retried
individually as separate builds.
Batching is enabled by checking the Merge And Build Up To X Submissions
check box on the build definition’s Trigger tab (see Figure 2-12) and entering the
maximum number of submissions that can be in each batch.
Figure 2-12. Enabling batching when editing a build definition.
TIP
Batching too many builds together can increase the chance of merge conflicts and build and
test failures, which can result in decreased throughput. Therefore, you need to experiment to
determine the optimal batch size for your build definitions.
If the build succeeds, then these requests have now been completed. However, if
the build fails, those requests can be pushed back into the queue and trigger
additional builds. This means that a build can include multiple requests, but also
that a request may be part of multiple builds. In Figure 2-14, you can see a
request that is included as part of multiple builds.
Figure 2-14. Build Explorer showing a failed build containing two requests.
In Figure 2-14, you can also see a second batching behavior. The build
20120516.8 contains two requests: one that contained a change that would
compile successfully (Build Request 13) and one that wouldn’t (Build Request
14). This build failed because of a compilation error, but the
DefaultProcessTemplate then queued the requests to be retried individually. The
retries subsequently became builds 20120516.9 (which contained Build Request
13) and 20120516.10 (which contained Build Request 14). Because Build
Request 13 compiled successfully by itself, it was committed; but Build Request
14 still failed to compile, so it was rejected (and not retried again).
You can also determine what requests make up a build and what builds a request
was part of using the Team Build application programming interface (API). The
IBuildDetail interface has a new property called Requests, which returns a
readonly collection of IQueuedBuild instances that initiated the build. You can
also determine the builds that a request was a part of during its lifetime using the
Builds property on IQueuedBuild.
This default behavior provides the best of both worlds because changes that
batch together successfully will provide high throughput. When the batch fails to
build successfully, though, the individual requests will be built individually,
providing additional feedback about the cause of the failure and allowing valid
requests to still be committed. This behavior exists during the sync, build, and
test phases of the build process, although during sync, because it’s possible to
determine the specific requests that caused the failure, only the requests that
failed to unshelve will be retried, and the rest will continue to build.
Besides batching, there are other ways in which a request may be associated with
multiple builds. If you retry a build (as described in the User interface (UI)
enhancements section earlier in this chapter), the requests in that build will be
associated with the new build. If a build controller loses connectivity with the
Team Foundation Server, any builds that it runs will be retried automatically,
causing the requests in those builds to be associated with multiple builds.
The logic that accepts, rejects, or retries requests is driven by activities called
from the workflow, which enables you to add batching support to your custom
build process templates or modify the default batching logic in the out-of-the-
box template.
Logging
Team Build 2012 introduces two new features to help debug build and
infrastructure issues. The first, diagnostic logging, makes diagnostic logs
available regardless of the logging verbosity shown in the build log; and the
second is Operational and Analytic logs on the build controllers and agents
themselves.
Diagnostic logging
Diagnostic logging is one of the most useful tools when debugging build process
template issues because it includes the inputs and outputs for each activity, as
well as including activities that have been configured to log only at higher
verbosity levels. In Team Build 2010, you enabled diagnostic logging by setting
the Logging Verbosity option to Diagnostic in the build definition, or when
queuing the build, which would increase the verbosity shown in Visual Studio
when opening the build.
In previous versions of Team Build, it wasn’t practical to leave diagnostic
logging on because it made the build log harder to read, decreased the
performance of viewing it, and increased the size of the TFS database
unnecessarily. This meant that diagnostic logs were usually turned off, and as a
result, they were rarely available when you needed them, so you would have to
enable them temporarily, try to reproduce the issue (which may be time
consuming or impossible), and then disable them again.
In Team Build 2012, diagnostic logs are copied to the build’s drop location in
XML format regardless of the logging verbosity configured in the build
definition or at queue time. Because these logs are always there, it is
significantly easier to investigate intermittent issues, and because they’re not
stored in the TFS database, they don’t have an impact on the size of the database
or the performance of viewing the build log in Visual Studio.
These diagnostic logs are dropped automatically by the build controller and
agents when either the build completes (for the build controller) or the
AgentScope exits (for the build agents). These logs are dropped to the Logs
subdirectory, and you’ll find a separate XML log file for the build controller and
for each build agent involved in the build, as well as an Extensible Stylesheet
Language (XSL) transform that will format the XML files for viewing. (See
Figure 2-15 for an example of this.) To view the formatted XML files, simply
open them with Windows Internet Explorer, and the XSL transformation will be
applied automatically, as shown in Figure 2-16.
Figure 2-15. Diagnostic log XML files and stylesheet dropped for a build.
For long-running builds, you may want to access the diagnostic logs while the
build is still running. You can do this by selecting Diagnostics and then Request
Logs from the build log, as shown in Figure 2-17. When you do this, the
intermediate logs will be copied to a Logs\Intermediate\<Timestamp> directory
within the build’s drop location. You can select Diagnostics and then View Logs
to view the most recent intermediate diagnostic logs, or you can select the
specific set of intermediate logs you’d like to view if you’ve requested them
multiple times during the build. On the build controller and build agent, these
logs are temporarily written to %Temp%\<BuildController|BuildAgent>\
<BuildControllerId|BuildAgentId>\Logs\<BuildId>.
Figure 2-16. ActivityLog.xml opened in Internet Explorer, showing the stylesheet applied automatically.
MORE INFO
For a more comprehensive list of Windows Workflow Foundation 4.5 changes, visit
http://msdn.microsoft.com/en-us/library/hh305677(v=VS.110).aspx.
Workflow Designer
This release brings a number of features to the Workflow Designer that
drastically improve the productivity of working with the build process templates
in Team Build.
Visual Studio 2012 also includes improvements to Find In Files. When you
double-click a search result, it will open the associated Extensible Application
Markup Language (XAML) file and automatically navigate to the location of the
search result within that workflow.
Outline view
To navigate around a large workflow in Visual Studio 2010, you either needed to
expand the entire workflow and browse it that way, or figure out which part of the
hierarchy to expand to find what you wanted. In either case, you were slowed
down by the Workflow Designer repainting as you navigated around the
interface. While the find improvements help somewhat, that’s only if you know
what you’re looking for.
Visual Studio 2012 now supports the document outline view (which was
available previously for other hierarchical files such as HTML) for workflow
files. You can open this view by clicking View, Other Windows, and Document
Outline. In Figure 2-20, you can see the document outline for the
DefaultProcessTemplate build process template. Note that when you click an
activity in the document outline, it will take the Workflow Designer to the
location of that activity and select it.
Figure 2-20. The document outline view of the DefaultProcessTemplate build process template.
Annotations
Clear and concise comments that describe intent improve the understandability
of code and, because workflows are essentially graphical code, they’d benefit
just as much from containing comments. In Visual Studio 2010, there were a
couple of workarounds that allowed you to do this: either adding XML
comments to the XAML itself (which wouldn’t be visible in the Workflow
Designer), or adding such comments as the DisplayName to activities that don’t
have any behavior, such as sequences (which adds noise to the workflow’s
structure). Both of these approaches have their drawbacks.
Windows Workflow Foundation 4.5 adds a first-class commenting feature called
annotations. You can add an annotation to any activity by right-clicking it,
clicking Annotations, and then clicking Add Annotation. Figure 2-21 shows an
activity that has had an annotation applied to it.
Figure 2-21. Workflow activity with an annotation applied.
Workflow Runtime
Because Team Build is its own workflow host, the majority of improvements to
the Workflow Runtime don’t apply to the Team Build environment. There is one
improvement worth discussing, however, and that is support for C# expressions.
C# Expressions
One of the most anticipated features of Windows Workflow Foundation 4.5 is
support for C# expressions. Unfortunately, Team Build 2012 does not fully
support this feature. The build process templates themselves are still restricted to
using Microsoft Visual Basic expressions because they do not get compiled. If
you try to deploy a build process template that uses C# expressions, you’ll see a
NotSupportedException.
Custom activity libraries, which get deployed as a compiled assembly, can fully
use C# expressions and can be consumed by the build process template without
any problems. Note that you can’t mix and match activities that use different
expression languages in the same assembly, so if you have existing assemblies in
your custom activities, you’ll either need to fully convert the existing activities
to use C# expressions or put your C# activities in a separate assembly.
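As a hedged sketch of such a compiled activity library (the activity name and argument are invented for illustration, not taken from the book's samples), a simple custom build activity might look like this:

```csharp
// Sketch of a custom build activity compiled into its own assembly,
// which a build process template can consume regardless of the
// template's own expression language. Names here are illustrative.
using System.Activities;
using Microsoft.TeamFoundation.Build.Client;

[BuildActivity(HostEnvironmentOption.All)]
public sealed class CountRequests : CodeActivity<int>
{
    // The build whose batched requests we want to count.
    public InArgument<IBuildDetail> Build { get; set; }

    protected override int Execute(CodeActivityContext context)
    {
        return Build.Get(context).Requests.Count;
    }
}
```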
5. Verify that the AgentCount setting is the number of agents you want
created on each machine that you perform the unattended installation on.
Figure 2-22. The Manage Build Controllers dialog box showing the controller name.
7. Verify that the CollectionUrl setting is correct. If you generated the stub
configuration file from a machine that is already connected to the correct
Team Project Collection, then this should already be correct.
8. Specify the credentials that the Team Build service will run with in the
ServiceAccountName and ServiceAccountPassword settings. Depending
on how your Team Foundation Server is configured, you can either use a
built-in account (for example, NT AUTHORITY\NETWORK SERVICE)
or a domain account.
9. Save the file and copy it to a location that is available from the build
machines you want to configure. In this example, we’ll assume that the file
was copied to the same location (%Temp%\Unattendbuild.xml, or the file
name you chose to work with) on the machine being configured. The
resulting file for this example is
[Configuration]
Activity=Microsoft.TeamFoundation.Admin.TeamBuildActivity

; You can submit information about your Team Foundation Server configuration and administration experience to Microsoft.
SendFeedback=True

; The type of build configuration to perform. 'Create' creates an agent and controller, 'Scale' adds agents to a new or existing controller, 'Replace' replaces a host, controller, and/or agents, and 'HostOnly' just creates a service host.
ConfigurationType=scale

; The name of the new build controller. This is typically the machine name that the controller runs on.
ExistingControllerName=VSALM - Controller

; Account that the build Windows service will run as. On a domain-joined machine, this can be a domain account or NT Authority\Network Service. On a workgroup machine, it can be a local account or NT Authority\Local Service.
ServiceAccountName=VSALM\BuildSvc
ServiceAccountPassword=P2ssw0rd

; Port that the TFS web site binds to. The port must be an integer greater than 0 and less than 65535.
Port=9191

; The maximum number of concurrent builds that the controller will create.
MaxConcurrentBuilds=0
Next, perform the unattended installation to put the Team Foundation Server
2012 binaries on the build machine by running tfs_server.exe /quiet from
the installation media. As mentioned earlier in the chapter, if the machine
doesn’t already have .NET Framework 4.5 on it the first time you run this
command, it will install only the prerequisites (including .NET Framework 4.5)
and then exit. You then need to reboot the machine and rerun this command to
install the Team Foundation Server 2012 binaries.
At this point, the Team Foundation Server 2012 binaries are on the machine, but
it hasn’t been configured as a build agent. That is where the unattended
configuration file that we created earlier comes in. To configure the machine as a
build agent, run this code:
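The command itself did not survive in this excerpt. TFS 2012 unattended configuration is driven by the TfsConfig tool, so the invocation was presumably along these lines (the exact switch names are an assumption; verify against the TfsConfig documentation):

```shell
# Sketch: apply the unattended build configuration file created earlier
# (switch names are assumed, not confirmed by this excerpt).
tfsconfig unattend /configure /unattendfile:%Temp%\Unattendbuild.xml
```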
If you’d like to verify the configuration before applying it, you can run the
following command, but this needs to be done on a machine that hasn’t already
been configured:
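The verification command is also missing from this excerpt; it presumably mirrored the configure command with a verify switch (again an assumption to check against the TfsConfig documentation):

```shell
# Sketch: validate the unattend file without applying it
# (switch name is assumed, not confirmed by this excerpt).
tfsconfig unattend /verify /unattendfile:%Temp%\Unattendbuild.xml
```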
5. In the Add Team Foundation Server dialog box, shown here, enter the URL
of your Team Foundation Service account and click OK.
9. On the Configure Build Machine page (shown here), enter the credentials
you want to run the Team Build service as. Unlike when configuring Team
Build for an on-premise Team Foundation Server, these credentials won’t
be used to connect to the Team Foundation Service. However, because the
service runs under these credentials, they will affect access to local or
network resources (such as the drop location) and, depending on the
corporate network configuration, may also affect the service’s ability to
connect to the Internet (and therefore the Team Foundation Service). Click
Next to continue.
11. The wizard will now run the readiness checks, as shown here. If there are
issues, you need to resolve them before you can complete the
configuration.
12. Finally, click Configure, and the configuration settings will be applied.
MORE INFO
For a broader overview of extending Team Explorer in Visual Studio 2012, visit Chad Boles’s
sample on the Visual Studio Developer Center at
http://code.msdn.microsoft.com/vstudio/Extending-Explorer-in-9dccd594.
To extend Team Explorer, you first need to install the Visual Studio 2012
software development kit (SDK), which can be downloaded from
http://www.microsoft.com/en-us/download/details.aspx?id=30668. Once this is
done, you can proceed.
Start by creating a new project using the Visual Studio Package template in the
Extensibility category of the language of your choice, as shown in Figure 2-23
(this example uses Microsoft Visual C#).
Figure 2-23. The New Project dialog box for a Visual Studio Package.
This will start the Visual Studio Package Wizard. For this example, you can
accept the defaults on each page of the wizard, although in a real-life scenario,
you should enter information to identify your package on the Basic VSPackage
Information page of the wizard (shown in Figure 2-24).
Figure 2-24. The Basic VSPackage Information page of the Visual Studio Package Wizard.
Microsoft.VisualStudio.TeamFoundation.Build from
%ProgramFiles%\Microsoft Visual Studio
11.0\Common7\IDE\PrivateAssemblies (contains the Team Build Visual
Studio integration API)
The first thing to do once the project is created is modify the Visual Studio
Extension (VSIX) manifest to specify that this project should be installed as a
Managed Extensibility Framework component. To do this, perform the following
steps:
1. Double-click source.extension.vsixmanifest in Solution Explorer.
6. From the Project drop-down list, select the project that contained
source.extension.vsixmanifest. At this point, the dialog box should look
like this:
7. Click OK.
We’re now going to add a class to the Visual Studio package that represents our
new Builds page section. Right-click the project; choose Add, Class; name the
new class BuildStatisticsSection; and change its accessibility to public. You
should end up with a class that looks like this:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Company.BuildStatisticsSample
{
public class BuildStatisticsSection
{
}
}
Next, we’ll add the following namespace imports that we’ll need:
Microsoft.TeamFoundation.Build.Client
Microsoft.TeamFoundation.Client
Microsoft.TeamFoundation.Controls
Microsoft.VisualStudio.TeamFoundation.Build
System.ComponentModel
Team Explorer uses attributes to discover page and section extensions. Next,
we’re going to apply the TeamExplorerSection attribute to our new class so that
Team Explorer can discover it. This attribute takes the following three
parameters:
An ID for the section. This should be a GUID that is unique to your custom
section (that is, if you create multiple sections, they should not share the same
ID).
The ID of the page you want the section to appear on. This is another
GUID that identifies which Team Explorer page the section is a part of. The
IDs for the built-in pages are available from
Microsoft.TeamFoundation.Controls.TeamExplorerPageIds.
The priority of the section within that page. This is an Int32 value that
determines how the section is sorted within the page. Team Explorer shows
the sections in order of priority, so you can use this value to place your
section at a specific location within the page.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Controls;
using Microsoft.VisualStudio.TeamFoundation.Build;
namespace Company.BuildStatisticsSample
{
[TeamExplorerSection("E52594FD-490A-4218-9D89-25B16500AA32",
TeamExplorerPageIds.Builds,
10)]
public class BuildStatisticsSection : ITeamExplorerSection
{
public BuildStatisticsSection()
{
Title = "Build Statistics";
IsExpanded = true;
IsVisible = true;
IsBusy = false;
}
public void Cancel()
{
}
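The excerpt shows only part of the class. For reference, the remaining ITeamExplorerSection members would be stubbed roughly as follows. This is a minimal placeholder sketch; the event-argument type names reflect the Microsoft.TeamFoundation.Controls API as best recalled, so verify them against the SDK:

```csharp
// Sketch: minimal stubs for the rest of ITeamExplorerSection.
// Auto-properties here do not raise PropertyChanged; a real section
// should raise it so Team Explorer picks up state changes.
public string Title { get; set; }
public object SectionContent { get; set; }
public bool IsVisible { get; set; }
public bool IsExpanded { get; set; }
public bool IsBusy { get; set; }

public void Initialize(object sender, SectionInitializeEventArgs e) { }
public void Loaded(object sender, SectionLoadedEventArgs e) { }
public void SaveContext(object sender, SectionSaveContextEventArgs e) { }
public void Refresh() { }
public object GetExtensibilityService(Type serviceType) { return null; }
public void Dispose() { }

public event PropertyChangedEventHandler PropertyChanged;
```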
At this point, if you debug the package (by clicking Debug, Start Debugging), it
will start an experimental instance of Visual Studio; and if you connect to a
Team Foundation Server and switch to the Builds page in Team Explorer, you’ll
see your new section, as shown in Figure 2-25.
Figure 2-25. A custom section in Team Explorer.
Stop debugging (by closing the experimental instance of Visual Studio), and now
we’ll add some static content to this section. To do this, create a user control that
will contain our content as follows:
1. Right-click the project and choose Add, User Control. Name the user
control BuildStatisticsSectionView and click Add.
2. Add a TextBlock with the text “Hello World” as a child of the Grid
element, so that the XAML now looks like this:
<UserControl
x:Class="Company.BuildStatisticsSample.BuildStatisticsSectionView"
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
mc:Ignorable="d"
d:DesignHeight="300" d:DesignWidth="300">
<Grid>
<TextBlock Text="Hello World" />
</Grid>
</UserControl>
public BuildStatisticsSection()
{
Title = "Build Statistics";
IsExpanded = true;
IsVisible = true;
IsBusy = false;
SectionContent = new BuildStatisticsSectionView();
}
Now if you start debugging again and switch to the Builds page in Team
Explorer, you’ll see that the section now contains the static content you added, as
shown in Figure 2-26.
Figure 2-26. The custom section in Team Explorer showing static content.
Stop debugging again, and now we’ll modify the section to display some
dynamic content instead of the static content. Do this by performing the
following steps:
1. Open the BuildStatisticsSectionView user control code-behind file (by
right-clicking it and choosing View Code).
public BuildStatisticsSectionView()
{
InitializeComponent();
DataContext = this;
}
3. Add a dependency property that we can use to pass information into this
view. For this example, we’re going to add an integer that represents the
number of builds that have completed in the last hour:
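The dependency-property code for this step is not shown in this excerpt. A standard WPF declaration for it would look roughly like the following sketch (the property name matches the RecentlyCompletedBuildCount binding used in the view):

```csharp
// Sketch: standard WPF dependency property for the build count
// (the original code for this step is missing from this excerpt).
public int RecentlyCompletedBuildCount
{
    get { return (int)GetValue(RecentlyCompletedBuildCountProperty); }
    set { SetValue(RecentlyCompletedBuildCountProperty, value); }
}

public static readonly DependencyProperty RecentlyCompletedBuildCountProperty =
    DependencyProperty.Register(
        "RecentlyCompletedBuildCount",
        typeof(int),
        typeof(BuildStatisticsSectionView),
        new PropertyMetadata(0));
```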
5. Replace the Hello World TextBlock added earlier with the following code:
<StackPanel Orientation="Horizontal">
    <TextBlock VerticalAlignment="Top" Margin="0,0,4,0">Recently Completed Builds:</TextBlock>
    <TextBlock VerticalAlignment="Top" Text="{Binding RecentlyCompletedBuildCount}" />
</StackPanel>
7. Modify Initialize to call the Refresh method so that a refresh will be forced
when the Initialize method is called by Team Explorer:
var contextManager = (ITeamFoundationContextManager)m_ServiceProvider.GetService(
    typeof(ITeamFoundationContextManager));
var buildService = (IVsTeamFoundationBuild)m_ServiceProvider.GetService(
    typeof(IVsTeamFoundationBuild));
var buildServer = buildService.BuildServer;
//Performance optimizations
buildDetailSpec.InformationTypes = new string[] { };
buildDetailSpec.QueryOptions = QueryOptions.None;
View.RecentlyCompletedBuildCount = buildQueryResult.Builds.Length;
}
finally
{
IsBusy = false;
}
}
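Pieced together, the query portion of this refresh logic would look roughly like the following sketch. The one-hour MinFinishTime filter is an assumption based on the stated goal, and field names such as m_ServiceProvider and View follow the surrounding excerpt:

```csharp
// Sketch: query builds that finished in the last hour and push the
// count into the view (restating fragments shown in the excerpt).
var context = contextManager.CurrentContext;
var buildDetailSpec = buildServer.CreateBuildDetailSpec(context.TeamProjectName);
buildDetailSpec.MinFinishTime = DateTime.Now.AddHours(-1); // assumed filter

// Performance optimizations: skip information nodes and extra data.
buildDetailSpec.InformationTypes = new string[] { };
buildDetailSpec.QueryOptions = QueryOptions.None;

var buildQueryResult = buildServer.QueryBuilds(buildDetailSpec);
View.RecentlyCompletedBuildCount = buildQueryResult.Builds.Length;
```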
Now when you start debugging and switch to the Builds page in Team Explorer,
you should see the number of recently completed builds (as shown in Figure 2-
27), and this will refresh the view each time you click the Refresh button at the
top of the Team Explorer window.
In Visual Studio 2012, the Publish Web dialog box has been extensively updated.
The dialog box now consists of several different tabs. Even though the dialog
box has more functionality, the overall experience is simpler. This is especially
the case when the Import functionality is used to populate the settings. In
Figure 3-2, you can see the new Publish Web dialog box.
The Publish Web dialog box consists of four tabs. On the Profile tab, you can
manage your profiles. To create a new profile, you can either click Import and
select an existing .publishSettings file, or you can create a new profile manually
by selecting the New option from the Select Or Import A Publish Profile drop-
down list. A .publishSettings file is a simple XML file that contains the
publishing information. This file is produced by many web hosting providers and
can be used with Visual Studio or WebMatrix. If your hosting provider does not
make these files available, you should demand that they do. These
.publishSettings files are different from the .pubxml files created with Visual
Studio. The .pubxml files contain the remote endpoint information, as well as
values that are specific to the publishing requirements of your project. In
contrast, the .publishSettings file just contains the publishing endpoint
information. The other difference is that a .publishSettings file can contain more
than one set of publish settings. For example, Windows Azure Web Sites
includes both a Web Deploy profile and the File Transfer Protocol (FTP)
settings.
Figure 3-2. The Visual Studio 2012 Publish Web dialog box.
Here is a basic publishing scenario: You have an existing ASP.NET project that
you need to publish to a remote web host. Your host provides a .publishSettings
file, which you can import into Visual Studio. In my case, I’m publishing to
Windows Azure Web Sites, but this flow works for any hosting provider that
supports .publishSettings files. To open the Publish Web dialog box, right-click
the web project in Solution Explorer and select Publish, which will open the
dialog box shown in Figure 3-2. You can use the Import button to import the
.publishSettings file. After importing the file, you will be brought to the
Connection tab automatically. You can see this tab in Figure 3-3.
Figure 3-3. The Connection tab of the Publish Web dialog box.
The values from the .publishSettings file are used to populate all the settings on
the Connection tab. Depending on your hosting provider, you may need to
specify the User Name and Password information here. You can also click
Validate Connection to double-check that all the settings are correct. We will
discuss the Connection tab in more detail when we demonstrate creating a
package in the Building web packages section later in this chapter. The next tab
in this dialog box is the Settings tab, shown in Figure 3-4.
Figure 3-4. The Settings tab of the Publish Web dialog box.
On this tab, you can specify the build configuration that should be used during
publishing by choosing an item from the Configuration drop-down list. When
you configure this value, keep in mind that these values are drawn from the
project build configurations, not solution build configurations. If you expect to
see an additional value in the drop-down list but do not, odds are that you
created a solution build configuration, but not a corresponding project build
configuration. You can fix this by using the Configuration Manager in Visual
Studio. One thing to be aware of with respect to this value: it’s used only for the
Visual Studio publishing process. For command-line scenarios, you need to
specify the value for Configuration, as you would for any other build. Sayed has
a good blog post with more details at
http://sedodream.com/2012/10/27/MSBuildHowToSetTheConfigurationProperty.aspx
After you click Next, you will be taken to the Preview tab.
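For the command-line scenario just mentioned, a sketch of such an invocation (project, configuration, and profile names are placeholders):

```shell
# Sketch: pass Configuration explicitly when publishing from the
# command line; names below are placeholders.
msbuild MyWeb.csproj /p:Configuration=Release /p:VisualStudioVersion=11.0 /p:DeployOnBuild=true /p:PublishProfile=MyProfile
```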
On the Preview tab, you can see the operations that will be performed when you
publish your application. There are two areas: Files and Databases. In Figure 3-
5, you can see the Preview tab populated with data from the SampleWeb project.
Figure 3-5. The Preview tab of the Publish Web dialog box.
NOTE
You can double-click a file to see the difference between the local file and the remote file.
Because this project, SampleWeb, does not contain any databases, you only see
file-related operations. When dealing with files, there are three possible Action
types: Add, Update, and Delete. Because I’ve never published this project
before, all the Action values are set to Add. At this point, we are ready to go, so
click Publish to start the process. You can monitor the progress in the output
window. After publishing your project, if a value was provided for the
Destination URL on the Connection tab, that URL will be opened in a browser
after a successful publish. Now that you have been introduced to the Publish
Web dialog box, let’s discuss how to create a web package in Visual Studio
2012.
2. You can customize the package process by using the .pubxml file.
3. You can package from the command line in the same way that you publish.
When you create the package profile in the Publish Web dialog box, the
Connection tab will look like Figure 3-6.
Figure 3-6. The Connection tab for the package profile.
In Figure 3-6, you can see two input fields: Package Location and
Site/Application. Package Location should contain the path to the .zip file that
you want to produce. This is a required field. The value for Site/Application is
optional, but if you know the website or application that you are publishing to,
you can provide the site name or application path here. When the package is
published, this value will be used for the Web Deploy parameter IIS Web
Application Name. Now let’s create a package and take a look at the .pubxml file
that was created.
Included in the samples is the PackageSample project. If you open that project,
you will see that a package profile is defined. This profile, like other profiles, is
stored in the PublishProfiles folder under Properties (My Project for Microsoft
Visual Basic). Here are the contents of the ToPkg.pubxml file:
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>Package</WebPublishMethod>
    <SiteUrlToLaunchAfterPublish />
    <DesktopBuildPackageLocation>C:\InsideMSBuild\PublishOutput\PkgSample-default\PackageSample.zip</DesktopBuildPackageLocation>
    <PackageAsSingleFile>true</PackageAsSingleFile>
    <DeployIisAppPath />
    <PublishDatabaseSettings>
      <Objects xmlns="" />
    </PublishDatabaseSettings>
  </PropertyGroup>
</Project>
In this profile, you can see that WebPublishMethod is set to Package, which
indicates that this is a profile that can be used to create a package. The path for
the package is stored as the MSBuild property DesktopBuildPackageLocation.
The other notable item here is the PublishDatabaseSettings property. Because
my application did not contain any databases, this property is essentially empty.
Even though it is empty, you should not remove it from the .pubxml file. You can
easily automate the process of creating a package by following the same
technique you use to automate the publishing process. Specifically, you’ll create
a publish profile and then use it to automate the process. Let’s now take a closer
look at publish profiles, including how to use them to automate packaging and
publishing.
Publish profiles
When you publish or package using the Publish Web dialog box, a publish
profile is created. The publish profile contains all the settings entered into the
Publish Web dialog box, as well as some options not surfaced in the dialog box.
You can use these profiles from either Visual Studio or the command
line. After your first publish profile is created, when you reopen the Publish Web
dialog box, you are taken to the Preview tab with the most recently used profile
automatically selected. On the Preview tab, you can switch profiles quickly
using the drop-down list at the top of the dialog box. If you need to publish to a
new destination, just go back to the Profile tab and create a new profile. You can
have as many profiles defined as you like.
Publish profiles are saved in a folder named PublishProfiles under Properties
(My Project for Visual Basic projects). Each profile will be saved into its own
file with the extension of .pubxml. These files will be added to the project, and to
source control, by default. Your publishing password will be saved in a .user file,
which can be decrypted only by you and is not checked into version control, so
you don't have to worry about unauthorized publishing actions. If you want
to keep a profile out of the sight of others, you can simply exclude the .pubxml
file from the project and source control. When the Publish Web dialog box is
opened, it will inspect the folder for the list of all profiles, not just profiles that
are a part of the project. Now let’s take a closer look at a sample .pubxml file.
In the following code block, you will see the contents of a Visual Studio publish
profile that was created when I imported a .publishSettings file (these files are
provided by hosting companies):
<Project ToolsVersion="4.0"
xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<PropertyGroup>
<WebPublishMethod>MSDeploy</WebPublishMethod>
<LastUsedBuildConfiguration>Release</LastUsedBuildConfiguration>
<LastUsedPlatform>Any CPU</LastUsedPlatform>
<SiteUrlToLaunchAfterPublish>http://sayedha.azurewebsites.net</SiteUrlToLaunchAfterPublish>
<ExcludeApp_Data>False</ExcludeApp_Data>
<MSDeployServiceURL>waws-prod-blu-001.publish.azurewebsites.windows.net:443</MSDeployServiceURL>
<DeployIisAppPath>sayedha</DeployIisAppPath>
<SkipExtraFilesOnServer>True</SkipExtraFilesOnServer>
<MSDeployPublishMethod>WMSVC</MSDeployPublishMethod>
<EnableMSDeployBackup>True</EnableMSDeployBackup>
<UserName>$sayedha</UserName>
<_SavePWD>True</_SavePWD>
</PropertyGroup>
</Project>
From this code block, you can see that the .pubxml file is an MSBuild file. The
properties declared are specific to the publishing method that is being used. Each
.pubxml file has a single profile and contains all the values that are used by the
Publish Web dialog box for this particular profile. This file is used by the Visual
Studio user interface, but you can also employ this from the command line.
Command-line publishing is supported only for the following publishing
methods: Web Deploy, Web Deploy Package, and File System.
Web publish profiles are designed to allow you to extend the build and
publishing process for a given publishing operation. When a publish profile is
used to publish your application, the publish profile will be imported into the
project itself. Because the .pubxml file is imported into the project file, you have
full access to all MSBuild properties and items defined in the project. Because of
this, from the .pubxml file, you can customize the build process and the
publishing process. From the second edition, you may remember that you could
customize the publishing process by editing the .wpp.targets file. Let’s look at
how to use this profile to publish the project from the command line.
NOTE
Depending on the web host that you are publishing to, you may need to also add
/p:AllowUntrustedCertificate=true.
TIP
If you are building a Visual Studio project file instead of the solution file, you should also
specify the value for /p:VisualStudioVersion=11.0. Without this, the default value of 10.0
will be used.
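The command itself is missing from this excerpt; a typical invocation, with placeholder solution, profile, and password values, would look something like this:

```shell
# Sketch: build the solution and publish with a named profile
# (solution name, profile name, and password are placeholders).
msbuild MySolution.sln /p:DeployOnBuild=true /p:PublishProfile=MyProfile /p:Password=MyPassword
```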
With this command, the solution file will be built and published. When the
DeployOnBuild property is set to True, the build process will be extended to
publish the project as well. The name of the publish profile is passed in as the
PublishProfile property. When specifying the value for PublishProfile, you have
two options. You can pass in the name of the profile, in which case the build will
use the named profile from the default location, or you can pass in the file path
to the .pubxml file. Now let’s look at how to use this same approach to create
packages.
With this command, you can override specific properties as well. For example,
we showed previously that the package location is stored in the .pubxml file as
an MSBuild property, DesktopBuildPackageLocation. If you would like to
override the location where the package is created, pass the property as a
command-line argument. For example, if I wanted to publish the package to
C:\Temp\AltDest\Mypackage.zip, you can use the following command (which
shows the value for DesktopBuildPackageLocation in bold).
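The command referenced here is also missing from this excerpt; it presumably resembled the following (the solution name is a placeholder, while the profile name ToPkg and the output path come from the sample):

```shell
# Sketch: override the package output path from the command line
# (the solution name is a placeholder).
msbuild MySolution.sln /p:DeployOnBuild=true /p:PublishProfile=ToPkg /p:DesktopBuildPackageLocation=C:\Temp\AltDest\Mypackage.zip
```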
There is still one scenario in which a .wpp.targets file makes sense: when you
want to extend the publishing process for all publish profiles. If you have
existing projects with a .wpp.targets file, you do not need to modify them; they
will continue to work. For new projects, you should place publish
customizations in the publish profile. Let's move on to discuss the database
support built into web publishing.
In Figure 3-7, you can see the ContactsContext class on the Settings tab of the
Publish Web dialog box. In this case, there is a message indicating that you will
need to create EF Code First migrations to publish the database associated with
the context. Once you add migrations for the context, then the Execute Code
First Migrations check box will be enabled. After adding a migration, when you
re-enter the Publish Web dialog box, you can enter a destination connection
string and enable the migrations to be executed. The connection string provided
will be used for executing both the migrations and the run-time connection
string.
TIP
If you have a project with an EF Code First context and do not see it in the Publish Web dialog
box, close the dialog box, rebuild the project, and then reopen the dialog box.
When you publish or package your web project, the final Web.config file will
have the elements required to invoke the migrations. The migrations will be
executed the first time that the EF Code First context is accessed. If your
Web.config file does not have a connection string entry for the EF Code First
context, then one will be added automatically to the published Web.config file.
Now that we have discussed EF Code First contexts, let’s move on to discuss the
DACPAC support that is built in.
In other words, a DACPAC contains all the schema artifacts that the database
consists of. The significance of the words portable artifact should be highlighted
here. The aspect that makes a DACPAC portable is the incremental publish
support that is built on top of it. When using a DACPAC during publish time, the
schema captured in the DACPAC is compared to that of the target database. The
publish process will compute the difference between the DACPAC and the target
database and then execute the difference against the target database. If the two
are equal, then a no-op will be performed. Let’s see how this works during the
Publish Web workflow.
When you open the Publish Web dialog box, if you have any connection strings
in the Web.config file that are not associated with an EF Code First context, then
you will see those on the Settings tab. For example, in Figure 3-8, you can see
the Settings tab for the ContactsSample project.
Figure 3-8. The Publish Web dialog box with a database selected for publishing.
When you check the Update Database check box on the Settings tab when your
web project is published or packaged, a DACPAC is created from the source
connection string. This DACPAC is then transferred to the remote server to
publish the database-related artifacts. This is facilitated by the new dbDacFx
Web Deploy provider. This process is depicted in Figure 3-9.
Figure 3-9. A Web and DACPAC publishing diagram.
In Figure 3-9, you can see that a DACPAC is created from the source database
and placed in a Web Deploy package (or a folder for the direct publish case), and
the web content is also placed there. The database schema will be published first,
followed by any web updates. Both of these processes will be incremental; that
is, only the changes will be applied, not a full publish. In Figure 3-9, the dotted
line represents a firewall that may be in place. When publishing, if you do not
have direct access to the remote database (which is common for many cloud
hosting providers by default), that is OK so long as the Web Deploy server has
access to it. When creating a Web Deploy package, the DACPAC will be placed
inside the package and Web Deploy parameters will be created so that you can
update the connection string during publishing. We will now discuss how to
create a Web Deploy package with a DACPAC.
In the samples, you will find the ContactsSample project, which is a basic web
application that stores contacts in a Microsoft SQL Server database. When
creating a package for this on the Settings tab, I’ve chosen to package the
database and provide a default connection string as well. This was shown
previously in Figure 3-8. The resulting package will have the DACPAC for the
source database in the root of the package. Let’s see what happens when you
import this package using the Microsoft IIS Manager user interface. Using IIS
Manager, you can right-click a site and then select Import Application under the
Deploy menu to import a Web Deploy package (see Figure 3-10).
Figure 3-10. The Import Application option in IIS Manager.
TIP
If you do not see the Import Application option, you need to install Web Deploy with the IIS
Manager Extensions option checked.
After selecting the package to be imported, you will be prompted to fill in the
values for the Web Deploy parameters (as shown in Figure 3-11).
Figure 3-11. Parameter prompts in IIS Manager for the ContactsSample package.
In Figure 3-11, you can see three parameters. The first parameter will define the
IIS App path where your application will be installed. The next two parameters
are connection strings for the DACPAC. The first is for the connection string
used to publish the database-related artifacts, and the final one is for the run-time
connection string that goes in the Web.config file. If you want to use a lower-
privileged connection string at run time, you can do so. After clicking Next, the
database publish operations will be performed, followed by an update of the site
itself. Now that we’ve discussed database publishing with DACPACs, let’s discuss
the updates that are available for Web.config transforms.
When the Web.config file is being transformed, if either the build configuration
transform or the profile-specific transform does not exist, that particular
transform will simply be skipped. Let’s take a look at how this works.
When Visual Studio 2012 was initially released, the underlying support to
invoke these transforms existed in the web MSBuild targets, but there was no
way to create these transforms easily. You had to create the transforms manually.
In the ASP.NET 2012.2 update for Visual Studio 2012, a new context menu was
added to help you create these transforms. With this update, you can create a
profile-specific transform easily by right-clicking the .pubxml file and selecting
Add Config Transform. You can see this new menu option in Figure 3-13.
Figure 3-13. The Add Config Transform menu option for publish profiles.
When you invoke the Add Config Transform command, it will create the
Web.config transform in the root of the project with the correct name and open it
automatically. In the samples, you will find a project, TransformSample, that
contains the ToPackage.pubxml publish profile. This publish profile is used
when creating a web deploy package for this project. In this project, we have
created the following transforms:
Web.debug.config
Web.release.config
Web.ToPackage.config
Along with the Web.config file, the contents of these transforms are shown next.
We will leave off the Web.debug.config file because it is not used in this demo.
Web.config file
<configuration>
  <appSettings>
    <add key="default" value="default" />
  </appSettings>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
</configuration>
Web.release.config
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="release" value="from-release" xdt:Transform="Insert" />
  </appSettings>
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
Web.ToPackage.config
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="to-package" value="from-ToPackage-transform" xdt:Transform="Insert" />
  </appSettings>
</configuration>
Transformed Web.config file
<configuration>
  <appSettings>
    <add key="default" value="default" />
    <add key="release" value="from-release" />
    <add key="to-package" value="from-ToPackage-transform" />
  </appSettings>
  <system.web>
    <compilation targetFramework="4.0" />
  </system.web>
</configuration>
In this file, you can see that the release transform inserted the release app setting
and removed the debug attribute from the compilation element. You can also see
that the Web.ToPackage.config transform was invoked. Another subtle thing to
notice here is the order in which the app settings were inserted. The release
setting was inserted before the to-package element. This indicates that the
Web.release.config transform was invoked before Web.ToPackage.config.
Another feature released with Visual Studio 2012 is the ability to preview these
transforms. In Visual Studio 2010, if you wanted to see the resulting Web.config
transform, you would have to either publish or package your project, which
made developing these transforms much more difficult than it should have been.
In Visual Studio 2012, however, you can now preview Web.config transforms
easily. The preview functionality works for build configuration transforms as
well as profile-specific ones. You can right-click and select Preview Transform
on any of the transforms. This new option is shown in Figure 3-14.
With these updates for Web.config transforms, it’s much easier to create and use
Web.config transforms. This concludes the Web.config transform content, as
well as the section covering the new features. We will now move on to look at
some real-world examples.
Cookbook
Install-Package PackageWeb
Once the package has been installed in your project, it will extend the package
process. When you create a package after installing PackageWeb, you will see a
new file, Publish-Interactive.ps1, in the output location. This is a Windows
PowerShell script that can be used to publish this package. From a PowerShell
prompt, you can invoke this script to start the publish process. Once you invoke
this script, you will be prompted for the following set of values:
Web.config transform to execute
After providing these values, the Web.config file will be transformed with the
given transform and the publish operation will be invoked. Let’s see this in
action. In the samples, the project PkgWebDemo already has PackageWeb
installed. After creating the package, you can invoke Publish-Interactive.ps1 to
start the publish process. Figure 3-16 shows PackageWeb prompts for the Web
Deploy publish settings. Because there are no Web Deploy parameters created
for the sample, you are not prompted for those.
Figure 3-16. PackageWeb prompts for the Web Deploy settings.
In Figure 3-17, you can see that the source folder structure is replicated inside
the generated PackageSample.zip file. This behavior is annoying, but it can go
beyond that and cause real difficulties if you need to expand this package. When
publishing with Web Deploy, the depth of these folders does not matter, but if
you expand them on disk and manipulate the files, you may exceed the
maximum path length. To avoid this, it would be better to create a Web Deploy
package that did not have these unnecessary folders. Let’s see what it would take
to simplify the folder structure here.
When creating a package using web projects, the following basic steps are
followed:
1. Build the project.
2. Gather the files to be packaged into a temporary folder.
3. Create the package from the contents of that folder.
Step 3 is executed by creating an XML file that describes how to create the
package. This is referred to as a Source Manifest file. You can find this file in the
same folder in which the package is created. If you inspect the file generated
when packaging the PackageSample project, you will find the contents to be as
shown in the following code block:
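The manifest typically has a shape along the following lines; the paths here are shortened, hypothetical stand-ins for the full temporary package folder path:

```xml
<sitemanifest>
  <IisApp path="C:\...\PackageSample\obj\Release\Package\PackageTmp"
          managedRuntimeVersion="v4.0" />
  <setAcl path="C:\...\PackageSample\obj\Release\Package\PackageTmp"
          setAclResourceType="Directory" />
  <setAcl path="C:\...\PackageSample\obj\Release\Package\PackageTmp"
          setAclUser="anonymousAuthenticationUser"
          setAclResourceType="Directory" />
</sitemanifest>
```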
In this manifest, you can see that three Web Deploy providers will be called
when the package is created. Each of these providers references the full path to
the temporary package folder. These are the values shown in bold in this code,
and this is what we want to update during the package creation process. You can
see the goal in Figure 3-18.
Figure 3-18. The process to update file paths for the generated package.
As you can see, we will replace the path during the package operation. We will
do this with a Web Deploy replace rule. Let’s see how to do that with some
customizations to the .pubxml file.
When creating a package using Web Projects, you have the ability to replace
values as the .zip file is being created. This is facilitated by using a replace rule.
The replace rule that we want to create should match the package path and
replace it with a much simpler value. To add a Web Deploy replace rule, you
need to populate the MSDeployReplaceRules item list in the .pubxml file before
the package is created. The next code fragment needs to be added to the .pubxml
file to simplify these paths. The entire profile can be found in the
PackagePath.pubxml file in the PackageSample project.
<PropertyGroup>
<PackagePath Condition=" '$(PackagePath)'=='' ">website</PackagePath>
<PackageDependsOn>
$(PackageDependsOn);
AddReplaceRuleForAppPath;
</PackageDependsOn>
</PropertyGroup>
<Target Name="AddReplaceRuleForAppPath">
  <PropertyGroup>
    <_PkgPathFull Condition=" '$(WPPAllFilesInSingleFolder)'!='' ">$([System.IO.Path]::GetFullPath($(WPPAllFilesInSingleFolder)))</_PkgPathFull>
    <!-- $(WPPAllFilesInSingleFolder) is not available on VS2010, so fall back to $(_PackageTempDir) -->
    <_PkgPathFull Condition=" '$(_PkgPathFull)' == '' ">$([System.IO.Path]::GetFullPath($(_PackageTempDir)))</_PkgPathFull>
  </PropertyGroup>
  <!-- Convert the path to a regular expression and register the replace rule -->
  <EscapeTextForRegularExpressions Text="$(_PkgPathFull)">
    <Output TaskParameter="Result" PropertyName="_PkgPathRegex" />
  </EscapeTextForRegularExpressions>
  <ItemGroup>
    <MSDeployReplaceRules Include="ReplacePackagePath">
      <Match>$(_PkgPathRegex)</Match>
      <Replace>$(PackagePath)</Replace>
    </MSDeployReplaceRules>
  </ItemGroup>
</Target>
In this fragment, you can see the AddReplaceRuleForAppPath target. This target
is injected into the package process by appending it to the PackageDependsOn
property. When this target is invoked, it will determine the full path to the
temporary package folder. This path is converted to a regular-expression format
by using the EscapeTextForRegularExpressions task. Then the value is appended
to the MSDeployReplaceRules item list. As a result, when the package is
created, the complex folder structure will be replaced with a folder named
Website, defined in the PackagePath property. When you create the package
after these changes, you can see the new structure of the created .zip file in
Figure 3-19.
Figure 3-19. A simplified view of the package structure.
In Figure 3-19, you can see that the complex folder structure has been replaced
with the Website folder, in which all the web content that will be published
resides. Now that we’ve shown how to create better web packages, we will move
on to the next sample.
msdeploy.exe
-verb:sync
-source:contentPath="<source-path>"
-dest:contentPath="<dest-path>"
Because we are attempting to synchronize two folders, the sync verb is used and
we use contentPath for both the source and destination. The source folder that
we want to publish is C:\InsideMSBuild\Ch03\FolderPublish\ToPublish, and we
would like to publish it to the Media folder under the FolderPub site. Let’s make
a first attempt to figure out what the final command might look like:
msdeploy.exe
-verb:sync
-source:contentPath="C:\InsideMSBuild\ch03\FolderPublish\ToPublish"
-dest:contentPath="FolderPub/Media"
This command would work great if the site you want to publish to were running
on the local machine. Because it is not, we will need to start adding some information
to the destination to indicate the server against which this command should
execute. We will need to add the following parameters to the command:
ComputerName. The URL, or computer name, that will handle the publish
operation.
UserName. The user name to authenticate with.
Password. The password for the given user.
AuthType. The authentication scheme to use.
In this case, the values that we will use for these are
ComputerName. https://waws-prod-bay-001.publish.azurewebsites.windows.net/msdeploy.axd?site=FolderPub
Username. $FolderPub
AuthType. Basic
For Windows Azure Web Sites, you can find these values in the publish profile,
which you can download from the Azure portal. Let’s add these values to the
command:
msdeploy.exe
-verb:sync
-source:contentPath="C:\InsideMSBuild\ch03\FolderPublish\ToPublish"
-dest:contentPath='FolderPub/Media'
,ComputerName="https://waws-prod-bay-001.publish.azurewebsites.windows.net/msdeploy.axd?site=FolderPub"
,UserName='$FolderPub'
,Password='%password%'
,AuthType='Basic'
-enableRule:DoNotDeleteRule
-whatif
In this command, we’ve added the destination values, as well as two additional
options: -enableRule:DoNotDeleteRule and -whatif. We pass the
DoNotDeleteRule to ensure that any files in the folder that are on the server but
not the client remain on the server. For now, we are also passing -whatif, which
displays the command’s operations without actually performing them, but we
will remove that when we are ready to publish the folder. You can find the result
of this command in Figure 3-20.
At this point, we are ready to execute this command and publish the folder. You
can find this command in the samples for Chapter 3 in the file
FolderPublish\publishFolder-standard.cmd. There is another cmd file in that
same folder, called PublishFolder-auto.cmd. This file shows how you can use
this same technique with the -dest:auto provider. We won’t cover that here, but it
is in the samples for you to reference.
In this chapter, we have covered a lot of new material, including the Publish Web
dialog box, updates to website project publishing, packaging, publish profiles,
and more. That’s a lot of material to discuss in just a few pages, and we didn’t
even cover all the new features. This chapter should serve as a solid starting
point for your journey in web publishing. From here, the best thing to do is
practice. If you get stuck, try StackOverflow.com (and you can typically find
Sayed hanging around there as well—if you see him, say hello).
Index
Symbols
$(MSBuildExtensionsPath) property, How to extend the solution build
.pubxml file
comparison to .publishSettings file, Overview of the new Publish Web
dialog box
replace rules in, Customizing the folder structure inside the package
.sln (solution file), What’s new in MSBuild 4.5, How to extend the solution
build
A
accessing
diagnostic logs, Diagnostic logging
adding
build agents to build controllers, Workflow Runtime
applications
debugging, XML updates with SlowCheetah
B
batching, Batching
build configurations
application, Package Restore
build controllers
adding build agent to existing, Workflow Runtime
build definitions
all build definitions feature, My Builds
filtering, My Builds
Build Explorer
filtering build definitions in, My Builds
requests, Batching
debugging, Batching
builds
adding sections to, Extending Team Explorer
batching, Batching
C
C# expressions, Workflow Runtime
command line
building from, VisualStudioVersion property
configuring
on-premise build machines, Connect on-premise build machines to the
Team Foundation Service
Connection tab (Publish Web), Overview of the new Publish Web dialog
box, Overview of the new Publish Web dialog box
CopyFiles target, How to execute a target only if the project is actually built
customizations
source folder structure, How to publish a package to multiple
destinations
D
DACPACs (DAC packages), Incremental database publishing with
DACPACs
debugging
applications, XML updates with SlowCheetah
E
EF (Entity Framework) Code First, Relationship between publish profiles
and .wpp.targets
-enableRule
DoNotDeleteRule option, How to publish a folder with Web Deploy
F
file format version number, What’s new in MSBuild 4.5
files
building, VisualStudioVersion property
folders
customizing package, How to publish a package to multiple destinations
G
gated check-ins, My Builds, Pausing build definitions
H
Hosted Build Controller, Team Foundation Service, Connect on-premise
build machines to the Team Foundation Service
I
IIS Manager, Incremental database publishing with DACPACs
installing
NuGet packages, Managing NuGet packages
L
libraries, custom activity, Workflow Runtime
M
Managed Extensibility Framework, Extending Team Explorer
MSBuild 4.5
build process, How to execute a target only if the project is actually built
N
NTLM, How to publish a folder with Web Deploy
O
operational logs, Diagnostic logging
P
Package Location field, Building web packages
package management, Phantom task parameters
packages
customizing source folder structures in, How to publish a package to
multiple destinations
Preview tab (Publish Web), Overview of the new Publish Web dialog box,
Building web packages
Profile tab (Publish Web), Overview of the new Publish Web dialog box,
Overview of the new Publish Web dialog box, Building web packages
projects
build configurations, Overview of the new Publish Web dialog box
replace rules in, Customizing the folder structure inside the package
publish profiles
benefits of using, for packaging, Overview of the new Publish Web
dialog box
in Visual Studio 2010, Overview of the new Publish Web dialog box
publishing
automating, Publish profiles
folders, with Web Deploy, Customizing the folder structure inside the
package
Q
Queue Build link, Web Access
R
Rebuild target, How to extend the solution build
requests
batching multiple, Batching
S
SectionContent property, Extending Team Explorer
servers
build, SlowCheetah build server support
Settings tab (Publish Web), Overview of the new Publish Web dialog box
solution file (.sln), What’s new in MSBuild 4.5, How to extend the solution
build
Source Manifest file, Customizing the folder structure inside the package
Staging Location, Team Foundation Service
T
target injection, How to execute a target only if the project is actually built
targets
.wpp, Automating web publishing using a publish profile
copy file, How to execute a target only if the project is actually built
skipping of, How to execute a target only if the project is actually built
U
unit testing frameworks, Web Access
updating
applications, Package Restore
V
version number, file format, What’s new in MSBuild 4.5
building web packages, Overview of the new Publish Web dialog box
Publish Web dialog box, Overview of the new Publish Web dialog box
W
Web Access, Team Foundation Service, All Build Definitions
Web Deploy Package method, Overview of the new Publish Web dialog box,
Publish profiles, Incremental database publishing with DACPACs, Profile-
specific Web.config transforms, How to publish a package to multiple
destinations
web packages
automating, Automating web publishing using a publish profile
workflow activities
adding comments to, Auto-Surround with Sequence
X
XML Document Transforms (XDTs), Package Restore
About the Authors
Sayed Ibrahim Hashimi is a consultant, trainer, and senior software developer
who has designed large-scale distributed applications using a variety of
programming languages and platforms, with specific expertise on MSBuild.
William Bartholomew is a software development engineer in the Microsoft
Developer Division Engineering Systems group, which includes the build lab
responsible for building and shipping Microsoft Visual Studio software.
Special Upgrade Offer
If you purchased this ebook from a retailer other than O’Reilly, you can upgrade
it for $4.99 at oreilly.com by clicking here.
Supplement to Inside the Microsoft® Build Engine: Using
MSBuild and Team Foundation Build
Sayed Ibrahim Hashimi
William Bartholomew
Editor
Devon Musgrave
Copyright © 2013
All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by
any means without the written permission of the publisher.
Library of Congress Control Number (PCN): 2013935725
Microsoft Press books are available through booksellers and distributors worldwide. If you need support
related to this book, email Microsoft Press Book Support at mspinput@microsoft.com. Please tell us what
you think of this book at http://www.microsoft.com/learning/booksurvey.
Microsoft and the trademarks listed at
http://www.microsoft.com/about/legal/en/us/IntellectualProperty/Trademarks/EN-US.aspx are trademarks of
the Microsoft group of companies. All other marks are property of their respective owners.
The example companies, organizations, products, domain names, email addresses, logos, people, places,
and events depicted herein are fictitious. No association with any real company, organization, product,
domain name, email address, logo, person, place, or event is intended or should be inferred.
This book expresses the author’s views and opinions. The information contained in this book is provided
without any express, statutory, or implied warranties. Neither the authors, Microsoft Corporation, nor its
resellers, or distributors will be held liable for any damages caused or alleged to be caused either directly or
indirectly by this book.
Acquisitions Editor: Devon Musgrave
Developmental Editor: Devon Musgrave
Project Editor: Valerie Woolley
Editorial Production: Christian Holdener, S4Carlisle Publishing Services
Technical Reviewer: Marc Young w/ CM; Technical Review services provided by Content Master, a member of CM Group,
Copyeditor: Susan McClung
Indexer: Jean Skipp
Cover: Twist Creative • Seattle and Joel Panchot
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way Redmond, Washington 98052-6399
2013-04-15T12:07:22-07:00