Web Services


.NET Web Services

Introduction

Microsoft promoted Web Services a great deal with the launch of .NET. This enthusiasm
seems to have faded a little, with Web Services perhaps not having taken off as much as was
expected. The concept of interoperability over the Internet is still a great one,
however. In this article we'll quickly review the basics of Web Services before
continuing on in subsequent articles to look at Web Services in more depth. The
current plan is to cover the following topics:

Article 1:
Introduction: Overview, SOAP, DISCO, UDDI and WSDL; creating and consuming a
WebService in VS.NET.

Article 2:
Customising the WebMethod attribute
Disco and UDDI practicalities
The disco.exe and wsdl.exe tools

Article 3:
Creating and using SOAP extensions
Creating asynchronous web methods
Controlling XML wire format

Hopefully I'll be able to squeeze all that into three articles!

First, to the basics.

What are Web Services?

Web Services enable the exchange of data and the remote invocation of application
logic using XML messaging to move data through firewalls and between
heterogeneous systems. Although remote access of data and application logic is not
a new concept, doing so in such a loosely coupled fashion is. The only assumption
between the Web Service client and the Web Service itself is that recipients will
understand the messages they receive. As a result, programs written in any
language, using any component model, and running on any operating system can
access and utilize Web Services.

In this article we'll take a look at the key foundation concepts of Web Services as
well as showing how to both consume Web Services and implement a simple Web
Service in the .NET environment.

The Protocols

The key to understanding Web Services is knowledge of the underlying protocols.


Importantly, by default, all communication between Web Services servers and
clients is through XML messages transmitted over the HTTP protocol. This has 3
benefits:

1. XML text-based formatting means these messages are reasonably easy for us
humans to read and understand.
2. As HTTP is used, these messages will not normally be blocked by firewalls and
hence will reach their target destination.
3. XML text-based formatting can be interpreted by a wide variety of software on
many operating systems.

SOAP

SOAP (Simple Object Access Protocol) is the protocol that allows us to encapsulate
object calls as XML. An example SOAP message is:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<GetArticles xmlns="http://www.cymru-web.net/my_articles_WS/Articles" />
</soap:Body>
</soap:Envelope>
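For completeness, here is a sketch of what the corresponding response envelope might look like. This is an assumption based on the element names defined in the WSDL shown later in this article, with the DataSet payload abbreviated to a placeholder comment:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetArticlesResponse xmlns="http://www.cymru-web.net/my_articles_WS/Articles">
      <GetArticlesResult>
        <!-- inline schema and DataSet rows appear here -->
      </GetArticlesResult>
    </GetArticlesResponse>
  </soap:Body>
</soap:Envelope>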
We'll return to SOAP in later articles.

Disco and UDDI

You need to know where and how to locate a Web Service in order to be able to use
it – a process known as discovery. The aim of these two protocols is to facilitate the
discovery process.

Disco is a Microsoft standard for the creation of discovery documents. A Disco
document is kept in a standard location on a Web Services server (i.e. a web server
which hosts Web Services). It contains information such as the path to the
corresponding WSDL file (see next section). A Disco document supports static
discovery – a potential client must know the location of the document in order to use
it.

For VS.NET projects you would not normally use Disco documents as discovery
information is available anyway from their base URL, e.g.

http://myServer/myWebService/base_class.asmx?wsdl

would provide discovery information.

However you may also add Disco files to your Web Services project.
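As an illustration, a minimal static discovery document might look like the sketch below. The server and service paths are reused from the example URL above and should be treated as placeholders:

<?xml version="1.0"?>
<discovery xmlns="http://schemas.xmlsoap.org/disco/"
    xmlns:scl="http://schemas.xmlsoap.org/disco/scl/">
  <scl:contractRef ref="http://myServer/myWebService/base_class.asmx?wsdl"
      docRef="http://myServer/myWebService/base_class.asmx" />
</discovery>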

UDDI (Universal Description, Discovery, and Integration) is a standardised method of
finding web services via a central repository or directory. It applies not only to Web
Services but to any online resource. UDDI registries are searchable sites that contain
information available via UDDI. UDDI provides dynamic discovery – you can discover
a Web Service without having to know its location first.

UDDI registries can be private (Intranet based) or public (Internet based). To add
your WebServices to a UDDI registry you must use the tools provided by the
particular registry.

WSDL
WSDL (Web Services Description Language) does what it says – it allows description of
the Web Service by specifying the SOAP messages that it can send and receive. The
WSDL file defines the public interface of the Web Service: the data types it can
process, the methods it exposes and the URLs through which those methods can be
accessed.

Here's an example WSDL file, actually from the example later in this article:
<?xml version="1.0" encoding="utf-8" ?>
- <definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
xmlns:s="http://www.w3.org/2001/XMLSchema"
xmlns:s0="http://www.cymru-web.net/my_articles_WS/Articles"
xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
targetNamespace="http://www.cymru-web.net/my_articles_WS/Articles"
xmlns="http://schemas.xmlsoap.org/wsdl/">
- <types>
- <s:schema elementFormDefault="qualified"
targetNamespace="http://www.cymru-web.net/my_articles_WS/Articles">
<s:import namespace="http://www.w3.org/2001/XMLSchema" />
- <s:element name="GetArticles">
<s:complexType />
</s:element>
- <s:element name="GetArticlesResponse">
- <s:complexType>
- <s:sequence>
- <s:element minOccurs="0" maxOccurs="1"
name="GetArticlesResult">
- <s:complexType>
- <s:sequence>
<s:element ref="s:schema" />
<s:any />
</s:sequence>
</s:complexType>
</s:element>
</s:sequence>
</s:complexType>
</s:element>
</s:schema>
</types>
- <message name="GetArticlesSoapIn">
<part name="parameters" element="s0:GetArticles" />
</message>
- <message name="GetArticlesSoapOut">
<part name="parameters" element="s0:GetArticlesResponse" />
</message>
- <portType name="ArticlesSoap">
- <operation name="GetArticles">
<input message="s0:GetArticlesSoapIn" />
<output message="s0:GetArticlesSoapOut" />
</operation>
</portType>
- <binding name="ArticlesSoap" type="s0:ArticlesSoap">
<soap:binding transport="http://schemas.xmlsoap.org/soap/http"
style="document" />
- <operation name="GetArticles">
<soap:operation soapAction="http://www.cymru-
web.net/my_articles_WS/Articles/GetArticles" style="document" />
- <input>
<soap:body use="literal" />
</input>
- <output>
<soap:body use="literal" />
</output>
</operation>
</binding>
- <service name="Articles">
- <port name="ArticlesSoap" binding="s0:ArticlesSoap">
<soap:address location="http://localhost/my_articles_WS/articles.asmx" />
</port>
</service>
</definitions>
We'll return to WSDL in later articles.

Creating A Web Service

I'm going to create a Web Service that is practical for me ... it's going to return the latest
list of the articles I have written. Thus I'll only have to maintain the code and data
for this application in one place and the functionality will be easily accessible from
the several sites where I need to display this information.

Feel free to choose a small application more useful to yourself whilst implementing it
via the framework we shall now explore, amending the process accordingly. I'm
going to use VS.NET as the IDE for this mini-project. Please adjust for your own IDE;
if your IDE amounts to Notepad and the command line compiler, my earlier article
on Web Services on ASPAlliance may prove useful. This also dovetails nicely into the
introduction of the data source of the application: an XML document, a snippet of
which will tell you where to find the aforementioned article:
<article
name="An Introduction to Web Services"
url="http://www.aspalliance.com/sullyc/articles/intro_to_web_services.aspx"
PubDate="2003-01-30" />
Thus the Web Service shall load the XML, and XML Schema, from files (see the
accompanying links for sample files to download) and return the list of articles to the
client Web Service consumer as a DataSet object for direct data binding to a
DataGrid (in this case - obviously the client is free to do whatever they like with the
returned object).
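For reference, a minimal sketch of what the full articles.xml file might look like follows. The root element name and the second article entry are assumptions for illustration; only the first article element is taken from the snippet above:

<?xml version="1.0" encoding="utf-8"?>
<articles>
  <article
    name="An Introduction to Web Services"
    url="http://www.aspalliance.com/sullyc/articles/intro_to_web_services.aspx"
    PubDate="2003-01-30" />
  <article
    name="Another article"
    url="http://www.example.com/another_article.aspx"
    PubDate="2003-02-15" />
</articles>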

First, create a new project in VS.NET, selecting the ASP.NET Web Service template
from the VB Project Type. Specify a location which should be your local web server
and an appropriate application name, in my case: HTTP://localhost/my_articles_WS.
VS.NET will create a default Web Service file, service1.asmx which will be visible in
the solution explorer. Rename this to something more meaningful to your
application, in my case articles.asmx.

Switch to the code view of articles.asmx which will by default already be open within
VS.NET. Ensure the class name matches your filename for consistency. Add your
code, taking care not to alter any of the VS.NET Web Services designer generated
code. Highlighting only the new / key code:

Imports System.Data
Imports System.Web.Services

<System.Web.Services.WebService(Namespace:="http://www.cymru-web.net/my_articles_WS/Articles")> _
Public Class Articles
    Inherits System.Web.Services.WebService

    <WebMethod()> _
    Public Function GetArticles() As DataSet

        Dim dsDMS As DataSet = New DataSet

        'load the schema first, then the data
        dsDMS.ReadXmlSchema(Server.MapPath("articles_schema.xml"))
        dsDMS.ReadXml(Server.MapPath("articles.xml"))

        Return dsDMS
    End Function

End Class
Note you can change the namespace to your own. As you are probably aware
already this should simply be unique – the exact value is unimportant.

Finally, build the new Web Service. Easy wasn't it?! VS.NET hides much of the
complexity from you, meaning you only have to perform 3 actions:

1. Create your project from the ASP.NET Web Service template.

2. Write and mark the classes that should be available via the Web Service with
the WebService attribute.

3. Write and mark the methods that should be available via the Web Service
with the WebMethod attribute.

Testing the created Web Service

While you could create a client application to test the Web Service, and we shall
shortly, VS.NET includes tools hosted on a web page for testing the Web Service
without resorting to the additional overhead of developing a client application.

In VS.NET view your Web Service in Internet Explorer. You'll get the default test
page for the Web Service including a list of links to supported operations (in this case
one link to GetArticles) and a link to the service description of the Web Service (as
used as an example in the WSDL section above).
If you click the GetArticles link you'll be able to invoke the web method and you'll
see the dataset returned within the XML SOAP message content.

Consuming a Web Service

How do we consume this Web Service? Let's do this from a web form and display the
results in a DataGrid. Ordinarily the Web Service and Web Service client would not
exist on the same machine but it makes little difference. Add an ASP.NET web
application project to your VS.NET solution.

Add a 'web reference' to the Web Service – right click on the References directory of
your project and select 'Add Web Reference'. Locate the Web Service and select 'Add
Reference'.

Alter the default web form as follows: add a button named btnInvoke with a label
'Invoke' and a DataGrid named dgArticles to the form. Double click the button and
enter the following code to invoke the Web Service when the user clicks the button.

Dim articles As localhost.Articles = New localhost.Articles
dgArticles.DataSource = articles.GetArticles()
dgArticles.DataBind()
View the page in your web browser and you should get the list of articles back, albeit
basically formatted. Feel free to tidy up.
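For context, here is a sketch of how that snippet might sit in the consuming web form's code-behind. The class name WebForm1 is the hypothetical VS.NET default; the 'localhost' namespace on the proxy class comes from the Add Web Reference step described above:

Public Class WebForm1
    Inherits System.Web.UI.Page

    Protected WithEvents btnInvoke As System.Web.UI.WebControls.Button
    Protected dgArticles As System.Web.UI.WebControls.DataGrid

    Private Sub btnInvoke_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
            Handles btnInvoke.Click
        'instantiate the generated proxy class and bind the returned DataSet
        Dim articles As localhost.Articles = New localhost.Articles
        dgArticles.DataSource = articles.GetArticles()
        dgArticles.DataBind()
    End Sub

End Class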

Conclusion

That concludes article 1 in this series within which I've provided an introduction to
Web Services including examples of creating and consuming them using VS.NET. In
the next article we'll delve a little deeper into WebMethods, Disco, UDDI and the
available supporting toolset.

References

.NET SDK

Developing XML WebServices and Server Components with VB.NET and the .NET
Framework
Mike Gunderloy
Que

Saving a DataSet as an XML File


XML is very well integrated with .NET. Many server controls have XML methods built
in making XML capabilities only a method or two away. The DataSet class contains
several of these XML methods and we will examine a couple of them in this article.
The task in this example is to read a database table into a DataSet and then write
the DataSet out to the file system as an XML file. We will also do the same thing with
the schema for the XML file.

The code is ridiculously simple! Our .aspx file, as a matter of fact contains essentially
no code other than the bare minimum for an html page. All we have done is add a
Done! message to it. The work, what there is of it, is done in the code-behind file.
The .aspx file is shown below.
<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-
c6475cdb7ee7/DataSetToXML.aspx.vb" Inherits="DataSetToXML" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>DataSetToXML</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
</form>
<h3>Done!</h3>
</body>
</html>
The code-behind file is not much more complicated. Most of it is the usual code for
filling a DataSet using a DataAdapter. To keep the XML file to a shorter length we are
selecting only the TOP 10 rows from the Customers table of the Northwind database. The
two lines that actually write out the XML file and the schema are the WriteXML and
WriteXMLSchema calls near the end of Page_Load; both are methods of the DataSet class.
Server.MapPath is utilized to write the two files to the root directory of your web. The
two files are named "Customers.xml" and "Customers.xsd".

Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Configuration

Public Class DataSetToXML : Inherits System.Web.UI.Page

    Private Sub Page_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load

        Dim objConn As SqlConnection
        Dim strSql As String

        strSql = "SELECT TOP 10 * FROM Customers"

        objConn = New SqlConnection(ConfigurationSettings.AppSettings("ConnectionString"))

        Dim sdaCust As New SqlDataAdapter(strSql, objConn)
        Dim dstCust As New DataSet()

        sdaCust.Fill(dstCust, "Customers")

        'Save data to xml file and schema file
        dstCust.WriteXML(Server.MapPath("Customers.xml"), XmlWriteMode.IgnoreSchema)
        dstCust.WriteXMLSchema(Server.MapPath("Customers.xsd"))
    End Sub

End Class
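Should you want to confirm the round trip, the companion ReadXmlSchema and ReadXml methods will load the two files straight back into a DataSet. A brief sketch, assuming the same file names as above:

'sketch: reload the saved files into a fresh DataSet
Dim dstCheck As New DataSet()
dstCheck.ReadXmlSchema(Server.MapPath("Customers.xsd"))
dstCheck.ReadXml(Server.MapPath("Customers.xml"), XmlReadMode.IgnoreSchema)
'dstCheck.Tables("Customers") now holds the same ten rows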
I started to apologize for the brevity of this article, but really, .Net is to blame for
making it so easy to convert database tables to XML! I hope you agree.

Use the DataList Control to Present and Edit Data


The DataList is not as powerful as the DataGrid. It requires more work from you
since it has no default data presentation format. However, the DataGrid begins to
get very cumbersome as the number of columns of data you present increases.
Anything more than half a dozen columns or so and you will probably induce horizontal
scrolling - a real no-no for me. If you put such a DataGrid into edit mode then you
really have a horizontal scrolling problem.

The DataList, with its ItemTemplate and EditItemTemplate, makes it very easy for
you to control the appearance (and screen real estate) of the data. As I said before,
it requires more coding but the results may well be worth the effort.

In this article and example program we will deal with the Northwind Customers table.
I have included nine columns of editable data. I have divided the work between an
aspx page and a code-behind page. In the aspx page we lay out our presentation of
data, while the code-behind file places the DataList in edit mode and handles the
updating of modified data. The aspx file will be shown below in several sections to
make it easier to explain what each section does. This first section is the usual top-
of-page "stuff" and the definition of the DataList Control. The only items of note are
that we have set the OnEditCommand, OnUpdateCommand, and OnCancelCommand
properties to the names of the corresponding event handlers which are defined in the
code-behind file.

<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-


c6475cdb7ee7/DataListEdit.aspx.vb" Inherits="Main" %>

<html>
<head>
<title>DataList Edit</title>
<style rel="stylesheet">
.customers { font: 9pt Verdana, Arial, sans-serif; }
.customersHead { font: bold 8pt Verdana, Arial, sans-serif;
background-color:#4A3C8C; color:white; }
a { text-decoration:underline; }
a:hover { text-decoration:underline; color:#4A3C8C; }
</style>
</head>
<body>
<form runat="server" ID="Form1">
<div align="center">
<h3>Customers Table</h3>
</div>
<asp:DataList id="dtlcustomers"
runat="server"
width="760"
BorderWidth="1"
HeaderStyle-CssClass="customersHead"
AlternatingItemStyle-BackColor="#DEDFDE"
Font-Size="10"
Align="Center"
OnEditCommand="dtlcustomers_Edit"
OnUpdateCommand="dtlcustomers_Update"
OnCancelCommand="dtlcustomers_Cancel">
The following section includes the ItemTemplate for presentation of our data. The
code (markup) is fairly long, but all we are doing is creating an html table to present
the data. The CompanyName column is shown in a TD element of its own. The rest
of the data and column descriptions are shown two columns abreast. Notice that we
are specifically naming the column headings in one TD element and using the Eval
method of the DataBinder class to obtain the actual database table data. We are also
using a Button control to induce edit mode in the code-behind file. You can use a
LinkButton if you prefer a textual presentation. This may look a little messy at first,
but if you run the program (from the link at the bottom of the article) and compare
the output to what you see below, I believe you will find it very straightforward.
<ItemTemplate>
<table cellpadding="2" cellspacing="0" width="100%">
<tr>
<td colspan="4" class="customersHead">
<h3><%# DataBinder.Eval(Container.DataItem, "CompanyName") %></h3>
</td>
</tr>
<tr>
<td Width="100%" Align="left" colspan="4">
<asp:button id="btnEdit" Runat="server" CommandName="edit" Text="Edit"
/>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Contact Name</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "ContactName") %>
</td>
<td Width="25%" Align="left">
<b>Contact Title</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "ContactTitle") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Address</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "Address") %>
</td>
<td Width="25%" Align="left">
<b>City</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "City") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Postal Code</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "PostalCode") %>
</td>
<td Width="25%" Align="left">
<b>Country</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "Country") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Phone</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "Phone") %>
</td>
<td Width="25%" Align="left">
<b>Fax</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "Fax") %>
</td>
</tr>
</Table>
</ItemTemplate>
Next we must decide how our data and column descriptions are to appear while in
edit mode. That is the purpose of the markup below following the EditItemTemplate
tag. The process is much the same as in the ItemTemplate section above. The main
difference is that we are creating TextBox controls to contain the actual data, so that
the data becomes editable. I also chose to present the column descriptions and data
one abreast rather than two abreast as above. I did this for two reasons. One was
just to show that the ItemTemplate and EditItemTemplate stand alone and do not
have to have the same presentation format; the other was to make more room for several of
the TextBoxes that can hold 30 - 40 characters of data. Again, once you run the
program you will see the difference in presentation.
<EditItemTemplate>
<table cellpadding="2" cellspacing="0" width="100%">
<tr>
<td colspan="2" class="customersHead">
<h3><%# DataBinder.Eval(Container.DataItem, "CompanyName") %></h3>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Company Name</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCompanyName" runat="server" MaxLength="40"
Columns="40"
Text='<%# DataBinder.Eval(Container.DataItem, "CompanyName") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Contact Name</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtContactName" Runat="server" MaxLength="30"
Columns="30"
Text='<%# DataBinder.Eval(Container.DataItem, "ContactName") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Contact Title</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtContactTitle" Runat="server" MaxLength="30"
Columns="30"
Text='<%# DataBinder.Eval(Container.DataItem, "ContactTitle") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Address</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtAddress" Runat="server" MaxLength="60" Columns="60"
Text='<%# DataBinder.Eval(Container.DataItem, "Address") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>City</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCity" Runat="server" MaxLength="15" Columns="15"
Text='<%# DataBinder.Eval(Container.DataItem, "City") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Postal Code</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtPostalCode" Runat="server" MaxLength="10"
Columns="10"
Text='<%# DataBinder.Eval(Container.DataItem, "PostalCode") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Country</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCountry" Runat="server" MaxLength="15" Columns="15"
Text='<%# DataBinder.Eval(Container.DataItem, "Country") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Phone</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtPhone" Runat="server" MaxLength="24" Columns="24"
Text='<%# DataBinder.Eval(Container.DataItem, "Phone") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Fax</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtFax" Runat="server" MaxLength="24" Columns="24"
Text='<%# DataBinder.Eval(Container.DataItem, "Fax") %>'/>
</td>
</tr>
<tr>
<td colspan="2">
<asp:Label id="lblCustomerID" runat="server"
Text='<%# DataBinder.Eval(Container.DataItem, "CustomerID") %>'
Visible="false" />
</td>
</tr>
<tr>
<td Width="50%" Align="right">
<asp:Button id="btnUpdate" Runat="server" CommandName="update"
Text="Update" />
<asp:Button id="btnCancel" Runat="server" CommandName="cancel"
Text="Cancel" />
</td>
<td Width="50%" Align="Left">

</td>
</tr>
</table>
</EditItemTemplate>
</asp:DataList>
</form>
</body>
</html>
Now for the code-behind file. We will also present this file in sections to better
illustrate and explain the code. First are the Page_Load and BindTheData()
subroutines. The Page_Load simply checks to make sure this is the first time the
page has been loaded and calls the BindTheData subroutine. BindTheData uses a
DataAdapter to obtain the data from the table, fills a DataSet and binds the data to
the DataList control (dtlCustomers).
Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Web.UI
Imports System.Web.UI.WebControls
Imports System.Configuration

Public Class Main : Inherits Page

    Private strConn As String = ConfigurationSettings.AppSettings("ConnectionString")
    Public dtlCustomers As DataList

    Public Sub Page_Load(sender As Object, e As EventArgs)
        If Not IsPostBack Then
            BindTheData()
        End If
    End Sub

    Private Sub BindTheData()
        Dim objConn As New SqlConnection(strConn)
        Dim strSQL As String
        strSQL = "SELECT Top 5 * FROM Customers"
        Dim sda As New SqlDataAdapter(strSQL, objConn)
        Dim ds As New DataSet()
        sda.Fill(ds, "Customers")
        dtlCustomers.DataSource = ds.Tables("Customers").DefaultView
        dtlCustomers.DataBind()
    End Sub
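Note that both this code-behind and the earlier DataSetToXML example assume a "ConnectionString" entry in the appSettings section of web.config. A sketch of such an entry, with placeholder server and database values, might look like this:

<configuration>
  <appSettings>
    <!-- placeholder values: point this at your own SQL Server hosting Northwind -->
    <add key="ConnectionString"
         value="server=(local);database=Northwind;integrated security=SSPI" />
  </appSettings>
</configuration>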
Next are two short bits of code to handle the dtlCustomers_Edit and
dtlCustomers_Cancel subroutines. Remember in the aspx file above we set several
properties of the Datalist control to call events in the code-behind file. Edit and
Cancel were two of those. We set the OnEditCommand property equal to
"dtlcustomers_Edit". We also created a button with a CommandName of "edit". The
combination of the two brings us to the edit subroutine presented below. We use the
ItemIndex property to know which row is to be edited. We also created a Cancel
button (and set the OnCancelCommand property) to get us out of edit mode if we
want to abandon changes rather than going ahead with the update of the row.
Cancel is handled easily simply by setting the EditItemIndex property to -1.
Public Sub dtlCustomers_Edit(sender as Object, e as DataListCommandEventArgs)

dtlCustomers.EditItemIndex = e.Item.ItemIndex
BindTheData()

End Sub

Public Sub dtlCustomers_Cancel(sender as Object, e as DataListCommandEventArgs)

dtlCustomers.EditItemIndex = -1
BindTheData()

End Sub
The last section of code presented is the dtlCustomers_Update subroutine and is by
far the longest section. As you may recall, in the aspx page EditItemTemplate we
created TextBoxes to present data for editing. The values in those TextBox controls
are used to change the data. The values of the textboxes are gathered and placed in
string variables in the code immediately below using the FindControl method. We
now have the data after any editing that took place. Immediately below that is our
update statement, which is constructed using parameters for the column values.
Below that we add parameters to the SqlCommand object and set the parameter
values to the string variables holding our edited data. Following that we simply do
the update and then rebind the DataList control.
Public Sub dtlCustomers_Update(sender As Object, e As DataListCommandEventArgs)

    Dim strCompanyName, strContactName, strContactTitle, strCustomerID As String
    Dim strAddress, strCity, strPostalCode, strCountry, strPhone, strFax As String

    strCompanyName = CType(e.Item.FindControl("txtCompanyName"), TextBox).Text
    strContactName = CType(e.Item.FindControl("txtContactName"), TextBox).Text
    strContactTitle = CType(e.Item.FindControl("txtContactTitle"), TextBox).Text
    strAddress = CType(e.Item.FindControl("txtAddress"), TextBox).Text
    strCity = CType(e.Item.FindControl("txtCity"), TextBox).Text
    strPostalCode = CType(e.Item.FindControl("txtPostalCode"), TextBox).Text
    strCountry = CType(e.Item.FindControl("txtCountry"), TextBox).Text
    strPhone = CType(e.Item.FindControl("txtPhone"), TextBox).Text
    strFax = CType(e.Item.FindControl("txtFax"), TextBox).Text
    strCustomerID = CType(e.Item.FindControl("lblCustomerID"), Label).Text

Dim strSQL As String


strSQL = "Update Customers " _
& "Set CompanyName = @CompanyName," _
& "ContactName = @ContactName," _
& "ContactTitle = @ContactTitle, " _
& "Address = @Address, " _
& "City = @City, " _
& "PostalCode = @PostalCode, " _
& "Country = @Country, " _
& "Phone = @Phone, " _
& "Fax = @Fax " _
& "WHERE CustomerID = @CustomerID"

Dim objConn As New SqlConnection(strConn)
Dim cmdSQL As New SqlCommand(strSQL, objConn)

cmdSQL.Parameters.Add(New SqlParameter("@CompanyName", SqlDbType.NVarChar, 40))
cmdSQL.Parameters("@CompanyName").Value = strCompanyName
cmdSQL.Parameters.Add(New SqlParameter("@ContactName", SqlDbType.NVarChar, 30))
cmdSQL.Parameters("@ContactName").Value = strContactName
cmdSQL.Parameters.Add(New SqlParameter("@ContactTitle", SqlDbType.NVarChar, 30))
cmdSQL.Parameters("@ContactTitle").Value = strContactTitle
cmdSQL.Parameters.Add(New SqlParameter("@Address", SqlDbType.NVarChar, 60))
cmdSQL.Parameters("@Address").Value = strAddress
cmdSQL.Parameters.Add(New SqlParameter("@City", SqlDbType.NVarChar, 15))
cmdSQL.Parameters("@City").Value = strCity
cmdSQL.Parameters.Add(New SqlParameter("@PostalCode", SqlDbType.NVarChar, 10))
cmdSQL.Parameters("@PostalCode").Value = strPostalCode
cmdSQL.Parameters.Add(New SqlParameter("@Country", SqlDbType.NVarChar, 15))
cmdSQL.Parameters("@Country").Value = strCountry
cmdSQL.Parameters.Add(New SqlParameter("@Phone", SqlDbType.NVarChar, 24))
cmdSQL.Parameters("@Phone").Value = strPhone
cmdSQL.Parameters.Add(New SqlParameter("@Fax", SqlDbType.NVarChar, 24))
cmdSQL.Parameters("@Fax").Value = strFax
cmdSQL.Parameters.Add(New SqlParameter("@CustomerID", SqlDbType.NChar, 5))
cmdSQL.Parameters("@CustomerID").Value = strCustomerID

objConn.Open()
cmdSQL.ExecuteNonQuery()
objConn.Close()

dtlCustomers.EditItemIndex = -1
BindTheData()

End Sub

End Class

Conclusion: You have seen a lot of coding necessary to present a DataList, place it
in edit mode, and then cancel or update the data after making changes. If you
take the code one section at a time and see what each section actually does, I
believe you will find that you can finely tune how your data is presented both for
viewing and for editing, and how to accomplish the update. Best of luck!

.NET Remoting - Part I

Part I: Introductory Theory: Processes, Applications, Distributed Applications


Knowledge assumed for series: VB.NET, VS.NET

Introduction

Remoting provides a flexible architecture for distributed applications in .NET. This
series of articles shall examine the topic of remoting. In particular I shall be looking
at the following sections of information:

1. Introductory theory: Processes, Applications, Distributed Applications

2. .NET Remoting Architecture

3. Applying .NET Remoting


Creating a remotable class
Creating a server activated object
Creating a client activated object
Configuration issues
Interface assemblies
IIS
Asynchronous remoting

In this first article I'll attempt to summarise and introduce the necessary background
theory of sections 1 and 2 before proceeding in article II to demonstrate the practical
application of the theory which shall continue into article III.

Applications, Application Boundaries and Processes

A process is an application under execution. Windows isolates processes from one
another so that the code running in one process cannot adversely affect other
processes. Such process isolation ensures that:

- each process has its own virtual address space, executable code and data.

- each process cannot directly access the code or data of another process.

- each process runs only one application, so if the application crashes it does
not affect other applications.

Such process and application isolation thus has obvious benefits but the necessary
process handling is resource intensive, notably the activity of process switching (so
that each process receives its allocated share of CPU time).

Things are slightly different when it comes to .NET. The Common Language Runtime
(CLR) provides a managed execution environment for .NET applications. The
characteristics of the CLR allow the provision of isolation between running
applications at a lower resource cost than a process boundary.

Within the CLR, instead of the fundamental unit of isolation being a process, it is an
application domain (AppDomain), and several AppDomains can run within a single
process. This is achieved while also providing the same level of isolation between
applications as provided by a Windows process. Further, with fewer processes, the
overhead of process switching becomes less of an issue and the performance of
applications is increased as a consequence.
You can programmatically create AppDomains but normally they are created and
managed by the runtime hosts that execute your code. By default with the .NET
Framework three runtime hosts are configured for use: Windows shell, ASP.NET and
Internet Explorer.

You may be wondering 'why IE?' at this point. Internet Explorer creates application
domains in which to run managed controls. The .NET Framework supports the
download and execution of browser-based controls. The runtime interfaces with the
extensibility mechanism of Microsoft Internet Explorer through a mime filter to
create application domains in which to run the managed controls. By default, one
application domain is created for each Web site.

Distributed Applications

So, both process and application domains provide boundaries between applications
affording their necessary protection and as part of this protection objects situated on
either side of these boundaries are not permitted to communicate with each other.

This is obviously going to be a problem in the world of distributed applications – they
need to support a mechanism for enabling such communication. We'll return to how
remoting enables such communication shortly but first, why would we want to
implement our application in a distributed fashion? Well, a distributed application has
the potential to improve on a non-distributed solution in the following areas:
availability, scalability and robustness. For enterprise applications these are highly
desirable features.

Several efforts have been made to design frameworks for developing distributed
applications, for example:

- Distributed Computing Environment / Remote Procedure Calls

- Distributed Component Object Model

- Common Object Request Broker Architecture

- Java Remote Method Invocation

Most of these are still widely deployed in enterprises. However they all have
limitations that mean they fail on one or more of the following criteria:

- Allow rapid development

- Integrate well with legacy applications

- Offer good interoperability

.NET’s Solutions

.NET provides two main pre-built frameworks for designing and implementing
distributed applications: .NET remoting and ASP.NET WebServices. Both offer similar
functionality and in fact, WebServices are built on the .NET remoting infrastructure.
So why would you choose one over the other? The choice depends on the type of
application you want to create:
You should use .NET remoting when both the client and server of the distributed
application are under your control, for example when an application is being
designed for use within a corporate network.

You should use ASP.NET WebServices when you do not have such control, for
example when your application is interoperating with an application of a business
partner.

Remoting Architecture

.NET remoting allows communication between programmatic objects in different
application domains and, particularly, application domains that are separated across
the network. In these instances remoting transparently handles the details
concerned with the necessary network communication.

The key question to address is: how can remoting establish cross-application domain
communication when an application domain does not allow direct calls across its
boundary? This is achieved, as with WebServices, via proxies. A proxy is an
abstraction of the actual object required that can be treated as if it were the real
object.

When a client object wishes to create an instance of a server object the remoting
system on the client side instead creates a proxy of the server object. Thus as far as
the client is concerned the server object is in the client's process – the proxy deals
with any complexities arising from the fact that this is not in fact the case.

When the client makes a request of the proxy the remoting system, which is
overseeing the proxy activity, passes this request on to the remoting system on the
server via a communication channel established between the two application
domains.

The remoting system on the server handles the request, passing it on to the server
object for action. The results are passed back to the server remoting system from
the server object, which then passes these on back to the client via the established
communication channel. The remoting system at the client then passes the results
onto the client object via the proxy.

The process of packaging and sending method calls between the objects and across
application boundaries via serialisation and deserialization as described above is
known as marshalling. Object marshalling is a key concept which we shall now
discuss before continuing to other important topics.

Object Marshalling
To facilitate remoting we need remotable objects – objects that can be marshalled
across the application domains. There are two types of remotable objects,
marshalled either by value or by reference:

Marshal-by-value (MBV) objects


As you might have guessed from your parameter passing knowledge, MBV objects
are copied and passed from the server application domain to the client application
domain. When the client invokes a method on the MBV object it is serialized and
transferred over the network to the client. The called method can then be invoked
directly on the client and the object is no longer a remote object – no marshalling or
proxy is required as the object is now locally available.

MBV can provide faster performance as network roundtrips are reduced but for large
objects you have the initial overhead of transferring them from server to client.
Further, you are consuming additional client resources as the object is no longer
running on the server.

Marshal-by-reference (MBR) objects


MBR objects are accessed on the client side using a proxy. The client just holds a
reference to these objects. Thus these are true remote objects and are preferable
when the objects are large and/ or complex and require the server environment to
function properly.
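To make the distinction concrete, here is a minimal sketch. The class names are invented for illustration; the point is simply that an MBV type is marked with the Serializable attribute so a copy travels to the caller, while an MBR type inherits from MarshalByRefObject so only a reference does:

'Marshal-by-value: the whole object is serialised and copied to the caller
<Serializable()> _
Public Class ArticleSummary
    Public Title As String
    Public Url As String
End Class

'Marshal-by-reference: the object stays on the server; clients work through a proxy
Public Class ArticleStore
    Inherits MarshalByRefObject

    Public Function GetSummary() As ArticleSummary
        Dim s As New ArticleSummary
        s.Title = "An Introduction to Web Services"
        Return s
    End Function
End Class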

Channels
Channels are the devices that facilitate the communication across remoting
boundaries. The .NET remoting framework ensures that before a remote object can
be called, it has registered at least one channel with the remoting system on the
server. A client object must similarly specify a channel to use when communicating
with the remote object.

A channel has two end points. The channel object at the receiving end of a channel
(the server) listens for a particular protocol using a specified port number, whereas
the channel object at the sending end of the channel (the client) sends information
to the receiving end using the protocol and port number specified by the channel
object on the receiving end.

To participate in .NET remoting the channel object at the receiving end must
implement the IChannelReceiver interface while the channel object at the sending
end must implement the IChannelSender interface.

Which protocols can you use for channel communication? The .NET Framework
provides implementations for HTTP (Hypertext Transfer Protocol) and TCP
(Transmission Control Protocol). The tools are also available to allow the
programmer to define their own channel communication protocol implementation if
so desired.

Why would you choose HTTP over TCP and vice versa? Here are a few pointers:

- HTTP channels can be used over the Internet because firewalls do not
generally block HTTP communication whereas TCP would normally require
opening of specific ports in the firewalls between the communicating servers.

- HTTP is a more bulky protocol than TCP meaning communications are more
efficient with TCP.

- HTTP offers greater immediate accessibility to security features of the HTTP
server, e.g. IIS offers SSL and Windows authentication. With TCP you would
need to develop your own security system from the .NET Framework base
classes.
Formatters
Formatters are the objects used to encode and serialize data into an appropriate
format before they are transmitted over a channel. For .NET remoting the formatter
must implement the IFormatter interface. Two such formatter classes are provided
within .NET: BinaryFormatter and SoapFormatter. If your requirements differ from
those satisfied by these two classes you can and should build your own.

You will no doubt have heard of SOAP (Simple Object Access Protocol), probably in
connection with Web Services, and you will also no doubt know that it is an XML
based protocol for exchanging information between applications. SOAP is an
extensible and modular protocol not bound to a particular transport mechanism such
as HTTP or TCP.

As you might expect from its Web Services background, SOAP is ideal for
communicating between applications that use incompatible architectures. However,
SOAP is very verbose as you might again expect from the use of text based XML. The
equivalent binary messages transfer information much more efficiently. However, the
binary format used by .NET is proprietary and hence can only be understood by
other .NET applications. These considerations should guide your choice of formatter
class.

Channels and Formatters Summary

Thus we have a trade-off between efficiency and interoperability as we look at the
combinations of supplied channels and formatters in .NET. In decreasing order of
efficiency but increasing interoperability these combinations are:

1. TCP, binary (most efficient, least interoperable)
2. TCP, SOAP
3. HTTP, binary
4. HTTP, SOAP (least efficient, most interoperable)

Note that by default, the HTTP channel uses the SOAP formatter and the TCP channel
uses the binary formatter.
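It is also possible to mix and match the defaults: should you want the firewall friendliness of HTTP with the compactness of the binary formatter, the channel can be constructed with binary sink providers. A brief sketch, assuming the System.Runtime.Remoting assembly is referenced and port 54321 is free (we return to channel registration properly in the next article):

Imports System.Collections
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Http

Module HttpBinaryServer

    Sub Main()
        'sketch: an HTTP server channel that uses the binary formatter instead of SOAP
        Dim props As New Hashtable
        props("port") = 54321
        Dim serverProvider As New BinaryServerFormatterSinkProvider
        Dim clientProvider As New BinaryClientFormatterSinkProvider
        Dim channel As New HttpChannel(props, clientProvider, serverProvider)
        ChannelServices.RegisterChannel(channel)

        Console.WriteLine("HTTP/binary channel registered; press <ENTER> to exit...")
        Console.ReadLine()
    End Sub

End Module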

Remote Object Activation

Recall the distinction between MBV and MBR objects: it is only MBR objects that can
be activated remotely as MBV objects are transferred to the client. MBR objects can
be server-activated or client-activated.

SAOs
Server Activated Objects (SAOs) are remote objects whose lifetime is controlled by
the server. The remote object is instantiated/ activated when the client calls a
method on the proxy object.

SAOs can only be instantiated using their default (parameter-less) constructors.

SAOs can be activated in two modes:


Firstly, SingleCall: an object is instantiated to service a single client request, after
which it is garbage collected. They are also known as stateless because they cannot
store state between requests as there is only one request. SingleCall potentially
allows for greater server scalability. This is the appropriate choice when:

- the overhead of creating an object is not significant

- the object is not required to maintain its state

- the server needs to support a large number of requests for the object

- the object needs to be supported in a load balanced environment

Secondly, Singleton: one object services the requests of all clients. Also known as
stateful as they can maintain state across requests. This state however is globally
shared between all clients which generally limits the usefulness of storing state
information. Its lifetime is determined by the 'lifetime lease' of the object, a concept
we'll return to in the next section. This is an appropriate choice when:

- the overhead of creating an object is substantial

- the object is required to maintain its state over a prolonged period

- several clients need to work on shared state

CAOs
In contrast to SAOs, Client Activated Objects (CAOs) are remote objects whose
lifetime is directly controlled by the client. CAOs are created on the server as soon
as the client requests that the object be created – there is no delay until a method
call is made. Further, a CAO can be created using any of the available constructors of
the class – it is not limited as per SAOs. A CAO instance serves only the client that
created it, and the CAO does not get discarded with each request – thus a CAO can
maintain state for each client it is serving, but it cannot share common state. Again,
lifetime is determined by 'Lifetime Leases', to be detailed shortly. CAOs are
appropriate when:

1. the clients need to maintain a private session with the remote object

2. the clients need to have more control over how objects are created and how
long they will exist

Activation Types Summary

The different activation types offer a compromise between flexibility and scalability,
in increasing order of flexibility and decreasing scalability:

1. SingleCall (server activated) - most scalable, least flexible
2. Singleton (server activated)
3. Client activation - least scalable, most flexible

Lifetime Leases
A lifetime lease is the period of time an object will remain in memory until its
resources are reclaimed by the Framework. Singleton SAOs and CAOs use lifetime
leases.

A lifetime lease is represented by an object that implements the ILease interface.
The object would normally work as follows:

- When an object is created its lifetime lease (CurrentLeaseTime) is set using
the value of the InitialLeaseTime property (5 mins default).

- Whenever the object receives a call its CurrentLeaseTime is reset to the time
specified by the value of the RenewOnCallTime property (2 mins default).

- The client can also renew the lease for a remote object by directly calling the
ILease.Renew() method.

- When the value of CurrentLeaseTime reaches 0 the .NET Framework contacts
any sponsors registered with the lease to check if they wish to renew the
object's lease. If the sponsor does not renew or cannot be contacted within
the duration specified by the SponsorshipTimeout property, the object is
marked for garbage collection.
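These defaults can be adjusted per class by overriding InitializeLifetimeService on the remotable object. A brief sketch of what that might look like; the class name and the lease durations are arbitrary choices for illustration:

Imports System.Runtime.Remoting.Lifetime

Public Class LongLivedObject
    Inherits MarshalByRefObject

    'sketch: lengthen this object's lease beyond the 5 minute / 2 minute defaults
    Public Overrides Function InitializeLifetimeService() As Object
        Dim lease As ILease = CType(MyBase.InitializeLifetimeService(), ILease)
        'lease properties may only be changed while the lease is still in its initial state
        If lease.CurrentState = LeaseState.Initial Then
            lease.InitialLeaseTime = TimeSpan.FromMinutes(30)
            lease.RenewOnCallTime = TimeSpan.FromMinutes(10)
        End If
        Return lease
    End Function

End Class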

Conclusion

In this article I've presented an overview of the background information necessary
before we progress to examine the practicalities of the remoting architecture. This
we shall do in articles II and III in this series of three articles.

References

.NET SDK

Developing XML WebServices and Server Components with VB.NET and the .NET
Framework
Mike Gunderloy
Que

.NET Remoting - Part II

Introduction

Remoting provides a very flexible environment for distributed applications in the
.NET arena. In part one of this series of four articles (note that this is an extension of
the initially planned three as indicated in article I) I introduced the background to
distributed applications and the .NET remoting architecture that aims to support such
applications. In this article and the next two articles in the series we'll look at some
examples of the application of this theory. In particular in this article we shall be
creating a remotable class, a server activated object and a client activated object.
The remaining topics for articles III and IV will then be declarative configuration
issues, interface assemblies, the role of IIS and asynchronous remoting.

Creating a remotable class

Remotable classes are created by inheriting from the MarshalByRefObject class. Our
example class is going to connect and retrieve data from an instance of SQL Server.
In our implementation this instance, somewhat artificially for a remoting scenario,
shall be local. If this isn't your scenario and your database isn't local you'll need to
amend the corresponding occurrences of connection string information in the code.
The same applies if you're not using integrated security.

Note that this series of examples is based on those presented in Mike Gunderloy’s
'Developing XML Web Services and Server Components with VB.NET and the .NET
Framework' (see references), a book I can recommend, particularly if studying for
Microsoft exam 70-310.

The first thing we need to do is create a new VS.NET solution in which to create the
projects we'll need. Next add a VB.NET class library project to this solution named
RemotingDb, rename the default class library to DBConnect and add the following
code:
Imports System
Imports System.Data
Imports System.Data.SqlClient

'Marshal-By-Reference remotable object


Public Class DbConnect
Inherits MarshalByRefObject
Private sqlConn As SqlConnection

'default constructor connects to the Northwind database


Public Sub New()
sqlConn = New SqlConnection("data source=(local);" & _
"initial cata log=Northwind;integrated security=SSPI")
Console.WriteLine("Created a new connection to the Northwind database")
End Sub

'parameterized constructor connects to the specified database


Public Sub New(ByVal DbName As String)
sqlConn = New SqlConnection("data source=(local);" & _
"initial catalog=" & DbName & ";integrated security=SSPI")
Console.WriteLine("Created a new connection to the " & DbName & " database")
End Sub

Public Function ExecuteQuery(ByVal strQuery As String) As DataSet


Console.Write("Starting to execute the query...")

'create a SqlCommand to represent the query


Dim sqlcmd As SqlCommand = sqlconn.CreateCommand()
sqlcmd.CommandType = CommandType.Text
sqlcmd.CommandText = strQuery

'create a SqlDataAdapter object


Dim sqlDa As SqlDataAdapter = New SqlDataAdapter
sqlda.SelectCommand = sqlcmd

'create a DataSet to hold the results


Dim ds As DataSet = New DataSet
Try
'fill the DataSet using the DataAdapter
sqlDa.Fill(ds, "Results")
Catch ex As Exception
Console.WriteLine(ex.Message, "Error executing query")
End Try
Console.WriteLine("Done.")
ExecuteQuery = ds
End Function

End Class
Build the project. The implementation is simple so far: as you can see it's just a
standard class that inherits from the MarshalByRefObject class to provide the
necessary infrastructure. We now have a remotable class but for it to be useful we
now need to connect it to the remoting framework.

Creating a SAO

A remotable class is usually connected with the remoting framework through a
separate server application, a remoting server. This application listens for the client
request on a specified communication channel and instantiates the remote object or
invokes its class members as required.

A remoting server must complete the following steps:

1. Create a server channel that listens on a particular port for the incoming
requests from connected application domains.

2. Register this channel with the remoting framework, telling the framework
which requests received via this channel should be directed to a given server
application.

3. Register the remotable class with the remoting framework, telling the
framework which classes this server application can create for remote clients.

I'll introduce implementation of these steps for both the activation modes of an SAO:
SingleCall and Singleton, albeit briefly in the case of the latter. I'll also show how we
link up the client application at the other end of the channel thus providing a
complete working example.

The server process for the example will be a long-running, UI-less process that will
continue to listen for incoming client requests on a channel. We're going to
implement the classes as console applications. Any reader who knows anything
about Windows services might be wondering why we're not implementing such an
activity as a Windows service. Normally you would, or you might use an existing
Windows service such as IIS to work as a remoting server. The choice is down to
helping you, the reader, understand what's going on … the console window shall be
utilised to display various informative messages to reinforce the involved concepts.

SingleCall SAO

Create a new VS.NET VB.NET project of type console application within the existing
solution. Add a reference to System.Runtime.Remoting. Rename the default vb file to
DBConnectSingleCallServer and add the following code:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB
Module DbConnectSingleCallServer

Public Sub Main()

'step 1: create and register a TCP server channel that listens on port 54321
Dim channel As TcpServerChannel = New TcpServerChannel(54321)

'step 2: register the channel


ChannelServices.RegisterChannel(channel)

'step 3: register the service that publishes DbConnect for remote access in SingleCall mode
RemotingConfiguration.RegisterWellKnownServiceType(GetType(DbConnect), "DbConnect", _
WellKnownObjectMode.SingleCall)

'write an informative message to the console


Console.WriteLine("Started server in the " & "SingleCall mode")
Console.WriteLine("Press <ENTER> to terminate " & "server...")
Console.ReadLine()

End Sub

End Module
Note that we create a TCP channel on a semi-arbitrary port 54321. This is a number
in the private range so this should be fine on a company network assuming the port
is not being used by another application. For a more widely distributed Internet
application you will need to register the number with the appropriate authority – the
IANA (Internet Assigned Numbers Authority).

The Client

We now have a remotable object and a remoting server. We now need a remoting
client which must perform the following steps:

1. Create and register a compatible client channel that is used by the remoting
framework to send messages to the remoting server.

2. Register the remotable class as a valid type in the client’s application domain.

3. Instantiate the SAO on the server. Remember you can only use the default
constructor with SAOs.

We shall implement the client as a Windows form. Add a Windows form project to
the solution naming it DBConnectClientSAO. Add references to
System.Runtime.Remoting and to the RemotingDB dll just created. Rename the
default form to DBConnectClient. Add a text box to enter the query (txtQuery), a
button to execute the query (btnExecute) and a datagrid (dgResults) to display the
results of the query. Add the following code behind for the form:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB

Public Class DbConnectClient


Inherits System.Windows.Forms.Form
'declare a remote object
Dim dbc As DbConnect

Private Sub btnExecute_Click(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles btnExecute.Click
Try
'invoke a method on the remote object
Me.dgResults.DataSource = dbc.ExecuteQuery(Me.txtQuery.Text)
dgResults.DataMember = "Results"
Catch ex As Exception
MessageBox.Show(ex.Message, "Query Execution Error")
End Try
End Sub

Private Sub DbConnectClient_Load(ByVal sender As System.Object, _


ByVal e As System.EventArgs) Handles MyBase.Load

'step 1: create and register a TCP client channel


Dim channel As TcpClientChannel = New TcpClientChannel
ChannelServices.RegisterChannel(channel)

'step 2: register the remote class as a valid type in the client's application domain
RemotingConfiguration.RegisterWellKnownClientType(GetType(DbConnect), _
"tcp://localhost:54321/DbConnect")

'step 3: instantiate the remote class using the default constructor


dbc = New DbConnect

End Sub

End Class
As previously, comments are used in the above as explanatory text.

We now have three projects in our solution. Build the complete solution. We also
need to specify multiple startup projects and in what order they are started. This is
achieved via the property pages of the solution (Startup project). Select 'Multiple
Startup Projects' and specify the settings and order as follows:

1. RemotingDB - none
2. DBConnectSingleCallServer - start
3. DBConnectClientSAO - start
Run the client project. You should get a command window popping up telling you the
server object has been created in SingleCall mode, shortly followed by the Windows
client. Enter a query like 'SELECT * from Customers'.

Have you spotted an issue with the client implementation above? The code imports a
reference to the server dll, which is also in the client project. Why? Because:

- the project won't compile without it as it needs to resolve the code that uses
it, and

- the client program won't execute without it – to create the proxy object from
the remoting class the CLR must have the metadata that describes it.

This situation is counter-intuitive as well as being undesirable for a variety of other
reasons. The situation can be improved however via the use of interface assemblies
– assemblies that contain just interface information, not actual business logic. We'll
return to this topic in the next article in this series.

Singleton SAO

We've implemented a SingleCall SAO. What about a Singleton SAO? The same
remotable object can be activated in different modes without making any changes to
the remotable object itself, and with SAOs the choice of activation is specified at the
server. Thus we can use the same client program as per the last example, simply
modifying the remoting server code that registers the SAO, as follows:
RemotingConfiguration.RegisterWellKnownServiceType(GetType(DbConnect), "DbConnect", _
WellKnownObjectMode.Singleton)
Of course when you run this you will see little difference … but you'll know things are
operating slightly differently under the hood and your appreciation of these
differences when you come to more complex real world scenarios is key.

Creating a CAO

Progressing onto Client Activated Objects, no changes are required to the remotable
class itself but changes are required to the remoting server (i.e. registration of the
remotable class) and to the client.

Both tasks are very similar to what we've already seen with SAOs, so we'll jump into
the code, just highlighting the differences. You'll see that in this code we make use of
one of the benefits of using CAOs – multiple constructors – via an extension of the
implementation. We are additionally going to enable selection of a database within
the SQL Server instance from the client.

Create a new Console application project in the same solution we've been using
throughout this article, renaming the default file to DBConnectCAOServer and add
the following code:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB

Module DbConnectCAOServer

    Sub Main()

        'create and register a TCP channel on port 54321
        Dim channel As TcpServerChannel = New TcpServerChannel(54321)
        ChannelServices.RegisterChannel(channel)

        'register the client activated object
        RemotingConfiguration.RegisterActivatedServiceType(GetType(DbConnect))

        'write some informative output to the console
        Console.WriteLine("Started server in the Client Activation mode")
        Console.WriteLine("Press <ENTER> to terminate server...")
        Console.ReadLine()

    End Sub

End Module
Next we need our new client application. Create a new Windows form project called
DBConnectClientCAO; rename the default form to DBConnectClient. The form UI
elements are as per the previous client application with the addition of a drop down
list box (aka combobox) to allow selection of the target database and a
corresponding button. The former should be named cboDatabases and the latter
named btnSelect with a text property of 'select'. The controls have also been
grouped into areas: Database, Query, Results with groupbox controls named
grpDatabases, grpQuery and grpResults. A screen shot of the form design might
assist:

The code behind also differs a little:


Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB

Public Class DbConnectClient
    Inherits System.Windows.Forms.Form

    'declare a remote object
    Dim dbc As DbConnect

    Private Sub DbConnectClient_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load
        cboDatabases.SelectedIndex = 0
        grpQuery.Enabled = False
    End Sub

    Private Sub btnSelect_Click(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnSelect.Click
        'disable the Databases group box and enable the Query group box
        grpDatabases.Enabled = False
        grpQuery.Enabled = True

        'register a TCP client channel
        Dim channel As TcpClientChannel = New TcpClientChannel
        ChannelServices.RegisterChannel(channel)

        'register the remote class as a valid type in the client's application domain
        'by passing the remote class and its URL
        RemotingConfiguration.RegisterActivatedClientType( _
            GetType(DbConnect), "tcp://localhost:54321")

        'instantiate the remote class using the parameterized constructor
        dbc = New DbConnect(cboDatabases.SelectedItem.ToString())
    End Sub

    Private Sub btnExecute_Click(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles btnExecute.Click
        Try
            'invoke a method on the remote object
            Me.dgResults.DataSource = dbc.ExecuteQuery(Me.txtQuery.Text)
            dgResults.DataMember = "Results"
        Catch ex As Exception
            MessageBox.Show(ex.Message, "Query Execution Error")
        End Try
    End Sub

End Class
Again we need to configure the project properties: ensure the SAO server and its
associated client are not set to start, and that DBConnectCAOServer and
DBConnectClientCAO are set to start without debugging, in that order. Make sure
the startup objects for the individual projects are also set correctly, as in the
SAO example. Start the solution without debugging and check that all is working.

Note that:

1. we are now able to use the parameterized constructor.

2. an instance of the remotable object is created for each client.

Conclusion

We've run through implementations of the different approaches to remoting:
SingleCall and Singleton SAOs, and CAOs. We highlighted the interface assembly
issue, which we'll discuss further in the next article in this series, along with the use
of declarative configuration files and the benefits they bring to the table.

References

.NET SDK
Developing XML WebServices and Server Components with VB.NET and the .NET Framework, Mike Gunderloy, Que

Using the Microsoft Managed Provider for Oracle

Microsoft has brought the power and speed of the Managed Provider for SQL Server
to the Oracle database. In this article we see how to use it directly, and to call a
stored procedure with a join.

The SqlClient Managed Provider is clearly faster than the OleDb connection class for
Sql Server. Until recently, if you used Oracle you were stuck with OleDb as the best
way to connect. Now, however, Microsoft has made available a Managed Provider for
Oracle databases as well. (Oracle just recently released their own version - perhaps
in a later article we can take a look at their class also.) Since the Managed Provider
for Oracle was not a part of the original framework, you must download it from the
following address:
http://www.microsoft.com/downloads/details.aspx?FamilyID=4f55d429-17dc-45ea-
bfb3-076d1c052524&DisplayLang=en.

Once you have downloaded and installed the provider (in the form of a DLL) you
must add a reference for it in Visual Studio .Net. You do this by right-clicking on
References in the Solution Explorer. Once you do that, click on Add Reference. That
will bring up a dialog box with three tabs. The .Net tab should be selected by default.
Select it if not. Scroll down the list of Component Names until you come to
"System.Data.OracleClient.Dll". Click on the file name to highlight it and then click on
the "Select" button at the upper right of the form. Then click the "OK" button at the
bottom of the form. You should now see the class listed in your References in
Solution Explorer. That should be all you need to do - but maybe not! On some
machines (including mine [argh!]) the Oracle managed provider classes would
show up in Intellisense in code-behind, but when I attempted to run the
program I got a configuration error (at the first line where the
OracleClient was referenced) saying "Namespace or type 'OracleClient' for the Imports
'System.Data.OracleClient' cannot be found". Intellisense knew about the class, but
somehow the runtime did not. After a lot of gnashing of teeth I opened up the
References tree in Solution Explorer, right-clicked on the "System.Data.OracleClient"
reference and selected Properties. In the Properties window one of the properties is
"Copy Local" (True|False). I changed it to True. That resulted in a copy of
System.Data.OracleClient.dll being placed in my bin directory. After that my
problems went away. Since then I've heard from a couple of people who have had to
do the same thing, although most have not. Apparently there is some configuration
issue that I've yet to figure out.

Now to some code. In this first example, we are just going to connect to Oracle (the
Scott / Tiger database that ships with Oracle) and bring back the rows in the emp
table. We will use a very basic (and ugly) datagrid here. The .aspx file code is below.
<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-
c6475cdb7ee7/OracleDirect.aspx.vb" Inherits="OracleDirect" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>OracleDirect</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:DataGrid id="dtgOracle" runat="server" cellpadding="4" />
</form>
</body>
</html>
Now for the code-behind file where we see the managed provider class in use.
Actually there is no magic to it. If you have been using OleDb to connect to Oracle all
along, all you do is change the OleDb prefix to object names to an Oracle prefix. In
other words OleDbConnection becomes OracleConnection, OleDbDataAdapter
becomes OracleDataAdapter. If you are coming from Sql Server and have been using
its managed provider you just change "Sql" prefixes to "Oracle" prefixes. That's all
there is to it. Keep in mind that in the connection string you will have to use the
correct Data Source for your setup. I doubt you have one named "kilgo"!
Imports System
Imports System.Data
Imports System.Data.OracleClient

Public Class OracleDirect : Inherits System.Web.UI.Page

    Protected dtgOracle As System.Web.UI.WebControls.DataGrid

    Private Sub Page_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load

        Dim objConn As OracleConnection
        Dim dataAdapter As OracleDataAdapter
        Dim dataSet As New DataSet()
        Dim strSql As String
        Dim strConn As String = "Data Source=kilgo;User Id=scott;Password=tiger;"

        objConn = New OracleConnection(strConn)
        objConn.Open()

        strSql = "SELECT * FROM emp"

        dataAdapter = New OracleDataAdapter(strSql, objConn)
        dataAdapter.Fill(dataSet)

        dtgOracle.DataSource = dataSet
        dtgOracle.DataBind()

        'close the connection now that the grid is bound
        objConn.Close()

    End Sub

End Class
In this next example we will return a resultset using a stored procedure which
generates a ref cursor. We will also pass in a parameter so that we can see how that
works. The resultset will be the result of a join with the scott/tiger dept table.
This may seem to be making things a little complicated, but actually it is pretty
simple if you just follow the code carefully. We might as well show as many
techniques as we can while we are at it. The code that follows will include a simple
.aspx page, a code-behind page, and a PL/SQL package containing the stored
procedure. First the .aspx page (with a slightly prettier grid this time).
<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-
c6475cdb7ee7/OracleStoredProc.aspx.vb" Inherits="OracleStoredProc" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>OracleStoredProc</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:DataGrid id="dtgOracle" style="Z-INDEX: 102; LEFT: 185px; POSITION:
absolute; TOP: 37px"
runat="server"
BorderColor="#CCCCCC"
BorderStyle="None"
BorderWidth="1px"
BackColor="White"
CellPadding="3">
<ItemStyle ForeColor="#000066" />
<HeaderStyle Font-Bold="True" ForeColor="White" BackColor="#006699" />
</asp:DataGrid>
</form>
</body>
</html>
Now for the code-behind file. Things are a little more involved here than in the first
program, but the same process holds true. Substitute "Oracle" for "Sql" or "OleDb" in
the database-handling object names and you are home. Of course, again, you will
have to use a Data Source correct for your setup rather than the "kilgo" I used in
my connection string. Also, I normally put my connection strings in Web.config but
have left the string in the code here for demonstration purposes (a sketch of the Web.config approach follows).
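As an aside, here is a minimal sketch of the Web.config approach; the key name OracleConnStr is hypothetical, as is the exact place you read it.

Web.config (fragment):
<configuration>
  <appSettings>
    <add key="OracleConnStr"
         value="Data Source=kilgo;User Id=scott;Password=tiger;" />
  </appSettings>
</configuration>

Reading it in the code-behind:
'fetch the connection string from configuration rather than hard-coding it
Dim strConn As String = _
    System.Configuration.ConfigurationSettings.AppSettings("OracleConnStr")
Dim objConn As New OracleConnection(strConn)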

Notice the fourth line under Page_Load
(objCmd.CommandText = "scott.employee.get_emp_info"). That is a fully qualified
path to the stored procedure we are using to return the result set. "scott" is the
schema (database) name, "employee" is what I named the PL/SQL package, and
"get_emp_info" is what I named the procedure. Two lines below that we add a
parameter named pDeptNo and pass in the value of 30. We are asking the stored
procedure to give us only employees who work in dept 30 (you will see it in the
WHERE clause when we get to the stored procedure). In the two lines after that we
add the "pCursor" parameter and let the system know that we are expecting it to be
an Output parameter. The rest of the code is pretty straightforward.
Imports System.Data
Imports System.Data.OracleClient

Public Class OracleStoredProc : Inherits System.Web.UI.Page

    Protected dtgOracle As System.Web.UI.WebControls.DataGrid

    Private Sub Page_Load(ByVal sender As System.Object, _
            ByVal e As System.EventArgs) Handles MyBase.Load

        Dim objConn As New OracleConnection("Data Source=kilgo;User Id=scott;Password=tiger;")
        Dim objCmd As New OracleCommand
        objCmd.Connection = objConn
        objCmd.CommandText = "scott.employee.get_emp_info"
        objCmd.CommandType = CommandType.StoredProcedure
        objCmd.Parameters.Add("pDeptNo", 30)
        objCmd.Parameters.Add("pCursor", OracleType.Cursor)
        objCmd.Parameters("pCursor").Direction = ParameterDirection.Output

        Dim dataAdapter As New OracleDataAdapter(objCmd)
        Dim dataSet As New DataSet()

        Try
            dataAdapter.Fill(dataSet)
            dtgOracle.DataSource = dataSet
            dtgOracle.DataBind()
        Finally
            If objConn.State = ConnectionState.Open Then
                objConn.Close()
            End If
            objConn.Dispose()
        End Try

    End Sub

End Class
Following are the package body and spec containing the stored procedure. I'll not go
into much detail explaining the code. If you are reading this you probably already
have some experience with Oracle and don't need my explanation.

Package Body:
PACKAGE BODY EMPLOYEE
IS
PROCEDURE GET_EMP_INFO ( pDeptNo IN Number, pCursor OUT outputCursor )
IS
BEGIN
OPEN pCursor FOR
SELECT a.empno, a.ename, a.job, a.mgr, a.hiredate, a.sal, a.comm, a.deptno,
b.dname, b.loc
FROM emp a, dept b
WHERE a.deptno = pDeptNo
AND a.deptno = b.deptno;

END; -- Procedure GET_EMP_INFO

END; -- Package EMPLOYEE


Package Spec:
PACKAGE EMPLOYEE
IS
TYPE outputCursor IS REF CURSOR;
PROCEDURE GET_EMP_INFO ( pDeptNo IN Number, pCursor OUT outputCursor );

END; -- Package EMPLOYEE

Page and Data Caching in .Net

Abstract: How to accelerate ASP.NET web site access for browsers via caching.

Introduction
In this article we're going to take a look at the features available to the ASP.NET
programmer that enable performance improvement via the use of caching. Caching
is the keeping of frequently used data in memory for ready access by your ASP.NET
application. As such, caching is a trade-off between the resources needed to obtain
the data and the resources needed to store it. Be aware of this trade-off: there is
little point caching data that will be requested infrequently, as that simply wastes
memory and may even hurt overall system performance. On the other hand, if there
is data that is required every time a user visits the home page of your application,
and that data only changes once a day, then there are big resource savings to be
made by keeping it in memory rather than retrieving it on every hit - even allowing
for the fact that the DBMS will probably be doing its own caching. Typically you will
want to minimise requests to your data store as, again typically, these will be the
most resource-hungry operations associated with your application.
In ASP.NET there are two areas where caching techniques arise:

- Caching of rendered pages, page fragments or WebService output: termed 'output caching'. Output caching can be implemented either declaratively or programmatically.

- Caching of data / data objects programmatically via the Cache class.

We'll return to the Cache class later in the article, but let's focus on Output Caching to start with,
and Page Output Caching in particular.

You can either use the Output Caching support declaratively, as part of the implementation of web
forms/pages, page fragments and WebServices, or you can cache programmatically using the
HttpCachePolicy class exposed through the HttpResponse.Cache property available within the
.NET Framework. I'll not look at the WebService options in any detail here, only mentioning that
the WebMethod attribute that marks methods as web service methods has a CacheDuration
property which the programmer may specify.
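For illustration only, a minimal sketch of a web method whose output is cached for 60 seconds (the service class and method names are hypothetical):

Imports System.Web.Services

Public Class ArticlesService
    Inherits WebService

    'the response is cached for 60 seconds per unique combination of parameters
    <WebMethod(CacheDuration:=60)> _
    Public Function GetServerTime() As String
        Return DateTime.Now.ToString()
    End Function

End Class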

Page Output Caching

Let's consider a base example and then examine in a little detail the additional
parameters available to us programmers. To minimally enable caching for a web
forms page (or user control) you can use either of the following:
1. Declarative specification via the @OutputCache directive, e.g.:
<%@ OutputCache Duration="120" VaryByParam="none" %>
2. Programmatic specification via the Cache property of the HttpResponse class, e.g.:
Response.Cache.SetExpires(DateTime.Now.AddMinutes(2))
Response.Cache.SetCacheability(HttpCacheability.Public)
These are equivalent and will cache the page for 2 minutes. What does this mean
exactly? When the page is first requested its output is cached. Until the
specified expiration, all requests for that page will be served from the cache. On
cache expiration the page is removed from the cache; on the next request the page
will be executed again, and its output again cached.
In fact @OutputCache is a higher-level wrapper around the HttpCachePolicy class exposed via
the HttpResponse class, so rather than merely being equivalent the two approaches ultimately
drive exactly the same caching machinery.
Looking at the declarative example, what does VaryByParam="none" mean? HTTP supports two
methods of passing data between pages: POST and GET. GET requests are characterised by
the use of the query string to pass parameters, e.g. default.aspx?id=1&name=chris, whereas POST
indicates that the parameters were passed in the body of the HTTP request. In the example
above, caching based on such parameters is disabled. To enable it, you would set
VaryByParam to 'name', for example – or whichever parameters you wish to cache on.
This causes the creation of a separate cache entry for each distinct parameter value; for
example, the output of default.aspx?id=2&name=maria would also be cached. Note that the
VaryByParam attribute is mandatory.

Returning to the programmatic example, when would you choose this second method over the
first? Firstly, as it's programmatic, you would use this option when the cache settings need to be
set dynamically. Secondly, you have more flexibility in option setting with HttpCachePolicy as
exposed by the HttpResponse.Cache property.

You may be wondering exactly what
Response.Cache.SetCacheability(HttpCacheability.Public)
achieves. This sets the cache-control HTTP header - here to public - to specify that
the response is cacheable by clients and shared (proxy) caches - basically, everybody
can cache it. The other HttpCacheability options are NoCache, Private and Server.
We'll return to Response.Cache after looking at the directive option in more detail.

The @OutputCache Directive

First an example based on what we've seen thus far: output caching based on
querystring parameters.
Note this example requires connectivity to a standard SQL Server installation, in particular the
Northwind sample database. You may need to change the string constant strConn to an
appropriate connection string for your system for the sample code presented in this article to
work. If you have no easy access to SQL Server, you could load some data in from an XML file or
simply pre-populate a datalist (for example) and bind the datagrid to this data structure.

output_caching_directive_example.aspx
<%@ OutputCache Duration="30" VaryByParam="number" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>

<html>
<head></head>
<body>

<a href="output_caching_directive_example.aspx?number=1">1</a>-
<a href="output_caching_directive_example.aspx?number=2">2</a>-
<a href="output_caching_directive_example.aspx?number=3">3</a>-
<a href="output_caching_directive_example.aspx?number=4">4</a>-
<a href="output_caching_directive_example.aspx?number=5">5</a>-
<a href="output_caching_directive_example.aspx?number=6">6</a>-
<a href="output_caching_directive_example.aspx?number=7">7</a>-
<a href="output_caching_directive_example.aspx?number=8">8</a>-
<a href="output_caching_directive_example.aspx?number=9">9</a>

<p>
<asp:Label id="lblTimestamp" runat="server" maintainstate="false" />
<p>
<asp:DataGrid id="dgProducts" runat="server" maintainstate="false" />
</body>
</html>

<script language="vb" runat="server">

const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"

Sub Page_Load(sender As Object, e As EventArgs)

    If Not Request.QueryString("number") = Nothing Then

        lblTimestamp.Text = DateTime.Now.TimeOfDay.ToString()

        Dim SqlConn As New SqlConnection(strConn)
        Dim SqlCmd As New SqlCommand("SELECT TOP " _
            & Request.QueryString("number") & _
            " * FROM Products", SqlConn)
        SqlConn.Open()

        dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
        Page.DataBind()

    End If
End Sub

</script>
Thus, if you click through some of the links to the parameterised pages and then
return to them, you will see the timestamp remains the same for each parameter
setting until the 30 seconds have elapsed, at which point the data is loaded again.
Furthermore, caching is performed per parameter value, as indicated by the different timestamps.
The full specification of the OutputCache directive is:
<%@ OutputCache Duration="#ofseconds"
Location="Any | Client | Downstream | Server | None"
VaryByControl="controlname"
VaryByCustom="browser | customstring"
VaryByHeader="headers"
VaryByParam="parametername" %>
Examining these attributes in turn:
Duration
This is the time, in seconds, that the page or user control is cached. Setting this attribute on a
page or user control establishes an expiration policy for HTTP responses from the object and will
automatically cache the page or user control output. Note that this attribute is required. If you do
not include it, a parser error occurs.

Location
This allows control of from where the client receives the cached document and should be one of
the OutputCacheLocation enumeration values. The default is Any. This attribute is not supported
for @OutputCache directives included in user controls. The enumeration values are:
Any: the output cache can be located on the browser client (where the request originated), on a
proxy server (or any other server) participating in the request, or on the server where the request
was processed.
Client: the output cache is located on the browser client where the request originated.
Downstream: the output cache can be stored in any HTTP 1.1 cache-capable devices other than
the origin server. This includes proxy servers and the client that made the request.
None: the output cache is disabled for the requested page.
Server: the output cache is located on the Web server where the request was processed.
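For example, a sketch of a directive that caches a page only on the requesting browser for one minute:
<%@ OutputCache Duration="60" VaryByParam="none" Location="Client" %>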

VaryByControl
A semicolon-separated list of strings used to vary the output cache. These strings represent fully
qualified names of properties on a user control. When this attribute is used for a user control, the
user control output is varied to the cache for each specified user control property. Note that this
attribute is required in a user control @OutputCache directive unless you have included a
VaryByParam attribute. This attribute is not supported for @OutputCache directives in ASP.NET
pages.

VaryByCustom
Any text that represents custom output caching requirements. If this attribute is given a value of
browser, the cache is varied by browser name and major version information. If a custom string is
entered, you must override the HttpApplication.GetVaryByCustomString method in your
application's Global.asax file. For example, if you wanted to vary caching by platform you would
set the custom string to 'Platform' and override GetVaryByCustomString to return the platform
used by the requester via HttpContext.Request.Browser.Platform. A sketch of such an override appears below.
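As an illustrative sketch (assuming the directive specifies VaryByCustom="Platform"), the override in Global.asax might look like this:

Public Overrides Function GetVaryByCustomString( _
        ByVal context As HttpContext, ByVal custom As String) As String
    'return a string identifying the cache variation for our custom key
    If custom = "Platform" Then
        Return context.Request.Browser.Platform
    End If
    Return MyBase.GetVaryByCustomString(context, custom)
End Function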

VaryByHeader
A semicolon-separated list of HTTP headers used to vary the output cache. When this attribute is
set to multiple headers, the output cache contains a different version of the requested document
for each specified header. Example headers you might use are Accept-Charset, Accept-Language
and User-Agent, but I suggest you review the full list of headers and decide which are suitable
for your particular application. Note that setting the VaryByHeader attribute enables caching
items in all HTTP/1.1 caches, not just the ASP.NET cache. This attribute is not supported for
@OutputCache directives in user controls.
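For instance, a sketch of a directive that keeps a separate cached copy per requested language:
<%@ OutputCache Duration="60" VaryByParam="none" VaryByHeader="Accept-Language" %>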

VaryByParam
As already introduced this is a semicolon-separated list of strings used to vary the output cache.
By default, these strings correspond to a query string value sent with GET method attributes, or a
parameter sent using the POST method. When this attribute is set to multiple parameters, the
output cache contains a different version of the requested document for each specified
parameter. Possible values include none, *, and any valid query string or POST parameter name.
Note that this attribute is required when you output cache ASP.NET pages. It is required for user
controls as well unless you have included a VaryByControl attribute in the control's
@OutputCache directive. A parser error occurs if you fail to include it. If you do not want to
specify a parameter to vary cached content, set the value to none. If you want to vary the output
cache by all parameter values, set the attribute to *.
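As a sketch, caching separate versions per combination of the id and name parameters from our earlier example URLs:
<%@ OutputCache Duration="60" VaryByParam="id;name" %>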

Returning now to the programmatic alternative for Page Output Caching:

Response.Cache
As stated earlier @OutputCache is a higher-level wrapper around the
HttpCachePolicy class exposed via the HttpResponse class. Thus all the functionality
of the last section is also available via HttpResponse.Cache. For example, our
previous code example can be translated as follows to deliver the same functionality:
output_caching_programmatic_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>

<html>
<head></head>
<body>

<a href="output_caching_programmatic_example.aspx?number=1">1</a>-
<a href="output_caching_programmatic_example.aspx?number=2">2</a>-
<a href="output_caching_programmatic_example.aspx?number=3">3</a>-
<a href="output_caching_programmatic_example.aspx?number=4">4</a>-
<a href="output_caching_programmatic_example.aspx?number=5">5</a>-
<a href="output_caching_programmatic_example.aspx?number=6">6</a>-
<a href="output_caching_programmatic_example.aspx?number=7">7</a>-
<a href="output_caching_programmatic_example.aspx?number=8">8</a>-
<a href="output_caching_programmatic_example.aspx?number=9">9</a>

<p>
<asp:Label id="lblTimestamp" runat="server" maintainstate="false" />

<p>

<asp:DataGrid id="dgProducts" runat="server" maintainstate="true" />

</body>
</html>

<script language="vb" runat="server">

const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"

Sub Page_Load(sender As Object, e As EventArgs)

    Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
    Response.Cache.SetCacheability(HttpCacheability.Public)
    Response.Cache.VaryByParams("number") = True

    If Not Request.QueryString("number") = Nothing Then

        lblTimestamp.Text = DateTime.Now.TimeOfDay.ToString()

        Dim SqlConn As New SqlConnection(strConn)
        Dim SqlCmd As New SqlCommand("SELECT TOP " _
            & Request.QueryString("number") & " * FROM Products", SqlConn)
        SqlConn.Open()

        dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
        Page.DataBind()

    End If
End Sub

</script>
The three lines of importance are:
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.VaryByParams("number") = True

It is only the third line you've not seen before. This is equivalent to VaryByParam="number" in the
directive example. Thus you can see that the various options of the OutputCache directive map
onto properties and methods exposed by Response.Cache. Apart from the method of access, the
pertinent information is, unsurprisingly, very similar to that presented above for the directive
version.

Thus, in addition to VaryByParams there is a VaryByHeaders property as well as a
SetVaryByCustom method (a short sketch of both follows). If you are interested in the extra
functionality exposed via these and associated classes I would suggest you peruse the relevant
sections of the .NET SDK documentation.
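For illustration, the rough programmatic equivalents (placed inside Page_Load, say):

'cache a separate copy per Accept-Language header value
Response.Cache.VaryByHeaders("Accept-Language") = True
'cache a separate copy per browser name and major version
Response.Cache.SetVaryByCustom("browser")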

Fragment Caching
Fragment caching is really a minor variation of page caching and almost all of what we’ve
described already is relevant. The ‘fragment’ referred to is actually one or more user controls
included on a parent web form. Each user control can have different cache durations. You simply
specify the @OutputCache for the user controls and they will be cached as per those
specifications. Note that any caching in the parent web form overrides any specified in the
included user controls. So, for example, if the page is set to 30 secs and the user control to 10
the user control cache will not be refreshed for 30 secs.

It should be noted that, of the standard options, only the VaryByParam attribute is valid for
controlling the caching of controls. An additional attribute is available within user controls:
VaryByControl, as introduced above, allowing multiple cached representations of a user control
dependent on one or more of its exposed properties. So, extending our example above, if we
implemented a control that exposed the SQL query used to generate the datareader bound to the
datagrid, we could cache on the basis of that SQL string property, as sketched below. Thus
we can create powerful controls with effective caching of the data presented.
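A minimal sketch of such a user control; the file name, the SqlQuery property and the default query are all hypothetical:

products.ascx
<%@ Control Language="vb" %>
<%@ OutputCache Duration="60" VaryByControl="SqlQuery" %>

<asp:DataGrid id="dgProducts" runat="server" />

<script language="vb" runat="server">

    'the cached output of the control is varied per value of this property
    Private _sqlQuery As String = "SELECT TOP 10 * FROM Products"

    Public Property SqlQuery() As String
        Get
            Return _sqlQuery
        End Get
        Set(ByVal Value As String)
            _sqlQuery = Value
        End Set
    End Property

</script>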

Programmatic Caching: using the Cache Class to Cache Data

ASP.NET output caching is a great way to increase performance in your web
applications. However, it does not give you control over caching data or objects that
can be shared, e.g. sharing a dataset from page to page. The Cache class, part of the
System.Web.Caching namespace, enables you to implement application-wide caching
of objects rather than the page-wide caching of the HttpCachePolicy class. Note that the
lifetime of the cache is equivalent to the lifetime of the application; if the IIS web
application is restarted the current cache contents are lost.
The public properties and methods of the cache class are:

Public Properties

Count: gets the number of items stored in the cache.

Item: gets or sets the cache item at the specified key.

Public Methods

Add: adds the specified item to the Cache object with dependencies, expiration and priority
policies, and a delegate you can use to notify your application when the inserted item is removed
from the Cache.

Equals: determines whether two object instances are equal.

Get: retrieves the specified item from the Cache object.

GetEnumerator: retrieves a dictionary enumerator used to iterate through the key settings and
their values contained in the cache.

GetHashCode: serves as a hash function for a particular type, suitable for use in hashing
algorithms and data structures like a hash table.

GetType: gets the type of the current instance.

Insert: inserts an item into the Cache object. Use one of the versions of this method to overwrite
an existing Cache item with the same key parameter.

Remove: removes the specified item from the application's Cache object.

ToString: returns a String that represents the current Object.


We'll now examine some of the above to varying levels of detail, starting with the most complex,
the insert method:

Insert
Data is inserted into the cache with the Insert method of the cache object.
Cache.Insert has 4 overloaded methods with the following signatures:
Overloads Public Sub Insert(String, Object)

Inserts an item into the Cache object with a cache key to reference its location and using default
values provided by the CacheItemPriority enumeration.

Overloads Public Sub Insert(String, Object, CacheDependency)

Inserts an object into the Cache that has file or key dependencies.

Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan)

Inserts an object into the Cache with dependencies and expiration policies.

Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan, CacheItemPriority, CacheItemRemovedCallback)

Inserts an object into the Cache object with dependencies, expiration and priority policies, and a
delegate you can use to notify your application when the inserted item is removed from the
Cache.

Summary of parameters:

String: the name (key) used to reference the cached object.
Object: the object to be cached.
CacheDependency: file or cache key dependencies for the new item.
DateTime: the absolute expiration time.
TimeSpan: the sliding expiration – the object is removed if more than this interval passes since it was last accessed.
CacheItemPriority: an enumeration that decides the order of item removal under heavy load.
CacheItemPriorityDecay: an enumeration; items with a fast decay value are removed if not used frequently.
CacheItemRemovedCallback: a delegate that is called when an item is removed from the cache.
Picking out one of these options for further mention: CacheDependency. This parameter allows the
validity of a cache item to be made dependent on a file or on another cache item. If the target of such a
dependency changes, this can be detected. Consider the following scenario: an application reads
data from an XML file that is periodically updated, processes the data and presents it via an aspx
page. Further, the application caches that data and inserts a dependency on the file from which the
data was read. The key aspect is that when the file is updated .NET recognises the fact, as it is
monitoring the file; the dependent cache item is invalidated and the situation can be handled
accordingly in code. A sketch of such an insertion follows.
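A minimal sketch, assuming the data lives in a hypothetical Articles.xml file in the application root:

'read the data and cache it with a dependency on the source file
Dim dsArticles As New DataSet()
dsArticles.ReadXml(Server.MapPath("Articles.xml"))

'if Articles.xml changes on disk the cached item is invalidated automatically
Cache.Insert("dsArticlesCached", dsArticles, _
    New CacheDependency(Server.MapPath("Articles.xml")))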

Remove
Other methods of the Cache class expose fewer parameters than Insert.
Cache.Remove expects a single parameter – the string key of the cache item you
want to remove:
Cache.Remove("MyCacheItem")

Get
You can either use the Get method to obtain an item from the cache or use the Item
property. Further, as Item is the default property, you do not have to reference it
explicitly. Thus the last three lines below are equivalent:
Cache.Insert("MyCacheItem", Object)
Dim obj As Object
obj = Cache.Get("MyCacheItem")
obj = Cache.Item("MyCacheItem")
obj = Cache("MyCacheItem")

GetEnumerator
Returns a dictionary (key/value pair) enumerator enabling you to iterate through the
items in the cache, removing or updating items as you go if so inclined. You would
use it as follows:
Dim myEnumerator As IDictionaryEnumerator
myEnumerator = Cache.GetEnumerator()

While (myEnumerator.MoveNext)
    Response.Write(myEnumerator.Key.ToString() & "<br>")
    'do other manipulation here if so desired
End While

An Example
To finish off with an example, we’ll cache a subset of the data from our earlier
examples using a cache object.
cache_class_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>

<html>
<head></head>
<body>
<asp:datagrid id="dgProducts" runat="server" maintainstate="false" />
</body>
</html>

<script language="vb" runat="server">

Public Sub Page_Load(sender As Object, e As EventArgs)

    Const strConn = "server=localhost;uid=sa;pwd=;database=Northwind"

    Dim dsProductsCached As Object = Cache.Get("dsProductsCached")

    If dsProductsCached Is Nothing Then

        Response.Write("Retrieved from database:")

        Dim dsProducts As New DataSet()
        Dim SqlConn As New SqlConnection(strConn)
        Dim sdaProducts As New SqlDataAdapter("select Top 10 * from products", SqlConn)
        sdaProducts.Fill(dsProducts, "Products")
        dgProducts.DataSource = dsProducts.Tables("Products").DefaultView

        Cache.Insert("dsProductsCached", dsProducts, Nothing, _
            DateTime.Now.AddMinutes(1), TimeSpan.Zero)

    Else

        Response.Write("Cached:")

        dgProducts.DataSource = dsProductsCached

    End If

    DataBind()

End Sub

</script>


The important concept here is that if you view the above page and then, within one
minute, save a copy under a new name and view that copy, you will still receive the
cached version of the data. Thus cached data is shared between pages and visitors to
your web site.

Wrapping matters up
A final few pointers for using caching, largely reinforcing concepts introduced earlier,
with the latter two applying to the use of the Cache class:

- Don't cache everything: caching uses memory resources - could these be better utilized elsewhere? You need to trade off whether to regenerate items or store them in memory.

- Prioritise items in the cache: if memory becomes a limited system resource, .NET may need to release items from the cache to free up memory. Each time you insert something into the cache, you can use the overloaded version of Insert that allows you to indicate how important it is to your application that the item stays cached. This is achieved using one of the CacheItemPriority enumeration values (see the sketch after this list).

- Configure centrally: to maximise code clarity and ease of maintenance, store your cache settings, and possibly also instantiate your cache objects, in a key location, for example within global.asax.
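As an illustrative sketch (dsCountries is a hypothetical, rarely changing lookup dataset):

'high priority items are among the last to be evicted under memory pressure
Cache.Insert("CountryList", dsCountries, Nothing, _
    DateTime.Now.AddHours(12), Cache.NoSlidingExpiration, _
    CacheItemPriority.High, Nothing)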
