Web Services
Introduction
Microsoft promoted Web Services heavily with the launch of .NET. That enthusiasm
seems to have faded a little, with Web Services perhaps not having taken off as much
as was expected. The underlying concept of interoperability over the Internet remains
a great one, however. In this article we'll quickly review the basics of Web Services
before looking at them in more depth in subsequent articles. The current plan covers
the following topics:
Article 1:
Introduction: Overview, SOAP, DISCO, UDDI and WSDL; creating and consuming a
WebService in VS.NET.
Article 2:
Customising the WebMethod attribute
Disco and UDDI practicalities
The disco.exe and wsdl.exe tools
Article 3:
Creating and using SOAP extensions
Creating asynchronous web methods
Controlling XML wire format
Web Services enable the exchange of data and the remote invocation of application
logic using XML messaging to move data through firewalls and between
heterogeneous systems. Although remote access of data and application logic is not
a new concept, doing so in such a loosely coupled fashion is. The only assumption
between the Web Service client and the Web Service itself is that recipients will
understand the messages they receive. As a result, programs written in any
language, using any component model, and running on any operating system can
access and utilize Web Services.
In this article we'll take a look at the key foundation concepts of Web Services as
well as showing how to both consume Web Services and implement a simple Web
Service in the .NET environment.
The Protocols
Web Service messages are XML documents carried, typically, over HTTP. This
combination has two useful properties:
1. XML's text-based formatting means these messages are reasonably easy for us
humans to read and understand.
2. Because HTTP is used, these messages will not normally be blocked by firewalls
and hence will reach their target destination.
SOAP
SOAP (Simple Object Access Protocol) is the protocol that allows us to encapsulate
object calls as XML. An example SOAP message is:
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<GetArticles xmlns="http://www.cymru-web.net/my_articles_WS/Articles" />
</soap:Body>
</soap:Envelope>
We'll return to SOAP in later articles.
Disco and UDDI
You need to know where and how to locate a Web Service in order to be able to use
it, a process known as discovery. The aim of these two protocols is to facilitate the
discovery process.
For VS.NET projects you would not normally use Disco documents, as discovery
information is available anyway from the service's base URL, e.g.
http://myServer/myWebService/base_class.asmx?wsdl
However, you may also add Disco files to your Web Service project.
UDDI registries can be private (intranet based) or public (Internet based). To add
your Web Services to a UDDI registry you must use the tools provided by the
particular registry.
WSDL
WSDL (WebServices Description Language) does what it says – allows description of
the Web Service – it specifies the SOAP messages that it can send and receive. The
WSDL file defines the public interface of the Web Service: the data types it can
process, the methods it exposes and the URLs through which those methods can be
accessed.
Here's an example WSDL file, actually from the example later in this article:
<?xml version="1.0" encoding="utf-8" ?>
<definitions xmlns:http="http://schemas.xmlsoap.org/wsdl/http/"
  xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
  xmlns:s="http://www.w3.org/2001/XMLSchema"
  xmlns:s0="http://www.cymru-web.net/my_articles_WS/Articles"
  xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
  xmlns:tm="http://microsoft.com/wsdl/mime/textMatching/"
  xmlns:mime="http://schemas.xmlsoap.org/wsdl/mime/"
  targetNamespace="http://www.cymru-web.net/my_articles_WS/Articles"
  xmlns="http://schemas.xmlsoap.org/wsdl/">
  <types>
    <s:schema elementFormDefault="qualified"
      targetNamespace="http://www.cymru-web.net/my_articles_WS/Articles">
      <s:import namespace="http://www.w3.org/2001/XMLSchema" />
      <s:element name="GetArticles">
        <s:complexType />
      </s:element>
      <s:element name="GetArticlesResponse">
        <s:complexType>
          <s:sequence>
            <s:element minOccurs="0" maxOccurs="1" name="GetArticlesResult">
              <s:complexType>
                <s:sequence>
                  <s:element ref="s:schema" />
                  <s:any />
                </s:sequence>
              </s:complexType>
            </s:element>
          </s:sequence>
        </s:complexType>
      </s:element>
    </s:schema>
  </types>
  <message name="GetArticlesSoapIn">
    <part name="parameters" element="s0:GetArticles" />
  </message>
  <message name="GetArticlesSoapOut">
    <part name="parameters" element="s0:GetArticlesResponse" />
  </message>
  <portType name="ArticlesSoap">
    <operation name="GetArticles">
      <input message="s0:GetArticlesSoapIn" />
      <output message="s0:GetArticlesSoapOut" />
    </operation>
  </portType>
  <binding name="ArticlesSoap" type="s0:ArticlesSoap">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http"
      style="document" />
    <operation name="GetArticles">
      <soap:operation
        soapAction="http://www.cymru-web.net/my_articles_WS/Articles/GetArticles"
        style="document" />
      <input>
        <soap:body use="literal" />
      </input>
      <output>
        <soap:body use="literal" />
      </output>
    </operation>
  </binding>
  <service name="Articles">
    <port name="ArticlesSoap" binding="s0:ArticlesSoap">
      <soap:address location="http://localhost/my_articles_WS/articles.asmx" />
    </port>
  </service>
</definitions>
We'll return to WSDL in later articles.
I'm going to create a Web Service that is practical for me: it will return the latest
list of the articles I have written. Thus I'll only have to maintain the code and data
for this application in one place, and the functionality will be easily accessible from
the several sites where I need to display this information.
Feel free to choose a small application more useful to yourself while following the
framework we shall now explore, amending the process accordingly. I'm going to use
VS.NET as the IDE for the mini-project. Please adjust for your own IDE; if your IDE
amounts to Notepad and the command-line compiler, my earlier article on Web
Services on ASPAlliance may prove useful. This also dovetails nicely into the
introduction of the data source of the application: an XML document, a snippet of
which will tell you where to find the aforementioned article:
<article
name="An Introduction to Web Services"
url="http://www.aspalliance.com/sullyc/articles/intro_to_web_services.aspx"
PubDate="2003-01-30" />
Thus the Web Service shall load the XML, and the XML Schema, from files (see the
accompanying links for sample files to download) and return the list of articles to
the client Web Service consumer as a DataSet object, suitable for direct data
binding to a DataGrid (in this case; the client is of course free to do whatever it
likes with the returned object).
First, create a new project in VS.NET, selecting the ASP.NET Web Service template
from the VB Project Type. Specify a location which should be your local web server
and an appropriate application name, in my case: HTTP://localhost/my_articles_WS.
VS.NET will create a default Web Service file, Service1.asmx, which will be visible in
the Solution Explorer. Rename this to something more meaningful to your
application; in my case, articles.asmx.
Switch to the code view of articles.asmx which will by default already be open within
VS.NET. Ensure the class name matches your filename for consistency. Add your
code, taking care not to alter any of the VS.NET Web Services designer generated
code. Highlighting only the new / key code:
Imports System.Web.Services
Imports System.Data

<System.Web.Services.WebService(Namespace:="http://www.cymru-web.net/my_articles_WS/Articles")> _
Public Class Articles
    Inherits System.Web.Services.WebService

    <WebMethod()> _
    Public Function GetArticles() As DataSet
        ' Load the schema first so the DataSet is correctly typed, then the data
        Dim dsDMS As New DataSet()
        dsDMS.ReadXmlSchema(Server.MapPath("articles_schema.xml"))
        dsDMS.ReadXml(Server.MapPath("articles.xml"))
        Return dsDMS
    End Function
End Class
Note you can change the namespace to your own. As you are probably aware
already this should simply be unique – the exact value is unimportant.
Finally, build the new Web Service. Easy, wasn't it?! VS.NET hides much of the
complexity from you, meaning you only have to perform three actions:
1. Create an ASP.NET Web Service project.
2. Write and mark the classes that should be available via the Web Service with
the WebService attribute.
3. Write and mark the methods that should be available via the Web Service
with the WebMethod attribute.
While you could create a client application to test the Web Service, and we shall
shortly, VS.NET includes tools hosted on a web page for testing the Web Service
without resorting to the additional overhead of developing a client application.
In VS.NET view your Web Service in Internet Explorer. You'll get the default test
page for the Web Service including a list of links to supported operations (in this case
one link to GetArticles) and a link to the service description of the Web Service (as
used as an example in the WSDL section above).
If you click the GetArticles link you'll be able to invoke the web method and you'll
see the dataset returned within the XML SOAP message content.
How do we consume this Web Service? Let's do this from a web form and display the
results in a DataGrid. Ordinarily the Web Service and Web Service client would not
exist on the same machine but it makes little difference. Add an ASP.NET web
application project to your VS.NET solution.
Add a 'web reference' to the Web Service – right click on the References directory of
your project and select 'Add Web Reference'. Locate the Web Service and select 'Add
Reference'.
Alter the default web form as follows: add a button named btnInvoke with a label
'Invoke' and a DataGrid named dgArticles to the form. Double click the button and
enter the following code to invoke the Web Service when the user clicks the button.
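A minimal sketch of such a handler, assuming the web reference was added under
the default name localhost (the proxy class namespace depends on the reference
name you chose):

```vb
' Hypothetical click handler; localhost.Articles is the proxy class
' generated by 'Add Web Reference'.
Private Sub btnInvoke_Click(ByVal sender As Object, ByVal e As EventArgs) _
        Handles btnInvoke.Click
    Dim ws As New localhost.Articles()
    Dim ds As DataSet = ws.GetArticles()  ' the remote call, over SOAP
    dgArticles.DataSource = ds            ' bind the returned DataSet
    dgArticles.DataBind()
End Sub
```

Run the web form, click Invoke, and the DataGrid fills with the article list
returned by the remote service.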
Conclusion
That concludes article 1 in this series within which I've provided an introduction to
Web Services including examples of creating and consuming them using VS.NET. In
the next article we'll delve a little deeper into WebMethods, Disco, UDDI and the
available supporting toolset.
References
.NET SDK
Mike Gunderloy, Developing XML Web Services and Server Components with VB.NET
and the .NET Framework, Que
The code is ridiculously simple! In fact, our .aspx file contains essentially nothing
beyond the bare minimum for an HTML page; all we have done is add a 'Done!'
message to it. The work, what there is of it, is done in the code-behind file.
The .aspx file is shown below.
<%@ Page Language="vb" Src="DataSetToXML.aspx.vb" Inherits="DataSetToXML" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>DataSetToXML</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
</form>
<h3>Done!</h3>
</body>
</html>
The code-behind file is not much more complicated. Most of it is the usual code for
filling a DataSet using a DataAdapter. To keep the XML file to a shorter length we
select only the TOP 10 rows from the Customers table of the Northwind database.
The two lines that actually write out the XML file and the schema call WriteXml and
WriteXmlSchema, both methods of the DataSet class. Server.MapPath is utilized to
write the two files to the root directory of your web. The two files are named
"Customers.xml" and "Customers.xsd".
Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Configuration

Public Class DataSetToXML
    Inherits System.Web.UI.Page

    Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) _
            Handles MyBase.Load
        ' Fill the DataSet (the connection string key is an assumption)
        Dim objConn As New SqlConnection( _
            ConfigurationSettings.AppSettings("connString"))
        Dim sdaCust As New SqlDataAdapter( _
            "SELECT TOP 10 * FROM Customers", objConn)
        Dim dstCust As New DataSet()
        sdaCust.Fill(dstCust, "Customers")
        'Save data to xml file and schema file
        dstCust.WriteXml(Server.MapPath("Customers.xml"), XmlWriteMode.IgnoreSchema)
        dstCust.WriteXmlSchema(Server.MapPath("Customers.xsd"))
    End Sub
End Class
I started to apologize for the brevity of this article, but really, .NET is to blame for
making it so easy to convert database tables to XML! I hope you agree.
The DataList, with its ItemTemplate and EditItemTemplate, makes it very easy for
you to control the appearance (and screen real estate) of the data. As I said before,
it requires more coding, but the results may well be worth the effort.
In this article and example program we will deal with the Northwind Customers table.
I have included nine columns of editable data, and divided the work between an
aspx page and a code-behind page. In the aspx page we lay out our presentation of
the data, while the code-behind file places the DataList in edit mode and handles the
updating of modified data. The aspx file will be shown below in several sections to
make it easier to explain what each section does. This first section is the usual
top-of-page "stuff" and the definition of the DataList control. The only items of note
are that we have set the OnEditCommand, OnUpdateCommand, and OnCancelCommand
properties to the names of the corresponding event handlers, which are defined in the
code-behind file.
<html>
<head>
<title>DataList Edit</title>
<style rel="stylesheet">
.customers { font: 9pt Verdana, Arial, sans-serif; }
.customersHead { font: bold 8pt Verdana, Arial, sans-serif;
background-color:#4A3C8C; color:white; }
a { text-decoration:underline; }
a:hover { text-decoration:underline; color:#4A3C8C; }
</style>
</head>
<body>
<form runat="server" ID="Form1">
<div align="center">
<h3>Customers Table</h3>
</div>
<asp:DataList id="dtlcustomers"
runat="server"
width="760"
BorderWidth="1"
HeaderStyle-CssClass="customersHead"
AlternatingItemStyle-BackColor="#DEDFDE"
Font-Size="10"
Align="Center"
OnEditCommand="dtlcustomers_Edit"
OnUpdateCommand="dtlcustomers_Update"
OnCancelCommand="dtlcustomers_Cancel">
The following section includes the ItemTemplate for presentation of our data. The
markup is fairly long, but all we are doing is creating an HTML table to present
the data. The CompanyName column is shown in a TD element of its own; the rest
of the data and column descriptions are shown two columns abreast. Notice that we
are specifically naming the column headings in one TD element and using the Eval
method of the DataBinder class to obtain the actual database table data. We are also
using a Button control to induce edit mode in the code-behind file; you can use a
LinkButton if you prefer a textual presentation. This may look a little messy at first,
but if you run the program (from the link at the bottom of the article) and compare
the output to what you see below, I believe you will find it very straightforward.
<ItemTemplate>
<table cellpadding="2" cellspacing="0" width="100%">
<tr>
<td colspan="4" class="customersHead">
<h3><%# DataBinder.Eval(Container.DataItem, "CompanyName") %></h3>
</td>
</tr>
<tr>
<td Width="100%" Align="left" colspan="4">
<asp:button id="btnEdit" Runat="server" CommandName="edit" Text="Edit"
/>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Contact Name</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "ContactName") %>
</td>
<td Width="25%" Align="left">
<b>Contact Title</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "ContactTitle") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Address</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "Address") %>
</td>
<td Width="25%" Align="left">
<b>City</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "City") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Postal Code</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "PostalCode") %>
</td>
<td Width="25%" Align="left">
<b>Country</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "Country") %>
</td>
</tr>
<tr>
<td Width="25%" Align="left">
<b>Phone</b>
</td>
<td Width="25%" Align="left">
<%# DataBinder.Eval(Container.DataItem, "Phone") %>
</td>
<td Width="25%" Align="left">
<b>Fax</b>
</td>
<td width="25%" align="left">
<%# DataBinder.Eval(Container.DataItem, "Fax") %>
</td>
</tr>
</table>
</ItemTemplate>
Next we must decide how our data and column descriptions are to appear while in
edit mode. That is the purpose of the markup below, following the EditItemTemplate
tag. The process is much the same as in the ItemTemplate section above. The main
difference is that we are creating TextBox controls to contain the actual data, so that
the data becomes editable. I also chose to present the column descriptions and data
one abreast rather than two abreast as above, for two reasons: to show that the
ItemTemplate and EditItemTemplate stand alone and do not have to share the same
presentation format, and to make more room for several of the TextBoxes that can
hold 30 to 40 characters of data. Again, once you run the program you will see the
difference in presentation.
<EditItemTemplate>
<table cellpadding="2" cellspacing="0" width="100%">
<tr>
<td colspan="2" class="customersHead">
<h3><%# DataBinder.Eval(Container.DataItem, "CompanyName") %></h3>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Company Name</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCompanyName" runat="server" MaxLength="40"
Columns="40"
Text='<%# DataBinder.Eval(Container.DataItem, "CompanyName") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Contact Name</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtContactName" Runat="server" MaxLength="30"
Columns="30"
Text='<%# DataBinder.Eval(Container.DataItem, "ContactName") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Contact Title</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtContactTitle" Runat="server" MaxLength="30"
Columns="30"
Text='<%# DataBinder.Eval(Container.DataItem, "ContactTitle") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Address</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtAddress" Runat="server" MaxLength="60" Columns="60"
Text='<%# DataBinder.Eval(Container.DataItem, "Address") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>City</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCity" Runat="server" MaxLength="15" Columns="15"
Text='<%# DataBinder.Eval(Container.DataItem, "City") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Postal Code</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtPostalCode" Runat="server" MaxLength="10"
Columns="10"
Text='<%# DataBinder.Eval(Container.DataItem, "PostalCode") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Country</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtCountry" Runat="server" MaxLength="15" Columns="15"
Text='<%# DataBinder.Eval(Container.DataItem, "Country") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Phone</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtPhone" Runat="server" MaxLength="24" Columns="24"
Text='<%# DataBinder.Eval(Container.DataItem, "Phone") %>'/>
</td>
</tr>
<tr>
<td Width="50%" Align="Left">
<b>Fax</b>
</td>
<td Width="50%" Align="left">
<asp:TextBox id="txtFax" Runat="server" MaxLength="24" Columns="24"
Text='<%# DataBinder.Eval(Container.DataItem, "Fax") %>'/>
</td>
</tr>
<tr>
<td colspan="2">
<asp:Label id="lblCustomerID" runat="server"
Text='<%# DataBinder.Eval(Container.DataItem, "CustomerID") %>'
Visible="false" />
</td>
</tr>
<tr>
<td Width="50%" Align="right">
<asp:Button id="btnUpdate" Runat="server" CommandName="update"
Text="Update" />
<asp:Button id="btnCancel" Runat="server" CommandName="cancel"
Text="Cancel" />
</td>
<td Width="50%" Align="Left">
</td>
</tr>
</table>
</EditItemTemplate>
</asp:DataList>
</form>
</body>
</html>
Now for the code-behind file. We will also present this file in sections to better
illustrate and explain the code. First are the Page_Load and BindTheData()
subroutines. The Page_Load simply checks to make sure this is the first time the
page has been loaded and calls the BindTheData subroutine. BindTheData uses a
DataAdapter to obtain the data from the table, fills a DataSet and binds the data to
the DataList control (dtlCustomers).
Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Web.UI
Imports System.Web.UI.WebControls
Imports System.Configuration

Public Class DataListEdit
    Inherits System.Web.UI.Page

    Protected WithEvents dtlCustomers As DataList

    Private Sub Page_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        If Not IsPostBack Then BindTheData()  ' bind only on the first load
    End Sub

    Sub BindTheData()
        ' Fill a DataSet and bind it to the DataList (the connection
        ' string key is an assumption)
        Dim objConn As New SqlConnection( _
            ConfigurationSettings.AppSettings("connString"))
        Dim sdaCust As New SqlDataAdapter("SELECT * FROM Customers", objConn)
        Dim dstCust As New DataSet()
        sdaCust.Fill(dstCust, "Customers")
        dtlCustomers.DataSource = dstCust
        dtlCustomers.DataBind()
    End Sub

    Public Sub dtlcustomers_Edit(sender As Object, e As DataListCommandEventArgs)
        dtlCustomers.EditItemIndex = e.Item.ItemIndex
        BindTheData()
    End Sub

    Public Sub dtlcustomers_Cancel(sender As Object, e As DataListCommandEventArgs)
        dtlCustomers.EditItemIndex = -1
        BindTheData()
    End Sub
The last section of code presented is the dtlCustomers_Update subroutine, by far
the longest section. As you may recall, in the aspx page's EditItemTemplate we
created TextBoxes to present data for editing. The values in those TextBox controls
are used to change the data: they are gathered, using the FindControl method, and
placed in string variables, so that we have the data after any editing that took
place. Below that comes our UPDATE statement, constructed using parameters for
the column values. We then Add parameters to the SqlCommand object, set the
parameter values to the string variables holding our edited data, perform the
update, and rebind the DataList control.
Public Sub dtlCustomers_Update(sender As Object, e As DataListCommandEventArgs)
objConn.Open()
cmdSQL.ExecuteNonQuery()
objConn.Close()
dtlCustomers.EditItemIndex = -1
BindTheData()
End Sub
End Class
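The body of dtlCustomers_Update, abbreviated above, gathers the edited values
with FindControl and builds a parameterized command. A sketch under the
assumptions already in use (Northwind column names, a hypothetical connString
application setting, and one representative column shown):

```vb
' Inside dtlCustomers_Update, before objConn.Open():
Dim strContactName As String = _
    CType(e.Item.FindControl("txtContactName"), TextBox).Text
Dim strCustomerID As String = _
    CType(e.Item.FindControl("lblCustomerID"), Label).Text
' ... gather the remaining TextBox values the same way ...

Dim strSQL As String = _
    "UPDATE Customers SET ContactName = @ContactName " & _
    "WHERE CustomerID = @CustomerID"
Dim objConn As New SqlConnection( _
    ConfigurationSettings.AppSettings("connString"))
Dim cmdSQL As New SqlCommand(strSQL, objConn)
cmdSQL.Parameters.Add("@ContactName", SqlDbType.NVarChar, 30).Value = strContactName
cmdSQL.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = strCustomerID
```

Using parameters rather than string concatenation also protects the update
against SQL injection.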
Conclusion: You have seen a lot of the coding necessary to present a DataList, place
it in edit mode, and then cancel or update the data after making changes. If you
take the code one section at a time and see what each section actually does, I
believe you will find that you can finely tune how your data is presented both for
viewing and for editing, and how to accomplish the update. Best of luck!
Introduction
In this first article I'll attempt to summarise and introduce the necessary background
theory of sections 1 and 2 before proceeding in article II to demonstrate the practical
application of the theory which shall continue into article III.
Traditionally, each application runs in its own Windows process:
- Each process has its own virtual address space, executable code and data.
- A process cannot directly access the code or data of another process.
- Each process runs only one application, so if the application crashes it does
not affect other applications.
Such process and application isolation thus has obvious benefits, but the necessary
process handling is resource intensive, notably the activity of process switching (so
that each process receives its allocated share of CPU time).
Things are slightly different when it comes to .NET. The Common Language Runtime
(CLR) provides a managed execution environment for .NET applications. The
characteristics of the CLR allow the provision of isolation between running
applications at a lower resource cost than a process boundary.
Within the CLR the fundamental unit of isolation is not the process but the
application domain (AppDomain), and several AppDomains can run within a single
process, while still providing the same level of isolation between applications as a
Windows process. Further, with fewer processes the overhead of process switching
becomes less of an issue, and application performance increases as a consequence.
You can programmatically create AppDomains, but normally they are created and
managed by the runtime hosts that execute your code. By default the .NET
Framework configures three runtime hosts: the Windows shell, ASP.NET and
Internet Explorer.
You may be wondering 'why IE?' at this point. Internet Explorer creates application
domains in which to run managed controls. The .NET Framework supports the
download and execution of browser-based controls. The runtime interfaces with the
extensibility mechanism of Microsoft Internet Explorer through a mime filter to
create application domains in which to run the managed controls. By default, one
application domain is created for each Web site.
Distributed Applications
So, both processes and application domains provide boundaries between
applications, affording them necessary protection; as part of this protection,
objects situated on either side of these boundaries are not permitted to
communicate with each other directly.
Several efforts have been made to design frameworks for developing distributed
applications, and most of these are still widely deployed in enterprises. However,
they all have limitations that .NET's distributed frameworks aim to address.
.NET’s Solutions
.NET provides two main pre-built frameworks for designing and implementing
distributed applications: .NET remoting and ASP.NET WebServices. Both offer similar
functionality and in fact, WebServices are built on the .NET remoting infrastructure.
So why would you choose one over the other? The choice depends on the type of
application you want to create:
You should use .NET remoting when both the client and server of the distributed
application are under your control, for example when an application is being
designed for use within a corporate network.
You should use ASP.NET WebServices when you do not have such control, for
example when your application interoperates with an application belonging to a
business partner.
Remoting Architecture
The key question to address is: how can remoting establish cross-application domain
communication when an application domain does not allow direct calls across its
boundary? This is achieved, as with WebServices, via proxies. A proxy is an
abstraction of the actual object required that can be treated as if it were the real
object.
When a client object wishes to create an instance of a server object the remoting
system on the client side instead creates a proxy of the server object. Thus as far as
the client is concerned the server object is in the client's process – the proxy deals
with any complexities arising from the fact that this is not in fact the case.
When the client makes a request of the proxy the remoting system, which is
overseeing the proxy activity, passes this request on to the remoting system on the
server via a communication channel established between the two application
domains.
The remoting system on the server handles the request, passing it on to the server
object for action. The results are passed back to the server remoting system from
the server object, which then passes these on back to the client via the established
communication channel. The remoting system at the client then passes the results
onto the client object via the proxy.
The process of packaging and sending method calls between objects and across
application boundaries via serialisation and deserialisation, as described above, is
known as marshalling. Object marshalling is a key concept, which we shall now
discuss before continuing to other important topics.
Object Marshalling
To facilitate remoting we need remotable objects: objects that can be marshalled
across application domains. There are two types of remotable object: marshal-by-
value (MBV) objects, which are serialised and copied to the client's application
domain, and marshal-by-reference (MBR) objects, which remain on the server while
the client works through a proxy.
MBV can provide faster performance as network roundtrips are reduced, but for
large objects you have the initial overhead of transferring them from server to
client. Further, you are consuming additional client resources, as the object is no
longer running on the server.
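In code the distinction is purely declarative; a minimal sketch (the class names are
illustrative):

```vb
Imports System

' MBV: marked <Serializable()>, so instances are serialised and copied
' to the caller's application domain
<Serializable()> _
Public Class ArticleInfo
    Public Name As String
    Public Url As String
End Class

' MBR: inherits MarshalByRefObject, so the caller receives a proxy and
' every call travels back to the server-side object
Public Class ArticleStore
    Inherits MarshalByRefObject

    Public Function GetLatest() As ArticleInfo
        Return New ArticleInfo()  ' this return value crosses by value
    End Function
End Class
```

Calling GetLatest remotely thus mixes the two: the ArticleStore stays on the
server, while each ArticleInfo it returns is copied to the client.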
Channels
Channels are the devices that facilitate the communication across remoting
boundaries. The .NET remoting framework ensures that before a remote object can
be called, it has registered at least one channel with the remoting system on the
server. A client object must similarly specify a channel to use when communicating
with the remote object.
A channel has two end points. The channel object at the receiving end of a channel
(the server) listens for a particular protocol using a specified port number, whereas
the channel object at the sending end of the channel (the client) sends information
to the receiving end using the protocol and port number specified by the channel
object on the receiving end.
To participate in .NET remoting the channel object at the receiving end must
implement the IChannelReceiver interface while the channel object at the sending
end must implement the IChannelSender interface.
Which protocols can you use for channel communication? The .NET Framework
provides implementations for HTTP (Hypertext Transfer Protocol) and TCP
(Transmission Control Protocol). Tools are also available to let you define your own
channel communication protocol implementation if so desired.
Why would you choose HTTP over TCP, or vice versa? Here are a few pointers:
- HTTP channels can be used over the Internet, because firewalls do not
generally block HTTP communication, whereas TCP would normally require
specific ports to be opened in the firewalls between the communicating servers.
- HTTP is a more bulky protocol than TCP, meaning communications are more
efficient with TCP.
You will no doubt have heard of SOAP (Simple Object Access Protocol), probably in
connection with Web Services, and you will also no doubt know that it is an XML
based protocol for exchanging information between applications. SOAP is an
extensible and modular protocol not bound to a particular transport mechanism such
as HTTP or TCP.
As you might expect from its Web Services background, SOAP is ideal for
communicating between applications that use incompatible architectures. However,
SOAP is very verbose as you might again expect from the use of text based XML. The
equivalent binary messages transfer information much more efficiently. However, the
binary format used by .NET is proprietary and hence can only be understood by
other .NET applications. These considerations should guide your choice of formatter
class.
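The trade-off is easy to demonstrate by serialising the same object with the two
formatter classes the Framework supplies (note that SoapFormatter lives in the
separate System.Runtime.Serialization.Formatters.Soap assembly; the Article class
is illustrative):

```vb
Imports System
Imports System.IO
Imports System.Runtime.Serialization.Formatters.Binary
Imports System.Runtime.Serialization.Formatters.Soap

<Serializable()> _
Public Class Article
    Public Name As String = "An Introduction to Web Services"
End Class

Module FormatterDemo
    Sub Main()
        Dim a As New Article()
        Dim bf As New BinaryFormatter()
        Dim sf As New SoapFormatter()
        Dim binStream As New MemoryStream()
        Dim soapStream As New MemoryStream()
        ' Binary: compact, but readable only by other .NET applications
        bf.Serialize(binStream, a)
        ' SOAP: verbose XML, but readable across platforms
        sf.Serialize(soapStream, a)
        Console.WriteLine("binary: {0} bytes, SOAP: {1} bytes", _
            binStream.Length, soapStream.Length)
    End Sub
End Module
```

The SOAP stream will be noticeably larger, which is the price paid for its
cross-platform readability.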
Recall the distinction between MBV and MBR objects: only MBR objects can be
activated remotely, as MBV objects are transferred to the client. MBR objects can
be server activated or client activated.
SAOs
Server Activated Objects (SAOs) are remote objects whose lifetime is controlled by
the server. The remote object is instantiated/activated when the client calls a
method on the proxy object.
First, SingleCall activation: a new object instance is created for each request and
discarded once the request has been serviced, so no state is maintained between
calls. This is an appropriate choice when:
- the server needs to support a large number of requests for the object
Secondly, Singleton: one object services the requests of all clients. Singletons are
also known as stateful, as they can maintain state across requests. This state,
however, is globally shared between all clients, which generally limits the usefulness
of storing state information. A Singleton's lifetime is determined by the 'lifetime
lease' of the object, a concept we'll return to in the next section. This is an
appropriate choice when:
- state is to be shared between all clients of the object
CAOs
In contrast to SAOs, Client Activated Objects (CAOs) are remote objects whose
lifetime is directly controlled by the client. CAOs are created on the server as soon
as the client requests that the object be created; there is no delay until a method
call is made. Further, a CAO can be created using any of the available constructors
of the class; it is not limited as SAOs are. A CAO instance serves only the client
that created it, and the CAO does not get discarded with each request. Thus a CAO
can maintain state for each client it is serving, but it cannot share common state.
Again, lifetime is determined by 'lifetime leases', to be detailed shortly. CAOs are
appropriate when:
1. the clients need to maintain a private session with the remote object
2. the clients need to have more control over how objects are created and how
long they will exist
Lifetime Leases
A lifetime lease is the period of time an object will remain in memory before its
resources are reclaimed by the Framework. Singleton SAOs and CAOs use lifetime
leases:
• When the object is activated, its CurrentLeaseTime is initialised to the value of
the InitialLeaseTime property (5 mins default).
• Whenever the object receives a call its CurrentLeaseTime is reset to the time
specified by the value of the RenewOnCallTime property (2 mins default).
• The client can also renew the lease for a remote object by directly calling the
ILease.Renew() method.
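The lease defaults can be adjusted by overriding MarshalByRefObject.InitializeLifetimeService in the remotable class. A minimal sketch (the timespans are illustrative, not recommendations):

```vbnet
Imports System
Imports System.Runtime.Remoting.Lifetime

Public Class DbConnect
    Inherits MarshalByRefObject

    ' Customise this object's lifetime lease.
    Public Overrides Function InitializeLifetimeService() As Object
        Dim lease As ILease = CType(MyBase.InitializeLifetimeService(), ILease)
        ' Lease properties can only be changed before the lease is activated.
        If lease.CurrentState = LeaseState.Initial Then
            lease.InitialLeaseTime = TimeSpan.FromMinutes(10)
            lease.RenewOnCallTime = TimeSpan.FromMinutes(5)
        End If
        Return lease
    End Function
End Class
```

Returning Nothing from this override would give the object an infinite lease, so it would live until the host process ends.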
Conclusion
References
.NET SDK
Developing XML WebServices and Server Components with VB.NET and the .NET
Framework
Mike Gunderloy
Que
Introduction
Note that this series of examples is based on those presented in Mike Gunderloy’s
'Developing XML Web Services and Server Components with VB.NET and the .NET
Framework' (see references), a book I can recommend, particularly if studying for
Microsoft exam 70-310.
The first thing we need to do is create a new VS.NET solution in which to create the
projects we'll need. Next add a VB.NET class library project to this solution named
RemotingDb, rename the default class library to DBConnect and add the following
code:
Imports System
Imports System.Data
Imports System.Data.SqlClient

Public Class DbConnect
    Inherits MarshalByRefObject
    ' methods that execute queries against the database go here
End Class
Build the project. The implementation is simple so far: as you can see it's just a
standard class that inherits from the MarshalByRefObject class to provide the
necessary infrastructure. We now have a remotable class but for it to be useful we
now need to connect it to the remoting framework.
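The query-execution members of DbConnect are not reproduced in this extract. A minimal sketch of what such a class might contain; the method name, connection string and Northwind database are illustrative assumptions, not the article's actual code:

```vbnet
Imports System
Imports System.Data
Imports System.Data.SqlClient

' Hypothetical sketch of a remotable query-execution class.
Public Class DbConnect
    Inherits MarshalByRefObject

    Private Const ConnString As String = _
        "Data Source=(local);Initial Catalog=Northwind;Integrated Security=SSPI"

    ' Runs the supplied query and returns the results as a DataSet,
    ' which is MBV and so is serialised back to the client.
    Public Function ExecuteQuery(ByVal query As String) As DataSet
        Dim conn As New SqlConnection(ConnString)
        Dim da As New SqlDataAdapter(query, conn)
        Dim ds As New DataSet()
        da.Fill(ds)   ' Fill opens and closes the connection itself
        Return ds
    End Function
End Class
```

Note that the return type must be serializable (MBV) for the results to travel back across the channel.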
Creating a SAO
1. Create a server channel that listens on a particular port for the incoming
requests from connected application domains.
2. Register this channel with the remoting framework, telling the framework
which requests received via this channel should be directed to a given server
application.
3. Register the remotable class with the remoting framework, telling the
framework which classes this server application can create for remote clients.
I'll introduce implementation of these steps for both the activation modes of an SAO:
SingleCall and Singleton, albeit briefly in the case of the latter. I'll also show how we
link up the client application at the other end of the channel thus providing a
complete working example.
The server process for the example will be a long-running, UI-less process that will
continue to listen for incoming client requests on a channel. We're going to
implement the servers as console applications. Any reader who knows anything
about Windows services might be wondering why we're not implementing such an
activity as a Windows service. Normally you would, or you might use an existing
Windows service such as IIS as the remoting server. The choice here is about
helping you, the reader, understand what's going on: the console window will be
used to display various informative messages that reinforce the concepts involved.
SingleCall SAO
Create a new VS.NET VB.NET project of type console application within the existing
solution. Add a reference to System.Runtime.Remoting. Rename the default vb file to
DBConnectSingleCallServer and add the following code:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB
Module DbConnectSingleCallServer
Sub Main()
'step 1: create a TCP server channel that listens on port 54321
Dim channel As TcpServerChannel = New TcpServerChannel(54321)
'step 2: register this channel with the remoting framework
ChannelServices.RegisterChannel(channel)
'step 3: register the service that publishes DbConnect for remote access in SingleCall mode
RemotingConfiguration.RegisterWellKnownServiceType(GetType(DbConnect), "DbConnect", _
WellKnownObjectMode.SingleCall)
Console.WriteLine("DbConnect server created in SingleCall mode; press Enter to exit.")
Console.ReadLine()
End Sub
End Module
Note that we create a TCP channel on a semi-arbitrary port 54321. This is a number
in the private range so this should be fine on a company network assuming the port
is not being used by another application. For a more widely distributed Internet
application you will need to register the number with the appropriate authority – the
IANA (Internet Assigned Numbers Authority).
The Client
We now have a remotable object and a remoting server. We now need a remoting
client which must perform the following steps:
1. Create and register a compatible client channel that is used by the remoting
framework to send messages to the remoting server.
2. Register the remotable class as a valid type in the client’s application domain.
3. Instantiate the SAO on the server. Remember you can only use the default
constructor with SAOs.
We shall implement the client as a Windows form. Add a Windows form project to
the solution naming it DBConnectClientSAO. Add references to
System.Runtime.Remoting and to the RemotingDB dll just created. Rename the
default form to DBConnectClient. Add a text box to enter the query (txtQuery), a
button to execute the query (btnExecute) and a datagrid (dgResults) to display the
results of the query. Add the following code behind for the form:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB
'step 1: create and register a TCP client channel
ChannelServices.RegisterChannel(New TcpClientChannel())
'step 2: register the remote class as a valid type in the client's application domain
RemotingConfiguration.RegisterWellKnownClientType(GetType(DbConnect), _
"tcp://localhost:54321/DbConnect")
End Sub
End Class
As previously, comments are used in the above as explanatory text.
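The click handler that executes the query is not reproduced in this extract. A hypothetical sketch, assuming the remotable class exposes an ExecuteQuery method returning a DataSet (an assumed name, since the class body is not shown):

```vbnet
' Hypothetical: once the type is registered, New returns a transparent
' proxy and the method call is remoted to the server (step 3).
Private Sub btnExecute_Click(ByVal sender As Object, ByVal e As EventArgs) _
        Handles btnExecute.Click
    Dim db As New DbConnect()                           ' proxy to the remote SAO
    Dim ds As DataSet = db.ExecuteQuery(txtQuery.Text)  ' assumed method name
    dgResults.DataSource = ds.Tables(0)
End Sub
```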
We now have three projects in our solution. Build the complete solution. We also
need to specify multiple startup projects and in what order they are started. This is
achieved via the property pages of the solution (Startup project). Select 'Multiple
Startup Projects' and specify the settings and order as follows:
RemotingDB none
DBConnectSingleCallServer start
DBConnectClientSAO start
Run the client project. You should get a command window popping up telling you the
server object has been created in SingleCall mode, shortly followed by the Windows
client. Enter a query like 'SELECT * from Customers'.
Have you spotted an issue with the client implementation above? The client project
references the server dll and imports its namespace. Why? Because:
• the project won't compile without it, as it needs to resolve the code that uses
the DbConnect type, and
• the client program won't execute without it: to create the proxy object for the
remoting class the CLR must have the metadata that describes it.
Singleton SAO
Creating a CAO
Progressing onto Client Activated Objects, no changes are required to the remotable
class itself but changes are required to the remoting server (i.e. registration of the
remotable class) and to the client.
Both tasks are very similar to what we've already seen with SAOs, so we'll jump into
the code, just highlighting the differences. You'll see that in this code we make use of
one of the benefits of using CAOs – multiple constructors – via an extension of the
implementation. We are going to additionally enable selection of a database within
the SQLServer instance from the client.
Create a new Console application project in the same solution we've been using
throughout this article, renaming the default file to DBConnectCAOServer and add
the following code:
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB
Module DbConnectCAOServer
Sub Main()
End Sub
End Module
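The body of Main is not reproduced in this extract. A sketch of what the CAO registration plausibly looks like, mirroring the SingleCall server; the port and console messages are assumptions consistent with the client registration shown later ("tcp://localhost:54321"):

```vbnet
Imports System.Runtime.Remoting
Imports System.Runtime.Remoting.Channels
Imports System.Runtime.Remoting.Channels.Tcp
Imports RemotingDB

Module DbConnectCAOServer
    Sub Main()
        ' Create and register a TCP server channel on port 54321,
        ' matching the URL the client registers.
        Dim channel As New TcpServerChannel(54321)
        ChannelServices.RegisterChannel(channel)

        ' Register DbConnect as a client-activated (CAO) type: an instance
        ' is created when the client calls New, using any available constructor.
        RemotingConfiguration.RegisterActivatedServiceType(GetType(DbConnect))

        Console.WriteLine("DbConnect server running in CAO mode; press Enter to exit.")
        Console.ReadLine()
    End Sub
End Module
```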
Next we need our new client application. Create a new Windows form project called
DBConnectClientCAO; rename the default form to DBConnectClient. The form UI
elements are as per the previous client application with the addition of a drop down
list box (aka combobox) to allow selection of the target database and a
corresponding button. The former should be named cboDatabases and the latter
named btnSelect with a text property of 'select'. The controls have also been
grouped into areas: Database, Query, Results with groupbox controls named
grpDatabases, grpQuery and grpResults. A screen shot of the form design might
assist:
'register the remote class as a client-activated type in the client's application domain
'by passing the remote class and the server's URL
ChannelServices.RegisterChannel(New TcpClientChannel())
RemotingConfiguration.RegisterActivatedClientType( _
GetType(DbConnect), "tcp://localhost:54321")
End Sub
End Class
Again we need to configure the project properties: ensure the SAO server and its
associated client are not set to start, and that DBConnectCAOServer and
DBConnectClientCAO are set to start without debugging, in that order. Make sure
the startup objects for the individual projects are also set correctly, as in the
SAO example. Start the solution without debugging and make sure all is working.
Conclusion
References
.NET SDK
Developing XML WebServices and Server Components with VB.NET and the .NET
Framework
Mike Gunderloy
Que
Microsoft has brought the power and speed of the Managed Provider for SQL Server
to the Oracle database. In this article we see how to use it directly, and to call a
stored procedure with a join.
The SqlClient Managed Provider is clearly faster than the OleDb connection class for
Sql Server. Until recently, if you used Oracle you were stuck with OleDb as the best
way to connect. Now, however, Microsoft has made available a Managed Provider for
Oracle databases as well. (Oracle just recently released their own version - perhaps
in a later article we can take a look at their class also.) Since the Managed Provider
for Oracle was not a part of the original framework, you must download it from the
following address:
http://www.microsoft.com/downloads/details.aspx?FamilyID=4f55d429-17dc-45ea-
bfb3-076d1c052524&DisplayLang=en.
Once you have downloaded and installed the provider (in the form of a DLL) you
must add a reference for it in Visual Studio .Net. You do this by right-clicking on
References in the Solution Explorer. Once you do that, click on Add Reference. That
will bring up a dialog box with three tabs. The .Net tab should be selected by default.
Select it if not. Scroll down the list of Component Names until you come to
"System.Data.OracleClient.Dll". Click on the file name to highlight it and then click on
the "Select" button at the upper right of the form. Then click the "OK" button at the
bottom of the form. You should now see the class listed in your References in
Solution Explorer. That should be all you need to do - but maybe not! On some
machines (including mine [argh!]) the Oracle managed provider classes would show
up in Intellisense in code-behind, but when I attempted to run the program I got a
configuration error at the first line where the OracleClient was referenced:
"Namespace or type 'OracleClient' for the Imports 'System.Data.OracleClient' cannot
be found." Intellisense knew about the class, but somehow the runtime did not. After
a lot of gnashing of teeth I opened up the References tree in Solution Explorer,
right-clicked on the "System.Data.OracleClient" reference and selected Properties. In
the Properties window one of the properties is "Copy Local", True|False. I changed it
to True. That resulted in a copy of the System.Data.OracleClient.DLL being placed in
my bin directory. After that my problems went away. Since then I've heard from a
couple of people who have had to do the same thing, although most have not.
Apparently there is some configuration issue that I've yet to figure out.
Now to some code. In this first example, we are just going to connect to Oracle (the
Scott / Tiger database that ships with Oracle) and bring back the rows in the emp
table. We will use a very basic (and ugly) datagrid here. The .aspx file code is below.
<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-
c6475cdb7ee7/OracleDirect.aspx.vb" Inherits="OracleDirect" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>OracleDirect</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:DataGrid id="dtgOracle" runat="server" cellpadding="4" />
</form>
</body>
</html>
Now for the code-behind file, where we see the managed provider class in use.
Actually there is no magic to it. If you have been using OleDb to connect to Oracle all
along, all you do is change the OleDb prefix on object names to an Oracle prefix: in
other words, OleDbConnection becomes OracleConnection and OleDbDataAdapter
becomes OracleDataAdapter. If you are coming from Sql Server and have been using
its managed provider you just change "Sql" prefixes to "Oracle" prefixes. That's all
there is to it. Keep in mind that in the connection string you will have to use the
correct Data Source for your setup. I doubt you have one named "kilgo"!
Imports System
Imports System.Data
Imports System.Data.OracleClient
End Sub
End Class
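The body of the code-behind is not reproduced in this extract. A sketch of what it plausibly contains, filling a DataSet from the emp table and binding it to dtgOracle; the connection string (including the author's "kilgo" Data Source) is an assumption you should adapt to your own setup:

```vbnet
Imports System
Imports System.Data
Imports System.Data.OracleClient

Public Class OracleDirect
    Inherits System.Web.UI.Page
    Protected dtgOracle As System.Web.UI.WebControls.DataGrid

    Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) _
            Handles MyBase.Load
        ' scott/tiger is the Oracle sample schema; substitute your Data Source.
        Dim conn As New OracleConnection( _
            "Data Source=kilgo;User ID=scott;Password=tiger")
        Dim da As New OracleDataAdapter("SELECT * FROM emp", conn)
        Dim ds As New DataSet()
        da.Fill(ds, "emp")              ' Fill opens and closes the connection
        dtgOracle.DataSource = ds.Tables("emp")
        dtgOracle.DataBind()
    End Sub
End Class
```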
In this next example we will return a resultset using a stored procedure which
generates a ref cursor. We will also pass in a parameter so that we can see how that
works also. The resultset will be the result of a join with the scott/tiger dept table.
The above may seem to be making things a little complicated, but actually it is pretty
simple if you just follow the code carefully. We might as well show as many
techniques as we can while we are at it. The code that follows will include a simple
.aspx page, a code-behind page, and a pl/sql package containing the stored
procedure. First the .aspx page (with a little prettier grid this time).
<%@ Page Language="vb" Src="/Portals/57ad7180-c5e7-49f5-b282-
c6475cdb7ee7/OracleStoredProc.aspx.vb" Inherits="OracleStoredProc" %>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>OracleStoredProc</title>
<meta name="GENERATOR" content="Microsoft Visual Studio.NET 7.0">
<meta name="CODE_LANGUAGE" content="Visual Basic 7.0">
<meta name=vs_defaultClientScript content="JavaScript">
<meta name=vs_targetSchema
content="http://schemas.microsoft.com/intellisense/ie5">
</head>
<body MS_POSITIONING="GridLayout">
<form id="Form1" method="post" runat="server">
<asp:DataGrid id="dtgOracle" style="Z-INDEX: 102; LEFT: 185px; POSITION:
absolute; TOP: 37px"
runat="server"
BorderColor="#CCCCCC"
BorderStyle="None"
BorderWidth="1px"
BackColor="White"
CellPadding="3">
<ItemStyle ForeColor="#000066" />
<HeaderStyle Font-Bold="True" ForeColor="White" BackColor="#006699" />
</asp:DataGrid>
</form>
</body>
</html>
Now for the code-behind file. Things are a little more involved here than in the first
program, but the same process holds true. Substitute "Oracle" for "Sql" or "OleDb" in
the database handling object names and you are home. Of course, again, you will
have to use a Data Source correct for your setup rather than "kilgo" that I used in
my connection string. Also, I normally put my connection strings in Web.config but
have left it in the code here for demonstration purposes.
End Class
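The code-behind body is not reproduced in this extract. A sketch of the stored procedure call, showing the two parameters described below: the input department number and the output ref cursor. The connection string and table name are assumptions; the procedure and parameter names match the package shown next:

```vbnet
Imports System
Imports System.Data
Imports System.Data.OracleClient

Public Class OracleStoredProc
    Inherits System.Web.UI.Page
    Protected dtgOracle As System.Web.UI.WebControls.DataGrid

    Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) _
            Handles MyBase.Load
        Dim conn As New OracleConnection( _
            "Data Source=kilgo;User ID=scott;Password=tiger")
        Dim cmd As New OracleCommand("EMPLOYEE.GET_EMP_INFO", conn)
        cmd.CommandType = CommandType.StoredProcedure

        ' Input parameter: the department number to filter on.
        cmd.Parameters.Add("pDeptNo", OracleType.Number).Value = 10

        ' The ref cursor is returned as an output parameter; the data
        ' adapter reads the resultset from it.
        cmd.Parameters.Add("pCursor", OracleType.Cursor).Direction = _
            ParameterDirection.Output

        Dim da As New OracleDataAdapter(cmd)
        Dim ds As New DataSet()
        da.Fill(ds, "emp_info")
        dtgOracle.DataSource = ds.Tables("emp_info")
        dtgOracle.DataBind()
    End Sub
End Class
```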
Following are the package body and spec containing the stored procedure. I'll not go
into much detail explaining the code. If you are reading this you probably already
have some experience with Oracle and don't need my explanation.
Package Body:
PACKAGE BODY EMPLOYEE
IS
PROCEDURE GET_EMP_INFO ( pDeptNo IN Number, pCursor OUT outputCursor )
IS
BEGIN
OPEN pCursor FOR
SELECT a.empno, a.ename, a.job, a.mgr, a.hiredate, a.sal, a.comm, a.deptno,
b.dname, b.loc
FROM emp a, dept b
WHERE a.deptno = pDeptNo
AND a.deptno = b.deptno;
END GET_EMP_INFO;
END EMPLOYEE;
Package Spec:
PACKAGE EMPLOYEE
IS
TYPE outputCursor IS REF CURSOR;
PROCEDURE GET_EMP_INFO ( pDeptNo IN Number, pCursor OUT outputCursor );
END EMPLOYEE;
Abstract: How to accelerate ASP.NET web site access for browsers via caching.
Introduction
In this article we’re going to take a look at the features available to the ASP.NET
programmer that enable performance improvement via the use of caching. Caching
is the keeping of frequently used data in memory for ready access by your ASP.NET
application. As such caching is a resource trade-off between those needed to obtain
the data and those needed to store the data. You should be aware that there is this
trade-off - there is little point caching data that is going to be requested infrequently
as this is simply wasteful of memory and may have a negative impact on your
system performance. However, on the other hand, if there is data that is required
every time a user visits the home page of your application and this data is only going
to change once a day, then there are big resource savings to be made by storing this
data in memory rather than retrieving it every time a user hits that
homepage. This holds even though the DBMS is likely doing its own caching.
Typically you will want to minimise requests to your data store
as, again typically, these will be the most resource hungry operations associated with
your application.
In ASP.NET there are two areas where caching techniques arise: Output Caching of
rendered responses, and data caching via the Cache class.
We'll return to the Cache class later in the article, but let's focus on Output Caching to start with,
and Page Output Caching in particular.
You can either declaratively use the Output Caching support available to web forms/pages, page
fragments and WebServices as part of their implementation or you can cache programmatically
using the HttpCachePolicy class exposed through the HttpResponse.Cache property available
within the .NET Framework. I'll not look at WebServices options in any detail here, only
mentioning that the WebMethod attribute that is assigned to methods to enable them as
Webservices has a CacheDuration attribute which the programmer may specify.
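For completeness, a web method declares its cache period via that property like this (a minimal sketch; the class, method and body are illustrative):

```vbnet
Imports System.Web.Services

Public Class QuoteService
    Inherits WebService

    ' Output is cached for 60 seconds per distinct parameter combination.
    <WebMethod(CacheDuration:=60)> _
    Public Function GetQuote(ByVal symbol As String) As String
        ' illustrative body only
        Return symbol & " @ " & DateTime.Now.ToString()
    End Function
End Class
```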
Returning to the programmatic approach, when would you choose this second
method over the first? Firstly, as it's programmatic, you would use this option when the cache
settings need to be set dynamically. Secondly, you have more flexibility in option setting with
HttpCachePolicy as exposed by the HttpResponse.Cache property.
output_caching_directive_example.aspx
<%@ OutputCache Duration="30" VaryByParam="number" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<a href="output_caching_directive_example.aspx?number=1">1</a>-
<a href="output_caching_directive_example.aspx?number=2">2</a>-
<a href="output_caching_directive_example.aspx?number=3">3</a>-
<a href="output_caching_directive_example.aspx?number=4">4</a>-
<a href="output_caching_directive_example.aspx?number=5">5</a>-
<a href="output_caching_directive_example.aspx?number=6">6</a>-
<a href="output_caching_directive_example.aspx?number=7">7</a>-
<a href="output_caching_directive_example.aspx?number=8">8</a>-
<a href="output_caching_directive_example.aspx?number=9">9</a>
<p>
<asp:Label id="lblTimestamp" runat="server" maintainstate="false" />
<p>
<asp:DataGrid id="dgProducts" runat="server" maintainstate="false" />
</body>
</html>
dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
Page.DataBind()
End If
End Sub
</script>
Thus, if you click through some of the links to the parameterised pages and then
return to them, you will see the timestamp remains the same for each parameter
setting until the 30 seconds have elapsed and the data is loaded again. Further,
caching is performed per parameter value, as indicated by the different timestamps.
The full specification of the OutputCache directive is:
<%@ OutputCache Duration="#ofseconds"
Location="Any | Client | Downstream | Server | None"
VaryByControl="controlname"
VaryByCustom="browser | customstring"
VaryByHeader="headers"
VaryByParam="parametername" %>
Examining these attributes in turn:
Duration
This is the time, in seconds, that the page or user control is cached. Setting this attribute on a
page or user control establishes an expiration policy for HTTP responses from the object and will
automatically cache the page or user control output. Note that this attribute is required. If you do
not include it, a parser error occurs.
Location
This allows control of from where the client receives the cached document and should be one of
the OutputCacheLocation enumeration values. The default is Any. This attribute is not supported
for @OutputCache directives included in user controls. The enumeration values are:
Any: the output cache can be located on the browser client (where the request originated), on a
proxy server (or any other server) participating in the request, or on the server where the request
was processed.
Client: the output cache is located on the browser client where the request originated.
Downstream: the output cache can be stored in any HTTP 1.1 cache-capable devices other than
the origin server. This includes proxy servers and the client that made the request.
None: the output cache is disabled for the requested page.
Server: the output cache is located on the Web server where the request was processed.
VaryByControl
A semicolon-separated list of strings used to vary the output cache. These strings represent fully
qualified names of properties on a user control. When this attribute is used for a user control, the
user control output is varied to the cache for each specified user control property. Note that this
attribute is required in a user control @OutputCache directive unless you have included a
VaryByParam attribute. This attribute is not supported for @OutputCache directives in ASP.NET
pages.
VaryByCustom
Any text that represents custom output caching requirements. If this attribute is given a value of
browser, the cache is varied by browser name and major version information. If a custom string is
entered, you must override the HttpApplication.GetVaryByCustomString method in your
application's Global.asax file. For example, if you wanted to vary caching by platform you would
set the custom string to be 'Platform' and override GetVaryByCustomString to return the platform
used by the requester via HttpContext.Request.Browser.Platform.
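To illustrate the 'Platform' example just described, the Global.asax override might look like this (a sketch; the custom string name is our choice and must match the directive's VaryByCustom value):

```vbnet
' In Global.asax: used when a page specifies VaryByCustom="Platform".
' Each distinct return value gets its own cached copy of the page.
Public Overrides Function GetVaryByCustomString( _
        ByVal context As HttpContext, ByVal custom As String) As String
    If custom = "Platform" Then
        ' One cache entry per requesting platform (e.g. WinNT, Unix).
        Return context.Request.Browser.Platform
    End If
    Return MyBase.GetVaryByCustomString(context, custom)
End Function
```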
VaryByHeader
A semicolon-separated list of HTTP headers used to vary the output cache. When this attribute is
set to multiple headers, the output cache contains a different version of the requested document
for each specified header. Example headers you might use are: Accept-Charset, Accept-
Language and User-Agent but I suggest you consider the full list of header options and consider
which might be suitable options for your particular application. Note that setting the
VaryByHeader attribute enables caching items in all HTTP/1.1 caches, not just the ASP.NET
cache. This attribute is not supported for @OutputCache directives in user controls.
VaryByParam
As already introduced this is a semicolon-separated list of strings used to vary the output cache.
By default, these strings correspond to a query string value sent with GET method attributes, or a
parameter sent using the POST method. When this attribute is set to multiple parameters, the
output cache contains a different version of the requested document for each specified
parameter. Possible values include none, *, and any valid query string or POST parameter name.
Note that this attribute is required when you output cache ASP.NET pages. It is required for user
controls as well unless you have included a VaryByControl attribute in the control's
@OutputCache directive. A parser error occurs if you fail to include it. If you do not want to
specify a parameter to vary cached content, set the value to none. If you want to vary the output
cache by all parameter values, set the attribute to *.
Response.Cache
As stated earlier @OutputCache is a higher-level wrapper around the
HttpCachePolicy class exposed via the HttpResponse class. Thus all the functionality
of the last section is also available via HttpResponse.Cache. For example, our
previous code example can be translated as follows to deliver the same functionality:
output_caching_programmatic_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<a href="output_caching_programmatic_example.aspx?number=1">1</a>-
<a href="output_caching_programmatic_example.aspx?number=2">2</a>-
<a href="output_caching_programmatic_example.aspx?number=3">3</a>-
<a href="output_caching_programmatic_example.aspx?number=4">4</a>-
<a href="output_caching_programmatic_example.aspx?number=5">5</a>-
<a href="output_caching_programmatic_example.aspx?number=6">6</a>-
<a href="output_caching_programmatic_example.aspx?number=7">7</a>-
<a href="output_caching_programmatic_example.aspx?number=8">8</a>-
<a href="output_caching_programmatic_example.aspx?number=9">9</a>
<p>
<asp:Label id="lblTimestamp" runat="server" maintainstate="false" />
<p>
</body>
</html>
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.VaryByParams("number") = True
lblTimestamp.Text = DateTime.Now.TimeOfDay.ToString()
dgProducts.DataSource = SqlCmd.ExecuteReader(CommandBehavior.CloseConnection)
Page.DataBind()
End If
End Sub
</script>
The three lines of importance are:
Response.Cache.SetExpires(DateTime.Now.AddSeconds(30))
Response.Cache.SetCacheability(HttpCacheability.Public)
Response.Cache.VaryByParams("number") = True
It is only the third line you've not seen before. This is equivalent to VaryByParam="number" in the
directive example. Thus you can see that the various options of the OutputCache directive map to
properties and methods exposed by Response.Cache. Apart from the method of access, the
pertinent information is, unsurprisingly, very similar to that presented above for the directive
version.
Fragment Caching
Fragment caching is really a minor variation of page caching and almost all of what we’ve
described already is relevant. The ‘fragment’ referred to is actually one or more user controls
included on a parent web form. Each user control can have different cache durations. You simply
specify the @OutputCache for the user controls and they will be cached as per those
specifications. Note that any caching in the parent web form overrides any specified in the
included user controls. So, for example, if the page is set to 30 seconds and the user control to
10, the user control cache will not be refreshed for 30 seconds.
It should be noted that of the standard options only the VaryByParam attribute is valid for
controlling caching of controls. An additional attribute is available within user controls:
VaryByControl, as introduced above, allowing multiple representations of a user control
dependent on one or more of its exposed properties. So, extending our example above, if we
implemented a control that exposed the SQL query used to generate the datareader which is
bound to the datagrid we could cache on the basis of the property which is the SQL string. Thus
we can create powerful controls with effective caching of the data presented.
Public Properties
Public Methods
Add: adds the specified item to the Cache object with dependencies, expiration and priority
policies, and a delegate you can use to notify your application when the inserted item is removed
from the Cache.
GetEnumerator: retrieves a dictionary enumerator used to iterate through the key settings and
their values contained in the cache.
GetHashCode: serves as a hash function for a particular type, suitable for use in hashing
algorithms and data structures like a hash table.
Insert: inserts an item into the Cache object. Use one of the versions of this method to overwrite
an existing Cache item with the same key parameter.
Remove: removes the specified item from the application's Cache object.
Insert
Data is inserted into the cache with the Insert method of the cache object.
Cache.Insert has 4 overloaded methods with the following signatures:
Overloads Public Sub Insert(String, Object)
Inserts an item into the Cache object with a cache key to reference its location and using default
values provided by the CacheItemPriority enumeration.
Overloads Public Sub Insert(String, Object, CacheDependency)
Inserts an object into the Cache that has file or key dependencies.
Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan)
Inserts an object into the Cache with dependencies and expiration policies.
Overloads Public Sub Insert(String, Object, CacheDependency, DateTime, TimeSpan,
CacheItemPriority, CacheItemRemovedCallback)
Inserts an object into the Cache object with dependencies, expiration and priority policies, and a
delegate you can use to notify your application when the inserted item is removed from the
Cache.
Summary of parameters:
String: the name reference to the object to be cached
Object: the object to be cached
CacheDependency: file or cache key dependencies for the new item
DateTime: indicates absolute expiration
TimeSpan: sliding expiration – the object is removed if more than this timespan has passed since it was last accessed
CacheItemPriorities: an enumeration that decides the order of item removal under heavy load
CacheItemPriorityDecay: an enumeration; items with a fast decay value are removed if not used frequently
CacheItemRemovedCallback: a delegate that is called when an item is removed from the cache
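A typical call using the absolute-expiration overload might look like this (the key name and dataset are illustrative):

```vbnet
' Illustrative: cache a DataSet under the key "ProductsData" for 10
' minutes from now, with no sliding expiration and no dependencies.
Cache.Insert("ProductsData", dsProducts, Nothing, _
    DateTime.Now.AddMinutes(10), _
    System.Web.Caching.Cache.NoSlidingExpiration)
```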
Picking out one of these options for further mention: CacheDependency. This attribute allows the
validity of the cache to be dependent on a file or another cache item. If the target of such a
dependency changes, this can be detected. Consider the following scenario: an application reads
data from an XML file that is periodically updated. The application processes the data in the file
and represents this via an aspx page. Further, the application caches that data and inserts a
dependency on the file from which the data was read. The key aspect is that when the file is
updated .NET recognizes the fact as it is monitoring this file. The programmer can interrogate the
CacheDependency object to check for any updates and handle the situation accordingly in code.
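The XML-file scenario above might be coded as follows; the file path, key and helper function are hypothetical:

```vbnet
Imports System.Web.Caching

' Illustrative: cache the parsed data with a dependency on the XML file
' it came from, so the cache entry is invalidated when the file changes.
Dim dep As New CacheDependency(Server.MapPath("data/prices.xml"))
Cache.Insert("PricesData", LoadPricesFromXml(), dep)
' LoadPricesFromXml() is a hypothetical helper that reads and parses the file.
```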
Remove
Other methods of the cache class expose a few less parameters than Insert.
Cache.Remove expects a single parameter – the string reference value to the Cache
object you want to remove.
Cache.Remove(“MyCacheItem”)
Get
You can either use the Get method to obtain an item from the cache or use the Item
property. Further, as Item is the default property, you do not have to
reference it explicitly. Thus the latter three lines below are equivalent:
Cache.Insert("MyCacheItem", Object)
Dim obj As Object
obj = Cache.Get("MyCacheItem")
obj = Cache.Item("MyCacheItem")
obj = Cache("MyCacheItem")
GetEnumerator
Returns a dictionary (key/value pair) enumerator enabling you to iterate through
the collection, adding and removing items as you go if so inclined. You would use it
as follows:
Dim myEnumerator As IDictionaryEnumerator
myEnumerator = Cache.GetEnumerator()
While myEnumerator.MoveNext
Response.Write(myEnumerator.Key.ToString() & "<br>")
'do other manipulation here if so desired
End While
An Example
To finish off with an example, we’ll cache a subset of the data from our earlier
examples using a cache object.
cache_class_example.aspx
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head></head>
<body>
<asp:datagrid id="dgProducts" runat="server" maintainstate="false" />
</body>
</html>
else
Response.Write("Cached:")
dgProducts.DataSource = dsProductsCached
end if
DataBind()
Wrapping matters up
A final few pointers for using caching, largely reinforcing concepts introduced earlier,
with the latter two applying to the use of the cache class:
• Don't cache everything: caching uses memory resources - could these be
better utilized elsewhere? You need to trade off whether to regenerate items
or store them in memory.