[Step: 1 of 3]
0. Introduction (7 pages)..................................................................................................2
1. Application Design Concepts and Principles (19 pages)..............................................9
2. Common Architectures (28 pages).............................................................................28
3. Integration and Messaging (17 pages).......................................................................56
4. Business Tier Technologies (32 pages).....................................................................73
5. Web Tier Technologies (15 pages)..........................................................................105
6. Applicability of Java EE Technology (19 pages).......................................................120
7. Patterns (113 pages)................................................................................................139
8. Security (30 pages)..................................................................................................252
9. Bibliography (1 page)..............................................................................................282
Note: This document is derived from material published by Mikalai Zaikin, last updated on
16-1-2010, and supplemented from the cited references and from the web by Adel
Almoshaigah (v-adel.al-moshaigah@riyadbank.com).
0. Introduction
Java Certification
Architect Exam
To achieve this certification, candidates must successfully complete three elements:
1) a knowledge-based multiple-choice exam,
2) an assignment, and
3) an essay exam.
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5 (Step 1 of
3) (CX-310-052)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5:
Assignment (Step 2 of 3) (CX-310-301A)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5: Essay
(Step 3 of 3) (CX-310-062)
- Sun Certified Enterprise Architect for the Java Platform, Enterprise Edition 5:
Assignment Resubmission (CX-310-301R)
- Upgrade Exam: Sun Certified Enterprise Architect for the Java Platform, Enterprise
Edition 5 (CX-310-053)
To define a software product, you will have the following two types of requirements:
1) Business requirements
2) Quality-of-service requirements (also known as nonfunctional requirements: the
observable system qualities and the constraints on them)
system architecture (components and the interfaces among these components) and the
low-level design of each component, spelled out in algorithms and data structures
regardless of the programming language used. The key difference between the terms
architecture and design is the level of detail. Architecture operates at a high level of
abstraction with less detail. Design operates at a low level of abstraction, obviously
with more of an eye to the details of implementation.
In the implementation phase, the coding is done.
In the verification phase, various kinds of testing, such as requirement, interface, unit,
and regression (sanity) tests, are performed.
In the maintenance phase, the product is deployed into operation and maintained until
its retirement.
The quality-of-service (service-level) requirements include:
1) Performance,
2) Scalability,
3) Reliability,
4) Availability,
5) Extensibility,
6) Maintainability,
7) Manageability, and
8) Security.
An architect has to make tradeoffs among these requirements. For example, if the most
important service-level requirement is the performance of the system, you may have to
sacrifice the maintainability and extensibility of the system to ensure you meet the
performance quality of service.
Performance
The performance requirement is usually measured in terms of response time for a given
screen transaction per user. In addition to response time, performance can be measured in
transaction throughput, which is the number of transactions in a given time period,
usually one second. For example, you could have a performance requirement of no more
than 3 seconds for each screen form, or a transaction throughput of one hundred
transactions per second. Regardless of the measurement, you need to create an
architecture that allows the designers and developers to complete the system while still
meeting the performance requirement.
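The two measurements above are related. A sketch using Little's law (N = X × R, a standard queueing result, not something stated in this guide) shows how the example's throughput and response-time figures imply a concurrency level; the class and method names are illustrative.

```java
// Little's law (N = X * R): throughput X (transactions/second) multiplied by
// response time R (seconds) gives N, the average number of requests that are
// inside the system concurrently.
class LittlesLaw {
    /** Concurrent requests implied by a given throughput and response time. */
    static double concurrentRequests(double throughputPerSec, double responseTimeSec) {
        return throughputPerSec * responseTimeSec;
    }
}
```

At the example's figures of one hundred transactions per second and a 3-second response time, roughly 300 requests are in flight at once, a quick sanity check when sizing thread pools or connection pools.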
Scalability
Scalability is the ability to support the required quality of service as the system load
increases, without changing the system.
Reliability
Reliability is the ability to ensure that the product is trustworthy and dependable for all of
its transactions. For a system to be reliable, the system load, for example, should not have
any effect on the correctness of the system's transactions.
Availability
Availability is the ability to ensure the system is always accessible. The degree to which a
system is accessible can be termed as 24×7 to describe total availability. This aspect of a
system is often coupled with performance. The availability of a system is improved by
setting up an environment of redundant components and failover.
Extensibility
Extensibility is the ability to add or modify functionality without impacting existing
system functionality. Extensibility cannot be evaluated until there is a need to modify
or add new functionality. To ensure extensibility, the product design needs to have low
coupling and encapsulation.
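As a minimal Java sketch of how low coupling makes extension possible (the TaxPolicy, FlatTax, LuxuryTax, and Invoice names are hypothetical, not from the source): callers depend only on an interface, so new functionality arrives as a new class without touching existing code.

```java
// Extensibility sketch: Invoice depends only on the TaxPolicy interface,
// so a new policy can be added without impacting existing functionality.
interface TaxPolicy {
    double taxFor(double amount);
}

class FlatTax implements TaxPolicy {
    public double taxFor(double amount) { return amount * 0.10; }
}

// Added later; nothing above needs to change.
class LuxuryTax implements TaxPolicy {
    public double taxFor(double amount) { return amount * 0.25; }
}

class Invoice {
    // Low coupling: only the interface is referenced, never a concrete class.
    static double totalWithTax(double amount, TaxPolicy policy) {
        return amount + policy.taxFor(amount);
    }
}
```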
Maintainability
Maintainability is the ability to correct flaws in the existing functionality without
impacting other components of the system. Characteristics of the products that ensure
maintainability are: low coupling, modularity and documentation.
Manageability
Manageability is the ability to ensure that the system has continued health with respect to
other quality of services requirements such as performance, scalability, reliability,
availability and security. Manageability deals with system monitoring of the QoS
requirements and the ability to change system configuration to improve the QoS
dynamically without changing the system.
Security
The classes of threats include accidental threats, intentional threats, passive threats
(those that do not change the state of the system but may include loss of confidentiality
but not of integrity or availability), and active threats (those that change the state of the
system, including changes to data and to software).
A security policy is an enterprise’s statement defining the rules that regulate how it will
provide security, handle intrusions, and recover from damage caused by security
breaches. Based on a risk analysis and cost considerations, such policies are most
effective when users understand them and agree to abide by them.
Security services are provided by a system for implementing the security policy of an
organization. A standard set of such services includes the following:
■ Identification and authentication: Unique identification and verification of users via
certification servers and global authentication services (single sign-on services)
■ Access control and authorization: Rights and permissions that control how users can
access resources
■ Accountability and auditing: Services for logging activities on network systems and
linking them to specific user accounts or sources of attacks
■ Data confidentiality: Services to prevent unauthorized data disclosure
■ Data integrity and recovery: Methods for protecting resources against corruption and
unauthorized modification, for example, mechanisms using checksums and encryption
technologies
■ Data exchange: Services that secure data transmissions over communication channels
■ Object reuse: Services that provide multiple users secure access to individual resources
■ Non-repudiation of origin and delivery: Services to protect against attempts by the
sender to falsely deny sending the data, or subsequent attempts by the recipient to falsely
deny receiving the data
■ Reliability: Methods for ensuring that systems and resources are available and protected
against failure
What is a software architecture?
Architecture is the fundamental organization of a system embodied in its components,
their relationships to each other and to the environment, and the principles guiding its
design and evolution. [IEEE 1471]
An architecture embodies decisions based on rationale
An important aspect of an architecture is not just the end result, the architecture itself, but
the rationale for why it is the way it is. Thus, an important consideration is to ensure that
you document the decisions that have led to this architecture and the rationale for those
decisions.
An architecture balances stakeholder needs
An architecture is created to ultimately address a set of stakeholder needs. However, it is
often not possible to meet all of the needs expressed. For example, a stakeholder may ask
for some functionality within a specified timeframe, but these two needs (functionality
and timeframe) are mutually exclusive. Either the scope can be reduced in order to meet
the schedule or all of the functionality can be provided within an extended timeframe.
Ref. • [JAVA_DESIGN] Chapter 1.
• Three Sources of a Solid Object-Oriented Design by Gene Shadrin
• Object Oriented Basic Concepts and Advantages
• Advantages of an Object-Oriented Approach (for new programmers)
• [SCEA-051]
The most basic OO principles include encapsulation, inheritance, and polymorphism.
Along with abstraction, association, aggregation, and composition, they form the
foundation of the OO approach. These basic principles rest on a concept of objects that
depicts real-world entities such as, say, books, customers, invoices, or birds.
Inheritance is a relationship that defines one entity in terms of another. It designates the
ability to create new classes (types) that contain all the methods and properties of
another class plus additional methods and properties. Inheritance combines interface
inheritance and implementation inheritance. In this case, interface inheritance describes
a new interface in terms of one or more existing interfaces, while implementation
inheritance defines a new implementation in terms of one or more existing
implementations. Both interface inheritance and implementation inheritance are used to
extend the behavior of a base entity.
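The two kinds of inheritance can be sketched in a few lines of Java (all type names here are illustrative): SimpleStack describes a new interface in terms of an existing one, while ArrayStack defines its implementation in terms of an existing class.

```java
// Interface inheritance: SimpleStack is described in terms of an existing interface.
interface SimpleCollection { int size(); }
interface SimpleStack extends SimpleCollection { void push(int v); int pop(); }

// Implementation inheritance: ArrayStack reuses code from an existing base class.
class Counter {
    protected int count;
    public int size() { return count; }   // inherited as-is by subclasses
}

class ArrayStack extends Counter implements SimpleStack {
    private final int[] data = new int[16];
    public void push(int v) { data[count++] = v; }  // extends the base behavior
    public int pop() { return data[--count]; }
}
```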
The Single Responsibility Principle specifies that a class should have only one reason to
change. It's also known as the cohesion principle and dictates that a class should have
only one responsibility, i.e., it should avoid combining responsibilities that change
for different reasons.
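A minimal Java illustration of the principle, with hypothetical class names: formatting and persistence are separate reasons to change, so they are kept out of the Report class itself.

```java
// SRP sketch: each class has exactly one reason to change.
class Report {
    final String title;
    Report(String title) { this.title = title; }
}

class ReportFormatter {            // changes only when presentation rules change
    String toHtml(Report r) { return "<h1>" + r.title + "</h1>"; }
}

class ReportRepository {           // changes only when storage rules change
    private final java.util.Map<String, Report> store = new java.util.HashMap<>();
    void save(Report r) { store.put(r.title, r); }
    Report find(String title) { return store.get(title); }
}
```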
The Open/Closed Principle dictates that software entities should be open to extension
but closed to modification. Modules should be written so that they can be extended
without being modified. In other words, developers should be able to change what the
modules do without changing the modules' source code.
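A common way to sketch this in Java (the shape classes are a textbook illustration, not from the source): AreaCalculator is closed to modification, yet the module is extended simply by adding a new Shape subclass.

```java
// OCP sketch: adding Circle extends the module; AreaCalculator never changes.
abstract class Shape { abstract double area(); }

class Rectangle extends Shape {
    final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    double area() { return w * h; }
}

class Circle extends Shape {       // new variant: no existing code was modified
    final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class AreaCalculator {
    // Closed to modification: works for any present or future Shape.
    static double total(Shape... shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
}
```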
The Liskov Substitution Principle says that subclasses should be able to substitute for
their base classes, meaning that clients that use references to base classes must be
able to use the objects of derived classes without knowing them. This principle is
essentially a generalization of a "design by contract" approach that specifies that a
polymorphic method of a subclass can only replace its pre-condition by a weaker one
and its post-condition by a stronger one.
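A small hedged sketch of the principle (all names are illustrative): clients hold FlyingBird references and rely only on the base contract, and a penguin is modeled as a separate type rather than as a subclass that would have to break that contract.

```java
// LSP sketch: every FlyingBird honors the base contract (a positive altitude),
// so clients can use any subclass through the base reference without knowing it.
abstract class Bird { abstract String name(); }

abstract class FlyingBird extends Bird {
    int cruiseAltitude() { return 100; }   // contract: always positive
}

class Sparrow extends FlyingBird {
    String name() { return "sparrow"; }
}

// A penguin cannot fly, so it is not forced under FlyingBird's contract.
class Penguin extends Bird {
    String name() { return "penguin"; }
}

class Airspace {
    // Client code: written against the base class only.
    static boolean airborne(FlyingBird b) { return b.cruiseAltitude() > 0; }
}
```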
The Interface Segregation Principle says that clients shouldn't depend on the methods
they don't use. It means multiple client-specific interfaces are better than one
general-purpose interface.
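A minimal Java sketch with hypothetical names: the print-only client depends on the narrow Printer role, not on a fat interface that would also drag in scanning.

```java
// ISP sketch: two small client-specific interfaces instead of one fat one.
interface Printer { String print(String doc); }
interface DocScanner { String scan(); }

class OfficeMachine implements Printer, DocScanner {
    public String print(String doc) { return "printed:" + doc; }
    public String scan() { return "scanned"; }
}

class PrintClient {
    // Depends only on the methods it actually uses.
    static String run(Printer p) { return p.print("report"); }
}
```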
Another powerful feature of OOP is the concept of inheritance (derived classes in C++),
meaning the derivation of a similar or related object (the derived object) from a more
general base object. The derived class inherits the properties of its base class and also
adds its own data and routines. The concept above is known as single inheritance; it is
also possible to derive a class from several base classes, which is known as multiple
inheritance and is not allowed in Java.
Polymorphism means the sending of a message to an object without concern about how
the software is going to accomplish the task; furthermore, it means that the task can
be executed in completely different ways depending on the object that receives the
message. When the decision as to which actions are going to be executed is made at
run time, the polymorphism is referred to as late binding. If it is made at compile time,
it is known as early binding.
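Late binding can be sketched in a few lines of Java (names are illustrative): the sender calls through an Animal reference, and the method actually executed is chosen at run time from the receiver's concrete class.

```java
// Late binding sketch: speak() is dispatched at run time based on the
// object's actual class, not the declared reference type.
abstract class Animal { abstract String speak(); }

class Dog extends Animal { String speak() { return "woof"; } }
class Cat extends Animal { String speak() { return "meow"; } }

class Messenger {
    // The sender neither knows nor cares how the receiver performs the task.
    static String send(Animal receiver) { return receiver.speak(); }
}
```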
Scalable
OO applications are more scalable than their structured programming roots. As
an object's interface provides a roadmap for reusing the object in new software, it also
provides you with all the information you need to replace the object without affecting
other code. This makes it easy to replace old and aging code with faster algorithms and
newer technology.
There are three major features in object-oriented programming: encapsulation, inheritance
and polymorphism.
1. Encapsulation
Encapsulation enforces modularity.
Encapsulation refers to the creation of self-contained modules that bind processing
functions to the data. These user-defined data types are called "classes," and one
instance of a class is an "object." Encapsulation ensures good code modularity, which
keeps routines (i.e. methods) separate and less prone to conflict with each other.
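A minimal sketch of encapsulation in Java (the Account class is a standard illustration, not from the source): the data is private and can be changed only through the methods bound to it, which enforce the invariants.

```java
// Encapsulation sketch: the balance is bound to the methods that guard it;
// callers cannot corrupt the internal state directly.
class Account {
    private double balance;                 // hidden data

    void deposit(double amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    void withdraw(double amount) {
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        balance -= amount;
    }

    double balance() { return balance; }    // read-only view of the state
}
```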
2. Inheritance
The Java EE platform uses a distributed multitier application model for enterprise applications.
In the following description, note how the application logic is divided into components
according to function, and how the various application components that make up a Java EE
application are installed on different machines depending on the tier in the multitier Java EE
environment to which each component belongs.
The figure below shows multitier Java EE applications divided into the tiers described in
the following list:
• Application clients and applets are components that run on the client.
• Java Servlet, JavaServer Faces, and JavaServer Pages (JSP) technology components
are web components that run on the server.
• Enterprise JavaBeans (EJB) components (enterprise beans) are business components
that run on the server.
Java EE Clients
• Web Clients
A Web Client consists of two parts:
○ Dynamic web pages containing various types of markup language (HTML,
XML, and so on), which are generated by web components running in the
web tier.
Static HTML pages and applets are bundled with web components during application assembly
but are not considered web components by the Java EE specification. Server-side utility
classes can also be bundled with web components and, like HTML pages, are not considered
web components.
As shown in figure below, the web tier, like the client tier, might include a JavaBeans
component to manage the user input and send that input to enterprise beans running in the
business tier for processing.
The enterprise information system (EIS) tier handles EIS software and includes enterprise
infrastructure systems such as enterprise resource planning (ERP), mainframe transaction
processing, database systems, and other legacy information systems. For example, Java EE
application components might need access to enterprise information systems for database
connectivity.
A layer is a horizontal, virtual view of a system in which each layer is built on top of
the layer below it.
Architecture) for a particular EIS. Resource adapter modules are packaged as JAR
files with a .rar (resource adapter archive) extension.
(2) Java EE 5 APIs
of request, they are commonly used to extend the applications hosted by web
servers.
• JavaServer Pages Technology
JavaServer Pages (JSP) technology lets you put snippets of servlet code directly into
a text-based document. A JSP page is a text-based document that contains two
types of text: static data (which can be expressed in any text-based format such as
HTML, WML, and XML) and JSP elements, which determine how the page constructs
dynamic content.
• JavaServer Pages Standard Tag Library
The JavaServer Pages Standard Tag Library (JSTL) encapsulates core functionality
common to many JSP applications. Instead of mixing tags from numerous vendors in
your JSP applications, you employ a single, standard set of tags. This standardization
• JavaMail API
Java EE applications use the JavaMail API to send email notifications. The JavaMail
API has two parts: an application-level interface used by the application components
to send mail, and a service provider interface. The Java EE platform includes
JavaMail with a service provider that allows application components to send Internet
mail.
• JavaBeans Activation Framework
The JavaBeans Activation Framework (JAF) is included because JavaMail uses it. JAF
provides standard services to determine the type of an arbitrary piece of data,
encapsulate access to it, discover the operations available on it, and create the
appropriate JavaBeans component to perform those operations.
• Java API for XML Processing
API and gain access to both of these important registry technologies.
Additionally, businesses can submit material to be shared and search for material
that others have submitted. Standards groups have developed schemas for particular
kinds of XML documents; two businesses might, for example, agree to use the
schema for their industry’s standard purchase order form. Because the schema is
stored in a standard business registry, both parties can use JAXR to access it.
• J2EE Connector Architecture (JCA)
The J2EE Connector architecture is used by tools vendors and system integrators to
create resource adapters that support access to enterprise information systems that
can be plugged in to any Java EE product. A resource adapter is a software
component that allows Java EE application components to access and interact with
the underlying resource manager of the EIS. Because a resource adapter is specific
to its resource manager, typically there is a different resource adapter for each type
of database or enterprise information system.
component's source code. A container implements the component's environment and
provides it to the component as a JNDI naming context.
• Java Authentication and Authorization Service
The Java Authentication and Authorization Service (JAAS) provides a way for a Java
EE application to authenticate and authorize a specific user or group of users to run
it.
JAAS is a Java programming language version of the standard Pluggable
Authentication Module (PAM) framework, which extends the Java Platform security
architecture to support user-based authorization.
protocols provide for the reliable delivery of streams of data from one host to
another. Internet Protocol (IP), the basic protocol of the Internet, enables
the unreliable delivery of individual packets from one host to another. IP
makes no guarantees as to whether the packet will be delivered, how long it
will take, or if multiple packets will arrive in the order they were sent. The
Transmission Control Protocol (TCP) adds the notions of connection and
reliability.
○ HTTP 1.0 - Hypertext Transfer Protocol. The Internet protocol used to fetch
hypertext objects from remote hosts. HTTP messages consist of requests
from client to server and responses from server to client.
○ SSL 3.0 - Secure Socket Layer. A security protocol that provides privacy
over the Internet. The protocol allows client-server applications to
communicate in a way that cannot be eavesdropped or tampered with.
Servers are always authenticated and clients are optionally authenticated.
Messaging technologies provide a way to asynchronously send and receive
messages. The Java Message Service API provides an interface for handling
asynchronous requests, reports, or events that are consumed by enterprise
applications. JMS messages are used to coordinate these applications. The JavaMail
API provides an interface for sending and receiving messages intended for users.
Although either API can be used for asynchronous notification, JMS is preferred when
speed and reliability are a primary requirement.
○ Java Message Service API
The Java Message Service (JMS) API allows J2EE applications to access
enterprise messaging systems such as IBM MQ Series and TIBCO
Rendezvous. JMS messages contain well-defined information that describe
specific business actions. Through the exchange of these messages,
applications track the progress of enterprise activities. The JMS API supports
both point-to-point and publish-subscribe styles of messaging.
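The two styles can be contrasted with a plain-Java, in-memory sketch. This is deliberately not the JMS API (which uses Queue, Topic, producer, and consumer objects obtained from a connection); it only models the delivery semantics: point-to-point gives each message to exactly one consumer, while publish-subscribe copies it to every subscriber.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Point-to-point: each message is consumed exactly once, by one receiver.
class PtpQueue {
    private final Queue<String> messages = new ArrayDeque<>();
    void send(String m) { messages.add(m); }
    String receive() { return messages.poll(); }   // null once the queue is drained
}

// Publish-subscribe: every registered subscriber gets its own copy.
class PubSubTopic {
    private final List<List<String>> subscribers = new ArrayList<>();
    List<String> subscribe() {
        List<String> inbox = new ArrayList<>();
        subscribers.add(inbox);
        return inbox;
    }
    void publish(String m) {
        for (List<String> inbox : subscribers) inbox.add(m);
    }
}
```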
Explain appropriate and inappropriate uses for web services in the Java EE 5
platform.
Ref. • [SUN_SL_425]
A two-tier architecture is also known as the client-server model. The most basic type of
client-server architecture employs only two types of hosts: clients and servers. This type
of architecture is sometimes referred to as two-tier. It allows devices to share files and
resources. Two-tier means that the client acts as one tier and the application in
combination with the server acts as the other tier.
Client-server describes the relationship between two computer programs in which one
program, the client program, makes a service request to another, the server program.
Standard networked functions such as email exchange, web access and database
access, are based on the client-server model.
Two Tier Software Architectures
Two tier architectures consist of three components distributed in two tiers: client
(requester of services) and server (provider of services). The three components are:
user system interface, processing management, and database management.
The two tier design allocates the user system interface exclusively to the client. It places
database management on the server and splits the processing management between
client and server, creating two layers.
In general, the user system interface client invokes services from the database
management server. In many two tier designs, most of the application portion of
processing is in the client environment. The database management server usually
provides the portion of the processing related to accessing data (often implemented in
stored procedures). Clients commonly communicate with the server through SQL
statements or a call-level interface. It should be noted that connectivity between tiers can
be dynamically changed depending upon the user's request for data and services.
Two tier software architectures are used extensively in non-time critical information
processing where management and operations of the system are not complex. This
design is used frequently in decision support systems where the transaction load is light.
Two tier software architectures require minimal operator intervention. The two tier
architecture works well in relatively homogeneous environments with processing rules
(business rules) that do not change very often and when workgroup size is expected to
be fewer than 100 users, such as in small businesses.
Two-Tier Architecture
In the early 1980s, personal computers (PCs) became very popular. They were less
expensive and had more processing power than their dumb-terminal counterparts. This
paved the way for true distributed, or client-server, computing. The clients, or PCs, now
ran the user interface programs. They also supported graphical user interfaces (GUIs),
allowing the users to enter data and interact with the mainframe server. The mainframe
server now hosted only the business rules and data. Once the data entry was complete,
the GUI application could optionally perform validations and then send the data to the
server for execution of the business logic. Oracle Forms–based applications are a good
example of two-tier architecture. The forms provide the GUI loaded on the PCs, and the
business logic (coded as stored procedures) and data remain on the Oracle database
server.
• Scalability
The most important limitation of the two-tier architecture is that it is not scalable,
because each client requires its own database session. The two-tier design will scale
up to service 100 users on a network. It appears that beyond this number of users,
up to service 100 users on a network. It appears that beyond this number of users,
the performance capacity is exceeded. This is because the client and server
exchange "keep alive" messages continuously, even when no work is being done,
thereby saturating the network.
Implementing business logic in stored procedures can limit scalability because as
more application logic is moved to the database management server, the need for
processing power grows. Each client uses the server to execute some part of its
application code, and this will ultimately reduce the number of users that can be
accommodated.
Performance:
• Inadequate performance for medium to high volume environments.
• Application Distribution: Application changes have to be distributed to each client.
When there are a large number of users, this entails considerable administrative
overhead.
• Remote Usage: Remote users (e.g. customers), probably do not want to install
your application on their clients -- they would prefer "thin" clients where minimal
(or no) client software installation is required.
• Database Structure: Other applications that access your database will become
dependent on the existing database structure. This means that it is more difficult
to redesign the database, since other applications are intimate with the actual
database structure.
Advantages of client server architecture:
• Centralization - access, resources, and data security are controlled through the
server.
The modified 2-tier approach (moving business logic into stored procedures and
triggers) provides several advantages compared to the traditional 2-tier model:
• Better Re-use: The same logic (in stored procedures & triggers) can be initiated
from many client applications and tools.
• Better Data Integrity: When validation logic is unconditionally initiated in database
triggers (e.g. before inserts and updates), the business integrity of the data can
be ensured.
• Improved Performance for Complex Validations: When the business logic
requires many accesses back-and-forth to the database to perform its
processing, network traffic is significantly reduced when the entire validation is
encapsulated in a stored procedure.
• Improved Security: Stored procedures can improve security since detailed
business logic is encapsulated in a more secure central server.
• Reduced Distribution: Changes to business logic only need to be updated in the
database and do not have to be distributed to all the clients.
• The modified 2-tier approach addresses some of the concerns with the traditional
2-tier model but it still suffers from inherent 2-tier drawbacks. The most notable
continued drawback is scalability which is addressed by the 3-tier model.
• Performance: Adequate performance for low to medium volume environments
• [SUN_SL_425]
Ref. • Multitier architecture
Three-Tier Architecture
The three tier software architecture emerged to overcome the limitations of the two tier
architecture. The third tier (middle tier server) is between the user interface (client) and
the data management (server) components. This middle tier provides process
management where business logic and rules are executed and can accommodate
hundreds of users (as compared to only 100 users with the two tier architecture) by
providing functions such as queuing, application execution, and database staging.
The three tier architecture is used when an effective distributed client/server design is
needed that provides (when compared to the two tier) increased performance, flexibility,
maintainability, reusability, and scalability, while hiding the complexity of distributed
processing from the user.
• Scalability: The key 3-tier benefit is improved scalability since the application
servers can be deployed on many machines. Also, the database no longer
requires a connection from every client -- it only requires connections from a
smaller number of application servers. In addition, TP monitors or ORBs can be
used to balance loads and dynamically manage the number of application
servers available.
• Better Re-use: The same logic can be initiated from many clients or applications.
If an object standard like COM/DCOM or CORBA is employed, then the specific
language implementation of the middle tier can be made transparent.
• Hidden Database Structure: since the actual structure of the database is hidden
from the caller, it is possible that many database changes can be made
transparently. Therefore, a service in the middle tier that exchanges
information/data with other applications could retain its original interface while the
underlying database structure was enhanced during a new application release.
Basic, PowerBuilder, Delphi) will be foregone or their benefit will be reduced with
a 3-tier architecture.
• Fewer Tools: There are many more tools available for a 2-tier model (e.g. most
reporting tools). It is likely that additional programming effort will be required to
manage tasks that an automated tool might handle in a 2-tier environment.
• [SUN_SL_425]
Ref. • Multitier architecture
Other sources include these additional classifications:
• Transaction processing monitors — Provide tools and an environment to
develop and deploy distributed applications.
• Application servers — software installed on a computer to facilitate the serving
(running) of other applications.
• Enterprise Service Bus — An abstraction layer on top of an Enterprise
Messaging System.
N-Tier Architecture
With the widespread growth of Internet bandwidth, enterprises around the world have
web-enabled their services. As a result, the application servers are not burdened
The concepts of layer and tier are often used interchangeably. However, one fairly
common point of view is that there is indeed a difference, and that a layer is a logical
structuring mechanism for the elements that make up the software solution, while a tier
is a physical structuring mechanism for the system infrastructure.
A J2EE platform (and application) is a multitier system; we view the system in terms of
tiers. A tier is a logical partition of the separation of concerns in the system. Each tier is
assigned its unique responsibility in the system. We view each tier as logically separated
from one another. Each tier is loosely coupled with the adjacent tier. We represent the
whole system as a stack of tiers:
Client Considerations
• Network Considerations
The client depends on the network, and the network is imperfect. Although the client
appears to be a stand-alone entity, it cannot be programmed as such because it is
part of a distributed application. Three aspects of the network:
○ Latency is non-zero.
○ Bandwidth is finite.
○ The network is not always reliable.
A well-designed enterprise application must address these issues, starting with the
client. The ideal client connects to the server only when it has to, transmits only as
much data as it needs to, and works reasonably well when it cannot reach the
server.
• Security Considerations
Different networks have different security requirements, which constrain how clients
connect to an enterprise. For example, when clients connect over the Internet, they
usually communicate with servers through a firewall. The presence of a firewall that
is not under your control limits the choices of protocols the client can use. Most
firewalls are configured to allow Hypertext Transfer Protocol (HTTP) to pass across,
as Dynamic HTML (DHTML) documents, is inconsistent across browsers, so creating a
portable DHTML-based client is difficult.
Another, more significant cost of using browser clients is potentially low
responsiveness. The client depends on the server for presentation logic, so it must
connect to the server whenever its interface changes. Consequently, browser clients
make many connections to the server, which is a problem when latency is high.
Furthermore, because the responses to a browser intermingle presentation logic with
data, they can be large, consuming substantial bandwidth.
• Validating User Inputs
Consider an HTML form for completing an order, which includes fields for credit card
information. A browser cannot single-handedly validate this information, but it can
certainly apply some simple heuristics to determine whether the information is
invalid. For example, it can check that the cardholder name is not null, or that the
credit card number has the right number of digits. When the browser catches these
simple errors itself, it saves a round trip to the server.
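The digit-count and non-null heuristics described above can be sketched in plain Java. The class name and the accepted length range are illustrative assumptions, not taken from the original text.

```java
// Client-side pre-validation heuristics for an order form.
// These checks cannot prove a card is valid -- they only reject
// obviously malformed input before a round trip to the server.
public class OrderFormValidator {

    // A cardholder name must be present and non-blank.
    public static boolean hasCardholderName(String name) {
        return name != null && !name.trim().isEmpty();
    }

    // Assumption: major card numbers carry 13 to 19 digits; spaces
    // and dashes typed by the user are ignored.
    public static boolean hasPlausibleCardNumber(String number) {
        if (number == null) return false;
        String digits = number.replaceAll("[ -]", "");
        return digits.matches("\\d{13,19}");
    }

    public static void main(String[] args) {
        System.out.println(hasCardholderName("Ada Lovelace"));             // true
        System.out.println(hasPlausibleCardNumber("4111 1111 1111 1111")); // true
        System.out.println(hasPlausibleCardNumber("12ab"));                // false
    }
}
```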
• Managing Conversational State
Because HTTP is a request-response protocol, individual requests are treated
independently. Consequently, Web-based enterprise applications need a mechanism
for identifying a particular client and the state of any conversation it is having with
that client.
The HTTP State Management Mechanism specification introduces the notion of a
session and session state. A session is a short-lived sequence of service requests by
a single user using a single client to access a server. Session state is the information
maintained in the session across requests. For example, a shopping cart uses session
state to track selections as a user chooses items from a catalog. Browsers have two
mechanisms for caching session state: cookies and URL rewriting.
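The cookie mechanism can be illustrated without a servlet container. The standalone sketch below only shows the idea of an opaque session id mapped to server-side conversation state; in a real Web tier, HttpSession and the JSESSIONID cookie play these roles, and all names here are illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Standalone sketch of cookie-style session tracking: the server
// hands the client an opaque session id, then uses that id on later
// requests to look up the conversation state (e.g. a shopping cart).
public class SessionRegistry {
    private final Map<String, Map<String, Object>> sessions = new HashMap<>();

    // First request: no id yet, so create a new session.
    public String newSession() {
        String id = UUID.randomUUID().toString();
        sessions.put(id, new HashMap<>());
        return id; // sent back to the client as a cookie value
    }

    // Later requests replay the id; the server recovers the state.
    public Map<String, Object> state(String sessionId) {
        return sessions.get(sessionId);
    }

    public static void main(String[] args) {
        SessionRegistry registry = new SessionRegistry();
        String cookie = registry.newSession();
        registry.state(cookie).put("cart", "1 x catalog item");
        System.out.println(registry.state(cookie).get("cart"));
    }
}
```

URL rewriting works the same way, except the id travels appended to each URL instead of in a cookie header.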
Web Start automatically downloads all necessary files. It then caches the files so the
user can relaunch the application without having to download them again (unless
they have changed, in which case Java Web Start technology takes care of
downloading the appropriate files).
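Java Web Start locates an application through a JNLP descriptor. A minimal sketch follows; the codebase URL, titles, and JAR names are placeholders, not from the original text.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical JNLP descriptor; codebase and jar names are
     placeholders. -->
<jnlp spec="1.0+" codebase="http://example.com/app" href="client.jnlp">
  <information>
    <title>Order Entry Client</title>
    <vendor>Example Corp</vendor>
    <offline-allowed/> <!-- lets the cached copy run without the server -->
  </information>
  <resources>
    <j2se version="1.5+"/>
    <jar href="order-client.jar"/>
  </resources>
  <application-desc main-class="com.example.OrderClient"/>
</jnlp>
```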
• Applet Clients
Applet clients are user interface components that typically execute in a Web browser,
although they can execute in other applications or devices that support the applet
programming model. They are typically more dependent on a server than are
application clients, but are less dependent than browser clients.
Like application clients, applet clients are packaged inside JAR files. However, applets
are typically executed using Java Plug-in technology. This technology allows applets
to be run using Sun's implementation of the Java 2 Runtime Environment, Standard
Edition (instead of, say, a browser's default JRE).
○ Web Clients
Like browser clients, Java Web clients connect over HTTP to the Web tier of a
J2EE application. This aspect of Web clients is particularly important on the
Internet, where HTTP communication is typically the only way a client can
reach a server. Many servers are separated from their clients by firewalls,
and HTTP is one of the few protocols most firewalls allow through.
Whereas browsers have built-in mechanisms that translate user gestures into
HTTP requests and interpret HTTP responses to update the view, Java clients
must be programmed to perform these actions. A key consideration when
implementing such actions is the format of the messages between client and
server.
2.5 Explain appropriate and inappropriate uses for web services in the Java EE platform.
Ref. • Book Excerpt: When to Use Web Services
1) Heterogeneous Integration
The first and most obvious bell ringer is the need to connect applications from
incompatible environments, such as Windows and UNIX, or .NET and J2EE. Web
services support heterogeneous integration. They support any programming language
on any platform. One thing that's particularly useful about Web services is that you can
use any Web services client environment to talk to any Web services server environment.
As shown in Figure 7-3, Einstein was developed as a multitier Web service application.
The backend business functions and data sources are legacy applications implemented
in CICS and DB2 on the mainframe. The middle tier, which accesses and aggregates
the customer information, is implemented as a set of J2EE Web services using IBM
WebSphere. The client environments are implemented using Microsoft .NET. The
browser client is implemented using Microsoft .NET WebForms, and the desktop client is
implemented using Microsoft .NET WinForms. Einstein's architecture also allows
Wachovia to implement other types of client interfaces to support IVR systems, wireless
handsets, two-way pagers, and other devices.
3) Point-to-Point Integration
The first and most basic way to use Web services is for simple point-to-point integration.
For example, Cape Clear uses Web services to connect employees' e-mail clients with
its CRM solution. Cape Clear is a Web services software startup. It uses Salesforce.com
as its CRM solution. Salesforce.com provides a hosted CRM solution using an ASP-style
model. Users typically interface with the CRM solution through a browser, recording
customer contact information and correspondence.
Like most software startups, Cape Clear provides e-mail-based customer support. As a
result, quite a bit of customer correspondence takes place via e-mail. But
Salesforce.com didn't provide a simple, easy way for Cape Clear employees to log this
correspondence in the Salesforce.com database. Users had to copy and paste the
e-mail from Outlook into the Salesforce.com browser interface. Cape Clear found that a
lot of correspondence wasn't getting recorded.
Salesforce.com provides a programming API, so Cape Clear decided to eat its own dog
food and address this problem using Web services.
Figure 7-4: Cape Clear developed a VBA macro using Microsoft SOAP Toolkit that takes
an Outlook e-mail and uses SOAP to pass it to the Salesforce Adapter Web service. The
adapter service then passes the message to Salesforce.com using the Salesforce API.
This Outlook macro adds a button to the standard Outlook tool bar labeled "Save to
Salesforce." As shown in Figure 7-4, when the user clicks on this button, the Outlook
macro captures the e-mail message, packages it as a SOAP message, and sends it to
the Salesforce.com adapter Web service. The Web service then forwards the e-mail
using the native API to Salesforce.com, which logs it.
4) Consolidated View
One of the most popular internal integration projects is enabling a consolidated view of
information to make your staff more effective. For example, you probably have many
people in your organization who interact with customers. Each time your staff interacts
with a customer, you want them to have access to all aspects of the customer
relationship. Unfortunately, the customer relationship information is probably maintained
in a variety of systems. The good news is that a consolidated customer view provides a
single point of access to all these systems.
You can use Web services to implement this type of consolidated view. For example,
Coloplast is using Web services to improve its sales and customer support functions.
Coloplast is a worldwide provider of specialized healthcare products and services. As
part of an initiative to improve customer relationships, Coloplast wanted to set up a
state-of-the-art call center system that would give customer representatives real-time
access to complete customer histories and product information. The company selected
Siebel Call Center as the base application, but it needed to connect this system to its
backend AS/400-based ERP systems, which manage the sales, manufacturing, and
distribution functions. It did so using Web services. Coloplast used Jacada Integrator to
create Web services adapters for the legacy AS/400 application systems. Siebel Call
Center uses these Web services to deliver a 360 degree view of customer relationships,
including access to backend processes such as open order status, inventory information,
customer credit checking, and special pricing. This solution improves efficiency and
enhances employee and customer satisfaction.
The U.S. government spent $48 billion on information technology in 2002 and will spend
$52 billion in 2003. The Office of Management and Budget estimates that the
government can save more than $1 billion annually in IT expenditures by aligning
redundant IT investments across federal agencies. In addition, this alignment will save
taxpayers several billion dollars annually by reducing operational inefficiencies,
redundant spending, and excessive paperwork.
In October 2001, the President's Management Council approved 24 high-payoff
government-wide initiatives that integrate agency operations and IT investments. One of
those initiatives is E-Travel, which is being run by the U.S. General Services
Administration (GSA). E-Travel delivers an integrated, government-wide, Web-based
travel management service. Federal government employees make approximately four
million air and rail trips each year, and until recently each agency and bureau managed
its own travel department. Cumulatively, these various departments used four travel
charge card providers, six online self-service reservation systems, 25 authorization and
voucher processing systems, 40 travel agencies, and a unique payment reimbursement
system for almost every bureau.
By consolidating these travel systems into a single, centralized travel management
system, the U.S. government expects to save $300 million annually, achieving a 649
percent return on investment. In addition, the consolidated system will deliver a 70
percent reduction in the time it takes to process vouchers and reimbursements.
Web services technology enhances portals in two ways. First, Web services deliver
content to the portal as XML. It's then easy for a portal engine to take this XML content
and display the information in a portal frame. It's also easy for the portal engine to
reformat the XML content to support other client devices, such as wireless handsets or
PDAs. Second, Web services technology defines a simple, consistent mechanism that
portlets can use to access backend applications. This consistency allows you to create a
framework to make it quicker and easier to add new content to your portal. Furthermore,
the new OASIS WSRP specification will allow you to add new content to the portal
dynamically. Figure 7-6 shows an overview of WSRP.
Another goal of the U.S. government's E-Gov program is to get a handle on government
portals. As of February 2003, the U.S. government was managing more than 22,000
Web sites with more than 35 million Web pages. These Web sites have been developed,
organized, and managed using the same stovepipe mentality as used in the backend
agency applications. Such decentralization and duplication make it difficult for citizens
and communities to do business with the government. For example, a community that is
attempting to obtain economic development grants must do a tremendous amount of
research to learn about federal grants. There's no single source of information. More
than 250 agencies administer grants, and you would have to file more than 1,000 forms
(most with duplicate information) to apply for all of them. Some of these forms are
available online; others aren't. Currently all forms must be filed by postal mail.
The government is working to consolidate this myriad of Web sites into a much more
manageable number of portals, each providing a single point of entry to a particular line
of business. Each portal will use Web services to access the backend applications that
implement the business process. In many cases the government will consolidate
backend applications to reduce redundant systems and to ensure a simpler experience
for the portal users. For example, the forthcoming E-Grants portal will provide a single
point of entry for anyone looking to obtain or administer federal grants. This site will help
citizens learn about all available grants and allow them to apply for these grants online.
The government expects to save $1 billion by simplifying grant administration, as well
as $20 million in postage.
The center uses a unique collaborative approach to cancer treatment that makes it one
of the most respected cancer centers in the United States. Rather than rely on a single
physician to manage a patient's case, M.D. Anderson brings together a team of
multidisciplinary specialists to collaborate on the best treatment for each individual.
Such collaboration requires a means to dynamically share patient information, such as
the patient's chart, test results, x-rays, and other diagnostic images. Because the clinic
spans multiple buildings, it's inefficient to try to assemble everyone in the same room to
view physical images and discuss a course of treatment. Instead the clinical data is
digitized so that it can be viewed electronically. One challenge, though, is that this
clinical information is stored in 10 systems on a wide range of platforms. To bring all
these systems together, ClinicStation uses Web services built with Microsoft .NET to
provide access to all patient information from any browser throughout the center.
Physicians can now collaborate over the phone while looking at patient records online.
Premier Farnell uses Web services technology to implement a B2B Web procurement
system for its customers. Based in London, Premier Farnell is a small-order distributor of
electronic components and industrial products to the design, maintenance, and
engineering industries throughout Europe, North America, and Asia Pacific.
The Premier Farnell B2B trading solution, implemented using IONA Orbix E2A Web
Services Integration Platform, supports customers using any electronic procurement
system, including SAP, Oracle, Ariba, Commerce One, and custom systems. Even if
each of these systems sends a slightly different purchase order format, the Web service
can handle the situation. It automatically converts all incoming purchase orders into the
format required by the Premier Farnell systems.
10) Software-as-a-Service
You can also use Web services to provide a programmatic interface to a business
service that you license using the software-as-a-service business model. For the most
part, I'm leery of promoting the association of Web services and software-as-a-service.
Web services are Web APIs. You don't sell APIs. Instead you sell the business function
that customers access through the APIs. As I mentioned in Chapter 6, it's hard to be
successful using an ASP-style business model. Looking at history, we can see the
secrets to a successful ASP model:
• The service must be based on strategic intellectual property, something that your
customers can't easily do themselves.
• The service must provide a disruptive value proposition: a new and unique
advantage that's dependent on the service provider model, such as aggregate
information gained through collaboration.[1]
• The service provider must establish and maintain a reputation for neutrality and
trustworthiness.
• The service provider must devise a reasonable revenue model that is comfortable
for the customer.
My general take on software-as-a-service is that the business model must be viable on
its own without Web services. A Web API is simply a better way to provide programmatic
access to the service. If you think you have a new, viable idea for an ASP-style service,
then you should provide Web APIs for that service. As I mentioned earlier in this chapter,
Salesforce.com has added Web APIs to its already successful ASP model.
Yahoo is an excellent example of a company that has been successful using the ASP
model. Yahoo is the world's leading aggregator of content. The vast majority of Yahoo's
clients access this content for free through the Yahoo public portal. As with most public
portals, Yahoo generates revenue through advertising. But Yahoo also licenses this
content to other businesses as a service. If you are a Yahoo enterprise service
customer, you can display Yahoo content in your corporate portal, and users can
personalize their corporate portal just as public users can personalize their
my.yahoo.com portal.
Yahoo is extending its enterprise software service offering by adding a set of Web APIs.
These Web APIs will let you integrate Yahoo content with your business applications.
For example, you could integrate Yahoo content with your CRM application. When a
salesperson looks up a customer contact in the contact management system, the
application can send a query to Yahoo to retrieve and display the latest headlines about
the customer. Although it's true that this information is available for free through the
Yahoo portal, there's an obvious value to being able to integrate news with a CRM
solution. Yahoo plans to license the Web APIs as part of the Yahoo enterprise service
offering. Yahoo may also license these APIs to CRM application providers to enable a
prepackaged Yahoo-ready solution.
A number of folks position Web services as the death knell for EAI (Enterprise
application integration, the integration of data between applications in a company)
software. My view is that if Web services can replace your EAI software, then EAI
software is overkill for your project. EAI software does many things that SOAP, WSDL,
and UDDI simply can't do by themselves. The software category known as EAI consists
of a collection of various types of software that work together to deliver a comprehensive
integration solution. EAI software includes messaging infrastructure, application
adapters, data extraction and transformation tools, message brokers, and rules engines.
Web services technology could replace the messaging infrastructure, but it can't replace
the rest of the pieces. These other pieces, particularly the application adapters, are
complementary to Web services technology. Most EAI vendors are adding support for
Web services.
3 Explain how JCA (Java Connector Architecture) and JMS are used to integrate distinct software components as part of an overall Java EE application.
• [SCEA-051]
• [.Net J2EE]
Ref. • Designing Enterprise Applications with the J2EE™ Platform, Second Edition
Before the JEE Connector Architecture (JCA) was defined, no specification for the Java
platform addressed the problem of providing a standard architecture for integrating an
EIS. We used JNI (Java Native Interface) and RMI (Remote Method Invocation) to
create a Java interface to a process running in its native domain. For example, a Java
program using JNI, RMI, or CORBA (Common Object Request Broker) can call a C++
program running on a Windows NT machine. Most EIS vendors as well as application
server vendors use nonstandard proprietary architectures to provide connectivity
between application servers and EISs.
For example, the code in Figure 6-2 illustrates a Java application that uses the JNI to
access a C library or C-based resource adapter. The JNI is the native programming
interface for Java, and it is part of the Java Developers Kit (JDK). The JNI allows Java
code that runs within a Java Virtual Machine (JVM) to operate with applications and
libraries written in other languages, such as C and C++. Programmers typically use the
JNI to write native methods when they cannot write the entire application in Java. This is
the case when a Java application needs to access an existing library or application
written in another programming language. While the JNI was especially useful before the
advent of the JEE platform, some of its uses may now be replaced by the JEE Connector Architecture.
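The JNI pattern just described can be sketched as a Java class that declares a native method. The library name below is a placeholder, and because no such native library exists here, binding it fails with UnsatisfiedLinkError, which the sketch handles deliberately.

```java
// Sketch of the JNI pattern described above: Java declares a
// native method whose body lives in a C/C++ shared library.
// The library name "legacycalc" is a hypothetical placeholder.
public class LegacyBridge {

    // Implemented in C against a generated JNI header, e.g.
    //   JNIEXPORT jint JNICALL Java_LegacyBridge_legacyAdd(...)
    public native int legacyAdd(int a, int b);

    // Returns true only when the native library could be bound.
    public static boolean tryNativeCall() {
        try {
            System.loadLibrary("legacycalc"); // fails here: no such lib
            return new LegacyBridge().legacyAdd(1, 2) == 3;
        } catch (UnsatisfiedLinkError e) {
            return false; // expected where the .so/.dll is absent
        }
    }

    public static void main(String[] args) {
        System.out.println("native library bound: " + tryNativeCall());
    }
}
```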
Before JCA, each enterprise application integration (EAI) vendor created a proprietary
resource adapter interface for its own EAI product, requiring a resource adapter to be
developed for each EAI vendor and EIS combination (for instance, you need a SAP
resource adapter to use the messaging tools of Tibco). To solve that problem, as one of
its main thrusts, JCA attempts to standardize the resource adapter interfaces. The JCA
provides a Java solution to the problem of connectivity between the many application
servers and EISs already in existence. The JCA is based on the technologies that are
defined and standardized as part of JEE.
The JCA defines a standard architecture for connecting the JEE platform to
heterogeneous EISs. Examples of EISs include mainframe transaction processing, such
as IBM CICS; database systems, such as IBM DB2; and legacy applications not written
in the Java programming language, such as IBM COBOL. By defining a set of scalable,
secure, and transactional mechanisms, the JCA enables the integration of EISs with
application servers and enterprise applications.
The JCA enables a vendor to provide a standard resource adapter for its EIS. The
resource adapter is integrated with the application server, thereby providing connectivity
between the EIS and the enterprise application.
Resource Adapter
Deployable JCA components are called resource adapters. Basically, resource adapters
manage connections or other resources for interaction with some facility. The definition
is open ended, as resource adapters can be used for almost anything. A resource
adapter manifests itself as an implementation of interfaces in the javax.resource.cci and
javax.resource.spi packages. It will require a system-level software library when you are
connecting to the underlying EIS.
CORBA
CORBA is a language independent, distributed object model specified by the OMG. This
architecture was created to support the development of object-oriented applications
across heterogeneous computing environments that might contain different hardware
platforms and operating systems. CORBA relies on IIOP for communications between
objects. The center of the CORBA architecture lies in the Object Request Broker (ORB).
The ORB is a distributed programming service that enables CORBA objects to locate
and communicate with one another. CORBA objects have interfaces that expose sets of
methods.
Native Language Integration
By using IIOP, EJBs can interoperate with native language clients and servers. IIOP
facilitates integration between CORBA and EJB systems. EJBs can access CORBA
servers, and CORBA clients can access EJBs. Also, if a COM/CORBA internetworking
service is used, ActiveX clients can access EJBs, and EJBs can access COM servers.
Eventually there may also be a DCOM implementation of the EJB framework.
Java/RMI
To address the EIS integration problem, the J2EE platform provides several EIS
integration technologies, among them the J2EE Connector architecture, JMS, and Web services.
Web services is a service-oriented architecture which allows for creating an abstract
definition of a service, providing a concrete implementation of a service, publishing and
finding a service, service instance selection, and interoperable service use. In general, a
Web service implementation and client use may be decoupled in a variety of ways.
The service provider defines an abstract service description using the Web Services
Description Language (WSDL). A concrete Service is then created from the abstract
service description yielding a concrete service description in WSDL. The concrete
service description can then be published to a registry such as Universal Description,
Discovery and Integration (UDDI). A service requestor can use a registry to locate a
service description and from that service description select and use a concrete
implementation of the service.
The abstract service description is defined in a WSDL document as a PortType. A
concrete Service instance is defined by the combination of a PortType, transport &
encoding binding and an address as a WSDL port. Sets of ports are aggregated into a
WSDL service.
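The portType/binding/port/service layering described above can be seen in a skeletal WSDL document. Message definitions are omitted for brevity, and all names and the target namespace are illustrative placeholders.

```xml
<!-- Skeletal WSDL showing the layering described above. -->
<definitions name="QuoteService"
             targetNamespace="http://example.com/quotes"
             xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
             xmlns:tns="http://example.com/quotes">

  <!-- Abstract service description -->
  <portType name="QuotePortType">
    <operation name="getQuote">
      <input message="tns:getQuoteRequest"/>
      <output message="tns:getQuoteResponse"/>
    </operation>
  </portType>

  <!-- Transport and encoding binding -->
  <binding name="QuoteSoapBinding" type="tns:QuotePortType">
    <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
  </binding>

  <!-- Concrete service: a port pairs the binding with an address -->
  <service name="QuoteService">
    <port name="QuotePort" binding="tns:QuoteSoapBinding">
      <soap:address location="http://example.com/services/quote"/>
    </port>
  </service>
</definitions>
```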
Web Service
There is no commonly accepted definition for a Web service. For the purposes of this
specification, a Web service is defined as a component with the following characteristics:
• A service implementation implements the methods of an interface that is
describable by WSDL. The methods are implemented using a Stateless Session
EJB or JAX-RPC web component.
• A Web service may have its interface published in one or more registries for Web
services during deployment.
• A Web Service implementation, which uses only the functionality described by this
specification, can be deployed in any Web Services for J2EE compliant
application server.
• A service instance, called a Port, is created and managed by a container.
It is worth examining the protocols and specifications (or stack) that make Web services
possible. The Web services stack consists of five layers, as Figure 4.8 illustrates.
1) Transport (HTTP)
At the lowest level, two components in a distributed architecture must agree on a
common transport mechanism. Because of the near universal acceptance of port 80 as
a less risky route through a firewall, HTTP became the standard for the transport layer.
However, Web services implementations can run on other transport protocols such as
FTP and SMTP, or even other network stacks, such as Sequenced Packet Exchange
(SPX) or non-routable protocols such as NetBEUI. Changing from the dependence on
HTTP or HTTPS (for encrypted connections) is possible within the bounds of the current
specification.
2) Encoding (XML)
After agreeing on the transport, components must deliver messages as correctly
formatted XML documents. This shared dependence on XML ensures the transfer can
succeed, because both provider and consumer know how to parse and interpret standard XML.
3) Messaging (SOAP)
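A SOAP message wraps the XML payload in an Envelope with an optional Header and a mandatory Body. A minimal SOAP 1.1 request might look like the following; the getQuote operation and its namespace are illustrative placeholders.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/> <!-- optional: security tokens, transaction ids -->
  <soap:Body>
    <q:getQuote xmlns:q="http://example.com/quotes">
      <q:symbol>JAVA</q:symbol>
    </q:getQuote>
  </soap:Body>
</soap:Envelope>
```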
4) Description (WSDL)
The description layer provides a mechanism for informing interested parties of the
particular bill of fare that a Web service offers. Web Services Description Language
(WSDL) provides this contract, setting out for each exposed component:
• Component name
• Data types
• Methods
• Parameters
This WSDL description enables a developer for a remote component to query your Web
service and find out what the service can do and how to get it to do it. The WSDL file is
an XSD-based XML document that defines the details of your Web service. It also stores
your Web service contract. The WSDL file is usually the first point of entry for any client
attaching to your Web service so that the client knows how to use it.
5) Discovery (UDDI)
Discovery attempts to answer the question “Where?” If you want to connect to a Web
service at an Internet location (for example,
www.nwtraders.msft/services/WeatherService.aspx), you can enter the URL manually.
However, URLs are somewhat unwieldy and not very user friendly, so it would be better
if you could just request the NWTraders Weather Web Service. To do this, NWTraders
could publish their weather service on a Universal Description, Discovery and Integration
(UDDI) server. Finding their weather service is now just a question of connecting to the
UDDI server using an agreed message format to locate the URL for the service.
3.3 Explain how JCA (Java Connector Architecture) and JMS are used to integrate distinct software components as part of an overall Java EE application.
J2EE Connector Architecture
The J2EE Connector architecture is the standard architecture for integrating J2EE products and
applications with heterogeneous enterprise information systems. The Connector architecture
enables an EIS vendor to provide a standard resource adapter for its enterprise information
system. Because a resource adapter conforms to the Connector architecture specification, it
can be plugged into any J2EE-compliant application server to provide the underlying
infrastructure for integrating with that vendor's EIS. The EIS vendor is assured that its adapter
will work with any J2EE-compliant application server. The J2EE application server, because of
its support for the Connector architecture, is assured that it can connect to multiple EISs.
The J2EE application server and EIS resource adapter collaborate to keep all system-level
mechanisms - transactions, security, connection management - transparent to the application
components.
The application-level contract defines the client API that an application component uses for EIS
access. The Connector architecture does not require that an application component use a
specific client API. The client API may be the Common Client Interface (CCI), which is an API
for accessing multiple heterogeneous EISs, or it may be an API specific to the particular type
of resource adapter and its underlying EIS. There are advantages to using CCI, principally that
tool vendors can build their tools on top of this API. Although the CCI is targeted primarily
towards application development tools and EAI vendors, it is not intended to discourage
vendors from using JDBC APIs. An EAI vendor will typically combine JDBC with CCI by using
the JDBC API to access relational databases and using CCI to access other EISs.
The system-level contracts define a "pluggability" standard between application servers and
EISs. By developing components that adhere to these contracts, an application server and an
EIS know that connecting is a straight-forward operation of plugging in the resource adapter.
The EIS vendor or resource adapter provider implements its side of the system-level contracts
in a resource adapter, which is a system library specific to the EIS. The resource adapter is the
component that plugs into an application server. Examples of resource adapters include an
adapter that connects to an ERP system and one that connects to a mainframe transaction
processing system.
There is also an interface between a resource adapter and its particular EIS. This interface is
specific to the EIS, and it may be a native interface or some other type of interface. The
Connector architecture does not define this interface.
The Connector architecture defines the services that the J2EE-compliant application server
must provide. These services - transaction management, security, and connection pooling -
are delineated in the three Connector system-level contracts. The application server may
implement these services in its own specific way. The three system contracts, which together
form a Service Provider Interface (SPI), are as follows:
• Connection management contract - This contract enables an application server to
pool connections to an underlying EIS, while at the same time it enables application
components to connect to an EIS. Pooling connections is important to create a
scalable application environment, particularly when large numbers of clients require
access to the underlying EIS.
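The pooling idea behind the connection management contract can be sketched in plain Java. This is only an illustration of why pooling matters for scalability; the real contract is expressed through the javax.resource.spi interfaces, which are not used here.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Plain-Java sketch of connection pooling: physical EIS connections
// are expensive, so the server keeps idle ones and hands them back
// out instead of opening new connections for every client request.
public class ConnectionPool {
    // Stand-in for a costly physical EIS connection.
    public static class EisConnection {}

    private final Deque<EisConnection> idle = new ArrayDeque<>();
    private int created = 0;

    public synchronized EisConnection acquire() {
        if (!idle.isEmpty()) {
            return idle.pop();   // reuse an idle connection
        }
        created++;               // only open a new one when forced to
        return new EisConnection();
    }

    public synchronized void release(EisConnection c) {
        idle.push(c);            // return to the pool, don't close
    }

    public synchronized int physicalConnectionsOpened() {
        return created;
    }

    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool();
        EisConnection c = pool.acquire();
        pool.release(c);
        pool.acquire();          // reuses c rather than opening another
        System.out.println(pool.physicalConnectionsOpened()); // prints 1
    }
}
```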
1 Explain and contrast uses for entity beans, entity classes, stateful and stateless session beans, and message-driven beans, and understand the advantages and disadvantages of each type.
4 Explain the benefits of the EJB 3 development model over previous EJB generations for ease of development, including how the EJB container simplifies EJB development.
4.1 Explain and contrast uses for entity beans, entity classes, stateful and stateless session beans, and message-driven beans, and understand the advantages and disadvantages of each type.
• [EJB_3.0_CORE]
Ref. • [J2EE Tutorial] Ch.20 & 24
The characteristics of Enterprise JavaBeans technology, for version 3 of the EJB
specification, are the following:
• The objects they implement contain business logic that operates on the
enterprise’s data.
• The objects they implement are managed at runtime by a container.
• The objects they implement can be customized at deployment time by editing their
environment entries.
• Service information for the objects they implement, such as transaction and
security attributes, may be specified together with the business logic of the
enterprise bean class in the form of metadata annotations, or separately, in an
XML deployment descriptor. This service information may be extracted and
managed by tools during application assembly and deployment.
• Client access is mediated by the container in which the enterprise bean is
deployed.
manipulate the fields of the entity bean. For example, if you had an entity bean
named StockTransactionBean with a price field and a quantity field, a method
named getTransactionAmount() could be created to multiply the two fields and
return the amount of the transaction.
• Lifecycle methods that are called by the EJB container: For example, as with
session beans, the method annotated by the @PostConstruct descriptor is called
after the entity bean has finished its instantiation, but before any of its business
methods are called. These callback methods can be overridden to pass in
initialization values.
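As a plain-Java sketch of such a derived business method (the StockTransactionBean above is a hypothetical example, so this class is illustrative and omits the @Entity and @Id annotations a real entity would carry):

```java
import java.math.BigDecimal;

// Sketch of the hypothetical StockTransactionBean described above.
class StockTransactionBean {
    private BigDecimal price;
    private int quantity;

    StockTransactionBean(BigDecimal price, int quantity) {
        this.price = price;
        this.quantity = quantity;
    }

    // Business method: derives the transaction amount from two persistent fields.
    BigDecimal getTransactionAmount() {
        return price.multiply(BigDecimal.valueOf(quantity));
    }
}
```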
When you use entity beans you don't need to worry about database transaction handling, database connection pooling, and so on, because these are taken care of by the EJB container. With plain JDBC you have to implement those features explicitly.
Session Objects
A typical session object has the following characteristics:
• Executes on behalf of a single client.
• Can be transaction-aware.
• Updates shared data in an underlying database.
• Does not represent directly shared data in the database, although it may access
and update such data.
• Is relatively short-lived.
• Is removed when the EJB container crashes. The client has to re-establish a new
session object to continue computation.
There are advantages and disadvantages to making a session bean stateful. The
following are some of the advantages:
• Transient information, such as that described in the stock trading scenario, can be
stored easily in the instance variables of the session bean, as opposed to defining and
using entity beans (or JDBC) to store it in a database.
• Since this transient information is stored in the session bean, the client doesn’t need to
store it and potentially waste bandwidth by sending the session bean the same
information repeatedly with each call to a session bean method. This bandwidth issue is
a big deal when the client is installed on a user’s machine that invokes the session bean
methods over a phone modem, for example. Bandwidth is also an issue when the data is
very large or needs to be sent many times repeatedly.
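The first advantage can be sketched in plain Java (TradeSessionBean is an illustrative name; a real stateful bean would carry the @Stateful annotation and the container would bind one instance to one client for the length of the conversation):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the conversational state a stateful session bean holds per client.
class TradeSessionBean {
    // Transient conversational state lives in instance variables, not in a database.
    private final List<String> pendingOrders = new ArrayList<>();

    void addOrder(String symbol) {
        pendingOrders.add(symbol);
    }

    // The client never re-sends earlier orders: the bean instance remembers them.
    List<String> getPendingOrders() {
        return pendingOrders;
    }
}
```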
Advantages:
• It can be used by both web based and non web based clients (like Swing etc.).
• It can be used for multiple operations for a single HTTP request.
• Stateless beans offer performance and scalability advantages.
Disadvantages:
One of the disadvantages of message-driven beans is that they can only listen to a
single queue or topic. A single message-driven bean can't listen to messages from two
different queues.
One of the most important aspects of message-driven beans is that they can consume and
process messages concurrently. This capability provides a significant advantage over
traditional JMS clients, which must be custom-built to manage resources, transactions,
and security in a multithreaded environment. The message-driven bean containers
provided by EJB manage concurrency automatically, so the bean developer can focus on
the business logic of processing the messages. The MDB can receive hundreds of JMS
messages from various applications and process them all at the same time, because
numerous instances of the MDB can execute concurrently in the container.
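What the container does here can be imitated in plain Java (a sketch only: in a real MDB you write just the onMessage() body, and the EJB container owns the instance pool, the threading, and the JMS plumbing; PooledConsumerSketch is an illustrative name):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the MDB container's job: a pool of equivalent consumer
// instances drains a single queue concurrently.
class PooledConsumerSketch {
    static int drain(BlockingQueue<String> queue, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < poolSize; i++) {
            pool.execute(() -> {
                // Any instance may take any message -- instances are equivalent.
                while (queue.poll() != null) {
                    processed.incrementAndGet(); // stands in for onMessage() business logic
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return processed.get();
    }
}
```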
One of the principal advantages of JMS messaging is that it's asynchronous. In other words, a JMS client can send a message without having to wait for a reply. Contrast this flexibility with the synchronous messaging of Java RMI. RMI is an excellent choice for assembling transactional components, but is too restrictive for some uses. Each time a client invokes a bean's method, it blocks the current thread until the method completes execution. This lock-step processing makes the client dependent on the availability of the EJB server, resulting in a tight coupling between the client and enterprise bean.
The advantage of using this approach is that it performs faster than either of the stateless session bean scenarios, because the MDB does not need to dispatch incoming requests to another EJB.
You should consider using enterprise beans if your application has any of the following
requirements:
• The application must be scalable. To accommodate a growing number of users,
you may need to distribute an application’s components across multiple
machines. Not only can the enterprise beans of an application run on different
machines, but also their location will remain transparent to the clients.
• Transactions must ensure data integrity. Enterprise beans support transactions,
the mechanisms that manage the concurrent access of shared objects.
• The application will have a variety of clients. With only a few lines of code,
remote clients can easily locate enterprise beans. These clients can be thin,
various, and numerous.
The most visible difference between message-driven beans and session beans is that
clients do not access message-driven beans through interfaces. Unlike a session bean,
a message-driven bean has only a bean class.
In several respects, a message-driven bean resembles a stateless session bean.
• A message-driven bean’s instances retain no data or conversational state for a
specific client.
• All instances of a message-driven bean are equivalent, allowing the EJB
container to assign a message to any message-driven bean instance. The
container can pool these instances to allow streams of messages to be
processed concurrently.
• A single message-driven bean can process messages from multiple clients.
• The instance variables of the message-driven bean instance can contain some
state across the handling of client messages (for example, a JMS API
connection, an open database connection, or an object reference to an
enterprise bean object).
• Client components do not locate message-driven beans and invoke methods directly on them. Instead, a client accesses a message-driven bean through, for example, JMS by sending messages to the message destination for which the message-driven bean class is the MessageListener.
You assign a message-driven bean’s destination during deployment by using Application Server resources.
The benefit of the Enterprise JavaBeans Technology for version 3.0 of the EJB specifications is that it simplifies the development of large, distributed applications, for the following reasons:
First, because the EJB container provides system-level services to enterprise beans, the bean developer can concentrate on solving business problems. The EJB container, rather than the bean developer, is responsible for system-level services such as transaction management and security authorization.
Second, because the beans rather than the clients contain the application’s business logic, the client developer can concentrate on the presentation of the client and does not have to code the routines that implement business rules or access databases.
Third, because enterprise beans are portable components, the application assembler can build new applications from existing beans. These applications can run on any compliant Java EE server provided that they use the standard APIs.
BMP versus CMP
a. Performance
BMP should always win out over CMP in performance.
b. Ease of development
Bean-managed beans also offer greater flexibility with respect to the type of data they are
representing. The data comprising an entity bean need not be from a single table or a single database,
or even from any database, for that matter. When you are in charge of managing the persistence of
your data, you are at complete liberty to do anything you want when the EJB container notifies you;
this also provides you with the ability to offer an EJB front-end to an existing legacy system.
c. Maintainability
Bean-managed beans have their downside too: maintainability and convenience. BMP beans are tied
very closely to a database schema (the code to access specific tables and column names is hard-
coded in the bean itself.) CMP beans can be configured at deployment time; a bean can be mapped to
a specific table and its fields to that table's columns. A change in the database schema does not
necessarily equate to a change in the bean; a smart object-mapping tool can hide some of the
changes, but a major overhaul will affect the bean code, regardless.
(3) JDO. Java Data Objects (JDO) is a specification of Java object persistence. One of its features is the transparency of the persistence services to the domain model. JDO persistent objects are ordinary Java programming language classes; there's no requirement for them to implement certain interfaces or extend from special classes. JDO 1.0 was developed under the Java Community Process as JSR 12. JDO 2.0 was developed under JSR 243 and was released on May 10th, 2006. JDO 2.1 is now underway, being developed by the Apache JDO project.
Object persistence is defined in the external XML metafiles, which may have vendor-specific
extensions. JDO vendors provide developers with enhancers, which modify compiled Java class
files so they can be transparently persisted. (Note that byte-code enhancement is not mandated
by the JDO specification, although it is the commonly used mechanism for implementing the JDO
specification's requirements.) Currently, JDO vendors offer several options for persistence, e.g. to
RDBMS, to OODB, to files.
JDO enhanced classes are portable across different vendors' implementation. Once enhanced, a
Java class can be used with any vendor's JDO product.
JDO is integrated with Java EE in several ways. First of all, the vendor implementation may be
provided as a JEE Connector. Secondly, JDO may work in the context of JEE transaction
services.
JDO relies on the J2EE Connector architecture (JCA) for EIS access and uses the Java
Transaction API (JTA) for distributed transactions. A JDO instance is a persistence-
capable (implements PersistenceCapable interface) class, each instance of which
represents some form of persistent data. You can make a Java class persistence-
capable by either explicitly implementing the PersistenceCapable interface or using
a JDO enhancer during or after compile time. (A JDO enhancer is a byte code enhancer
program that modifies the byte codes of Java class files to enable transparent loading
and storing of the persistent instances' fields.) You can make almost any user-defined
class PersistenceCapable; however, some system classes, such as Thread, Socket,
and File, among others, can never be persistence capable.
One of JDO's primary objectives is to provide you with a transparent, Java-centric view
of persistent information stored in a wide variety of datastores. You can use the Java
programming model to represent the data in your application domain and transparently
retrieve and store this data from various systems, without needing to learn a new data-
access language for each type of datastore. The JDO implementation provides the
necessary mapping from your Java objects to the special datatypes and relationships of
the underlying datastore. Chapter 4 discusses Java modeling capabilities you can use in
your applications. This chapter provides a high-level overview of the architectural
aspects of JDO, as well as examples of environments in which JDO can be used. We
cannot enumerate all such environments in this book, because JDO is capable of
running in a wide variety of architectures.
The JDO architecture simplifies the development of scalable, secure, and transactional JDO implementations that support the JDO interface. You can access a wide variety of storage solutions that have radically different architectures and data models, but you can use a single, consistent, Java-centric view of the information from all the datastores.
The JDO architecture can be used to access and manage data contained in local storage systems and heterogeneous EISs, such as enterprise resource planning (ERP) systems, mainframe transaction processing systems, and database systems. JDO was designed to be suitable for a wide range of uses, from embedded small-footprint systems to large-scale enterprise application servers. A JDO implementation may provide an object-relational mapping tool that supports a broad array of relational databases.
You can focus on developing your application's business and presentation logic without
having to get involved in the issues related to connecting to a specific EIS. The JDO
implementation hides the EIS-specific issues, such as datatype mapping, relationship
mapping, and the retrieval and storage of data. Your application sees only a Java view
of the data, organized as classes using native Java constructs. EIS-specific issues are
important only during deployment of your application.
When JDO is deployed in a managed environment, it uses the J2EE Java Connector
Architecture, which defines a set of portable, scalable, secure, and transactional
mechanisms for integrating an EIS with an application server. These mechanisms focus
on important aspects of integration with heterogeneous systems: instance management,
connection management, and transaction management. The Java Connector Architecture enables a standard JDO implementation to be pluggable across application servers from multiple vendors.
Managed environments also provide transparency for application components' use of system-level mechanisms (distributed transactions, security, and connection management) by hiding the contracts between the JDO implementation and the application server. JDO can be used in the web server environment and can provide persistence services in a J2EE application-server environment, which supports the Enterprise JavaBeans (EJB) architecture.
The persistent classes that you define can migrate easily from one environment to
another. This also allows you to debug persistent classes and parts of your application
code in a simple one- or two-tier environment and deploy them in another tier of the
system architecture.
Ease of development: the JDO API allows application developers to focus on their domain object model and leave the details of persistence to the JDO implementation.
High performance: Java application developers do not need to worry about performance optimization for data access, because this task is delegated to JDO implementations that can improve data access patterns for best performance.
Scalability and security: since JDO can run within a managed environment such as a J2EE environment, it can take advantage of the scalability and security capabilities of the J2EE architecture.
JDO uses JDBC underneath, so raw JDBC must be faster (sometimes much faster) than JDO. But JDO implementations add a cache (and connection pooling), so in some cases JDO will outperform hand-written JDBC.
As you can see, JDO is a higher-level persistence API than JDBC, and there are some parallels with CMP and BMP. JDO and CMP are more complex but can be faster in development. BMP and JDBC allow more freedom: you may try to create your own pooling and caching and achieve greater performance, but this will take time.
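The caching point can be sketched in plain Java (CachingLoaderSketch is an illustrative name; the IntFunction stands in for the JDBC round trip a JDO implementation would make on a cache miss):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntFunction;

// Sketch of why a persistence layer with a cache (as JDO implementations add)
// can beat raw JDBC for repeated reads: the second lookup never hits the store.
class CachingLoaderSketch {
    private final Map<Integer, String> cache = new HashMap<>();
    private final IntFunction<String> store; // stands in for a JDBC round trip
    int storeHits = 0;                       // counts actual trips to the store

    CachingLoaderSketch(IntFunction<String> store) {
        this.store = store;
    }

    String load(int id) {
        // Only compute (i.e. hit the store) when the id is not already cached.
        return cache.computeIfAbsent(id, k -> {
            storeHits++;
            return store.apply(k);
        });
    }
}
```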
(5) ORM
In an object model, relationships are directly represented, rather than requiring join tables/operations.
The root of the problem is that objects can't be directly saved to and retrieved from
relational databases. While objects have identity, state, and behavior in addition to data,
an RDBMS stores data only. Even data alone can present a problem, since there is often
no direct mapping between Java and RDBMS data types. Furthermore, while objects are
traversed using direct references, RDBMS tables are related via like values in foreign and
primary keys. Additionally, current RDBMS have no parallel to Java's object inheritance
for data and behavior. Finally, the goal of relational modeling is to normalize data (i.e.,
eliminate redundant data from tables), whereas the goal of object-oriented design is to
model a business process by creating real-world objects with data and behavior. Robust
object-oriented application development requires a mapping strategy built on a solid
understanding of the similarities and differences in these models.
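The mismatch described above can be illustrated with a tiny hand-written mapping (all class and column names here are illustrative, not from any framework):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// In the object model, an order holds a direct reference to its customer;
// in the relational model, that reference becomes a foreign-key value in a flat row.
class OrmCustomer {
    final int id;
    final String name;
    OrmCustomer(int id, String name) { this.id = id; this.name = name; }
}

class OrmOrder {
    final int id;
    final OrmCustomer customer; // traversed by direct reference in Java

    OrmOrder(int id, OrmCustomer customer) { this.id = id; this.customer = customer; }

    // Hand-written "mapping" to a relational row: the object reference
    // collapses to a customer_id column related by like values -- exactly
    // the translation an ORM tool automates.
    Map<String, Object> toRow() {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", id);
        row.put("customer_id", customer.id);
        return row;
    }
}
```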
(6) DAO. A data access object (DAO) is an object that provides an abstract interface to some type of database or persistence mechanism, providing some specific operations without exposing details of the database. It provides a mapping from application calls to the persistence layer. This isolation separates the concerns of what data accesses the application needs, in terms of domain-specific objects and data types (the public interface of the DAO), and how these needs can be satisfied with a specific DBMS, database schema, etc. (the implementation of the DAO).
The DAO pattern is one of the standard J2EE design patterns. Developers use this pattern
to separate low-level data access operations from high-level business logic.
This design pattern is equally applicable to most programming languages, most types of
software with persistence needs and most types of database, but it is traditionally
associated with Java EE applications and with relational databases accessed via the JDBC
API because of its origin in Sun Microsystems' best practice guidelines[1] ("Core J2EE
Patterns") for that platform.
The advantage of using data access objects is the relatively simple and rigorous separation between two important parts of an application which can and should know almost nothing of each other, and which can be expected to evolve frequently and independently. Changing business logic can rely on the same DAO interface, while changes to persistence logic do not affect DAO clients as long as the interface remains correctly implemented.
In the specific context of the Java programming language, Data Access Objects as a
design concept can be implemented in a number of ways. This can range from a fairly
simple interface that separates the data access parts from the application logic, to
frameworks and commercial products. DAO coding paradigms can require some skill.
Page
Use of technologies like Java persistence technologies and JDO ensures to some extent
that the design pattern is implemented. Technologies like EJB CMP come built into
84
application servers and can be used in applications that use a JEE application server.
Commercial products like TopLink are available based on Object-relational mapping.
Popular open source ORM products include Hibernate, iBATIS and Apache OpenJPA.
Performance
DAO is a good design pattern, but performance-wise CMP is more suitable for updating the DB. You should use DAO for database reads, like listing all customers, because reading a huge number of records is not advisable through CMP.
• My experience is that the DAO makes for tight coupling of your object with the persistence. I've been called upon to maintain systems that used DAOs, and those systems were significantly harder to maintain than those with another form of persistence.
• It is also expected that in case the DAO implementation were to change, the other parts of the application would be unaffected.
• Resources dedicated to developing and implementing this layer translate into better software in this layer.
• The DAO pattern is applicable to the data layer. The VO pattern will be required along with it. The DAO+VO (Value Object) combination is good; it is what we can call POJOs (Plain Old Java Objects).
• Use CMP if there are not too many relationships between DB tables.
• If DB portability is of importance and to minimize maintenance, use BMP.
• If scalability is of importance, remember the EJB container provides lots of features for this, e.g. bean pooling and serialization support for excess beans.
• If scalability is not a concern, use DAO+VO (Value Object); this will eliminate the need for a J2EE server. A plain web server like Tomcat will do. Good cost-cutting, if that is what you/your company is looking for and the anticipated load is not much.
• If you need the flexibility of CMP in the DAO pattern, try using the Factory pattern for the databases required to be supported.
DAO is a pattern for linking persistence into your model. CMP and BMP are ways of making an object persistent. The DAO object can be CMP, BMP or simple JDBC. The pattern does not specify how persistence is realized.
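A minimal sketch of that separation in plain Java (CustomerDao, Customer, and the in-memory implementation are illustrative names; the map stands in for the JDBC, CMP, or BMP code an implementation would actually contain):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The DAO interface is all the business logic ever sees.
interface CustomerDao {
    void save(Customer c);
    Optional<Customer> findById(int id);
}

class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

// One interchangeable implementation; swapping in a JDBC- or CMP-backed
// version would not touch any DAO client code.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Integer, Customer> table = new HashMap<>();

    public void save(Customer c) {
        table.put(c.id, c);
    }

    public Optional<Customer> findById(int id) {
        return Optional.ofNullable(table.get(id));
    }
}
```

A factory (as suggested in the bullets above) would choose among such implementations at runtime, keeping clients coded only against CustomerDao.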
Why JPA?
Java developers who need to store and retrieve persistent data already have several
options available to them: serialization, JDBC, JDO, proprietary object-relational
mapping tools, object databases, and EJB 2 entity beans. Why introduce yet another
persistence framework? The answer to this question is that with the exception of JDO,
each of the aforementioned persistence solutions has severe limitations. JPA attempts to
overcome these limitations, as illustrated by the table below.
Table 2.1. Persistence Mechanisms

|                                      | Serialization | JDBC | ORM | OODB | EJB 2 | JDO | JPA |
|--------------------------------------|---------------|------|-----|------|-------|-----|-----|
| Advanced OO Concepts                 | Yes           | No   | Yes | Yes  | No    | Yes | Yes |
| Transactional Integrity              | No            | Yes  | Yes | Yes  | Yes   | Yes | Yes |
| Concurrency                          | No            | Yes  | Yes | Yes  | Yes   | Yes | Yes |
| Large Data Sets                      | No            | Yes  | Yes | Yes  | Yes   | Yes | Yes |
| Existing Schema                      | No            | Yes  | Yes | No   | Yes   | Yes | Yes |
| Relational and Non-Relational Stores | No            | No   | No  | No   | Yes   | Yes | No  |
| Queries                              | No            | Yes  | Yes | Yes  | Yes   | Yes | Yes |
| Strict Standards / Portability       | Yes           | No   | No  | No   | Yes   | Yes | Yes |
| Simplicity                           | Yes           | Yes  | Yes | Yes  | No    | Yes | Yes |
EJB 2.x entities are somewhat limited in the object-oriented concepts they can represent. Advanced features like inheritance, polymorphism, and complex relations are absent. Additionally, EJB 2.x entities are difficult to code, and they require heavyweight and often expensive application servers to run.
The JDO specification uses an API that is strikingly similar to JPA. JDO, however, supports non-relational databases, a feature that some argue dilutes the specification.
JPA combines the best features from each of the persistence mechanisms listed above.
Creating entities under JPA is as simple as creating serializable classes. JPA supports the
large data sets, data consistency, concurrent use, and query capabilities of JDBC. Like
object-relational software and object databases, JPA allows the use of advanced object-
oriented concepts such as inheritance. JPA avoids vendor lock-in by relying on a strict specification.
Ref. • Developing Web Services Using EJB 3.0
• Developing Web Services Using JAX-WS
• Web Services Technology -- Deployment Issues
package endpoint;

import javax.ejb.Stateless;

@Stateless
public class Calculator {
    public Calculator() {}

    public int add(int i, int j) {
        int k = i + j;
        System.out.println(i + " + " + j + " = " + k);
        return k;
    }
}
Because the EJB 3.0 bean doesn't need to implement the javax.ejb.SessionBean interface,
it no longer needs to include unimplemented lifecycle methods such as ejbActivate and
ejbPassivate. This results in a much simpler and cleaner class. Various annotations
defined in EJB 3.0 reduce the burden on developers and deployers by reducing or
eliminating the need to write a deployment descriptor for the component.
Marking the EJB 3.0 Bean as a Web Service
To mark a bean as a web service, simply annotate the class with the @WebService annotation. This is the javax.jws.WebService annotation type, specified in Web Services Metadata for the Java Platform, JSR 181. Here is the code for the Calculator class marked as a web service:
package endpoint;

import javax.ejb.Stateless;
import javax.jws.WebService;

@Stateless
@WebService
public class Calculator {
    public Calculator() {}

    public int add(int i, int j) {
        int k = i + j;
        System.out.println(i + " + " + j + " = " + k);
        return k;
    }
}
The service object provides the method, getCalculatorPort, to access the Calculator port of the web service. Note that both endpoint.CalculatorService and endpoint.Calculator are portable artifacts that are generated by using the wsimport utility. The wsimport utility is used to generate JAX-WS artifacts (it is invoked as part of the build-client step when you run the example program).
After you get the port, you can invoke a business method on it just as though you invoke
a Java method on an object. For example, the following line in JAXWSClient invokes the
add method in Calculator:
int ret = port.add(i, 10);
7. To undeploy the EJB Module from GlassFish, execute the following command:
ant undeploy
@WebService(
    name = "Calculator",
    serviceName = "CalculatorService",
    targetNamespace = "http://techtip.com/jaxws/sample"
)
public class Calculator {
    public Calculator() {}

    @WebMethod(operationName = "add", action = "urn:Add")
    public int add(int i, int j) {
        int k = i + j;
        System.out.println(i + " + " + j + " = " + k);
        return k;
    }
}
JAX-WS 2.0 relies heavily on the use of annotations as specified in A Metadata Facility
for the Java Programming Language (JSR 175) and Web Services Metadata for the Java
Platform (JSR 181), as well as additional annotations defined by the JAX-WS 2.0
specification.
Notice the two annotations in the Calculator class: @WebService and @WebMethod. A
valid endpoint implementation class must include a @WebService annotation. The
annotation marks the class as a web service. The name property value in the
@WebService annotation identifies a Web Service Description Language (WSDL)
portType (in this case, "Calculator"). The serviceName ("CalculatorService") is a WSDL
service. The targetNamespace property specifies the XML namespace used for the WSDL. All the properties are optional. For details on default values of these properties, see section 4.1 of the specification Web Services Metadata for the Java Platform, JSR 181.
The @WebMethod annotation exposes a method as a web service method. The operationName property value in the annotation of the Calculator class identifies a WSDL operation (in this case, add), and the action property value ("urn:Add") specifies an XML namespace for the WSDL and some of the elements generated from this web service operation. Both properties are optional. If you don't specify them, the WSDL operation value defaults to the method name, and the action value defaults to the targetNamespace of the service.
Compile the implementation class
After you code the implementation class, you need to compile it. Start GlassFish by
entering the following command:
<GF_install_dir>\bin\asadmin start-domain domain1
try {
    System.out.println(
        " Retrieving port from the service " + service);
    Calculator port = service.getCalculatorPort();
    System.out.println(
        " Invoking add operation on the calculator port");
    for (int i = 0; i < 10; i++) {
        int ret = port.add(i, 10);
        if (ret != (i + 10)) {
            System.out.println("Unexpected greeting " + ret);
            return;
        }
        System.out.println(" Result of add: " + ret);
    }
} catch (Exception e) {
    e.printStackTrace();
}
The JAX-WS artifacts are generated under jaxws-techtip. The artifacts are:
Add.java
Add.class
AddResponse.java
AddResponse.class
Calculator.java
Calculator.class
CalculatorService.java
CalculatorService.class
package-info.java
package-info.class
ObjectFactory.class
ObjectFactory.java
all:
BUILD SUCCESSFUL
Total time: 6 seconds
How the Web service has been implemented is transparent to the Web service
client. A client does not know if the Web service has been deployed in a J2EE or
non-J2EE environment.
Leverage existing J2EE technology.
Existing J2EE components can be exposed as Web services.
The performance of Web Services under various business scenarios is not known.
The effect of Web Services deployment on network bandwidth is uncertain.
Asynchronous Web Services cannot be built as of now.
How to ensure that all Web Services, for both producers and consumers, can meet any and all service levels defined for them in the target production environment.
How to test for scalability of Web Services, especially the externally facing Web
Services where usage loads may be unpredictable.
How to design web services to ensure transactional integrity.
4.4 Explain the benefits of the EJB 3 development model over previous EJB generations for ease of development including how the EJB container simplifies EJB development.
2. Explain standard uses for JSP pages and servlets in a typical Java EE application.
Some examples of Web frameworks used in designing a Java EE application are Apache Cocoon, Apache Struts, Google Web Toolkit, Ajax, JavaServer Faces and Spring.
Over the course of its life, the J2EE Web Tier has faced many challenges in easing Web application development. While it's a scalable, enterprise-ready platform, it isn't exactly developer-friendly. Particular challenges to Web developers include the need for a standard Web framework, compatible expression languages, and availability of components. Several Web frameworks have been developed to resolve these issues, each with its own strengths and weaknesses. This article discusses the unique challenges of the J2EE Web Tier.
Solution: A Unified Expression Language
To solve the disconnect between JSF EL and JSP EL, the JSP 2.1 and JSF 1.2 specifications have created a Unified Expression Language. If you're using a JSF 1.2 implementation, you can use JSP expressions in your JSF applications. It also adds a variable resolver to JSP. This means that frameworks like WebWork can control what ${...} means and tell it to talk to its ValueStack instead of the standard scopes. This is important for many framework developers because they don't want to invent their own syntax for resolving expressions. The new JSP and JSF versions are part of Java EE 5 and will be required by any Java EE 5-compliant containers.
Problem: JSP Unfriendly to Component-Based Frameworks
JSP is the primary view choice for JSF apps, but it's clunky at best. Most frameworks that use JSP simply render values as they encounter them when loading a page.
Competition among frameworks drives developers to innovate and compete for users. Not only do the different frameworks show different ways of doing things, but they're also borrowing from and competing with one another to become better. Competition breeds innovation. J2EE stands to benefit from this innovation because the good ideas can be added to JSF and the bad ones can be removed. The Java Community Process and Expert Groups for J2EE and JSF are very open to the community, so the community will contribute and help improve it, much like they have with the many Open Source alternatives.
All of the frameworks mentioned do the same thing, just in different ways. JSF and
Tapestry are making developing Java Web applications easier due to smarter defaults and
better plumbing. They don't require the developer to know much about the Servlet API,
and they handle most input types transparently - without using custom converters.
Well, that is an interesting question. I think that in a meritocracy, the best framework would
win. What makes a framework the best?
Ease of development and maintenance costs less for the far-sighted crowd and is therefore most beneficial. For each organization, that merit may be found in any one of the frameworks mentioned, due to ease of development from elegance of design, ample documentation, easy development semantics, or sheer knowledge of the platform. Familiarity is a strong argument for ease, and there are a lot of Java devs and architects out there who will find that the "best" framework is the one they know the best.
Which framework has the most merit? It depends on you, your knowledge, your organization,
and most importantly, your customer's needs. For me and my current projects, I'd probably
go with Struts. But, unlike the techno-vangelists out there, I don't think that everyone has to
convert to my favorite web app framework.
Hefe | June 19, 2006 02:53 PM
Just thought I'd weigh in with a few words with RSF, since this "frameworks" issue is
perpetually interesting.
RSF takes the view that "every framework is an insult to its users", and therefore tries to be
the smallest possible insult. I wrote it primarily because I got extremely fed up with
frameworks perpetually getting in the way of my writing code, and found that Spring was the
first framework that had the conceptual coherence to get out of the way fast enough.
That said, I found there were a few genuinely new and productive ideas in JSF that had been
missed by other frameworks - in particular I find Spring MVC problematic since although it is
sensible so far as it goes, it is the way it is because it is designed to cater to the "lowest
common denominator" of webapp frameworks, which is currently very low. RSF is what a Spring webapp framework would be, if one abandoned the Spring credo of "working with what is there already" and started from the ground up.
Given the current conversations about Rails and ORM solutions, I think RSF's approach to ORM
(nicknamed "OTP") is very interesting - you might think of it as a little analogous to Rails since
it aims to make access to storage idiomatic and transparent, but goes further in that it is not an
"implementation" but only an "idiom" - hence it can be layered on top of whatever ORM you
happen to like using at the moment. What most people like using at the moment in the Java
world is Hibernate, so RSF ORM enables you to use Hibernate ORM without seeing any
Hibernate dependence in your code.
Similarly RSF enables you to write a portlet without seeing any portlet code, or indeed an "X"
without any "X" code in general - for want of a better label this is the "Spring" philosophy of
"invisible frameworks", but RSF takes traditional IoC further by allowing request-scope IoC
using a lightweight Spring clone, RSAC, to "clean the areas Spring cannot reach".
Excellent discussion!
Thank you, Tim O'B, Tim F, Antranig, et al, for a very lively and informative discussion. I've
been surveying all frameworks, all languages, for the past few months, and this is one of the
best discussion/overviews I've seen. I'm about to begin building a new n-faced app (web +
mobile), and I want to be sure I've looked high and low for the "best" framework/language
combination before I start. I'll most likely have to live with any messes I make for the next
few years, so I really need to avoid any serious mistakes. Anything I work on will be built using
typically-chaotic XP methods, so the base platform has to promote code that is maintainable,
reliable, reusable, and very flexible. My clients generally don't care what technologies I use, as
long as it does what they want, is rock-solid stable, and it's easy and fast to make changes.
I've been a Java developer for 10 years now, so Java is "home" these days, just as C++ and C
were "home" in the years before that. As a professional developer (and consultant) I have an
obligation (to myself and to my clients) to remain open-minded, pondering whether it's time
to abandon Java. If I stagnate, then I deserve the professional death that will surely follow.
(Hopefully I'll get rich soon, and I can stop holding onto this tiger's tail!)
There are certainly some viable alternative languages (Python, Ruby, PHP, Groovy, to name a
few) and they each have fairly good platforms (e.g., Zope/Plone, Rails, PHP-Nuke), so they
deserve to be examined carefully. Each has its strengths and weaknesses, and the same can
be said for every Java-based solution I've seen or worked with. My job is to stay informed.
5.2 Explain standard uses for JSP pages and servlets in a typical Java EE application.
Ref. • [JEE_5_TUTORIAL] ch. 4 & 5
JavaServer Pages Technology
JavaServer Pages (JSP) technology allows you to easily create web content that has
both static and dynamic components. JSP technology makes available all the dynamic
capabilities of Java Servlet technology but provides a more natural approach to creating
static content. The main features of JSP technology are as follows:
• A language for developing JSP pages, which are text-based documents that
describe how to process a request and construct a response
• An expression language for accessing server-side objects
• Mechanisms for defining extensions to the JSP language
What Is a Servlet?
A servlet is a Java programming language class that is used to extend the capabilities of
servers that host applications accessed by means of a request-response programming
model. Although servlets can respond to any type of request, they are commonly used to
extend the applications hosted by web servers. For such applications, Java Servlet
technology defines HTTP-specific servlet classes.
Ref. • [JEE_5_TUTORIAL] ch. 10
JavaServer Faces Technology
JavaServer Faces technology is a server-side user interface component framework for
Java technology-based web applications.
The main components of JavaServer Faces technology are as follows:
• An API for representing UI components and managing their state; handling
events, server-side validation, and data conversion; defining page navigation;
supporting internationalization and accessibility; and providing extensibility for all
these features
• Two JavaServer Pages (JSP) custom tag libraries for expressing UI components
within a JSP page and for wiring components to server-side objects
Ref. • [JEE_5_TUTORIAL]
• Enterprise JavaBeans - Re: EJB Vs JSP/POJO/Servlets, etc
Java EE server: The runtime portion of a Java EE product. A Java EE server provides EJB and
web containers.
■ Enterprise JavaBeans (EJB) container: Manages the execution of enterprise beans for Java
EE applications. Enterprise beans and their container run on the Java EE server.
■ Web container: Manages the execution of JSP page and servlet components for Java EE
applications. Web components and their container run on the Java EE server.
■ Application client container: Manages the execution of application client components.
A web application is a dynamic extension of a web or application server. There are two types of
web applications:
■ Presentation-oriented: A presentation-oriented web application generates interactive web
pages containing various types of markup language (HTML, XML, and so on) and dynamic
content in response to requests.
■ Service-oriented: A service-oriented web application implements the endpoint of a web service.
(See Objective 3.2 for more information.) Presentation-oriented applications are often clients of
service-oriented web applications.
Web Applications
In the Java 2 platform, web components provide the dynamic extension capabilities for a web
server. Web components are either Java servlets, JSP pages, or web service endpoints. The
interaction between a web client and a web application is illustrated in Figure 3–1. The client
sends an HTTP request to the web server. A web server that implements Java Servlet and
JavaServer Pages technology converts the request into an HttpServletRequest object. This
object is delivered to a web component, which can interact with JavaBeans components or a
database to generate dynamic content. The web component can then generate an
HttpServletResponse or it can pass the request to another web component. Eventually a web
component generates an HttpServletResponse object. The web server converts this object to an
HTTP response and returns it to the client.
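The request/response flow described above can be sketched as a toy model in plain Java. This is only an illustration of the control flow; `Request`, `Response`, and `WebComponent` below are simplified stand-ins, not the real `javax.servlet` API types.

```java
import java.util.Map;

// Toy model of the container's request flow: a raw HTTP request is converted
// into a request object, handed to a web component, and the component's
// response object is converted back into raw HTTP text.
public class RequestFlowSketch {
    record Request(String path, Map<String, String> params) {}
    record Response(int status, String body) {}

    // Stand-in for a servlet: receives a request object, produces a response.
    interface WebComponent {
        Response service(Request req);
    }

    static String handleRawRequest(String rawPath, WebComponent component) {
        Request req = new Request(rawPath, Map.of());   // "HttpServletRequest" step
        Response resp = component.service(req);         // component generates the response
        return "HTTP/1.1 " + resp.status() + "\n\n" + resp.body(); // back to raw HTTP
    }

    public static void main(String[] args) {
        WebComponent hello = req -> new Response(200, "Hello from " + req.path());
        System.out.println(handleRawRequest("/greet", hello));
    }
}
```

In a real container the same shape holds: the server owns the conversion at both ends, and the component only ever sees the request and response objects.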
Servlets are Java programming language classes that dynamically process requests and
construct responses. JSP pages are text-based documents that execute as servlets but allow a
more natural approach to creating static content. Although servlets and JSP pages can be used
interchangeably, each has its own strengths. Servlets are best suited for service-oriented
applications (web service endpoints are implemented as servlets) and the control functions of a
presentation-oriented application, such as dispatching requests and handling nontextual data.
Notice that Java Servlet technology is the foundation of all the web application technologies, even
if you do not intend to write servlets. Each technology adds a level of abstraction that makes web
application prototyping and development faster and the web applications themselves more
maintainable, scalable, and robust.
Web components are supported by the services of a runtime platform called a web container. A
web container provides services such as request dispatching, security, concurrency, and life-
cycle management. It also gives web components access to APIs such as naming, transactions,
and email.
Certain aspects of web application behavior can be configured when the application is installed,
or deployed, to the web container. The configuration information is maintained in a text file in XML
format called a web application deployment descriptor (DD). A DD must conform to the schema
described in the Java Servlet Specification.
There are a number of scenarios in which the use of enterprise beans in an application
would be considered overkill: sort of like using a sledgehammer to crack a nut. The J2EE
specification doesn't mandate a specific application configuration, nor could it
realistically do so. The J2EE platform is flexible enough to support the application
configuration most appropriate to a specific application design requirement.
Business Components
Business code, which is logic that solves or meets the needs of a particular business domain
such as banking, retail, or finance, is handled by enterprise beans running in the business tier.
Figure 1–4 shows how an enterprise bean receives data from client programs, processes it (if
necessary), and sends it to the enterprise information system tier for storage. An enterprise bean
also retrieves data from storage, processes it (if necessary), and sends it back to the client
program.
Technically, the answer is that EJB does not add anything that cannot be done with
regular business objects...
With EJB before EJB3, a lot more effort goes into using the technology, which is why people
often use regular business objects... they may only need one or two of the things EJBs bring
to the table, so the pain of EJBs is not worth the gain.
EJB3 is a lot easier to implement (than previous EJB specifications) so we should start
to see people using this technology more.
When you are asking about EJB vs POJO the question you need to ask yourself first is:
have we decided how the application will be deployed?
If that decision has been made, then you will be able to know what flavor of EJB is
available (if at all) on your application server.
e.g. if it will be deployed on Tomcat => no EJBs at all
if it will be deployed on Glassfish or JBoss5 (or possibly some versions of JBoss4) or if it
will be deployed on Websphere => EJB3 is available.
Where I see EJBs fitting into the MVC application is in the Model layer, i.e. they provide
a means of abstracting your model so that it can be expressed in terms of business logic.
What you have to ask yourself is: do you actually need the extra capability and
complexity of EJB?
Just because EJB is available for use is not a good enough reason to use it.
Need for distributed transactions and biz components spread across multiple physical
servers would be one good reason to use EJB. However, that is not the case for most of
the apps out there.
As for other services provided by an EJB container (CMT, security, etc.), those can be
provided by lightweight IoC (Inversion of Control) containers like Spring, without the
overhead and complexity of EJB.
I think this is a very valid question. Think 1999...almost every java application used
servlets/JSP for presentation along with regular java objects for business logic. Of
course, today servlets are for presentation too and EJBs for business logic. So the 2 can
be compared. Brace yourself for impact.......A well designed solution using
Servlets/JSPs will *always* be faster and more scalable than a well designed solution
that uses Servlets/JSP + EJB. Please see
http://rubis.objectweb.org/download/perf_scalability_ejb.pdf for documented evidence.
However the difference in scalability and speed might not be very much particularly if
session beans are used. If you need to develop an application that is largely read-only,
needs to have very few transactions, is typically small and does not need the out-of-box
security features of EJB, simply use servlets/JSP. Hence, an EJB-centric solution is for
systems with an expected large, heavy load and possibly huge write operations.
n-tier Architecture
Developing n-tier distributed applications is a complex and challenging job. Distributing
the processing into separate tiers leads to better resource utilization. It also allows
allocation of tasks to experts who are best suited to work and develop a particular tier.
The web page designers, for example, are more equipped to work with the presentation
layer on the web server. The database developers, on the other hand, can concentrate
on developing stored procedures and functions. However, keeping these tiers as
isolated silos serves no useful purpose. They must be integrated to achieve a bigger
enterprise goal. It is imperative that this is done leveraging the most efficient protocol;
otherwise, this leads to serious performance degradation.
Besides integration, a distributed application requires various services. It must be able to
create, participate in, or manage transactions while interacting with disparate information
systems. This is an absolute must to ensure the consistency of enterprise data. Since n-tier
applications are accessed over the Internet, it is imperative that they are backed by
strong security services to prevent malicious access.
These days, the cost of hardware, like CPU and memory, has gone down drastically. But
still there is a limit, for example, to the amount of memory that is supported by the
It is evident from Figure 7 that layered architecture is an extension of the MVC
architecture. In the traditional MVC architecture, the data access or integration layer was
assumed to be part of the business layer. However, in Java EE, it has been reclaimed as
a separate layer. This is because enterprise Java applications integrate and
communicate with a variety of external information systems for business data—relational
database management systems (RDBMSs), mainframes, SAP ERP, or Oracle e-business
suites, to name just a few. Therefore, positioning integration services as a
Ref. • [web service]
• [SOA for J2EE]
JAX-WS 2.0
JAX-WS 2.0 replaces an older API, JAX-RPC 1.1 (Java API for XML-based Remote
Procedure Call), extending it in many areas. The new specification supports multiple
protocols, such as Simple Object Access Protocol (SOAP) 1.1, SOAP 1.2, and XML.
JAX-WS uses JAXB 2.0 as its data binding model and relies on annotations to
considerably simplify web service development. It also uses many annotations defined
by the specification Web Services Metadata for the Java Platform and introduces steps
to plug in multiple protocols instead of HTTP only. In a related development, JAX-WS
defines its own message-based session management.
Web Services for J2EE Overview
The Web Services for J2EE specification defines the required architectural relationships
as shown in Figure 3. This is a logical relationship and does not impose any
requirements on a container provider for structuring containers and processes. The
additions to the J2EE platform include a port component that depends on container
functionality provided by the web and EJB containers, and the SOAP/HTTP transport.
Web Services for J2EE requires that a Port be referencable from the client, web, and
EJB containers. This specification does not require that a Port be accessible from the
applet container.
This specification adds additional artifacts to those defined by JAX-RPC that may be
used to implement Web services, a role-based development methodology, portable
This specification defines two means for implementing a Web service, which runs in a
J2EE environment, but does not restrict Web service implementations to just those
means.
The first is a container based extension of the JAX-RPC programming model which
defines a Web service as a Java class running in the web container.
The second uses a constrained implementation of a stateless session EJB in the EJB
container. Other service implementations are possible, but are not defined by this
specification.
The container provides for life cycle management of the service implementation,
concurrency management of method invocations, and security services. A container
provides the services specific to supporting Web services in a J2EE environment. This
specification does not require that a new container be implemented. Existing J2EE
containers may be used and indeed are expected to be used to host Web services. Web
service instance life cycle and concurrency management is dependent on which
container the service implementation runs in. A JAX-RPC Service Endpoint
implementation in a web container follows standard servlet life cycle and concurrency
requirements and an EJB implementation in an EJB container follows standard EJB life
cycle and concurrency requirements.
This specification defines the responsibilities of the existing J2EE platform roles. There
are no new roles defined by this specification. There are two roles specific to Web
Services for J2EE used within this specification, but they can be mapped onto existing
J2EE platform roles. The Web Services for J2EE product provider role can be mapped
to a J2EE product provider role and the Web services container provider role can be
mapped to a container provider role within the J2EE specification.
In general, the developer role is responsible for the service definition, implementation,
and packaging within a J2EE module. The assembler role is responsible for assembling
the module into an application, and the deployer role is responsible for publishing the
deployed services and resolving client references to services. More details on role
responsibilities can be found in later sections.
There are multiple techniques that can be applied to this architectural model to enhance
systemic qualities and to improve the overall end user experience. The following
strategies, which are discussed in the following pages, are frequently applied in SAMP
(Solaris™ Operating System, Apache, MySQL™ database, PHP) architecture
deployments according to specific site and application needs:
• Layered resource model
• Application threading
• Session management
• Load balancing
• Reverse proxies
• Distributed caching
• Clustering
• Data abstraction and segmentation
• Monitoring and management
infrastructure is founded on processors that feature multiple cores and chip multi-threading
(CMT) technology, such as UltraSPARC® processors in Sun servers. CMT
technology enables a single server to act as a multi-node farm, enabling cost-effective
technology enables a single server to act as a multi-node farm, enabling cost-effective
consolidation of multiple Web server instances and components. Because of advanced
thread density, Sun servers based on UltraSPARC processors are ideal systems for the
Web server layer.
Due to the high volume of user data, caching is an essential part of every Web 2.0
application. Memcached is a de facto industry-standard caching solution that allows
companies to store objects in memory and expedite data access, especially in
comparison to disk-based data access in a traditional database.
tiers. To handle thousands or possibly millions of simultaneous user requests, Web 2.0
applications must be able to process many user interactions concurrently. Today’s
off-the-shelf operating systems, Web servers, application servers, cache servers, and
databases are generally engineered to support multiple execution threads and
concurrent workloads.
Even so, there are still some specific design challenges that can limit an application’s
ability to scale:
• Shared mutable data. Shared mutable data is the scourge of any parallel
system. In spite of shared user access, the application is responsible for
maintaining data integrity and consistency. Shared data that is read-only is
not an issue since every concurrent process has the exact same view of the
data. If the data is writable, however, then it must be protected by a locking
mechanism to prevent concurrent access. The topic of how to best implement
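The locking requirement described in this bullet can be shown with a minimal, generic Java sketch (not tied to any particular Web 2.0 stack): an unprotected read-modify-write loses updates under concurrency, while the same update under a lock does not.

```java
// Illustrates the "shared mutable data" problem: a counter incremented by
// many threads loses updates unless the mutation is serialized by a lock
// (here, Java's intrinsic lock via synchronized).
public class SharedMutableDataSketch {
    private long counter = 0;

    void unsafeIncrement() { counter++; }                  // read-modify-write is not atomic
    synchronized void safeIncrement() { counter++; }       // lock serializes concurrent writers
    synchronized long value() { return counter; }

    static long run(boolean safe) {
        SharedMutableDataSketch s = new SharedMutableDataSketch();
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    if (safe) s.safeIncrement(); else s.unsafeIncrement();
                }
            });
            threads[t].start();
        }
        try {
            for (Thread t : threads) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return s.value();
    }

    public static void main(String[] args) {
        System.out.println("safe total   = " + run(true));   // always 800000
        System.out.println("unsafe total = " + run(false));  // usually less than 800000
    }
}
```

Read-only shared data needs no such protection, which is exactly why the text singles out writable shared state as the scaling hazard.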
Session Management
Since HTTP is stateless, any Web server in a cluster can potentially process an
application client request. Session management allows a user’s session state to persist
— for example, after a user has been authenticated by the Web server, there is no need
to re-authenticate at the next HTTP request if the user’s authentication persists in the
user’s session state.
The convenience of session management, however, comes with a price — applications
that use session management must maintain each user’s session state, which is usually
stored in memory. This can greatly increase an application’s run-time memory footprint
and tends to link user sessions to specific servers, requiring those sessions to be
migrated to another node if the server is taken offline. If session management is not
implemented, applications can have a smaller footprint and any cluster node can service
requests from any user.
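The memory-footprint trade-off above can be made concrete with a small sketch of server-side session state. This is illustrative only; real containers expose this through HttpSession, with expiry and optional replication handled for you.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of server-side session state: after authentication the user
// gets a session id, and per-user state lives in server memory (the map)
// until the session is invalidated. Every live session adds to the server's
// run-time footprint — the cost described in the text.
public class SessionStoreSketch {
    private final Map<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();

    // Called once after authentication succeeds.
    String createSession(String user) {
        String id = UUID.randomUUID().toString();
        Map<String, Object> state = new ConcurrentHashMap<>();
        state.put("user", user);
        sessions.put(id, state);
        return id;
    }

    // Later requests present the id instead of re-authenticating.
    boolean isAuthenticated(String sessionId) {
        return sessions.containsKey(sessionId);
    }

    // Invalidating the session frees its memory.
    void invalidate(String sessionId) {
        sessions.remove(sessionId);
    }
}
```

Because the map lives on one server, a request carrying this session id must reach that server (or the state must be replicated), which is the server-affinity problem the text mentions.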
Load Balancing
One key factor in a successful Web 2.0 application architecture is the ability to distribute
application load — a growing number of connections or a geographically dispersed user
base must not significantly impact response time. To accomplish this, applications are
frequently deployed in conjunction with some type of load-balancing solution. In many
cases, a site’s load-balancing scheme redirects incoming requests to the nearest server
geographically.
Both hardware and software-based load-balancing solutions are available. Hardware
load balancers are usually located above the Web tier and sometimes in-between tiers.
Most software-level clustering implementations include load-balancing software that is
used by upstream components. A reverse proxy, such as Squid, can also be used to
distribute load across multiple servers. Other open source load-balancing software
solutions include Perlbal, Pen, and Pound. Hardware-based solutions include BigIP and
ServerIron, among others.
Reverse Proxies
The Web tier hosts both static and dynamic content accessible by end users. With
technologies such as AJAX introducing asynchronous request processing, Web servers
are delivering increasingly complex content aggregated from multiple Web sites and
using different formats, such as JSON or XML. The content is often transported via
multiple protocols other than common HTTP formats. For applications to scale well and
achieve performance goals, the use of reverse proxies is becoming a de facto trend.
Some reverse proxies (for example, Squid) allow caching of commonly requested pages
and static content, which reduces Web server load. The Web server is then free to
deliver dynamically generated content. An Apache Web server is commonly deployed in
the Web server tier and receives dispatched requests for dynamic content, as shown in
Figure 2.
Distributed Caching
An essential component of the Web tier is a distributed caching module (Memcached or
other caching module) that provides distributed shared caching capability of user
content. Created by danga.com for Live Journal, Memcached is an open source, high
performance, distributed memory object caching system that is widely adopted and
simple to manage. Rather than caching information within individual Web processes,
Memcached clients and servers enable the creation of a single global cache across
many systems. The cache can then be accessed via a client API either within application
code or via language-specific modules that abstract the data access layer and access
the cache before deferring to the underlying data store. Applications typically cache
partial or full dynamically generated Web pages, partial or full result sets from complex
database queries, and any application-level data that can be shared and reused.
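The "check the cache before deferring to the underlying data store" access pattern described above (often called cache-aside) can be sketched in a few lines. A HashMap stands in for a Memcached client here; real client APIs differ, but the control flow is the same.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside sketch: look in the shared cache first, fall back to the data
// store on a miss, then populate the cache so subsequent readers hit it.
public class CacheAsideSketch {
    private final Map<String, String> cache = new HashMap<>(); // stand-in for a Memcached client
    int storeHits = 0;                                         // how often we reached the data store

    String get(String key, Function<String, String> dataStore) {
        String cached = cache.get(key);
        if (cached != null) return cached;    // cache hit: the database is never touched
        String value = dataStore.apply(key);  // cache miss: defer to the underlying store
        cache.put(key, value);                // populate the cache for later requests
        storeHits++;
        return value;
    }
}
```

For a read-intensive workload, the expensive `dataStore` call runs once per key rather than once per request, which is where the speedups described in the text come from.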
Unlike some databases, Memcached does not block a reading thread while writes are in
progress, which generally speeds up access to data. This is particularly effective in read-
intensive applications. Memcached also provides opportunities for better data locality,
allowing data to be available much closer to where it is needed. In addition, it generally
improves access times compared to accessing data from a database or disk. Depending
on application load, the more Memcached instances there are, the faster the access.
Identifying which instance holds the needed data is a constant time operation and with
more instances, there is less load per server. Managing Memcached instances is
simplified because there is no crosstalk between instances — in fact the instances know
nothing of each other.
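The constant-time instance lookup works because the client, not the servers, decides where a key lives: it hashes the key and maps it onto the list of known instances. Real clients typically use consistent hashing so that adding a server remaps only a fraction of keys; simple modulo hashing is shown here for clarity.

```java
// Client-side routing sketch: which Memcached instance holds a given key is
// a pure computation over the key, so no crosstalk between instances is needed.
public class CacheRoutingSketch {
    static int instanceFor(String key, int instanceCount) {
        // floorMod keeps the index non-negative even for negative hash codes.
        return Math.floorMod(key.hashCode(), instanceCount);
    }

    public static void main(String[] args) {
        String[] servers = {"cache-a", "cache-b", "cache-c"};
        String key = "user:42:profile";
        System.out.println(key + " -> " + servers[instanceFor(key, servers.length)]);
    }
}
```

Every client computes the same mapping independently, which is why the instances can know nothing of each other.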
Memcached comes with many client APIs for languages and frameworks such as Ruby on
Rails, PHP, Perl, and the Java programming language. There is also a set of User Defined
Functions (UDFs) for MySQL that can be used to push data out to Memcached on writes
and updates. Although this technology is still somewhat experimental, it goes a long
way towards removing the need for a client to manually populate the cache after a write.
In building Web 2.0 applications, it is useful to anticipate the use of caching and initially
Clustering
Clustering for Web 2.0 applications requires that the workload be spread across multiple,
often identical, instances of application tier components. An instance can be a host
system running an entire application stack or it can be a set of virtualized systems
running on the same physical server (such as on a highly threaded Sun CMT server
using Sun Logical Domains or Solaris Containers). An instance can also refer to one of
several application instances running on the same physical server. Deployment
strategies sometimes require a combination of techniques.
Clustering provides several key benefits. First, if a cluster instance (or node) fails, the
application continues to run. Secondly, for appropriately designed applications, it is
possible to scale out a cluster by adding more nodes or more instances, which allows
the entire system to scale, supporting potentially greater workloads.
Failover within a cluster is not always automatic. In a failover scenario, the following
steps must occur when a cluster node fails:
• The failed node must be removed from the cluster
• Any incoming work must be directed to other nodes
• Any ongoing sessions must be redirected to other nodes, along with any data related to
those sessions.
These failover requirements are sometimes addressed by load balancers sitting in front of
the cluster (see “Load Balancing”, page 13). Load balancers not only balance load across
cluster nodes, but they can also:
• Detect node failures and prevent any new requests going to failed nodes
• Route all requests for the same session to the same cluster node
• Use weighting policies to distribute load dependent on the capabilities of specific nodes
or by how busy they are
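The weighting policy mentioned in the last bullet can be sketched with a smooth weighted round-robin selector: nodes receive requests in proportion to configured weights, so a more capable node gets a larger share. This is a minimal illustration; production balancers add health checks and session affinity on top.

```java
// Smooth weighted round-robin sketch: each pick, every node accrues its
// weight; the node with the highest accumulated credit is chosen and then
// pays back the total, spreading picks evenly over time.
public class WeightedRoundRobinSketch {
    private final String[] nodes;
    private final int[] weights;
    private final int[] current;

    WeightedRoundRobinSketch(String[] nodes, int[] weights) {
        this.nodes = nodes;
        this.weights = weights;
        this.current = new int[nodes.length];
    }

    String next() {
        int total = 0, best = 0;
        for (int i = 0; i < nodes.length; i++) {
            current[i] += weights[i];                 // accrue credit
            total += weights[i];
            if (current[i] > current[best]) best = i; // highest credit wins
        }
        current[best] -= total;                        // winner pays back the round
        return nodes[best];
    }
}
```

With weights {2, 1}, node one serves two of every three requests, matching its greater capacity.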
Depending on the application type, there can be a requirement for cluster node failover
to be invisible to the end user. This means that any data specific to user application
sessions would need to be available across all cluster nodes. Sharing of session data is
usually implemented through application-level APIs and through a persistence layer that
allows session data to be stored and recovered by other cluster nodes when a node
failure occurs (see “Session Management”, page 12). Since a node failure means that
other nodes in the cluster must do more work, it is important to design a cluster such that
node failure does not result in saturation of the rest of the cluster.
In the data tier, different storage requirements can translate into different availability
strategies. For databases, availability is commonly addressed through database
replication. Writes are made initially to a single database instance, which is defined as
the master. These writes are then written out (replayed) either synchronously or
asynchronously to one or more replicas. Along with the master, the replicas service
read-only traffic. Synchronous updates provide a consistent view of the data across the
entire cluster but are slower than asynchronous updates — therefore synchronous
updates are generally not well-suited for implementations that use multiple replicas.
Asynchronous updates can suffer from what is known as “replication lag,” where the
replay of writes to the replicas causes the possibility of a read seeing stale data. The
choice of database vendor (along with data integrity and consistency requirements)
ultimately determines which replication strategy is best suited for a given application.
1. From a list, select the most appropriate pattern for a given scenario. Patterns are
limited to those documented in Alur, Crupi and Malks (2003), Core J2EE Patterns: Best
Practices and Design Strategies, 2nd Edition, and named using the names given in that book.
2. From a list, select the most appropriate pattern for a given scenario. Patterns are
limited to those documented in Gamma, Erich; Richard Helm, Ralph Johnson, and John
Vlissides (1995), Design Patterns: Elements of Reusable Object-Oriented Software, and
are named using the names given in that book.
3. From a list, select the benefits and drawbacks of a pattern drawn from Gamma, Erich;
Richard Helm, Ralph Johnson, and John Vlissides (1995), Design Patterns: Elements of
Reusable Object-Oriented Software.
4. From a list, select the benefits and drawbacks of a specified Core J2EE pattern drawn
from Alur, Crupi and Malks (2003), Core J2EE Patterns: Best Practices and Design
Strategies, 2nd Edition.
Composite Entity: Model a network of related business entities
Composite View: Separately manage layout and content of multiple composed views
Data Access Object (DAO): Abstract and encapsulate data access mechanisms
Fast Lane Reader: Improve read performance of tabular data
Front Controller: Centralize application request processing
Intercepting Filter: Pre- and post-process application requests
Model-View-Controller: Decouple data representation, application behavior, and presentation
7.1 From a list, select the most appropriate pattern for a given scenario.
Patterns are limited to those documented in this book - Alur, Crupi and
Malks (2003). Core J2EE Patterns: Best Practices and Design Strategies
Ref. • [CORE_J2EE_PATTERNS]
• The [CORE_J2EE_PATTERNS] site
Among the 5 tiers of the J2EE architecture, J2EE patterns address the following 3 tiers:
presentation, business, and integration.
Presentation Tier
Business Tier
Integration Tier
Intercepting Filter intercepts incoming requests and outgoing responses and applies a filter. These filters
may be added and removed in a declarative manner, allowing them to be applied unobtrusively in a variety
of combinations. After this preprocessing and/or post-processing is complete, the final filter in the group
vectors control to the original target object. For an incoming request, this is often a Front Controller, but may
be a View.
Front Controller is a container to hold the common processing logic that occurs within the presentation tier
and that may otherwise be erroneously placed in a View. A controller handles requests and manages
content retrieval, security, view management, and navigation, delegating to a Dispatcher component to
dispatch to a View.
Application Controller centralizes control, retrieval, and invocation of view and command processing. While a
Front Controller acts as a centralized access point and controller for incoming requests, the Application
Controller is responsible for identifying and invoking commands, and for identifying and dispatching to views.
Context Object encapsulates state in a protocol-independent way to be shared throughout your application.
Using Context Object makes testing easier, facilitating a more generic test environment with reduced
dependence upon a specific container.
Data Access Object enables loose coupling between the business and resource tiers. Data Access Object
encapsulates all the data access logic to create, retrieve, delete, and update data from a persistent store.
Data Access Object uses Transfer Object to send and receive data.
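The DAO structure can be sketched in plain Java. The in-memory map below stands in for a real persistent store, and the type names are illustrative; the point is that business code sees only the DAO interface and a Transfer Object, never the persistence mechanism.

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Data Access Object sketch: the business tier codes against CustomerDao and
// CustomerTO only, so the storage implementation can change freely.
public class DaoSketch {
    // Transfer Object: carries data between the tiers.
    record CustomerTO(String id, String name) {}

    // The DAO interface is all the business tier ever sees.
    interface CustomerDao {
        void create(CustomerTO customer);
        Optional<CustomerTO> findById(String id);
        void delete(String id);
    }

    // One interchangeable implementation; a JDBC or ORM-backed version could
    // replace it without touching any business code.
    static class InMemoryCustomerDao implements CustomerDao {
        private final Map<String, CustomerTO> store = new ConcurrentHashMap<>();
        public void create(CustomerTO c) { store.put(c.id(), c); }
        public Optional<CustomerTO> findById(String id) { return Optional.ofNullable(store.get(id)); }
        public void delete(String id) { store.remove(id); }
    }
}
```

Swapping `InMemoryCustomerDao` for a database-backed class is exactly the loose coupling between business and resource tiers that the pattern promises.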
Service Activator enables asynchronous processing in your enterprise applications using JMS. A Service
Activator can invoke Application Service, Session Façade or Business Objects. You can also use several
Service Activators to provide parallel asynchronous processing for long running tasks.
Domain Store provides a powerful mechanism to implement transparent persistence for your object model. It
combines and links several other patterns including Data Access Objects.
Web Service Broker exposes and brokers one or more services in your application to external clients as a
web service using XML and standard web protocols. A Web Service Broker can interact with Application
Service and Session Façade. A Web Service Broker uses one or more Service Activators to perform
asynchronous processing of a request.
1. Intercepting Filter
Problem
You want to intercept and manipulate a request and a response before and after the request is processed.
Forces
• You want centralized, common processing across requests, such as logging, checking the data-encoding scheme, or compressing an outgoing response.
• You want pre-processing and post-processing components loosely coupled with the core request-handling services to facilitate unobtrusive integration.
• You want these components to be independent of each other and self-contained to facilitate reuse.
Solution
Use an Intercepting Filter as a pluggable filter to pre and post process requests and responses. A
filter manager combines loosely coupled filters in a chain, delegating control to the appropriate
filter. In this way, you can add, remove, and combine these filters in various ways without changing
existing code.
This pattern is useful for security checks, auditing, caching, compression, etc.
Class Diagram
Sequence Diagram
Consequences
• Centralizes control with loosely coupled handlers
• Improves reusability
• Declarative and flexible configuration
• Information sharing is inefficient
Related Patterns
• Front Controller
The controller solves some similar problems, but is better suited to handling core processing.
• Decorator [GoF]
The Intercepting Filter is related to the Decorator, which provides for dynamically pluggable
wrappers.
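In a Java EE web application this pattern is realized by javax.servlet.Filter and FilterChain. The following is a minimal plain-Java sketch of the same chaining idea; the class names (AuditFilter, CompressionFilter) and the string-based request/response are illustrative assumptions, not part of any API:

```java
import java.util.ArrayList;
import java.util.List;

// Each Filter pre-processes the request, then passes control down the chain;
// when the chain is exhausted, control vectors to the original target.
interface Filter {
    String execute(String request, FilterChain chain);
}

class FilterChain {
    private final List<Filter> filters = new ArrayList<>();
    private int position = 0;
    private final java.util.function.UnaryOperator<String> target;

    FilterChain(java.util.function.UnaryOperator<String> target) { this.target = target; }

    // Filters can be added and removed declaratively, without changing existing code.
    FilterChain addFilter(Filter f) { filters.add(f); return this; }

    // Delegates to the next filter, or to the target when no filters remain.
    String doFilter(String request) {
        if (position < filters.size()) {
            return filters.get(position++).execute(request, this);
        }
        return target.apply(request);
    }
}

class AuditFilter implements Filter {
    public String execute(String request, FilterChain chain) {
        // Pre-processing: tag the request, then hand control down the chain.
        return chain.doFilter("[audited]" + request);
    }
}

class CompressionFilter implements Filter {
    public String execute(String request, FilterChain chain) {
        // Post-processing: wrap the response after the target has run.
        return "<gzip>" + chain.doFilter(request) + "</gzip>";
    }
}
```

Note how AuditFilter acts before the target and CompressionFilter acts after it, yet both sit in the same loosely coupled chain.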
1. Front Controller
Problem
You want a centralized access point for presentation-tier request handling.
Forces
• You want to avoid duplicate control logic.
• You want to apply common logic to multiple requests.
• You want to separate system processing logic from the view.
• You want to centralize controlled access points into your system.
Solution
Use a Front Controller as the initial point of contact for handling all related requests. The Front
Controller centralizes control logic that might otherwise be duplicated, and manages the key request
handling activities.
This pattern should be the entry point for the system; it should delegate work to an Application Controller and should not become too fat.
Class Diagram
Consequences
• Centralizes control
• Improves manageability
• Improves reusability
• Improves role separation
Related Patterns
• Intercepting Filter
Both Intercepting Filter and Front Controller describe ways to centralize control of certain aspects
of request processing.
• Application Controller
Application Controller encapsulates the action and view management code to which the controller
delegates.
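In practice the front controller is a servlet mapped to all related requests (for example Struts' ActionServlet or Spring's DispatcherServlet). This plain-Java sketch shows only the idea; the path-to-view map and the trivial authentication check are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// A single controller receives every request, applies common logic once,
// then dispatches to the view mapped for the request path.
class FrontController {
    private final Map<String, String> viewsByPath = new HashMap<>();

    FrontController() {
        viewsByPath.put("/home", "homeView");
        viewsByPath.put("/orders", "ordersView");
    }

    String handleRequest(String path, boolean authenticated) {
        // Common control logic lives here once, instead of being duplicated per view.
        if (!authenticated) {
            return "loginView";
        }
        // Dispatch (a separate Dispatcher component in the full pattern).
        return viewsByPath.getOrDefault(path, "notFoundView");
    }
}
```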
1. Context Object
Problem
You want to avoid using protocol-specific system information outside of its relevant context.
Forces
• You have components and services that need access to system information.
• You want to decouple application components and services from the protocol specifics of system
information.
• You want to expose only the relevant APIs within a context.
Solution
Use a Context Object to encapsulate state in a protocol-independent way to be shared throughout
your application.
Application components should not have to know HTTP. Instead, they should call getXXX() on a
context object.
Class Diagram
Consequences
• Improves reusability and maintainability
• Improves testability
• Reduces constraints on evolution of interfaces
• Reduces performance
Related Patterns
• Intercepting Filter
An Intercepting Filter can use a ContextFactory to create a Context Object during web request
handling.
• Front Controller
A Front Controller can use a ContextFactory to create a Context Object during web request
handling
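A minimal sketch of the idea: a ContextFactory copies protocol-specific data (here simulated HTTP parameters) into a protocol-independent context, and a business component depends only on that context. The class names RequestContext, ContextFactory, and GreetingService are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Protocol-independent state holder: no servlet types appear here,
// so components using it can be tested without a web container.
class RequestContext {
    private final Map<String, String> attributes = new HashMap<>();

    void put(String key, String value) { attributes.put(key, value); }

    String get(String key) { return attributes.get(key); }
}

// In the full pattern a ContextFactory populates the context from the
// protocol-specific request (e.g. HttpServletRequest); simulated here.
class ContextFactory {
    static RequestContext fromParameters(Map<String, String> httpParameters) {
        RequestContext ctx = new RequestContext();
        httpParameters.forEach(ctx::put);
        return ctx;
    }
}

// A component that calls getXXX() on the context instead of knowing HTTP.
class GreetingService {
    String greet(RequestContext ctx) {
        return "Hello, " + ctx.get("userName");
    }
}
```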
1. Application Controller
Problem
You want to centralize and modularize action and view management.
Forces
• You want to reuse action and view-management code.
• You want to improve request-handling extensibility, such as adding use case functionality to an
application incrementally.
• You want to improve code modularity and maintainability, making it easier to extend the application
and easier to test discrete parts of your request-handling code independent of a web container.
Solution
Use an Application Controller to centralize retrieval and invocation of request-processing
components, such as commands and views.
Class Diagram
Sequence Diagram
Consequences
• Improves modularity
• Improves reusability
• Improves extensibility
Related Patterns
• Front Controller
A Front Controller uses an Application Controller to perform action and view management.
• Service Locator
A Service Locator performs service location and retrieval. A Service Locator is a coarser object,
often uses sophisticated infrastructure for lookup, and doesn’t manage routing. It also doesn’t
address view management.
• Command Processor [POSA1]
A Command Processor manages command invocations, providing invocation scheduling, logging,
and undo/redo functionality.
• Command Pattern [GoF]
A Command encapsulates a request in an object, separating the request from its invocation.
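The two responsibilities named in the solution (command resolution/invocation and view resolution/dispatch) can be sketched in plain Java as follows; the registration API and outcome strings are illustrative assumptions:

```java
import java.util.HashMap;
import java.util.Map;

// A Command encapsulates one unit of request-processing logic.
interface Command {
    String execute();
}

class ApplicationController {
    private final Map<String, Command> commands = new HashMap<>();
    private final Map<String, String> views = new HashMap<>();

    void registerCommand(String action, Command command) { commands.put(action, command); }
    void registerView(String outcome, String view) { views.put(outcome, view); }

    // Command resolution + invocation, then view resolution + dispatch.
    String handle(String action) {
        Command command = commands.get(action);
        String outcome = (command == null) ? "error" : command.execute();
        return views.getOrDefault(outcome, "errorView");
    }
}
```

New use cases are added incrementally by registering new commands and views, without touching the controller itself.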
1. View Helper
Problem
You want to separate a view from its processing logic.
Forces
• You want to use template-based views, such as JSP.
• You want to avoid embedding program logic in the view.
• You want to separate programming logic from the view to facilitate division of labor between
software developers and web page designers.
Solution
Use Views to encapsulate formatting code and Helpers to encapsulate view-processing logic. A View
delegates its processing responsibilities to its helper classes, implemented as POJOs, custom tags,
or tag files. Helpers serve as adapters between the view and the model, and perform processing
related to formatting logic, such as generating an HTML table, so that no programming logic remains in the views.
Class Diagram
Sequence Diagram
Consequences
• Improves application partitioning, reuse, and maintainability
• Improves role separation
• Eases testing
• Helper usage mirrors scriptlets
Related Patterns
• Front Controller
A Front Controller typically delegates to an Application Controller to perform action and view
management.
• Application Controller
An Application Controller manages view preparation and view creation, delegating to views and
helpers.
• View Transform
An alternative approach to view creation is to perform a View Transform.
• Business Delegate
A Business Delegate reduces the coupling between a helper object and a remote business service,
upon which the helper object can invoke.
1. Composite View
Problem
You want to build a view from modular, atomic component parts that are combined to create a composite whole, while managing the content and the layout independently.
Forces
• You want common subviews, such as headers, footers and tables reused in multiple views, which
may appear in different locations within each page layout.
• You have content in subviews which might frequently change or might be subject to certain
access controls, such as limiting access to users in certain roles.
• You want to avoid directly embedding and duplicating subviews in multiple views which makes
layout changes difficult to manage and maintain.
Solution
Use Composite Views that are composed of multiple atomic subviews. Each subview of the overall
template can be included dynamically in the whole, and the layout of the page can be managed
independently of the content.
Class Diagram
Sequence Diagram
Strategies
Consequences
• Improves modularity and reuse
• Adds role-based or policy-based control
• Enhances maintainability
• Reduces performance
Related Patterns
• View Helper
A Composite View can fulfill the role of View in View Helper.
• Composite [GoF]
A Composite View is based on Composite [GoF], which describes part-whole hierarchies where a
composite object is composed of numerous subparts.
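Following the Composite [GoF] structure the solution describes, a sketch in plain Java: atomic subviews and composite views share one interface, so a page can nest headers, footers, and body content; the names SimpleView and CompositeView are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Atomic and composite views share one interface.
interface View {
    String render();
}

class SimpleView implements View {
    private final String content;
    SimpleView(String content) { this.content = content; }
    public String render() { return content; }
}

class CompositeView implements View {
    private final List<View> subviews = new ArrayList<>();

    // Subviews are included dynamically; layout is managed here,
    // independently of each subview's content.
    CompositeView add(View v) { subviews.add(v); return this; }

    public String render() {
        StringBuilder page = new StringBuilder();
        for (View v : subviews) {
            page.append(v.render());
        }
        return page.toString();
    }
}
```

In a real application the same role is played by JSP includes or a templating mechanism such as Tiles.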
1. Service to Worker
Problem
You want to perform core request handling and invoke business logic before control is passed to the view.
Forces
• You want specific business logic executed to service a request in order to retrieve content that will
be used to generate a dynamic response.
• You have view selections which may depend on responses from business service invocations.
• You may have to use a framework or library in the application.
Solution
Use Service to Worker to centralize control and request handling to retrieve a presentation model
before turning control over to the view. The view generates a dynamic response based on the
presentation model.
This pattern is composed of Front Controller + Application Controller + View Helper.
Class Diagram
Strategies
• Servlet Front Strategy
• JSP Front Strategy
• Template-Based View Strategy
• Controller-Based View Strategy
• JavaBean Helper Strategy
• Custom Tag Helper Strategy
• Dispatcher in Controller Strategy
Consequences
• Centralizes control and improves modularity, reusability, and maintainability
• Improves role separation
Related Patterns
1. Dispatcher View
Problem
You want a view to handle a request and generate a response, while managing limited amounts of business processing.
Forces
• You have static views.
• You have views generated from an existing presentation model.
• You have views which are independent of any business service response.
• You have limited business processing.
Solution
Use Dispatcher View with views as the initial access point for a request. Business processing, if
necessary in limited form, is managed by the views.
Class Diagram
Sequence Diagram
Consequences
• Leverages frameworks and libraries.
• Introduces potential for poor separation of the view from the model and control logic.
• Separates processing logic from view and improves reusability.
Related Patterns
• Front Controller
In a Dispatcher View approach, a Front Controller can handle the request or the request may be
handled initially by the view.
• Application Controller
An Application Controller will often not be used with Dispatcher View. An Application Controller is
used in those cases where limited view management is required to resolve an incoming request to
the actual view.
• View Helper
Helpers mainly adapt and transform the presentation model for the view, but also help with any
limited business processing that is initiated from the view.
• Composite View
The view can be a Composite View.
• Service to Worker
The Service to Worker approach centralizes control, request handling, and business processing
before control is passed to the view. Dispatcher View defers this behavior, if needed, to the time of
view processing.
1. Business Delegate
Sequence Diagram
1. Session Façade
Problem
You want to expose business components and services to remote clients.
Forces
• You want to avoid giving clients direct access to business-tier components, to prevent
tight coupling with the clients.
• You want to provide a remote access layer to your Business Objects (374) and other
business-tier components.
• You want to aggregate and expose your Application Services (357) and other services to
remote clients.
• You want to centralize and aggregate all business logic that needs to be exposed to
remote clients.
• You want to hide the complex interactions and interdependencies between business
components and services to improve manageability, centralize logic, increase flexibility,
and improve ability to cope with changes.
Sequence Diagram
Strategies
• Stateless Session Façade Strategy
• Stateful Session Façade Strategy
Consequences
The Session Façade is based on the Facade design pattern.
1. Application Service
Problem
You want to centralize business logic across several business-tier components and services.
Forces
• You want to minimize business logic in service facades.
• You have business logic acting on multiple Business Objects or services.
• You want to provide a coarser-grained service API over existing business-tier
components and services.
• You want to encapsulate use case-specific logic outside of individual Business Objects.
Sequence Diagram
Strategies
• Application Service Command Strategy
• GoF Strategy for Application Service Strategy
1. Business Object
used by developers and architects in the field and is more precise. We have seen
extensive use of the term business object in numerous projects that fall in line with the
concepts outlined in this pattern.
1. Composite Entity
Problem
You want to use entity beans to implement your conceptual domain model.
Forces
• You want to avoid the drawbacks of remote entity beans, such as network overhead and
remote inter-entity bean relationships.
• You want to leverage bean-managed persistence (BMP) using custom or legacy
persistence implementations.
Sequence Diagram
• Increases object granularity
• Facilitates composite transfer object creation
Related Patterns
• Business Object
The Business Object pattern describes in general how domain model entities are
implemented in J2EE applications. Composite Entity is one of the strategies of the
Business Object pattern for implementing the Business Objects using entity beans.
• Transfer Object
The Composite Entity creates a composite Transfer Object and returns it to the client.
The Transfer Object is used to carry data from the Composite Entity and its dependent
objects.
• Session Façade
Composite Entities are generally not directly exposed to the application clients; a Session Façade typically encapsulates access to them.
1. Transfer Object
Problem
You want to transfer multiple data elements over a tier.
Forces
• You want clients to access components in other tiers to retrieve and update data.
• You want to reduce remote requests across the network.
• You want to avoid network performance degradation caused by chattier applications that
have high network traffic.
Solution
This pattern is also known as value object.
Use a Transfer Object to carry multiple data elements across a tier.
Class Diagram
Sequence Diagram
• Multiple Transfer Objects Strategy
• Entity Inherits Transfer Object Strategy
Consequences
• Reduces network traffic
• Simplifies remote object and remote interface
• Transfers more data in fewer remote calls
• Reduces code duplication
• Introduces stale transfer objects
• Increases complexity due to synchronization and version control
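A minimal sketch: a serializable Transfer Object carries several data elements in one remote call, instead of one chatty getter call per field. CustomerTO and CustomerService are illustrative names, and the hard-coded data stands in for state that would really come from an entity or DAO:

```java
import java.io.Serializable;

// Serializable so it can cross a remote (RMI/IIOP) boundary by value.
class CustomerTO implements Serializable {
    private final String id;
    private final String name;
    private final String email;

    CustomerTO(String id, String name, String email) {
        this.id = id;
        this.name = name;
        this.email = email;
    }

    String getId() { return id; }
    String getName() { return name; }
    String getEmail() { return email; }
}

// The remote component builds one Transfer Object per request,
// replacing three fine-grained remote calls with a single coarse one.
class CustomerService {
    CustomerTO getCustomerData(String id) {
        // Illustrative data; a real service would load it from persistence.
        return new CustomerTO(id, "Ada Lovelace", "ada@example.com");
    }
}
```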
Related Patterns
1. Transfer Object Assembler
Strategies
• POJO Transfer Object Assembler Strategy
• Session Bean Transfer Object Assembler Strategy
1. Value List Handler
Forces
• You want to avoid the overhead of using EJB finder methods for large searches.
• You want to implement a read-only use-case that does not require a transaction.
• You want to provide the clients with an efficient search and iterate mechanism over a
large results set.
• You want to maintain the search results on the server side.
Solution
Use a Value List Handler to search, cache the results, and allow the client to traverse and
select items from the results.
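The solution can be sketched in plain Java: the search runs once, the full result list is cached on the server side, and the client pages through it. The string results and the nextPage API are illustrative assumptions; a real handler would obtain the results from a DAO query:

```java
import java.util.List;

class ValueListHandler {
    private final List<String> results;
    private int position = 0;

    ValueListHandler(List<String> searchResults) {
        // Illustrative: a real handler would execute the search via a DAO.
        this.results = searchResults;
    }

    int size() { return results.size(); }

    // Returns the next page of at most pageSize items from the cached results,
    // so the client never re-executes the (expensive) search.
    List<String> nextPage(int pageSize) {
        int end = Math.min(position + pageSize, results.size());
        List<String> page = results.subList(position, end);
        position = end;
        return page;
    }
}
```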
Class Diagram
Strategies
• POJO Handler Strategy
• Value List Handler Session Façade Strategy
Sequence Diagram
1. Data Access Object
Strategies
• Transfer Object Collection Strategy
• Cached RowSet Strategy
• Read Only RowSet Strategy
• RowSet Wrapper List Strategy
Consequences
• Enables transparency
• Provides object-oriented view and encapsulates database schemas
• Enables easier migration
• Reduces code complexity in clients
• Organizes all data access code into a separate layer
1. Service Activator
Problem
You want to invoke services asynchronously.
Forces
• You want to invoke business services, POJOs, or EJB components in an asynchronous
manner.
• You want to integrate publish/subscribe and point-to-point messaging to enable
asynchronous processing services.
• You want to perform a business task that is logically composed of several business tasks.
Solution
Use a Service Activator to receive asynchronous requests and invoke one or more
business services.
Class Diagram
Strategies
• POJO Service Activator Strategy
• MDB Service Activator Strategy
• Service Activator Aggregator Strategy
• Response Strategies
1. Domain Store
Forces
• You want to avoid putting persistence details in your Business Objects.
• You do not want to use entity beans.
• Your application might be running in a web container.
• Your object model uses inheritance and complex relationships.
Solution
Use a Domain Store to transparently persist an object model. Unlike J2EE’s container-
managed persistence and bean-managed persistence, which include persistence support
code in the object model, Domain Store's persistence mechanism is separate from the
object model.
Class Diagram
Strategies
• Custom Persistence Strategy
• JDO Strategy
Consequences
1. Web Service Broker
Forces
• You want to reuse and expose existing services to clients.
• You want to monitor and potentially limit the usage of exposed services, based on your
business requirements and system resource usage.
• Your services must be exposed using open standards to enable integration of
heterogeneous applications.
• You want to bridge the gap between business requirements and existing service
capabilities
Solution
Use a Web Service Broker to expose and broker one or more services using XML and web
protocols.
Class Diagram
Strategies
• Custom XML Messaging Strategy
• Java Binder Strategy
• JAX-RPC Strategy
Consequences
• Introduces a layer between client and service
• Existing remote Session Façades (341) need to be refactored to support local access
• Network performance may be impacted due to web protocols
Related Patterns
• Aggregator [EIP]
• Application Service
Application Service components can be called from Web Service Broker components.
7.2 From a list, select the most appropriate pattern for a given scenario. Patterns are limited to those documented in Gamma, Erich; Richard Helm, Ralph Johnson, and John Vlissides (1995). Design Patterns: Elements of Reusable Object-Oriented Software, and are named using the names given in that book.
Ref. • [DESIGN_PATTERNS]
# OO Pattern Definition
17 Prototype: When instances of a class can have one of only a few different combinations of state, it may be more convenient to install a corresponding number of prototypes and clone them (less expensive at runtime) rather than instantiating the class manually, each time with the appropriate state.
20 State: To allow an object to alter its behavior when its internal state changes; the object will appear to change its class.
Classifications of OO patterns
Purpose: Creational, Structural, or Behavioral. Scope: Class or Object.
• Class scope: Creational - Factory Method; Structural - Adapter (class form); Behavioral - Interpreter, Template Method
• Object scope: Creational - Abstract Factory, Builder, Prototype, Singleton; Structural - Adapter (object form), Bridge, Composite, Decorator, Façade, Flyweight, Proxy; Behavioral - Chain of Responsibility, Command, Iterator, Mediator, Memento, Observer, State, Strategy, Visitor
We classify OO design patterns by two criteria in the table above. The first criterion,
called purpose, reflects what a pattern does. Patterns can have creational, structural, or
behavioral purpose. Creational patterns concern the process of object creation.
Structural patterns deal with the composition of classes or objects. Behavioral patterns
characterize the ways in which classes or objects interact and distribute responsibility.
The second criterion, called scope, specifies whether the pattern applies primarily to
classes or to objects. Class patterns deal with relationships between classes and their
subclasses. These relationships are established through inheritance, so they are static—
fixed at compile-time. Object patterns deal with object relationships, which can be changed at run-time and are more dynamic.
Creational class patterns defer some part of object creation to subclasses, while
Creational object patterns defer it to another object. The Structural class patterns use
inheritance to compose classes, while the Structural object patterns describe ways to
assemble objects. The Behavioral class patterns use inheritance to describe algorithms
and flow of control, whereas the Behavioral object patterns describe how a group of
objects cooperate to perform a task that no single object can carry out alone.
There are other ways to organize the patterns. Some patterns are often used together.
For example, Composite is often used with Iterator or Visitor. Some patterns are
alternatives: Prototype is often an alternative to Abstract Factory. Some patterns result in
similar designs even though the patterns have different intents. For example, the
structure diagrams of Composite and Decorator are similar.
1. Abstract Factory
Provide an interface for creating families of related or dependent objects without
specifying their concrete classes.
2. Adapter
Convert the interface of a class into another interface clients expect. Adapter lets classes work together that could not otherwise because of incompatible interfaces.
The form of the Adapter (called the Class Adapter) shown below uses inheritance:
4. Builder
Separate the construction of a complex object from its representation so that the
same construction process can create different representations.
The Builder Design Pattern encapsulates the logic of how to put together a
complex object so that the client just requests a configuration and the Builder directs
the logic of building it.
The Builder pattern allows a client object to construct a complex object by specifying
only its type and content. The client is shielded from the details of the object's
construction.
It is a pattern for step-by-step creation of a complex object, in which the same
construction process can create different representations; this step-by-step routine
also makes for finer control over the construction process. All the different builders
generally inherit from an abstract builder class that declares the general functions
to be used by the director to let the builder create the product in parts.
Builder has a similar motivation to the Abstract Factory, but whereas in that pattern,
the client uses the Abstract Factory class methods to create its own object, in Builder
the client instructs the builder class on how to create the object and then asks it for
the result. How the class is put together is up to the Builder class. It's a subtle
difference.
The Builder pattern is applicable when the algorithm for creating a complex object
should be independent of the parts that make up the object and how they are
assembled and the construction process must allow different representations for the
object that's constructed.
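A minimal sketch of the director/builder split described above: the director drives the same construction steps, while each concrete builder produces a different representation. The MealBuilder hierarchy and its output formats are illustrative assumptions:

```java
// One construction interface, multiple representations.
interface MealBuilder {
    void addMain(String main);
    void addDrink(String drink);
    String getResult();
}

class TextMealBuilder implements MealBuilder {
    private final StringBuilder meal = new StringBuilder();
    public void addMain(String main) { meal.append("main=").append(main).append(";"); }
    public void addDrink(String drink) { meal.append("drink=").append(drink).append(";"); }
    public String getResult() { return meal.toString(); }
}

class JsonMealBuilder implements MealBuilder {
    private String main = "";
    private String drink = "";
    public void addMain(String main) { this.main = main; }
    public void addDrink(String drink) { this.drink = drink; }
    public String getResult() { return "{\"main\":\"" + main + "\",\"drink\":\"" + drink + "\"}"; }
}

// The director encapsulates the construction process; swapping the builder
// swaps the representation without touching this code.
class MealDirector {
    String construct(MealBuilder builder) {
        builder.addMain("pasta");
        builder.addDrink("water");
        return builder.getResult();
    }
}
```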
6. Command
Encapsulate a request as an object, thereby letting you parameterize clients with
different requests, queue or log requests, and support undoable operations.
Encapsulate an operational request of a Service into an entity (Command), so that
the same Service can be used to trigger different operations. Optionally, this can be
used to support undo, logging, queuing, and other additional behaviors. This is
typically used to separate non-changing code (framework code) from changing code
(plugin code) when a reusable object framework is created.
Use the Command Design Pattern when you want to:
• Parameterize objects by an action to perform. You can express such
parameterization in a procedural language with a callback function, that is, a
function that's registered somewhere to be called at a later point. Commands
are an object-oriented replacement for callbacks.
• Specify, queue, and execute requests at different times. A Command object
can have a lifetime independent of the original request. If the receiver of a
request can be represented in an address space-independent way, then you
can transfer a command object for the request to a different process and
fulfill the request there.
• Support undo. The Command's execute operation can store state for
reversing its effects in the command itself. The Command interface must
have an added unexecute operation that reverses the effects of a previous
call to execute. Executed commands are stored in a history list. Unlimited-
level undo and redo is achieved by traversing this list backwards and
forwards calling unexecute and execute, respectively.
• Support logging changes so that they can be reapplied in case of a system
crash. By augmenting the Command interface with load and store
operations, you can keep a persistent log of changes. Recovering from a
crash involves reloading logged commands from disk and reexecuting them
with the execute operation.
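The undo mechanism described above can be sketched as follows: each command stores what it needs to reverse its own effect, and an invoker keeps a history list. The Document/AppendCommand names are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

interface Command {
    void execute();
    void unexecute(); // reverses the effect of a previous execute()
}

class Document {
    final StringBuilder text = new StringBuilder();
}

class AppendCommand implements Command {
    private final Document document;
    private final String addition;

    AppendCommand(Document document, String addition) {
        this.document = document;
        this.addition = addition;
    }

    public void execute() { document.text.append(addition); }

    // The state needed for reversal (the appended text) lives in the command itself.
    public void unexecute() {
        int length = document.text.length();
        document.text.delete(length - addition.length(), length);
    }
}

// Invoker: executed commands are stored in a history list; undo pops the
// most recent command and calls unexecute.
class CommandInvoker {
    private final Deque<Command> history = new ArrayDeque<>();

    void invoke(Command command) {
        command.execute();
        history.push(command);
    }

    void undo() {
        if (!history.isEmpty()) {
            history.pop().unexecute();
        }
    }
}
```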
7. Composite
Compose objects into tree structures to represent part-whole hierarchies. Composite
lets clients treat individual objects and compositions of objects uniformly.
Model simple and complex components in such a way as to allow client entities to
consume their behavior in the same way. The Composite Design Pattern captures
hierarchical relationships of varying complexity and structure.
8. Decorator
Attach additional responsibilities to an object dynamically. Decorators provide a
flexible alternative to sub-classing (i.e. inheritance) for extending functionality.
The Decorator Pattern works by wrapping the new "decorator" object around the
original object, which is typically achieved by passing the original object as a
parameter to the constructor of the decorator, with the decorator implementing the
new functionality. The interface of the original object needs to be maintained by the
decorator.
Decorators are alternatives to subclassing. Subclassing adds behavior at compile
time whereas decorators provide a new behavior at runtime.
This difference becomes most important when there are several independent ways of
extending functionality. In some object-oriented programming languages, classes
cannot be created at runtime, and it is typically not possible to predict what
combinations of extensions will be needed at design time. This would mean that a
new class would have to be made for every possible combination. By contrast,
decorators are objects, created at runtime, and can be combined on a per-use basis.
An example of the decorator pattern is the Java I/O Streams implementation.
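A minimal sketch mirroring that java.io structure: decorators receive the wrapped object through their constructor, keep its interface, and layer new behavior on at runtime. The Message interface and decorator names are illustrative assumptions:

```java
// The component interface that both the original object and all
// decorators maintain.
interface Message {
    String getText();
}

class PlainMessage implements Message {
    private final String text;
    PlainMessage(String text) { this.text = text; }
    public String getText() { return text; }
}

// The decorator holds the wrapped object (passed to its constructor)
// and delegates to it, adding behavior before or after the call.
abstract class MessageDecorator implements Message {
    protected final Message wrapped;
    MessageDecorator(Message wrapped) { this.wrapped = wrapped; }
}

class UpperCaseDecorator extends MessageDecorator {
    UpperCaseDecorator(Message wrapped) { super(wrapped); }
    public String getText() { return wrapped.getText().toUpperCase(); }
}

class BracketDecorator extends MessageDecorator {
    BracketDecorator(Message wrapped) { super(wrapped); }
    public String getText() { return "[" + wrapped.getText() + "]"; }
}
```

Because decorators are combined per use at runtime, any stacking order works without defining a new class per combination, exactly as with `new BufferedInputStream(new FileInputStream(...))` in java.io.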
9. Façade
Provide a unified interface to a set of interfaces in a subsystem. Façade defines a
higher-level interface that makes the subsystem easier to use.
Structuring a system into subsystems helps reduce complexity. A common design
goal is to minimize the communication and dependencies between subsystems. One
way to achieve this goal is to introduce a Façade object that provides a single,
simplified interface to the more general facilities of a subsystem.
Façade allows for the re-use of a valuable sub-system without coupling to the
specifics of its nature.
Use the Façade pattern when:
• You want to provide a simple interface to a complex subsystem. As a subsystem
evolves, applying patterns results in more and smaller classes, which makes the
subsystem more reusable and easier to customize, but it also becomes harder to use
for clients that don't need to customize it. A Façade can provide a simple default
view of the subsystem that is good enough for most clients. Only clients needing
more customizability will need to look beyond the Façade.
• There are many dependencies between clients and the implementation
classes of an abstraction. Introduce a Façade to decouple the subsystem from
clients and other subsystems, thereby promoting subsystem independence
and portability.
• You want to layer your subsystems. Use a Façade to define an entry point to
each subsystem level. If subsystems are dependent, then you can simplify
the dependencies between them by making them communicate with each
other solely through their Façades.
11. Flyweight
Use sharing to support large numbers of fine-grained objects efficiently.
A Flyweight is a shared object that can be used in multiple contexts simultaneously.
The Flyweight acts as an independent object in each context - it's indistinguishable
from an instance of the object that's not shared. Flyweights cannot make
assumptions about the context in which they operate. The key concept here is the
distinction between intrinsic and extrinsic state. Intrinsic state is stored in the
flyweight; it consists of information that's independent of the flyweight's context,
thereby making it sharable. Extrinsic state depends on and varies with the
flyweight's context and therefore can't be shared. Client objects are responsible for
passing extrinsic state to the flyweight when it needs it.
12. Interpreter
Given a language, define a representation for its grammar along with an interpreter
that uses the representation to interpret sentences in the language.
Use the Interpreter Pattern when there is a language to interpret, and you can
represent statements in the language as abstract syntax trees. The Interpreter
Pattern works best when:
• The grammar is simple. For complex grammars, the class hierarchy for the
grammar becomes large and unmanageable. Tools such as parser generators
are a better alternative in such cases. They can interpret expressions without
building abstract syntax trees, which can save space and possibly time.
• Efficiency is not a critical concern. The most efficient interpreters are usually
not implemented by interpreting parse trees directly but by first translating
them into another form. For example, regular expressions are often
transformed into state machines. But even then, the translator can be
implemented by the Interpreter pattern, so the pattern is still applicable.
13. Iterator
Provide a way to access the elements of an aggregate object sequentially without
exposing its underlying representation. The iterator keeps track of the current
element; that is, it knows which elements have been traversed already.
14. Mediator
Define an object that encapsulates how a set of objects interact. Mediator promotes
loose coupling by keeping objects from referring to each other explicitly, and it lets
you vary their interaction independently.
Object-oriented design encourages the distribution of behavior among objects. Such
distribution can result in an object structure with many connections between objects;
in the worst case, every object ends up knowing about every other.
Though partitioning a system into many objects generally enhances reusability,
proliferating interconnections tend to reduce it again. Lots of interconnections make
it less likely that an object can work without the support of others - the system acts
as though it were monolithic. Moreover, it can be difficult to change the system's
behavior in any significant way, since behavior is distributed among many objects.
As a result, you may be forced to define many subclasses to customize the system's
behavior.
You can avoid these problems by encapsulating collective behavior in a separate
Mediator object. A mediator is responsible for controlling and coordinating the
interactions of a group of objects. The Mediator serves as an intermediary that
keeps objects in the group from referring to each other explicitly. The objects only
know the Mediator, thereby reducing the number of interconnections.
15. Memento
Without violating encapsulation, capture and externalize an object's internal state so
that the object can be restored to this state later.
Sometimes it's necessary to record the internal state of an object. This is required
when implementing checkpoints and undo mechanisms that let users back out of
tentative operations or recover from errors. You must save state information
somewhere so that you can restore objects to their previous states. But objects
normally encapsulate some or all of their state, making it inaccessible to other
objects and impossible to save externally. Exposing this state would violate
encapsulation, which can compromise the application's reliability and extensibility.
We can solve this problem with the Memento pattern. A memento is an object that
stores a snapshot of the internal state of another object - the memento's originator.
The undo mechanism will request a memento from the originator when it needs to
checkpoint the originator's state. The originator initializes the memento with information that characterizes its current state.
16. Observer
Define a one-to-many dependency among objects so that when one object changes
state, all its dependents are notified and updated automatically.
An event occurs and a number of entities need to receive a message about it. When
the event will occur, if it occurs at all, is unpredictable, as are the number and
identity of the interested entities. Ideally, the subscribers to this notification should
be configurable at runtime.
A common side-effect of partitioning a system into a collection of cooperating classes
is the need to maintain consistency between related objects. You don't want to
achieve consistency by making the classes tightly coupled, because that reduces
their reusability.
The Observer Design Pattern describes how to establish these relationships. The
key objects in this pattern are subject and observer. A subject may have any
number of dependent observers. All observers are notified whenever the subject
undergoes a change in state. In response, each observer will query the subject to
synchronize its state with the subject's state.
This kind of interaction is also known as publish-subscribe. The subject is the
publisher of notifications. It sends out these notifications without having to know
who its observers are. Any number of observers can subscribe to receive
notifications.
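The subject/observer relationship described above can be sketched in a few lines of Java (the class names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Observer interface: implemented by anyone interested in state changes.
interface Observer { void update(Subject s); }

// Subject: publishes notifications without knowing who its observers are.
class Subject {
    private final List<Observer> observers = new ArrayList<>();
    private int state;

    void attach(Observer o) { observers.add(o); }
    int getState() { return state; }

    void setState(int newState) {
        state = newState;
        // Notify all dependents; each will query the subject as needed.
        for (Observer o : observers) o.update(this);
    }
}
```

Any number of observers can attach at runtime, which is exactly the publish-subscribe behavior the text describes.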
17. Prototype
Specify the kinds of objects to create using a prototypical instance, and create new
objects by copying this prototype.
A Prototype Design Pattern is a creational design pattern used in software
development when the type of objects to create is determined by a prototypical
instance, which is cloned to produce new objects. This pattern is used for example
when the inherent cost of creating a new object in the standard way (e.g., using the
'new' keyword) is prohibitively expensive for a given application.
To implement the pattern, declare an abstract base class that specifies a pure virtual
clone() method. Any class that needs a "polymorphic constructor" capability
derives itself from the abstract base class, and implements the clone() operation.
The client, instead of writing code that invokes the "new" operator on a hard-wired
class name, calls the clone() method on the prototype, calls a factory method with
a parameter designating the particular concrete derived class desired, or invokes the
clone() method through some mechanism provided by another design pattern.
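A minimal Java sketch of this arrangement (Shape, Circle, and ShapeRegistry are illustrative names) registers prototypical instances and clones them instead of calling `new` on a hard-wired class:

```java
import java.util.HashMap;
import java.util.Map;

// Abstract prototype: declares the polymorphic clone() operation.
abstract class Shape implements Cloneable {
    @Override public Shape clone() {
        try { return (Shape) super.clone(); }            // field-by-field copy
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}

class Circle extends Shape {
    int radius = 10;
}

// Registry of prototypes; clients clone a registered instance by key.
class ShapeRegistry {
    private final Map<String, Shape> prototypes = new HashMap<>();
    void register(String key, Shape prototype) { prototypes.put(key, prototype); }
    Shape create(String key) { return prototypes.get(key).clone(); }
}
```

Note this shallow copy is enough for flat objects; prototypes holding mutable sub-objects would need `clone()` overridden as a deep copy, as the Prototype consequences section discusses later.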
18. Proxy
It provides a placeholder for another object to control access to it.
A proxy, in its most general form, is a class functioning as an interface to another
thing. The other thing could be anything: a network connection, a large object in
memory, a file, or some other resource that is expensive or impossible to duplicate.
Proxy Design Pattern is applicable whenever there is a need for a more versatile
or sophisticated reference to an object than a simple pointer. Here are several
common situations in which the Proxy pattern is applicable:
• A remote proxy provides a local representative for an object in a different
address space.
• A virtual proxy creates expensive objects on demand.
• A protection proxy controls access to the original object.
• A smart reference performs additional actions when an object is accessed.
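Of these, the virtual proxy is the easiest to sketch in Java (Image, RealImage, and ImageProxy are illustrative names): the proxy defers the expensive construction until the object is actually used.

```java
// Subject interface shared by the real object and its proxy.
interface Image { String display(); }

// The expensive-to-create real subject.
class RealImage implements Image {
    static int loads = 0;                  // counts expensive constructions
    RealImage() { loads++; }               // imagine reading a large file here
    public String display() { return "image"; }
}

// Virtual proxy: creates the RealImage only on first use.
class ImageProxy implements Image {
    private RealImage real;                // null until display() is called
    public String display() {
        if (real == null) real = new RealImage();
        return real.display();
    }
}
```

Because both classes share the Image interface, clients cannot tell whether they hold the proxy or the real subject.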
19. Singleton
It ensures a class only has one instance, and provides a global point of access to it.
The singleton pattern is a design pattern that is used to restrict instantiation of a
class to one object. This is useful when exactly one object is needed to coordinate
actions across the system.
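One common thread-safe way to write this in Java is the initialization-on-demand holder idiom (Registry is an illustrative class name):

```java
// Lazily initialized, thread-safe singleton via the holder idiom.
class Registry {
    private Registry() { }   // private constructor blocks outside instantiation

    private static class Holder {
        // Created once, when getInstance() is first called; the JVM's
        // class-initialization guarantees make this thread-safe without locks.
        static final Registry INSTANCE = new Registry();
    }

    static Registry getInstance() { return Holder.INSTANCE; }
}
```

Every caller of `getInstance()` receives the same instance, giving the global point of access the intent statement describes.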
20. State
It allows an object to alter its behavior when its internal state changes. The object
will appear to change its class.
The key idea in this pattern is to introduce an abstract class to represent the possible
states of the object. This class declares an interface common to all the classes that
represent different operational states. The concrete subclasses implement state-
specific behavior. Based on the current state, the appropriate concrete class is
selected and used, so that state-specific behavior can vary independently from
other objects.
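The classic illustration is a turnstile context that delegates each event to its current state object (all names here are illustrative):

```java
// Common interface for all states of the turnstile.
interface TurnstileState {
    TurnstileState coin();   // a coin is inserted
    TurnstileState pass();   // a person pushes through
    String name();
}

class Locked implements TurnstileState {
    public TurnstileState coin() { return new Unlocked(); }
    public TurnstileState pass() { return this; }   // stays locked
    public String name() { return "LOCKED"; }
}

class Unlocked implements TurnstileState {
    public TurnstileState coin() { return this; }   // already unlocked
    public TurnstileState pass() { return new Locked(); }
    public String name() { return "UNLOCKED"; }
}

// Context: behavior changes as its internal state object is rebound.
class Turnstile {
    private TurnstileState state = new Locked();
    void coin() { state = state.coin(); }
    void pass() { state = state.pass(); }
    String status() { return state.name(); }
}
</imports>
```

To an outside observer the Turnstile appears to change its class: the same `coin()` call behaves differently depending on the current state object.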
21. Strategy
Define a family of algorithms, encapsulate each one, and make them
interchangeable. Strategy lets the algorithm vary independently from clients that use
it.
A single behavior with varying implementation exists, and we want to decouple
consumers of this behavior from any particular implementation. We may also want to
decouple them from the fact that the implementation is varying at all.
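A small Java sketch of this decoupling (PricingStrategy and the concrete strategies are illustrative names) keeps the context ignorant of which algorithm it holds:

```java
// The family of interchangeable algorithms.
interface PricingStrategy { double price(double base); }

class RegularPricing implements PricingStrategy {
    public double price(double base) { return base; }
}

class SalePricing implements PricingStrategy {
    public double price(double base) { return base * 0.8; }  // 20% off
}

// Context: decoupled from which concrete strategy it was given.
class Checkout {
    private final PricingStrategy strategy;
    Checkout(PricingStrategy s) { strategy = s; }
    double total(double base) { return strategy.price(base); }
}
```

Swapping the algorithm means passing a different strategy to the constructor; Checkout itself never changes.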
22. Template Method
Define the skeleton of an algorithm in an operation, deferring some steps to
subclasses. Template Method lets subclasses redefine certain steps of an algorithm
without changing the algorithm's structure.
A template method defines the skeleton of an algorithm and calls
abstract methods to implement real actions. Thus the general algorithm is saved in
one place but the concrete steps may be changed by the subclasses.
The Template Method thus manages the larger picture of task semantics, and more
refined implementation details of selection and sequence of methods. This larger
picture calls abstract and non-abstract methods for the task at hand. The non-
abstract methods are completely controlled by the Template Method. The expressive
power and degrees of freedom occur in abstract methods that may be implemented
in subclasses. Some or all of the abstract methods can be specialized in a subclass;
the abstract method is the smallest unit of granularity, allowing the writer of the
subclass to provide particular behavior with minimal modifications to the larger
semantics. In contrast the Template Method need not be changed and is not an
abstract operation and thus may guarantee required steps before and after the
abstract operations. Thus the Template Method is invoked and as a consequence the
subordinate non-abstract methods and abstract methods are called in the correct
sequence.
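This inverted control flow can be sketched in Java as follows (Report and SalesReport are illustrative names; `report()` is the template method):

```java
// The template method fixes the overall algorithm; subclasses fill in steps.
abstract class Report {
    // Template method: final, so the skeleton cannot be altered by subclasses.
    final String report() {
        return header() + body() + footer();
    }
    String header() { return "== report ==\n"; }   // non-abstract, controlled here
    abstract String body();                        // the subclass-provided step
    String footer() { return "== end =="; }
}

class SalesReport extends Report {
    String body() { return "sales: 42\n"; }
}
```

The parent class calls `body()` on the subclass, not the other way around: "don't call us, we'll call you."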
23. Visitor
It represents an operation to be performed on the elements of an object structure.
Visitor lets you define a new operation without changing the classes of the elements
on which it operates.
The Visitor Design Pattern is a way of separating an algorithm from an object
structure. A practical result of this separation is the ability to add new operations to
existing object structures without modifying those structures.
The idea is to use a structure of element classes, each of which has an accept()
method that takes a visitor object as an argument. Visitor is an interface that has a
visitXXX() method for each element class. The accept() method of an element
class calls back the visitXXX() method for its class. Separate concrete visitor
classes can then be written that perform some particular operations:
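A minimal Java sketch of the double dispatch described above (the document-element names are illustrative) adds a word-counting operation without touching the element classes:

```java
// Visitor interface: one visit method per element class.
interface DocVisitor {
    void visit(Paragraph p);
    void visit(Image i);
}

interface DocElement { void accept(DocVisitor v); }

class Paragraph implements DocElement {
    String text = "hello world";
    public void accept(DocVisitor v) { v.visit(this); }  // double dispatch
}

class Image implements DocElement {
    public void accept(DocVisitor v) { v.visit(this); }
}

// A new operation, defined entirely outside the element classes.
class WordCountVisitor implements DocVisitor {
    int words = 0;
    public void visit(Paragraph p) { words += p.text.split("\\s+").length; }
    public void visit(Image i) { /* images contribute no words */ }
}
```

Adding another operation (say, rendering) means writing another DocVisitor implementation; Paragraph and Image stay unchanged.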
Adding new element classes, however, is
costly. If the object structure classes change often, then it's probably better
to define the operations in those classes.
Ref. • [DESIGN_PATTERNS]
#   OO Pattern        Definition
1   Abstract Factory  To provide an interface for creating families of related or
                      dependent objects without specifying their concrete classes.
2   Adapter           To convert the interface of a class into another interface clients
                      expect.
10  Factory Method    To define an interface for creating an object, but let subclasses
                      decide which class to instantiate. Factory Method lets a class
                      defer instantiation to subclasses.
15  Memento           Without violating encapsulation, to capture and externalize an
                      object's internal state so that the object can be restored to this
                      state later.
16  Observer          To define a one-to-many dependency between objects so that
                      when one object changes state, all its dependents are notified
                      and updated automatically.
20  State             To allow an object to alter its behavior when its internal state
                      changes.
1. Abstract Factory
• It isolates concrete classes. The Abstract Factory pattern helps you control
the classes of objects that an application creates. Because a factory
encapsulates the responsibility and the process of creating product objects, it
isolates clients from implementation classes. Clients manipulate instances
through their abstract interfaces. Product class names are isolated in the
implementation of the concrete factory; they do not appear in client code.
• It makes exchanging product families easy. The class of a concrete factory
appears only once in an application - that is, where it's instantiated. This
makes it easy to change the concrete factory an application uses. It can use
different product configurations simply by changing the concrete factory.
Because an abstract factory creates a complete family of products, the whole
product family changes at once.
• It promotes consistency among products. When product objects in a family
are designed to work together, it's important that an application use objects
from only one family at a time. AbstractFactory makes this easy to enforce.
• When we use the Abstract Factory we gain protection from illegitimate
combinations of service objects. This means we can design the rest of the
system for maximum flexibility, since we know that the Abstract Factory will
eliminate any concerns of the flexibility yielding bugs. Also, the consuming
entity (Client) or entities will be incrementally simpler, since they can deal
with the components at the abstract level.
The drawbacks are:
• Supporting new kinds of products is difficult. Extending abstract factories to
produce new kinds of Products isn't easy. That's because the AbstractFactory
interface fixes the set of products that can be created. Supporting new kinds
of products requires extending the factory interface, which involves changing
the AbstractFactory class and all of its subclasses.
• As with factories in general, the Abstract Factory's responsibility is limited to
the creation of instances, and thus the testable issue is whether or not the
right set of instances is created under a given circumstance.

2. Adapter
• If the construction of the foreign class was not encapsulated (which
is common), the Adapter can encapsulate it in its constructor.
However, an object factory is preferred.
Class and Object Adapters have different trade-offs.
A Class Adapter:
• adapts Adaptee to Target by committing to a concrete Adapter class. As a
consequence, a Class Adapter won't work when we want to adapt a class and
all its subclasses.
• lets Adapter override some of Adaptee's behavior, since Adapter is a subclass
of Adaptee.
• introduces only one object, and no additional pointer indirection is needed to
get to the adaptee.
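The alternative, an Object Adapter, holds the adaptee by composition rather than subclassing it, so it works for the adaptee and all its subclasses. A minimal Java sketch (sensor names are illustrative):

```java
// Target interface the client expects.
interface Celsius { double temperature(); }

// Adaptee with an incompatible interface.
class FahrenheitSensor {
    double readF() { return 212.0; }
}

// Object adapter: wraps the adaptee and converts calls to the target form.
class SensorAdapter implements Celsius {
    private final FahrenheitSensor adaptee;
    SensorAdapter(FahrenheitSensor adaptee) { this.adaptee = adaptee; }
    public double temperature() {
        return (adaptee.readF() - 32) * 5.0 / 9.0;  // Fahrenheit -> Celsius
    }
}
```

The extra pointer indirection mentioned above is visible here: every call goes through the `adaptee` reference, which a Class Adapter would avoid by inheriting instead.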
3. Bridge
• Decoupling interface and implementation. An implementation is not bound
permanently to an interface. The implementation of an abstraction can be
configured at run-time. It's even possible for an object to change its
implementation at run-time.
Decoupling Abstraction and Implementor also eliminates compile-time
dependencies on the implementation. Changing an implementation class
doesn't require recompiling the Abstraction class and its clients. This
property is essential when you must ensure binary compatibility between
different versions of a class library.
• Improved extensibility. You can extend the Abstraction and Implementor
hierarchies independently.
• Hiding implementation details from clients. You can shield clients from
implementation details, like the sharing of implementor objects and the
accompanying reference count mechanism (if any).
• The Behavior Classes will probably be testable on their own (unless they are
Adapters and/or Façades, in which case see the testing forces accompanying
those patterns). However the entity classes are dependent upon behaviors,
and so a Mock or Fake object can be used to control the returns from these
dependencies, and also to check on the action taken upon the behavior by
the entity, if this is deemed an appropriate thing to test.
• The Bridge creates flexibility because the entities and behaviors can each
vary without necessarily affecting the other.
• Both the Entities and Behaviors are open-closed, if we build the bridge in an
object factory, which is recommended.
• If the Entities are highly orthogonal from one another, the Behavior interface
will tend to be broad.
• The interface of the Behavior can require changes over time, which can
cause maintenance problems. Specifically, if new Entities that may be added
to the system in the future are unlikely to be satisfied with the current
Behavior interface, then this interface may bloat, requiring potentially
extensive maintenance.
• The delegation from the Entities to the Behaviors can degrade performance.
4. Builder
Benefits:
• It lets you vary a product's internal representation. The Builder object
provides the director with an abstract interface for constructing the product.
The interface lets the builder hide the representation and internal structure
of the product. It also hides how the product gets assembled. Because the
product is constructed through an abstract interface, all you have to do to
change the product's internal representation is define a new kind of builder.
5. Chain of Responsibility
• Reduced coupling. The pattern frees an object from knowing which other
object handles a request. An object only has to know that a request will be
handled "appropriately." Both the receiver and the sender have NO explicit
knowledge of each other, and an object in the chain doesn't have to know
about the chain's structure.
As a result, Chain of Responsibility can simplify object interconnections.
Instead of objects maintaining references to all candidate receivers, they
keep a single reference to their successor.
• Added flexibility in assigning responsibilities to objects. Chain of
Responsibility gives you added flexibility in distributing responsibilities
among objects. You can add or change responsibilities for handling a request
by adding to or otherwise changing the chain at run-time. You can combine
this with subclassing to specialize handlers statically.
The drawbacks are:
• Receipt isn't guaranteed. Since a request has no explicit receiver, there's no
guarantee it'll be handled—the request can fall off the end of the chain
without ever being handled. A request can also go unhandled when the chain
is not configured properly.
• The chain may get lengthy, and may introduce performance problems.
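The single-successor structure described above can be sketched in Java (the logging-handler names are illustrative), including the case where a request falls off the end of the chain:

```java
// Each handler keeps a single reference to its successor.
abstract class Handler {
    private final Handler next;
    Handler(Handler next) { this.next = next; }

    String handle(int level) {
        if (canHandle(level)) return name();
        if (next != null) return next.handle(level);  // pass along the chain
        return "unhandled";  // receipt is not guaranteed: fell off the end
    }
    abstract boolean canHandle(int level);
    abstract String name();
}

class InfoHandler extends Handler {
    InfoHandler(Handler next) { super(next); }
    boolean canHandle(int level) { return level <= 1; }
    String name() { return "info"; }
}

class ErrorHandler extends Handler {
    ErrorHandler(Handler next) { super(next); }
    boolean canHandle(int level) { return level >= 3; }
    String name() { return "error"; }
}
```

The sender only knows the head of the chain; neither sender nor receiver knows the chain's full structure.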
6. Command
The Command pattern has the following consequences:
• Command decouples the object that invokes the operation from the one that
knows how to perform it.
• Commands are first-class objects. They can be manipulated and extended
like any other object.
• You can assemble commands into a composite command. In general,
composite commands are an instance of the Composite pattern.
• It's easy to add new Commands, because you don't have to change existing
classes.
7. Composite
• defines class hierarchies consisting of primitive objects and composite
objects. Primitive objects can be composed into more complex objects, which
in turn can be composed, and so on recursively. Wherever client code expects
a primitive object, it can also take a composite object.
8. Decorator
The Decorator Pattern has at least two key benefits and two liabilities:
• More flexibility than static inheritance. The Decorator pattern provides a
more flexible way to add responsibilities to objects than can be had with
static (multiple) inheritance. With decorators, responsibilities can be added
and removed at run-time simply by attaching and detaching them. In
contrast, inheritance requires creating a new class for each additional
responsibility. This gives rise to many classes and increases the complexity
of a system. Furthermore, providing different Decorator classes for a specific
Target Abstraction class lets you mix and match responsibilities.
Decorators also make it easy to add a property twice.
• Avoids feature-laden classes high up in the hierarchy. Decorator offers a
pay-as-you-go approach to adding responsibilities. Instead of trying to
support all foreseeable features in a complex, customizable class, you can
define a simple class and add functionality incrementally with Decorator
objects. Functionality can be composed from simple pieces. As a result, an
application needn't pay for features it doesn't use. It's also easy to define
new kinds of Decorators independently from the classes of objects they
extend, even for unforeseen extensions. Extending a complex class tends to
expose details unrelated to the responsibilities you're adding.
The drawbacks are:
• A decorator and its component aren't identical. A decorator acts as a
transparent enclosure. But from an object identity point of view, a decorated
component is not identical to the component itself. Hence you shouldn't rely
on object identity when you use decorators.
• Lots of little objects. A design that uses Decorator often results in systems
composed of lots of little objects that all look alike. The objects differ only in
the way they are interconnected, not in their class or in the value of their
variables. Although these systems are easy to customize by those who
understand them, they can be hard to learn and debug.
9. Façade
The Façade pattern offers the following benefits:
• It shields clients from subsystem components, thereby reducing the number
of objects that clients deal with and making the subsystem easier to use.
11. Flyweight
The drawback of flyweights is that they may introduce run-time costs associated with
transferring, finding, and/or computing extrinsic state, especially if it was formerly
stored as intrinsic state. However, such costs are offset by space savings, which
increase as more flyweights are shared.
Storage savings are a function of several factors:
• the reduction in the total number of instances that comes from sharing.
• the amount of intrinsic state per object.
• whether extrinsic state is computed or stored.
The more flyweights are shared, the greater the storage savings. The savings
increase with the amount of shared state. The greatest savings occur when the
objects use substantial quantities of both intrinsic and extrinsic state, and the
extrinsic state can be computed rather than stored. Then you save on storage in two
ways: Sharing reduces the cost of intrinsic state, and you trade extrinsic state for
computation time.
13. Iterator
The Iterator pattern has three important consequences:
• It supports variations in the traversal of an aggregate. Complex aggregates
may be traversed in many ways. For example, code generation and semantic
checking involve traversing parse trees. Code generation may traverse the
parse tree inorder or preorder. Iterators make it easy to change the traversal
algorithm: Just replace the iterator instance with a different one. You can
also define Iterator subclasses to support new traversals.
• Iterators simplify the Aggregate interface. Iterator's traversal interface
obviates the need for a similar interface in Aggregate, thereby simplifying
the aggregate's interface.
• More than one traversal can be pending on an aggregate. An iterator keeps
track of its own traversal state. Therefore you can have more than one
traversal in progress at once.
14. Mediator
The Mediator pattern has the following benefits and drawbacks:
• It limits subclassing. A mediator localizes behavior that otherwise would be
distributed among several objects. Changing this behavior requires
subclassing Mediator only; Colleague classes can be reused as is.
15. Memento
The Memento Design Pattern has several consequences:
• Preserving encapsulation boundaries. Memento avoids exposing information
that only an originator should manage but that must be stored nevertheless
outside the originator. The pattern shields other objects from potentially
complex Originator internals, thereby preserving encapsulation boundaries.
• It simplifies Originator. In other encapsulation-preserving designs, Originator
keeps the versions of internal state that clients have requested. That puts all
the storage management burden on Originator. Having clients manage the
state they ask for simplifies Originator and keeps clients from having to
notify originators when they're done.
• Using mementos might be expensive. Mementos might incur considerable
overhead if Originator must copy large amounts of information to store in the
memento or if clients create and return mementos to the originator often
enough. Unless encapsulating and restoring Originator state is cheap, the
pattern might not be appropriate.
• Defining narrow and wide interfaces. It may be difficult in some languages to
ensure that only the originator can access the memento's state.
• Hidden costs in caring for mementos. A caretaker is responsible for deleting
the mementos it cares for. However, the caretaker has no idea how much
state is in the memento. Hence an otherwise lightweight caretaker might
incur large storage costs when it stores mementos.
17. Prototype
Prototype has many of the same consequences that Abstract Factory and Builder
have: It hides the concrete product classes from the client, thereby reducing the
number of names clients know about. Moreover, these patterns let a client work with
application-specific classes without modification.
Additional benefits of the Prototype Design Pattern are listed below:
• Adding and removing products at run-time. Prototypes let you incorporate a
new concrete product class into a system simply by registering a prototypical
instance with the client. That's a bit more flexible than other creational
patterns, because a client can install and remove prototypes at run-time.
• Specifying new objects by varying values. Highly dynamic systems let you
define new behavior through object composition - by specifying values for an
object's variables, for example - and not by defining new classes. You
effectively define new kinds of objects by instantiating existing classes and
registering the instances as prototypes of client objects. A client can exhibit
new behavior by delegating responsibility to the prototype.
This kind of design lets users define new "classes" without programming. In
fact, cloning a prototype is similar to instantiating a class. The Prototype
pattern can greatly reduce the number of classes a system needs.
• Specifying new objects by varying structure. Many applications build objects
from parts and subparts. Editors for circuit design, for example, build circuits
out of subcircuits. For convenience, such applications often let you
instantiate complex, user-defined structures, say, to use a specific subcircuit
again and again.
The Prototype pattern supports this as well. We simply add this subcircuit as
a prototype to the palette of available circuit elements. As long as the
composite circuit object implements clone() as a deep copy, circuits with
different structures can be prototypes.
18. Proxy
The Proxy pattern introduces a level of indirection when accessing an object. The
additional indirection has many uses, depending on the kind of proxy:
• A remote proxy can hide the fact that an object resides in a different address
space.
• A virtual proxy can perform optimizations such as creating an object on
demand.
• Both protection proxies and smart references allow additional housekeeping
tasks when an object is accessed.
There's another optimization that the Proxy pattern can hide from the client. It's
called copy-on-write, and it's related to creation on demand. Copying a large and
complicated object can be an expensive operation. If the copy is never modified,
then there's no need to incur this cost. By using a proxy to postpone the copying
process, we ensure that we pay the price of copying the object only if it's modified.
To make copy-on-write work, the subject must be reference counted. Copying the
proxy will do nothing more than increment this reference count. Only when the client
requests an operation that modifies the subject does the proxy actually copy it. In
that case the proxy must also decrement the subject's reference count. When the
reference count goes to zero, the subject gets deleted.
Copy-on-write can reduce the cost of copying heavyweight subjects significantly.
• Proxies promote strong cohesion.
• Proxies simplify the client object and the object being proxied (by hiding
complex issues like remoting and caching, etc.)
19. Singleton
The Singleton Design Pattern has several benefits:
• Controlled access to sole instance. Because the Singleton class encapsulates
its sole instance, it can have strict control over how and when clients access
it.
• Reduced name space. The Singleton pattern is an improvement over global
variables. It avoids polluting the name space with global variables that store
sole instances.
• Permits refinement of operations and representation. The Singleton class
may be subclassed, and it's easy to configure an application with an instance
of this extended class. You can configure the application with an instance of
the class you need at run-time.
• Permits a variable number of instances. The pattern makes it easy to change
your mind and allow more than one instance of the Singleton class.
Moreover, you can use the same approach to control the number of
instances that the application uses. Only the operation that grants access to
the Singleton instance needs to change.
• More flexible than class operations. Another way to package a singleton's
functionality is to use class operations (that is, static member functions in
C++ or class methods in Smalltalk). But both of these language techniques
make it hard to change a design to allow more than one instance of a class.
Moreover, static member functions in C++ are never virtual, so subclasses
can't override them polymorphically.
20. State
The State Design Pattern has the following consequences:
• It localizes state-specific behavior and partitions behavior for different states.
The State pattern puts all behavior associated with a particular state into one
object. Because all state-specific code lives in a State subclass, new states
and transitions can be added easily by defining new subclasses.
• It makes state transitions explicit. When an object defines its current state
solely in terms of internal data values, its state transitions have no explicit
representation; they only show up as assignments to some variables.
Introducing separate objects for different states makes the transitions more
explicit. Also, State objects can protect the Context from inconsistent
internal states, because state transitions are atomic from the Context's
perspective - they happen by rebinding one variable (the Context's State
object variable), not several.
• State objects can be shared. If State objects have no instance variables -
that is, the state they represent is encoded entirely in their type - then
contexts can share a State object. When states are shared in this way, they
are essentially flyweights with no intrinsic state, only behavior.
21. Strategy
The Strategy Design Pattern has the following consequences:
• Increased number of objects. Strategies increase the number of objects in an
application. Sometimes you can reduce this overhead by implementing
strategies as stateless objects that contexts can share. Any residual state is
maintained by the context, which passes it in each request to the Strategy
object. Shared strategies should not maintain state across invocations. The
Flyweight pattern describes this approach in more detail.
22. Template Method
Template methods are a fundamental technique for code reuse. They are particularly
important in class libraries, because they are the means for factoring out common
behavior in library classes.
Template methods lead to an inverted control structure that's sometimes referred to
as "the Hollywood principle," that is, "Don't call us, we'll call you". This refers to how
a parent class calls the operations of a subclass and not the other way around.
23. Visitor
Some of the benefits and liabilities of the Visitor pattern are as follows:
• Visitor makes adding new operations easy. Visitors make it easy to add
operations that depend on the components of complex objects. You can
define a new operation over an object structure simply by adding a new
visitor. In contrast, if you spread functionality over many classes, then you
must change each class to define a new operation.
• A visitor gathers related operations and separates unrelated ones. Related
behavior isn't spread over the classes defining the object structure; it's
localized in a visitor. Unrelated sets of behavior are partitioned in their own
visitor subclasses. That simplifies both the classes defining the elements and
the algorithms defined in the visitors. Any algorithm-specific data structures
can be hidden in the visitor.
• Visiting across class hierarchies. An iterator can visit the objects in a
structure as it traverses them by calling their operations. But an iterator
can't work across object structures with different types of elements.
Visitor does not have this restriction. It can visit objects that don't have a
common parent class. You can add any type of object to a Visitor interface.
• Accumulating state. Visitors can accumulate state as they visit each element
in the object structure. Without a visitor, this state would be passed as extra
arguments to the operations that perform the traversal, or they might
appear as global variables.
• Breaking encapsulation. Visitor's approach assumes that the
ConcreteElement interface is powerful enough to let visitors do their job. As
a result, the pattern often forces you to provide public operations that access
an element's internal state, which may compromise its encapsulation.
The drawback is:
• Adding new ConcreteElement classes is hard. The Visitor pattern makes it
hard to add new subclasses of Element. Each new ConcreteElement gives
rise to a new abstract operation on Visitor and a corresponding
implementation in every ConcreteVisitor class. Sometimes a default
implementation can be provided in Visitor that can be inherited by most of
the ConcreteVisitors, but this is the exception rather than the rule.
So the key consideration in applying the Visitor pattern is whether you are
most likely to change the algorithm applied over an object structure or the
classes of objects that make up the structure. The Visitor class hierarchy can
be difficult to maintain when new ConcreteElement classes are added
frequently. In such cases, it's probably easier just to define operations on the
classes that make up the structure. If the Element class hierarchy is stable,
but you are continually adding operations or changing algorithms, then the
Visitor pattern will help you manage the changes.
7.4 From a list, select the benefits and drawbacks of a specified Core J2EE
pattern drawn from this book: Alur, Crupi and Malks (2003), Core J2EE Patterns.
Ref. • [CORE_J2EE_PATTERNS]
Presentation Tier
6   Composite View      To build a view from modular, atomic component parts that are
                        combined to create a composite whole, while managing the
                        content and the layout independently.
8   Dispatcher View     For a view to handle a request and generate a response, while
                        managing limited amounts of business processing.
Business Tier
9   Business Delegate   To hide the details of service creation, reconfiguration, and
                        invocation retries from the clients.
14  Composite Entity    To encapsulate the physical database design from the clients.
17  Value List Handler  To provide the clients with an efficient search and iterate
                        mechanism over a large results set.
Integration Tier
21  Web Service Broker  To provide access to one or more services using XML and web
                        protocols.
1. Intercepting Filter
• Centralizes control with loosely coupled handlers
Filters provide a central place for handling processing across multiple
requests, as does a controller. Filters are better suited to massaging
requests and responses for ultimate handling by a target resource, such as a
controller. Additionally, a controller often ties together the management of
numerous unrelated common services, such as authentication, logging,
encryption, and so forth. Filtering allows for much more loosely coupled
handlers, which can be combined in various permutations.
• Improves reusability
Filters promote cleaner application partitioning and encourage reuse. You can
transparently add or remove these pluggable interceptors from existing code,
and due to their standard interface, they work in any permutations and are
reusable for varying presentations.
• Declarative and flexible configuration
Numerous services are combined in varying permutations without a single
recompile of the core code base.
• Information sharing is inefficient
Sharing information between filters can be inefficient, since by definition
each filter is loosely coupled. If large amounts of information must be shared
between filters, then this approach might prove to be costly.
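In the web tier this pattern is realized by servlet filters; the core mechanics can be sketched in plain Java without the servlet API (all types here are illustrative stand-ins for Filter, FilterChain, and the target resource):

```java
import java.util.ArrayList;
import java.util.List;

// A pluggable interceptor; it may massage the request before delegating.
interface Filter { void process(StringBuilder request, FilterChain chain); }

// The target resource, e.g. a controller, reached after all filters run.
interface Target { void execute(StringBuilder request); }

// Runs the filters in order, then hands off to the target.
class FilterChain {
    private final List<Filter> filters = new ArrayList<>();
    private final Target target;
    private int pos = 0;
    FilterChain(Target target) { this.target = target; }
    FilterChain add(Filter f) { filters.add(f); return this; }

    void process(StringBuilder request) {
        if (pos < filters.size()) filters.get(pos++).process(request, this);
        else target.execute(request);   // all filters done; hit the target
    }
}
```

Filters stay loosely coupled: each one only decides whether to continue the chain, and they can be added or removed without touching the target or each other.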
2. Front Controller
• Centralizes control
A controller provides a central place to handle control logic that is common
across multiple requests. A controller is the initial access point of the request
handling mechanism and delegates to an Application Controller to perform
the underlying business processing and view generation functionality.
• Improves manageability
Centralizing control makes it easier to monitor control flow; it also provides
a choke point for illicit attempts to access the application. In addition,
auditing a single entrance into the application requires fewer resources than
distributing checks across all pages.
• Improves reusability
Promotes cleaner application partitioning and encourages reuse, as common
code moves into a controller or is managed/delegated to by a controller.
• Improves role separation
A controller promotes cleaner separation of team roles, since one role
(software developer) can more easily maintain programming logic while
another (web production) maintains markup for view generation.
3. Context Object
• Improves reusability and maintainability
Application components and subsystems are more generic and can be reused
for various types of clients, since the application interfaces are not polluted
with protocol-specific data types.
• Improves testability
Using Context Objects helps remove dependencies on protocol-specific code
that might tie a runtime environment to a container, such as a web server or
an application server. Testing is easier when such dependencies are limited
or removed, since automated testing tools, such as JUnit, can work directly
with Context Objects.
• Reduces constraints on evolution of interfaces
Interfaces that accept a Context Object, instead of the numerous objects
that the Context Object encapsulates, are less tied to these specific details
that might constrain later changes. This is important when developing
frameworks, but is also valuable in general.
• Reduces performance
There is a modest performance hit, because state is transferred from one
object to another. This reduction in performance is usually far outweighed by
the benefits of improved reusability and maintainability of the application
subcomponents.
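A minimal sketch of the idea, assuming a servlet-style parameter map as the protocol-specific input (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** Context Object sketch: protocol-specific request state is copied into a
 *  protocol-neutral object, so business code never sees servlet/HTTP types. */
public class RequestContext {
    private final Map<String, String> values = new HashMap<>();

    /** Factory simulating creation from a protocol-specific parameter map. */
    public static RequestContext from(Map<String, String[]> httpParams) {
        RequestContext ctx = new RequestContext();
        httpParams.forEach((k, v) -> ctx.values.put(k, v.length > 0 ? v[0] : null));
        return ctx;
    }

    public String get(String key) { return values.get(key); }
}
```

Downstream components depend only on RequestContext, so the same logic can serve a different protocol by adding another factory method.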
4. Application Controller
• Improves modularity
Separating common action and view management code into its own set of
classes makes the application more modular. This modularity might also ease
testing, since aspects of the Application Controller functionality will not be
tied to a web container.
• Improves reusability
5. View Helper
• Improves application partitioning, reuse, and maintainability
Using helpers to adapt the model for the view can reduce the
implementation details that are embedded directly within the
page. It is important to keep in mind, though, that it is not a panacea to
simply use JavaBeans or custom tags within your JSP. The use of certain
generic helpers only replaces the embedded Java code with references to
helpers that, in effect, produce the same problem of exposing the
implementation details, as opposed to the intent of the code.
An example is the use of a conditional helper, such as a custom tag that
models the conditional logic of an 'if' statement. Heavy usage of this sort of
helper tag may simply mirror the scriptlet code that it is intended to replace.
As a result, the fragment continues to look like programming logic
embedded within the page. Using helpers as scriptlets is a bad practice,
although it is often done in an attempt to apply View Helper.
6. Composite View
7. Service to Worker
• Centralizes control and improves modularity, reusability, and maintainability
Centralizing control and request-handling logic improves the system's
modularity and reusability. Common request processing code can be reused,
reducing the sort of duplication that occurs if processing logic is embedded
within views. Less duplication means improved maintainability, since changes
are made in a single location.
• Improves role separation
Centralizing control and request-handling logic separates it from view
creation code and allows for a cleaner separation of team roles. Software
developers can focus on maintaining programming logic while page authors
can focus on the view.
8. Dispatcher View
• Leverages frameworks and libraries
Frameworks and libraries realize and support specific patterns. The
Dispatcher View approach is supported in standard and custom libraries.
9. Business Delegate
• Reduces coupling, improves maintainability
The Business Delegate reduces coupling between the presentation tier and
the business tier by hiding all business-tier implementation details. Managing
changes is easier because they are centralized in the Business Delegate.
• Translates business service exceptions
The Business Delegate translates network or infrastructure-related
exceptions into business exceptions, shielding clients from the knowledge of
the underlying implementation specifics.
• Improves availability
When a Business Delegate encounters a business service failure, the
delegate can implement automatic recovery features without exposing the
problem to the client. If the recovery succeeds, the client doesn't need to
know about the failure. If the recovery attempt fails, then the Business
Delegate needs to inform the client of the failure. Additionally, the Business
Delegate methods can be synchronized, if necessary.
• Exposes a simpler, uniform interface to the business tier
The Business Delegate is implemented as a simple Java object, making it
easier for application developers to use business-tier components without
dealing with the complexities of the business-service implementations.
• Improves performance
The Business Delegate can cache information on behalf of the presentation-
tier components to improve performance for common service requests.
• Introduces an additional layer
The Business Delegate adds a layer that might be seen as increasing
complexity and decreasing flexibility. However, the benefits of the pattern
outweigh such drawbacks.
• Hides remoteness
Location transparency is a benefit of this pattern, but it can lead to problems
if you don't keep in mind where the Business Delegate resides. A Business
Delegate is a client-side proxy to a remote service. Even though a Business
Delegate is implemented as a local POJO, when you call a method on a
Business Delegate, the Business Delegate typically has to make a call across
the network to the underlying business service to fulfill this request.
Therefore, try to keep calls to the Business Delegate to a minimum to
prevent excess network traffic.
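A minimal sketch of a delegate that hides lookup and remote-call details and translates failures into a business exception (the service and exception names are invented for illustration):

```java
/** Business Delegate sketch: the presentation tier calls this local POJO;
 *  infrastructure failures surface as a business exception. */
public class OrderDelegate {

    /** Stand-in for a remote business service (e.g. a session bean). */
    interface OrderService { String findOrder(String id) throws Exception; }

    public static class BusinessException extends RuntimeException {
        public BusinessException(String msg, Throwable cause) { super(msg, cause); }
    }

    private final OrderService service;
    public OrderDelegate(OrderService service) { this.service = service; }

    public String findOrder(String id) {
        try {
            return service.findOrder(id);   // typically a call across the network
        } catch (Exception e) {             // e.g. a RemoteException
            throw new BusinessException("Order lookup failed", e);
        }
    }
}
```

A caching or retry policy could be added inside findOrder without any change to the presentation-tier code.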
11. Session Façade
• Reduces complexity
A Session Façade centralizes complex business
component interactions and presents the client with a simpler coarse-grained
service-layer interface to the system that is easy to understand and use. In
addition, by providing a Business Delegate for each Session Façade, you can
addition, by providing a Business Delegate for each Session Façade, you can
make it easier for client-side developers to leverage the power of Session
Façades.
• Reduces coupling between the tiers
Using a Session Façade decouples the business components from the clients,
and reduces tight coupling and dependency between the presentation and
business tiers. You can additionally implement Application Services to
encapsulate the complex business logic that acts on several Business
Objects. Instead of implementing the business logic, the Session Façades can
delegate the business logic to Application Services to implement.
• Promotes layering, increases flexibility and maintainability
12. Application Service
• Centralizes reusable business and workflow logic
Application Services create a layer of services encapsulating the Business
Objects layer. This creates a centralized layer that encapsulates common
business logic acting upon multiple Business Objects.
• Improves reusability of business logic
Application Services create a set of reusable components that can be reused
across various use case implementations. Application Services encapsulate
inter-Business Object operations.
• Avoids duplication of code
13. Business Object
Business objects act as a centralized object model to all clients in an
application. You can build various services on top of Business Object, which
can also use other services such as persistence, business rules, integration,
and so forth. This facilitates separation of concerns in a multi-tiered
application and facilitates service-oriented architecture.
• POJO implementations can induce, and are susceptible to, stale data
When you implement Business Objects as POJOs in a distributed multi-tier
application, a Business Object might end up instantiated in multiple VMs or
containers. The application is responsible for ensuring that these multiple
instances maintain consistency and integrity of the business data. This might
require synchronization of state among the instances, and between the
instances and the data store, to guarantee the integrity of the business data
and avoid stale data. On the other hand, when you implement the Business
Objects as entity beans, the container handles the creation, synchronization,
and persistence of the business data.
14. Composite Entity
• Reduces database schema dependency
Composite Entity provides an object view of the data in the database. The
database schema is hidden from the clients, since the mapping of the entity
bean to the schema is internal to the Composite Entity. Changes to the
database schema might require changes to the Composite Entity beans.
However, the clients are not affected since the Composite Entity beans do
not expose the schema to the external world.
• Increases object granularity
With a Composite Entity, the client typically looks up the parent entity bean
instead of locating numerous fine-grained dependent entity beans. The
parent entity bean acts as a Facade [GoF] to the dependent objects and
hides the complexity of dependent objects by exposing a simpler interface.
Composite Entity avoids fine-grained method invocations on the dependent
objects, decreasing the network overhead.
16. Transfer Object Assembler
• Separates business logic, simplifies client logic
When the client includes logic to manage the interactions with distributed
components, clearly separating business logic from the client tier becomes
difficult. The Transfer Object Assembler contains the business logic to
maintain the object relationships and to construct the composite transfer
object representing the model. The client doesn't need to know how to
construct the model or know about the different components that provide
data to assemble the model.
• Reduces coupling between clients and the application model
The Transfer Object Assembler hides the complexity of the construction of
model data from the clients and reduces coupling between clients and the
model. With loose coupling, if the model changes, then the Transfer Object
Assembler requires a corresponding change and insulates the clients from
this change.
17. Value List Handler
• Improves network performance
Network performance improves because only a requested subset of the
results, rather than the entire result set, is sent to the client on demand. If
the client/user displays the first few results and then abandons the query,
the network bandwidth is not wasted, since the data is cached on the server
side and never sent to the client.
• Allows deferring entity bean transactions
Caching results on the server side and minimizing finder overhead might
improve transaction management. For example, a query to display a list of
books uses a Value List Handler to obtain the list without using the Book
entity bean's finder methods. At a later point, when the user wants to modify
a book in detail, the client invokes a Session Façade that locates the required
Book entity bean instance with appropriate transaction semantics as needed
for this use-case.
• Promotes layering and separation of concerns
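The caching-and-paging behavior described above can be sketched as follows (class and method names are illustrative):

```java
import java.util.List;

/** Value List Handler sketch: results are fetched once, cached server-side,
 *  and handed to the client one page at a time. */
public class ValueListHandler {
    private final List<String> cachedResults;   // fetched once, e.g. via a DAO
    private int position = 0;

    public ValueListHandler(List<String> results) { this.cachedResults = results; }

    /** Returns the next page of at most pageSize items. */
    public List<String> nextPage(int pageSize) {
        int end = Math.min(position + pageSize, cachedResults.size());
        List<String> page = cachedResults.subList(position, end);
        position = end;
        return page;
    }
}
```

If the user abandons the query after the first page, the remaining cached rows are simply never transmitted.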
18. Data Access Object
• Reduces code complexity in clients
Since the DAOs encapsulate all the code necessary to interact with the
persistent storage, the clients can use the simpler API exposed by the data
access layer. This reduces the complexity of the data access client code and
improves the maintainability and development productivity.
• Organizes all data access code into a separate layer
Data access objects organize the implementation of the data access code in a
separate layer. Such a layer isolates the rest of the application from the
persistent store and external data sources. Because all data access
operations are now delegated to the DAOs, the separate data access layer
isolates the rest of the application from the data access implementation. This
centralization makes the application easier to maintain and manage.
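A minimal DAO sketch; the in-memory implementation stands in for a JDBC or JPA version, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/** DAO sketch: clients use a simple persistence-neutral API; only the DAO
 *  implementation knows the actual data source. */
public class BookDaoDemo {

    interface BookDao {
        void save(String isbn, String title);
        Optional<String> findTitle(String isbn);
    }

    /** In-memory implementation; a JDBC- or JPA-backed version could be
     *  swapped in without touching any client code. */
    static class InMemoryBookDao implements BookDao {
        private final Map<String, String> store = new HashMap<>();
        public void save(String isbn, String title) { store.put(isbn, title); }
        public Optional<String> findTitle(String isbn) {
            return Optional.ofNullable(store.get(isbn));
        }
    }
}
```

Because clients depend only on the BookDao interface, changing the persistent store touches a single layer.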
The pattern has the following drawbacks:
19. Service Activator
A stand-alone Service Activator needs to be monitored to ensure availability. The additional
management and maintenance of this process can add to application support
overhead. An MDB Service Activator might be a better alternative because it
will be managed and monitored by the application server.
20. Domain Store
• Creating a custom persistence framework is a complex task
Implementing Domain Store and all the features required for transparent
persistence is not a simple task, due to the nature of the problem and the
complex interactions between the pattern's many participants. So, consider
implementing your own transparent persistence
framework after exhausting all other options.
• Multi-layer object tree loading and storing requires optimization techniques
8. Security
Ref.:
• [CORE_SECURITY_PATTERNS] Chapter 3.
• Java Web Start and Security
• Java Web Start Security
In the first release of the Sun Java Platform, the Java Development Kit 1.0.x (JDK) introduced
the notion of a sandbox-based security model. This primarily supports downloading and
running Java applets securely and avoids any potential risks to the user's resources. With the
JDK 1.0 sandbox security model, all Java applications (excluding Java applets) executed locally
can have full access to the resources available to the JVM. Application code downloaded from
remote resources, such as Java applets, will have access ONLY to the restricted resources
provided within its sandbox. This sandbox security protects the Java applet user from potential
risks because the downloaded applet cannot access or alter the user's resources beyond the
sandbox.
The release of JDK 1.1.x introduced the notion of signed applets, which allowed downloading
and executing applets as trusted code after verifying the applet signer's information. To
facilitate signed applets, JDK 1.1.x added support for cryptographic algorithms that provide
digital signature capabilities. With this support, a Java applet class could be signed with digital
signatures in the Java archive format (JAR file). The JDK runtime will use the trusted public
keys to verify the signers of the downloaded applet and then treat it as a trusted local
application, granting access to its resources:
In the Java 2 security model, protection domains fall into two categories: system
domains and application domains. All protected external
resources, such as the file systems, networks, and so forth, are accessible only via
system domains. The resources that are part of a single execution thread are
considered an application domain. So in reality, an application that requires access to
an external resource may have an application domain as well as a system domain.
While executing code, the Java runtime maintains a mapping from code to protection
domain as well as to its permissions.
• Permissions
A permission represents access to a system resource. The abstract
java.security.Permission class defines a
set of operations to construct access on a particular resource. The Permission class
contains several subclasses that represent access to different types of resources. The
subclasses belong to their own packages that represent the APIs for the particular
resource.
Some of the commonly used Permission classes are as follows:
○ For wildcard permissions: java.security.AllPermission
○ For named permissions: java.security.BasicPermission
○ For file system: java.io.FilePermission
○ For network: java.net.SocketPermission
○ For properties: java.lang.PropertyPermission
○ For runtime resources: java.lang.RuntimePermission
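These permission classes compose via implies(); for example, a FilePermission granted on a directory subtree covers the files beneath it (the paths used are examples):

```java
import java.io.FilePermission;

/** In a FilePermission target, "-" matches a subtree recursively while
 *  "*" matches only the files directly inside one directory. */
public class PermissionDemo {
    public static boolean covers(String grantedPath, String requestedPath) {
        FilePermission granted = new FilePermission(grantedPath, "read");
        FilePermission requested = new FilePermission(requestedPath, "read");
        return granted.implies(requested);
    }
    public static void main(String[] args) {
        System.out.println(covers("/tmp/-", "/tmp/app/data.txt"));  // true
        System.out.println(covers("/tmp/*", "/tmp/app/data.txt"));  // false: "*" is not recursive
    }
}
```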
• Policy
The Java 2 security policy defines the protection domains for all running Java code
with access privileges and a set of permissions such as read and write access or
making a connection to a host. The policy for a Java application is represented by a
Policy object, which provides a way to declare permissions for granting access to its
required resources. In general, all JVMs have security mechanisms built in that allow
you to define permissions through a Java security policy file. A JVM makes use of a
policy-driven access-control mechanism by dynamically mapping a static set of
permissions defined in one or more policy configuration files. These entries are often
referred to as grant entries. A user or an administrator externally configures the
policy file for a J2SE runtime environment using an ASCII text file or a serialized
binary file representing a Policy class.
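As an illustration of such grant entries, a java.policy fragment might look like this (the codeBase URL, paths, and host are invented examples):

```
// Illustrative grant entry for code loaded from a specific location
grant codeBase "file:/apps/myapp/lib/-" {
    permission java.io.FilePermission "/apps/myapp/data/-", "read,write";
    permission java.net.SocketPermission "db.example.com:1521", "connect";
    permission java.util.PropertyPermission "user.dir", "read";
};
```

Such a file is typically selected at startup with -Djava.security.policy=app.policy; using a double equals sign (==) makes it the only policy file consulted.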
• SecurityManager
Each Java application can have its own security manager that acts as its primary
security guard against malicious attacks. The security manager enforces the required
security policy of an application by performing runtime checks and authorizing
access, thereby protecting resources from malicious operations. Under the hood, it
uses the Java security policy file to decide which set of permissions are granted to
the classes. However, when untrusted classes and third-party applications use
the JVM, the Java security manager applies the security policy associated with the
JVM to identify malicious operations. In many cases, where the threat model does
not include malicious code being run in the JVM, the Java security manager is
unnecessary.
In cases where the SecurityManager detects a security policy violation, the JVM
will throw an AccessControlException or a SecurityException.
If you wish to have your applications use a SecurityManager and a security policy,
start the JVM with the -Djava.security.manager option; you can also
specify a security policy file using the -Djava.security.policy
option as a JVM argument. If you enable the Java Security Manager in your
application but do not specify a security policy file, then the Java Security Manager
uses the default security policies defined in the java.policy file in the
$JAVA_HOME/jre/lib/security directory.
• AccessController
The access controller mechanism performs a dynamic inspection and decides
whether the access to a particular resource can be allowed or denied. From a
programmer's standpoint, the Java access controller encapsulates the location, code
245
source, and permissions to perform the particular operation. In a typical process,
when a program executes an operation, it calls through the security manager, which
delegates the request to the access controller, which then finally grants or
denies access to the resource.
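The programmer-facing side of this flow is AccessController.doPrivileged. Assuming no security manager is installed, the call below simply runs the action; with a manager, this is the point where the property-read permission would be checked:

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

/** Sketch of privileged execution through the access controller. */
public class AccessControllerDemo {
    public static String readVersion() {
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("java.version"));
    }
    public static void main(String[] args) {
        System.out.println(readVersion());
    }
}
```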
• Bytecode verifier
The Java bytecode verifier is an integral part of the JVM that plays the important role
of verifying the code prior to execution. It ensures that the code was produced
consistent with specifications by a trustworthy compiler, confirms the format of the
class file, and proves that the series of Java byte codes are legal. With bytecode
verification, the code is proved to be internally consistent following many of the rules
and constraints defined by the Java language compiler. The bytecode verifier may
Page
along with the JAR file to any client recipients who will use the applet. The client who receives
the certificate uses it to authenticate the signature on the JAR file. To sign the applet, we need
to obtain a certificate that is capable of code signing. For all production purposes, you must
always obtain a certificate from a CA such as VeriSign, Thawte, or some other CA.
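As a sketch of that workflow (the alias, keystore, and file names below are invented; for production, the self-signed key would be replaced with a CA-issued code-signing certificate):

```
# Create a test key pair in a keystore (prompts for passwords)
keytool -genkeypair -alias signer -keyalg RSA -validity 365 \
        -keystore dev.jks -dname "CN=Example Developer"

# Sign the applet JAR with that key, then verify the signature
jarsigner -keystore dev.jks app.jar signer
jarsigner -verify app.jar
```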
What is Java Web Start?
Java™ Web Start is a technology for deploying applications -- it gives you the
power to launch full-featured applications with a single click from your Web browser. You
can download and launch applications, such as a program for drawing or sketching
chemical structures, without going through complicated installation procedures. With
Java Web Start, you launch applications simply by clicking on a Web page link. If the
application is not present on your computer, Java Web Start automatically downloads all
necessary files. It then caches the files on your computer so the application is always
ready to be relaunched.
Java Web Start (JWS) is a full-fledged Java application that allows Java client
applications to be deployed, launched, and updated from a Web server. It provides a
mechanism for application distribution through a Web server and facilitates Java rich-
client access to applications over a network. The underlying technology of JWS is the
Java Network Launching Protocol (JNLP), which provides a standard way for packaging
and provisioning the Java programs (as JAR files) and then launching Java programs
over a network. The JNLP-packaged applications are typically started from a Web
browser that launches the client-side JWS software, which downloads, caches, and then
executes the application locally. Once the application is downloaded, it does not need to
be downloaded again unless newer updates are made available on the server. These
updates are done automatically in an incremental fashion during the client application
startup. Applications launched using JWS are typically cached on the user's machine
and can also be run offline. Since the release of J2SE 1.4, JWS has been an integral
part of the J2SE bundle, and it does not require a separate download [JWS].
Applications launched with Java Web Start are, by default, run in a restricted
environment, known as a sandbox. In this sandbox, Java Web Start:
• Protects users against malicious code that could affect local files
• Protects enterprises against code that could attempt to access or destroy data on
networks
Unsigned JAR files launched by Java Web Start remain in this sandbox, meaning they
cannot access local files or the network.
Java Web Start supports signed JAR files so that your application can work outside of
the sandbox described above, so that the application can access local files and the
network.
Java Web Start verifies that the contents of the JAR file have not changed since it was
signed. If verification of a digital signature fails, Java Web Start does not run the
application.
When the user first runs an application as a signed JAR file, Java Web Start opens a
dialog box displaying the application's origin based on the signer's certificate. The user
can then make an informed decision regarding running the application.
For more information, see the Signing and Verifying JAR Files section.
Security and JNLP Files
The following example provides the application with complete access to the client system
if all its JAR files are signed:
<security>
<all-permissions/>
</security>
Java Web Start dynamically imports certificates as browsers typically do. To do this,
Java Web Start sets its own https handler, using the java.protocol.handler.pkgs
system property, to initialize defaults for the SSLSocketFactory and
HostnameVerifier. It sets the defaults with the methods
HttpsURLConnection.setDefaultSSLSocketFactory and
HttpsURLConnection.setDefaultHostnameVerifier.
If your application uses these two methods, ensure that they are invoked after Java
Web Start initializes the https handler; otherwise your custom handler will be replaced
by the Java Web Start default handler.
You can ensure that your own customized SSLSocketFactory and HostnameVerifier
are used by doing one of the following:
• Install your own https handler, to replace the Java Web Start https handler. For
more information, see the document A New Era for Java Protocol Handlers.
• In your application, invoke
HttpsURLConnection.setDefaultSSLSocketFactory or
HttpsURLConnection.setDefaultHostnameVerifier only after the first https
URL object is created, which executes the Java Web Start https handler
initialization code first.
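A sketch of installing a default HostnameVerifier in the way described above (the verification rule shown is illustrative only; real code should perform proper host validation):

```java
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.HttpsURLConnection;

/** Installs a process-wide default HostnameVerifier. Under Java Web Start
 *  this must run only after JWS has initialized its https handler, or the
 *  JWS defaults will overwrite it. */
public class VerifierSetup {
    public static HostnameVerifier install() {
        HostnameVerifier pinned =
                (hostname, session) -> hostname.endsWith(".example.com"); // illustrative rule
        HttpsURLConnection.setDefaultHostnameVerifier(pinned);
        return pinned;
    }
}
```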
JWS Security Model
Like a typical stand-alone Java application, JWS applications run outside a Web browser using the
sandbox features of the underlying Java platform. JWS also allows defining security attributes for
client-side Java applications and their access to local resources, such as file system access, making
network connections, and so on. These security attributes are specified using XML tags in the JNLP
descriptor file. The JNLP descriptor defines the application access privileges to the local and network
resources. In addition, JWS allows the use of digital signatures for signing JAR files in order to verify
the application origin and its integrity so that it can be trusted before it is downloaded to a client
machine. The certificate used to sign the JAR files is verified using the trusted certificates in the client
keystore. This helps users avoid starting malicious applications and inadvertent downloads without
knowing the originating source of the application.
When downloading signed JARs, JWS displays a dialog box that mentions the source of the
application and the signer's information before the application is executed. This allows users to make
decisions regarding whether to grant additional privileges to the application or not. When downloading
unsigned applications (unsigned JARs) that require access to local resources, JWS displays a "Security
Advisory" dialog box notifying the user that an application requires access to the local resources and
prompts the user with a question "Do you want to allow this action?" JWS will allow the user to grant
the client application access to the local resources by clicking the "Yes" button in the Security Advisory
dialog box.
To deploy a JWS application, in addition to JAR files, adding a .jnlp file is required. The JNLP file is an
XML-based document that describes the application classes (JAR files), their location in a Web server,
JRE version, and how to launch in the client environment. The client user downloads the JNLP file
from the server, which automatically launches the JWS application on the client side. The JNLP file
uses XML elements to describe a JWS application. The root element is tagged as <jnlp>, which
contains the four core sub-elements: information, security, resources, and application-desc.
To enforce security, the <security> element is used to specify the required permissions. The security
element provides two permission options: <all-permissions/> to provide an application with full access
to the client's local computing resources, and <j2ee-application-client-permissions/> to provide a
selected set of permissions that includes socket permissions, clipboard access permission, printing
permission, and so forth. Example 3-19 is a JNLP file that shows all the elements together, including a
<security> element granting all permissions.
Example 3-19. JNLP file showing <security> elements
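The body of Example 3-19 is missing from this copy of the document; a minimal JNLP descriptor of the kind described (codebase, hrefs, titles, and the main class are invented for illustration) could look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jnlp spec="1.0+" codebase="http://www.example.com/app" href="app.jnlp">
  <information>
    <title>Sample Application</title>
    <vendor>Example Vendor</vendor>
  </information>
  <security>
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.5+"/>
    <jar href="app.jar"/>
  </resources>
  <application-desc main-class="com.example.Main"/>
</jnlp>
```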
The Java platform facilitates an extensible security architectural model via standards-based
security API technologies that provide platform independence and allow interoperability among
vendor implementations. These API technologies add a variety of security features to the core
Java platform by integrating technologies to support cryptography, certificate management,
authentication and authorization, secure communication, and other custom security
mechanisms.
1) Java Cryptography Architecture (JCA)
○ Certificate factory for X.509 certificates and revocation lists
○ Keystore implementation named JKS, which allows managing a repository of
keys and certificates
○ MAC algorithms to validate information transmitted between parties
○ Support for PKCS#11 (RSA Cryptographic Token Interface Standard), which
allows devices to store cryptographic information and perform cryptographic
services. This feature is available in J2SE 5.0 and later versions.
2) Java Cryptography Extension (JCE)
○ Implementation of key agreement protocols based on Diffie-Hellman
○ Implementation of Padding scheme as per PKCS#5
○ Algorithm parameter managers for Diffie-Hellman, DES, Triple DES, Blowfish,
and PBE
○ Support for Advanced Encryption Standard (AES)
○ A keystore implementation named JCEKS
A key agreement protocol supports secured
communication between two or more parties by securely exchanging a secret
key over a network. The Diffie-Hellman (DH) key agreement protocol allows
two users to exchange a secret key over an insecure medium without any
prior secrets. JCE provides support for the Diffie-Hellman key agreement
protocol.
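The JCE key agreement can be demonstrated directly; note that both parties must use the same DH parameters, so Bob generates his key pair from the parameters in Alice's public key:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.KeyAgreement;
import javax.crypto.interfaces.DHPublicKey;

/** Diffie-Hellman via JCE: both sides derive the same secret without sending it. */
public class DhDemo {
    public static boolean sharedSecretsMatch() {
        try {
            // Alice generates DH parameters and her key pair
            KeyPairGenerator aliceGen = KeyPairGenerator.getInstance("DH");
            aliceGen.initialize(2048);
            KeyPair alice = aliceGen.generateKeyPair();

            // Bob reuses Alice's (public) DH parameters for his own pair
            KeyPairGenerator bobGen = KeyPairGenerator.getInstance("DH");
            bobGen.initialize(((DHPublicKey) alice.getPublic()).getParams());
            KeyPair bob = bobGen.generateKeyPair();

            // Each side combines its private key with the other's public key
            KeyAgreement aliceKa = KeyAgreement.getInstance("DH");
            aliceKa.init(alice.getPrivate());
            aliceKa.doPhase(bob.getPublic(), true);

            KeyAgreement bobKa = KeyAgreement.getInstance("DH");
            bobKa.init(bob.getPrivate());
            bobKa.doPhase(alice.getPublic(), true);

            return Arrays.equals(aliceKa.generateSecret(), bobKa.generateSecret());
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```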
3) Java Certification Path API (CertPath)
CertPath provides the functionality of checking, verifying, and validating the
authenticity of certificate chains.
CertPath provides a full-fledged API framework for application developers who wish
to integrate the functionality of checking, verifying, and validating digital certificates
into their applications.
Digital certificates play the role of establishing trust and credentials when conducting
business or other transactions. Issued by a Certification Authority (CA), a digital
certificate establishes the identity and credentials of its subject.
4) Java Secure Socket Extension (JSSE)
With JSSE, it is possible to develop client and server applications that use secure
transport protocols, which include:
○ Secure HTTP (HTTP over SSL)
○ Secure Telnet (Telnet over SSL)
○ Secure SMTP (SMTP over SSL)
○ IPSEC (Secure IP)
○ Secure RMI or RMI/IIOP (RMI over SSL)
• One-Way Hash Functions
Simple integrity checks, such as checksums and CRCs, allow the recipient to
detect corruption and ask the sender to
resend the message. These methods are fine when the expected cause of the
corruption is due to electronic glitches or some other natural phenomena, but if the
expected cause is an intelligent adversary with malicious intent, something stronger
is needed. That is where cryptographically strong one-way hash functions come in.
A cryptographically strong one-way hash function is designed in such a way that it is
computationally infeasible to find two messages that compute to the same hash
value. With a checksum, a modestly intelligent adversary can fairly easily alter the
message so that the checksum calculates to the same value as the original
message's checksum. Doing the same with a CRC is not much more difficult. But a
cryptographically strong one-way hash function makes this task all but impossible.
Two examples of cryptographically strong one-way hash algorithms are MD5 and
SHA-1. MD5 was created by Ron Rivest (of RSA fame) in 1992 [RFC1321] and
produces a 128-bit hash value. SHA-1 was created by the National Institute of
Standards and Technology (NIST) in 1995 [FIPS1801] and produces a 160-bit hash
value.
• Symmetric Ciphers
Some examples of symmetric ciphers include DES, IDEA, AES (Rijndael), Twofish,
Blowfish, and RC2.
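The digest sizes quoted above for MD5 and SHA-1 can be checked with the JCA MessageDigest API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** One-way hashes have a fixed output size regardless of the message length. */
public class DigestDemo {
    public static int digestLengthBits(String algorithm, String message) {
        try {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            return md.digest(message.getBytes(StandardCharsets.UTF_8)).length * 8;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
    public static void main(String[] args) {
        System.out.println(digestLengthBits("MD5", "hello"));   // 128
        System.out.println(digestLengthBits("SHA-1", "hello")); // 160
    }
}
```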
• Asymmetric Ciphers
Asymmetric ciphers provide the same two functions as symmetric ciphers: message
encryption and message decryption. There are two major differences, however. First,
the key value used in message decryption is different than the key value used for
message encryption. Second, asymmetric ciphers are thousands of times slower than
symmetric key ciphers. But asymmetric ciphers offer a phenomenal advantage in
secure communications over symmetric ciphers.
The major advantage of the asymmetric cipher is that it uses TWO key values
instead of one: one for message encryption and one for message decryption. The
two keys are created during the same process and are known as a key pair. The one
for message encryption is known as the public key; the one for message decryption
is known as the private key. Messages encrypted with the public key can only be
decrypted with its associated private key. The private key is kept secret by the
owner of the key pair; the public key may be freely published.
If Bob needs to send some edits on the document back to Alice, he can do so by
having Alice send him her public key; he then encrypts the edited document using
Alice's public key and e-mails the secured document back to Alice. Again, the
message is secure from eavesdroppers, because only Alice's private key can decrypt
the message, and only Alice has her private key.
Note the very important difference between using an asymmetric cipher and a
symmetric cipher: No separate, secure channel is needed for Alice and Bob to
exchange a key value to be used to secure the message. This solves the major
problem of key management with symmetric ciphers: getting the key value
communicated to the other party. With asymmetric ciphers, the key value used to
send someone a message is published for all to see. This also solves another
symmetric key management headache: having to exchange a key value with each
party with whom one wishes to communicate. Anyone who wants to send a secure
message to Alice uses Alice's public key.
Some examples of asymmetric ciphers are RSA, Elgamal, and ECC (elliptic-curve
cryptography).
Recall that one of the differences between asymmetric and symmetric ciphers is that
asymmetric ciphers are much slower, up to thousands of times slower. This
issue is resolved in practice by using the asymmetric cipher to communicate an
ephemeral symmetric key value and then using a symmetric cipher and the
ephemeral key to encrypt the actual message. The symmetric key is referred to as
ephemeral (meaning to last for a brief time) because it is only used once, for that
exchange. It is not persisted or reused, the way traditional symmetric key
mechanisms require. Going back to the earlier example of Alice e-mailing a
confidential document to Bob, Alice would first create an ephemeral key value to
encrypt the document with a symmetric cipher. Then she would create another
message, encrypting the ephemeral key value with Bob's public key, and then send
both messages to Bob. Upon receipt, Bob would first decrypt the ephemeral key
value with his private key and then decrypt the secured document with the
ephemeral key value (using the symmetric cipher) to recover the original document.
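The hybrid scheme just described can be sketched with the JCA: an ephemeral AES key encrypts the document, and the slow RSA cipher wraps only the small key value (names are illustrative, and production code should also select explicit cipher modes and padding):

```java
import java.nio.charset.StandardCharsets;
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class HybridDemo {

    public static String roundTrip(String message) throws Exception {
        // Bob's long-lived RSA key pair.
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair bob = rsaGen.generateKeyPair();

        // Alice: an ephemeral AES key encrypts the bulk data (fast symmetric cipher).
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey ephemeral = aesGen.generateKey();

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, ephemeral);
        byte[] secured = aes.doFinal(message.getBytes(StandardCharsets.UTF_8));

        // Alice: the tiny ephemeral key is wrapped with Bob's public key
        // (slow asymmetric cipher, but only a few bytes of input).
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, bob.getPublic());
        byte[] wrappedKey = rsa.wrap(ephemeral);

        // Bob: unwrap the ephemeral key with his private key, then recover the document.
        rsa.init(Cipher.UNWRAP_MODE, bob.getPrivate());
        Key recovered = rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        aes.init(Cipher.DECRYPT_MODE, recovered);
        return new String(aes.doFinal(secured), StandardCharsets.UTF_8);
    }
}
```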
Bob can verify that Alice has agreed to the document by checking the digital
signature: he performs an MD5 hash on the document, decrypts the digital signature
with Alice's public key, and compares the two hash values; if they match, the
signature is genuine.
Moreover, Alice cannot say that she never signed the document; she cannot refute
the signature, because only she holds the private key that could have produced the
digital signature. This ensures non-repudiation.
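The sign-and-verify flow can be sketched with the JCA's Signature class. Note that the example in the text uses MD5; SHA-256 is shown here because MD5 is no longer considered collision-resistant (class and document names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {

    public static boolean signAndVerify(String document) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair alice = gen.generateKeyPair();

        // Alice signs: the provider hashes the document and encrypts the
        // digest with her private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(alice.getPrivate());
        signer.update(document.getBytes(StandardCharsets.UTF_8));
        byte[] sig = signer.sign();

        // Bob verifies with Alice's public key: re-hash the document,
        // decrypt the signature, and compare the two digests.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(document.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(sig);
    }
}
```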
• Digital Certificates
A digital certificate is a document that uniquely identifies information about a party.
It contains a party's public key plus other identification information that is digitally
signed and issued by a trusted third party, also referred to as a Certificate Authority
(CA). A digital certificate is also known as an X.509 certificate and is commonly used
to solve problems associated with key management.
As explained earlier, the advent of asymmetric ciphers has greatly reduced the
problem of key management. Instead of requiring that each party exchange a
different key value with every other party with whom they wish to communicate over
separate, secure communication channels, one simply exchanges public keys with
the other parties or posts public keys in a directory.
However, another problem arises: How is one sure that the public key really belongs
to Alice?
For example, assume Charlie is a third party that both Alice and Bob trust. Alice
sends Charlie her public key, plus other identifying information such as her name,
address, and Web site URL. Charlie verifies Alice's public key, perhaps by calling her
on the phone and having her recite her public key fingerprint. Then Charlie creates a
document that includes Alice's public key and identification, and digitally signs it
using his private key, and sends it back to Alice. This signed document is the digital
certificate of Alice's public key and identification, vouched (i.e. confirmed) for by
Charlie.
Now, when Bob goes to Alice's Web site and wants to securely send his credit card
number, Alice sends Bob her digital certificate. Bob verifies Charlie's signature on the
certificate using Charlie's public key (assume Bob has already verified Charlie's
public key), and if the signature is good, Bob can be assured that, according to
Charlie, the public key within the certificate is associated with the identification
within the certificate, namely Alice's name, address, and Web site URL. Bob can
encrypt his credit card number using the public key with confidence that only Alice
can decrypt it:
• Input Validation
Applications must ensure that all input data are validated prior to application
processing.
• Output Sanitation
Re-displaying or echoing the data values entered by users is a potential security
threat because it provides a hacker with a means to match the given input and its
output. This provides a way to insert malicious data inputs. With Web pages, if the
page generated by a user's request is not properly sanitized (i.e. verified and
cleaned) before it is displayed, a hacker may be able to identify a weakness in the
generated output. Then the hacker can design malicious HTML tags to create pop-up
banners; at the worst, hackers may be able to change the content originally
displayed by the site. To prevent these issues from arising, the generated output
must be verified for all known values. Any unknown values not intended for display
must be eliminated. All comments and identifiers in the output response must also
be removed.
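A minimal sketch of output sanitization for HTML responses, escaping the characters that let user-supplied data inject markup (a real application would normally use a vetted escaping library rather than a hand-rolled method):

```java
public class OutputSanitizer {

    // Escapes the characters that allow user-supplied data to break out of an
    // HTML text context and introduce tags or attributes of its own.
    public static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '&':  out.append("&amp;");  break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```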
• Improper Error Handling
Error messages that expose internal details of the application environment are a
security risk. This information helps hackers crash applications or cause them to
throw error messages by sending invalid data that forces the applications to access
non-existent databases or resources. Adopting proper error-handling mechanisms will
display error messages as generic, user-specific messages; no internal details
related to the application environment or its components will be revealed. All user-
specific error messages are mapped to underlying application-specific error
conditions and stored as log files for auditing. In the event of an attack, the log files
provide diagnostic information for verifying the errors and for further auditing.
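The mapping from internal failures to user-safe messages, with the real cause recorded only in the audit log, might be sketched as follows (the class name, error-id scheme, and message format are assumptions for this sketch):

```java
import java.util.logging.Logger;

public class ErrorHandler {

    private static final Logger LOG = Logger.getLogger(ErrorHandler.class.getName());

    // Maps an internal failure to a generic, user-facing message. The real
    // cause goes to the audit log only and is never shown to the client, so
    // no environment details (hosts, drivers, stack traces) leak out.
    public static String toUserMessage(Exception internal, String errorId) {
        LOG.severe("errorId=" + errorId + " cause=" + internal);  // audit trail
        return "The request could not be completed. Reference: " + errorId;
    }
}
```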
• Insecure Data Transit or Storage
Confidentiality of data in transit or storage is very important, because most security
is compromised when data is represented in plain text. Adopting cryptographic
mechanisms and data encryption techniques helps ensure the integrity and
confidentiality of data in transit or storage.
• Weak Session Identifiers
Session identifiers (SessionIDs) generated from predictable values can be guessed
by attackers and used to impersonate authenticated users; SessionIDs should
therefore be generated from cryptographically strong random values.
• Session Theft
Also referred to as session hijacking, session theft occurs when attackers create a
new session or reuse an existing session. Session theft hijacks a client-to-server or
server-to-server session and bypasses the authentication. Hackers do not need to
intercept or inject data into the communication between hosts. Web applications that
use a single SessionID for multiple client-server sessions are also susceptible to
session theft, where session theft can be at the Web application session level, the
host session level, or the TCP protocol. In a TCP communication, session hijacking is
done via IP spoofing techniques, where an attacker uses source-routed IP packets to
insert commands into an active TCP communication between the two communicating
systems and disguises himself as one of the authenticated users. In Web-based
applications, session hijacking is done via forging or guessing SessionIDs and
stealing SessionID cookies. Preventing session hijacking is one of the first steps in
hardening Web application security, because session information usually carries
sensitive data such as credit card numbers, PINs, passwords, and so on. To prevent
it, SessionIDs must be generated unpredictably and exchanged only over secured
channels.
• Multiple Sign-On Issues
Requiring users to authenticate separately to each application forces them to
physically enter repetitive usernames and passwords or other forms of authentication
credentials.
• Deployment Problems
Many security exposure issues and vulnerabilities occur inadvertently because of
application deployment problems. These include inconsistencies within and conflicts
between application configuration data and the deployment infrastructure (hosts,
network environment, and so on). Human error in policy implementation also
contributes to these problems. In some cases, deployment problems are due to
application design flaws and related issues. To prevent these problems, it is
important to review and test all infrastructure security policies and to make sure
application-level security policies reflect the infrastructure security policies, and vice
versa. Where there are conflicts, the two policies will need to be reconciled. Some
trade-offs in constraints and restrictions related to OS administration, services,
protocols, and so on may need to be made.
The J2EE container-based security services primarily address the security requirements of the
application tiers and components. They provide authentication and authorization mechanisms
by which callers and service providers prove each other's identities, and then they provide
access control over the resources to which an identified user or system has access.
A J2EE container supports two kinds of security mechanisms. Declarative security allows
enforcement of security using a declarative syntax applied during the application's
deployment. Programmatic security allows security decisions to be expressed and
enforced in application code, based on the invoked method and its parameters.
Declarative Security
In a declarative security model, the application security is expressed using rules and
permissions in a declarative syntax specific to the J2EE application environment. The security
rules and permissions will be defined in a deployment descriptor document packaged along
with the application component. The application deployer is responsible for assigning the
required rules and permissions granted to the application in the deployment descriptor. Figure
below shows the deployment descriptors meant for different J2EE components:
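As a concrete illustration, a declarative security rule in a Web component's web.xml deployment descriptor might look like the following sketch (the URL pattern, role name, and login pages are assumptions):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Admin pages</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <!-- Only callers in the "admin" role may access the resources above -->
    <role-name>admin</role-name>
  </auth-constraint>
  <user-data-constraint>
    <!-- Require a confidential (encrypted) transport such as HTTPS -->
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
<login-config>
  <auth-method>FORM</auth-method>
  <form-login-config>
    <form-login-page>/login.jsp</form-login-page>
    <form-error-page>/loginError.jsp</form-error-page>
  </form-login-config>
</login-config>
<security-role>
  <role-name>admin</role-name>
</security-role>
```

The deployer can change these rules at deployment time without touching application code, which is the main benefit of the declarative model.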
Programmatic Security
In a programmatic security model, the J2EE container makes security decisions when
business methods are invoked, determining whether to grant or deny the caller access
to a resource. This determination can be based on the parameters of the call, the
component's internal state, the time of the call, or the data being processed.
For example, an application component can perform fine-grained access control with the
identity of its caller by using EJBContext.getCallerPrincipal (EJB component) or
HttpServletRequest.getUserPrincipal (Web component) and by using
EJBContext.isCallerInRole (EJB component) and
HttpServletRequest.isUserInRole (Web component). This allows determining whether
the identity of the caller has the privileged role to execute a method for accessing a protected
resource.
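As an illustration of a fine-grained rule that declarative security cannot express, consider a check that depends on a method parameter. The CallerContext interface below is a hypothetical stand-in mirroring the container-provided EJBContext methods named above, so the sketch stays self-contained:

```java
import java.security.Principal;

public class OrderService {

    // Hypothetical stand-in for the container-provided EJBContext, exposing
    // the same two calls discussed above.
    public interface CallerContext {
        Principal getCallerPrincipal();
        boolean isCallerInRole(String role);
    }

    private final CallerContext ctx;

    public OrderService(CallerContext ctx) {
        this.ctx = ctx;
    }

    // Rule that depends on a call parameter: only callers in the "manager"
    // role may approve orders above a threshold. A plain declarative
    // method-permission cannot express the amount check.
    public boolean approve(double amount) {
        if (amount > 10_000 && !ctx.isCallerInRole("manager")) {
            return false;  // container code would raise an access exception here
        }
        return true;
    }
}
```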
Programmatic security helps when declarative security alone cannot express the
security requirements of the application component and where access control
decisions need complex and dynamic rules and policies.
Java Authentication and Authorization Service (JAAS)
Authentication is the process of verifying the identity of a user or a device to determine its
accuracy and trustworthiness. Authorization provides access rights and privileges depending
on the requesting identity's granted permissions to access a resource or execute a required
functionality.
JAAS provides API mechanisms and services for enabling authentication and authorization in
Java-based application solutions. JAAS is the Java implementation of the Pluggable
Authentication Module (PAM) framework originally developed for Sun's Solaris operating
system. PAM enables the plugging in of authentication mechanisms, which allows applications
to remain independent from the underlying authentication technologies. Through PAM,
JAAS lets new or updated authentication technologies be plugged in without modifying
application code.
JAAS Authentication
In a JAAS authentication process, the client applications initiate authentication by instantiating
a LoginContext object. The LoginContext then communicates with the LoginModule,
which performs the actual authentication process. As the LoginContext uses the generic
interface provided by a LoginModule, changing authentication providers during runtime
becomes simpler without any changes in the LoginContext. A typical LoginModule will
prompt for and verify a username and password or interface with authentication providers
such as RSA SecureID, smart cards, and biometrics. LoginModules use a CallbackHandler
to communicate with the clients to perform user interaction to obtain authentication
information and to notify login process and authentication events.
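A client-side sketch of this flow using the JDK's JAAS classes. The application name "SampleApp" and the hard-coded credentials are assumptions for the sketch; a real CallbackHandler would prompt the user, and login() would only succeed once a LoginModule is registered in the JAAS configuration:

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;

public class JaasClient {

    // Supplies credentials when the LoginModule asks for them via callbacks.
    // A production handler would prompt the user instead of hard-coding values.
    public static class FixedCredentialsHandler implements CallbackHandler {
        public void handle(Callback[] callbacks) {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName("alice");
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword("secret".toCharArray());
                }
            }
        }
    }

    public static void login() throws LoginException {
        // "SampleApp" must match an entry in the JAAS configuration file,
        // e.g. passed via -Djava.security.auth.login.config=my-jaas.conf
        LoginContext lc = new LoginContext("SampleApp", new FixedCredentialsHandler());
        lc.login();                        // delegates to the configured LoginModule(s)
        Subject subject = lc.getSubject(); // the authenticated identity
        System.out.println("Authenticated principals: " + subject.getPrincipals());
    }
}
```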
• Configuring JAAS LoginModule for an application
The JAAS LoginModules are configured with an application using a JAAS configuration
file (e.g., my-jaas.conf), which identifies one or more JAAS LoginModules intended
for authentication. Each entry in the configuration file is identified by an
application name.
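An entry in such a configuration file has the following shape; the application name "SampleApp" and the LoginModule class are illustrative, and the required flag tells JAAS that this module must succeed for the overall login to succeed:

```
SampleApp {
    com.example.auth.SampleLoginModule required debug=true;
};
```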