Clustering in GlassFish Version 3.1
Table of Contents
Basic Concepts
Domain Administration Architecture
Clustering Architecture
Typical Failover Scenario
Group Management Service
Using the Command Line Interface for Monitoring Clusters
Memory Replication Configuration
Memory Replication Implementation
Application Server Installation
Domain Examination
Creating a Cluster Using the Command Line Interface
HTTP Load Balancer Plug-In
Conclusion
Acknowledgments
References
Basic Concepts
Clusters in an application server enhance scalability and availability, which are related concepts. In order to provide high availability of service, a software system must have the following capabilities:

The system must be able to create and run multiple instances of service-providing entities. In the case of application servers, the service-providing entities are Java EE application server instances configured to run in a cluster, and the service is a deployed Java EE application.

The system must be able to scale to larger deployments by adding application server instances to clusters in order to accept increasing service loads.

If one application server instance in a cluster fails, it must be able to fail over to another server instance so that service is not interrupted. Although failure of a server instance or physical machine is likely to degrade overall quality of service, complete interruption of service is not acceptable in a high-availability environment.

If a process makes changes to the state of a user's session, session state must be preserved across process restarts. The most straightforward mechanism is to maintain a reliable replica of session state so that, if a process aborts, session state can be recovered when the process is restarted. The principle is similar to that used in high-reliability RAID storage systems.
Taken together, these demands necessarily result in a system that sacrifices some efficiency to attain high availability. In order to support the goals of scalability and high availability, the GlassFish application server provides the following server-side entities:

Server Instance - A server instance is the Java EE server process (the GlassFish application server) that hosts your Java EE applications. As required by the Java EE specification, each server instance is configured for the various subsystems that it is expected to run.
Node1 - A node is a configuration of the GlassFish software that exists on every physical host where a server instance runs. The life cycle of a server instance is managed either by the Domain Administration Server (DAS), described later in this article, or by local operating system services that are responsible for starting and managing the instance. Nodes come in two flavors: Secure Shell (SSH) and config. An SSH node provides centralized administration of instances using the SSH protocol. A config node provides just configuration information, without centralized administration.

Cluster - A cluster is a logical entity that determines the configuration of the server instances that make up the cluster. Usually, the configuration of a cluster implies that all the server instances within the cluster have a homogeneous configuration. An administrator typically views the cluster as a single entity and uses the GlassFish Administration Console or a command-line interface (CLI) to manage the server instances in the cluster.
Nodes, server instances, and clusters can be created at GlassFish installation time, as described near the end of this article. Clusters and instances are organized into administrative domains, described below, that are characterized by the Domain Administration Server (DAS).
Used in a real-world enterprise deployment, an administrative domain provides a process that is dedicated to configuration and administration of other processes. In this case, the administrative domain takes the form of a Domain Administration Server (DAS) that you can use purely for administration purposes.
In the file system, an administrative domain is composed of a set of configuration files. At runtime, it is a process administering itself, independent server instances, clusters, applications, and resources. In general, high-availability installations require clusters, not independent server instances. The GlassFish application server provides homogeneous clusters and enables you to manage and modify each cluster as though it were a single entity. As shown in the figure, each domain has a Domain Administration Server (DAS), which is used to manage Java EE server instances in the domain. The Administration Node at the center of the figure supports the DAS. Applications, resources, and configuration information are stored close to the DAS. The configuration information managed by the DAS is known as the central repository. Each domain process must run on a physical host. When running, the domain manifests itself as a DAS. Similarly, every server instance must run on a physical host and requires a Java Virtual Machine. The GlassFish application server must be installed on each machine that runs a server instance.

Administrative Domains: Don't confuse the concepts administrative domain and network domain; the two are not related. In the world of Java EE, domain applies to an administrative domain: the machines and server instances that an administrator controls.
Two nodes are shown on the right side of the figure: SSH Node 1 and Config Node 2, each hosting two GlassFish server instances. Typically all of the nodes in a domain will be of the same type, either SSH or config.
With an SSH node, the node and its instances can be managed through commands sent from the DAS using the SSH protocol. The asadmin subcommands such as create-instance and start-instance (or their Administration Console equivalents) internally use SSH, via sshd on the remote host, to run the asadmin commands that perform the operation on the node. The asadmin start-cluster subcommand provides the ability to start an entire cluster with a single command. In this way, the life cycle of instances can be administered centrally from the DAS.

With a config node, the asadmin subcommands to manage instances, such as create-local-instance and start-local-instance, must be run by logging in to the node itself. For either type of node, data synchronization is accomplished using HTTP/S.

To provide automatic startup and runtime monitoring (watchdogs) for instances, the asadmin create-service subcommand can be used to create an operating system service for an instance. Once created, the service is managed using operating system service management tools. With this in place, if a server instance fails, it is restarted without administrator or DAS intervention. If the DAS is unavailable when an instance is started, the instance is started using the cached repository information.

Several administrative clients are shown on the left side of Figure 1. The following administrative clients are of interest:

Admin Console - The Admin Console is a browser-based interface for managing the central repository. The central repository provides configuration at the DAS level.

Command-Line Interface - The asadmin command duplicates the functionality of the Admin Console. In addition, some actions can be performed only through asadmin, such as creating a domain. You cannot run the Admin Console unless you have a DAS, which presupposes a domain, so the asadmin command provides the means to bootstrap the architecture.

IDE - The figure shows the logo for the NetBeans IDE. Tools like the NetBeans IDE can use the DAS to connect with and manage an application during development. The NetBeans IDE can also support cluster-mode deployment. Most developers work within a single domain and machine, in which case the DAS itself acts as the host of all the applications.
REST Interface - A computer with an arbitrary management application can use the REST interface provided by the DAS to manage the domain.
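As a hedged sketch of the centralized SSH workflow described above (the host name, node name, and instance name here are invented for illustration, and a cluster named cluster1 is assumed to already exist):

```shell
# Register a remote host as an SSH node, then create and start an
# instance on it entirely from the DAS (names are examples only):
bin/asadmin create-node-ssh --nodehost remote1.example.com node1
bin/asadmin create-instance --node node1 --cluster cluster1 instance1
bin/asadmin start-instance instance1
```

With a config node, the equivalent steps would instead be run on the remote host itself using create-local-instance and start-local-instance.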
Clustering Architecture
Figure 2 shows the GlassFish clustering architecture from a runtime-centric viewpoint. This view emphasizes the high-availability aspects of the architecture. The DAS is not shown in Figure 2, and the nodes with their application server instances are shown grouped as clustered instances.
GlassFish has offered a memory replication feature starting with version 2. Memory replication relies on instances within the cluster to store state information for one another in memory, not in a database. The HADB option is not supported in GlassFish 3.1.

Memory Replication in Clusters

Several features are required of a GlassFish-compatible fault-tolerant system that maintains state information in memory. The system must provide high availability for HTTP and EJB session state. The memory replication feature takes advantage of the clustering feature of GlassFish to provide most of the advantages of the HADB strategy with much less installation and administrative overhead. In GlassFish version 2, cluster instances were organized in a ring topology. Each member in the ring sent memory state data to the next member in the ring, its replica partner, and received state data from the previous member. This replicated the entire state of one instance in only one other instance. In contrast, GlassFish 3.1 uses a consistent hash algorithm to determine which instance should replicate the individual sessions of another. Sessions from one instance are distributed among the other instances in the cluster. For example, if the load balancer is routing sessions S1, S2, and S3 to Instance 1, Instance 1 may replicate S1 and S2 to Instance 2 and S3 to Instance 3, based on the algorithm and the available instances. This leads to more efficient failover behavior, as described below.
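As a rough, toy illustration of the idea (this is not GlassFish's actual algorithm or code; the instance and session names are invented), a stable hash can map each session ID to a replica instance:

```shell
#!/bin/bash
# Toy sketch: map each session ID to a replica instance with a stable hash.
# This only illustrates deterministic placement; GlassFish's real
# consistent-hash algorithm also accounts for instances joining and
# leaving the cluster.
instances=(instance1 instance2 instance3)

pick_replica() {
  # cksum yields a stable numeric checksum of the session ID
  local h
  h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  echo "${instances[$(( h % ${#instances[@]} ))]}"
}

for s in S1 S2 S3; do
  echo "$s -> $(pick_replica "$s")"
done
```

Because the mapping is deterministic, every member of the cluster can compute the same placement for a given session ID without coordinating, which is what makes locating a replica after a failure cheap.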
Whenever an instance uses replica data to service a session (both Case 1 and Case 2), the replica data is first tested to make sure it is the current version.
Timer migrations - GMS selects an instance to pick up the timers of a failed instance if necessary.
In the following example, one instance has failed, one instance has not been started, and the others are running normally.
bin/asadmin get-health myCluster
instance01 failed since Thu Feb 24 11:03:59 EST 2011
instance02 not started
instance03 started since Thu Feb 24 11:03:08 EST 2011
instance04 started since Thu Feb 24 11:03:08 EST 2011
Command get-health executed successfully.
If the state of an instance is not started even though the instance appears to be operational in its server log, there may be an issue with UDP multicast between that instance and the DAS machine. To diagnose these kinds of issues, a new asadmin subcommand, validate-multicast, has been introduced in GlassFish 3.1. This command can be run on two or more machines to verify that multicast traffic from one is seen by the others. The following shows the output when the command is run on hosts host1 and host2. In this output, we see that the two hosts can communicate with each other. If host1 had received only its own loopback message, then multicast would not be working between these machines as currently configured.
bin/asadmin validate-multicast
Will use port 2048
Will use address 228.9.3.1
Will use bind interface null
Will use wait period 2,000 (in milliseconds)
Listening for data...
Sending message with content "host1" every 2,000 milliseconds
Received data from host1 (loopback)
Received data from host2
Exiting after 20 seconds. To change this timeout, use the --timeout command line option.
Command validate-multicast executed successfully.
In the above, the default values were used. When diagnosing a potential issue between two instances, use the subcommand parameters to specify the same multicast address and port that are being used by the instances.
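For example, a diagnostic run pinned to a cluster's settings might look like the following (a hedged sketch: the address and port are the defaults shown above, and the option names should be verified against the output of asadmin validate-multicast --help on your installation):

```shell
# Run the same command on each machine under test, using the
# multicast address and port configured for the cluster:
bin/asadmin validate-multicast --multicastaddress 228.9.3.1 --multicastport 2048 --timeout 20
```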
<jvm-options>-Xmx1024m</jvm-options> <jvm-options>-Xms1024m</jvm-options>
You also need to be sure to add the <distributable /> tag to your web application's web.xml file. This tag identifies the application as being cluster-capable. The requirement to insert the <distributable /> tag is a reminder to test your application in a cluster environment before deploying it to a cluster. Some applications work well when deployed to a single instance but fail when deployed to a cluster. For example, before an application can be successfully deployed in a cluster, any objects that become part of the application's HTTP session must be serializable so that their states can be preserved across a network. Non-serializable objects may work when deployed to a single server instance but will fail in a cluster environment. Examine what goes into your session data to ensure that it will work correctly in a distributed environment.
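A minimal sketch of the relevant part of web.xml (the schema version and surrounding elements may differ in your application):

```xml
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <!-- Marks the application as cluster-capable so sessions may be replicated -->
    <distributable/>
</web-app>
```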
Java EE web distribution, including the Java EE web profile that supports web applications

Java EE SDK, which includes GlassFish (either full or web profile) as well as Java EE samples

These distributions are available either in English or with multiple languages included. The installation types include an executable graphical installer for Windows, an executable graphical installer for Unix or Unix-like platforms, and a ZIP file containing an installation image. To install the ZIP file form of the GlassFish Application Server:

1. Type the following command:
unzip -q filename.zip
For example:
unzip -q glassfish-3.1.zip
2. This unpacks GlassFish into a glassfish3 installation directory. The installation image is already configured with a domain called domain1, which supports clustering.2

To install the GlassFish Application Server using an executable installer, run the download file and enter the requested information. The installer allows you to choose the installation directory, choose whether the update tool should be included, and choose whether to create an initial domain.
Domain Examination
You can learn about and manage domains from the CLI (the asadmin command) or the GUI (the GlassFish Server Administration Console).

Examining Domains From the Command-Line Interface

The installation step created a glassfish/domains subdirectory in the installation directory. This directory stores all the GlassFish domains. You can interact with domains from the CLI with the asadmin command, located in the bin subdirectory beneath the installation directory. The asadmin command can be used in batch or interactive mode. For example, you can list all domains and their statuses with the following command:
bin/asadmin list-domains
If you haven't started domain1 yet, the above command issues the following output:
http://hostname:port
The default port is 4848. For example:
http://kindness.example.com:4848
If the browser is running on the machine on which the Application Server was installed, specify localhost for the host name. On Windows, start the Application Server Administration Console from the Start menu.
With the default configuration, which has no password for the admin user, the browser will be directed to the home page for the console:
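The session that follows assumes the domain is running and that a cluster named cluster1 already exists. As a hedged sketch, those prerequisites can be set up with:

```shell
# Start the default domain, then create the cluster that the
# instances below will join:
bin/asadmin start-domain domain1
bin/asadmin create-cluster cluster1
```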
$ bin/asadmin create-local-instance --cluster cluster1 instance1
Rendezvoused with DAS on localhost:4848.
Port Assignments for server instance instance1:
JMX_SYSTEM_CONNECTOR_PORT=28686
JMS_PROVIDER_PORT=27676
HTTP_LISTENER_PORT=28080
ASADMIN_LISTENER_PORT=24848
JAVA_DEBUGGER_PORT=29009
IIOP_SSL_LISTENER_PORT=23820
IIOP_LISTENER_PORT=23700
OSGI_SHELL_TELNET_PORT=26666
HTTP_SSL_LISTENER_PORT=28181
IIOP_SSL_MUTUALAUTH_PORT=23920
Command create-local-instance executed successfully.

$ asadmin create-local-instance --cluster cluster1 instance2
Rendezvoused with DAS on localhost:4848.
Using DAS host localhost and port 4848 from existing das.properties for node localhost-domain1.
To use a different DAS, create a new node using create-node-ssh or create-node-config.
Create the instance with the new node and correct host and port:
asadmin --host das_host --port das_port create-local-instance --node node_name instance_name.
Port Assignments for server instance instance2:
JMX_SYSTEM_CONNECTOR_PORT=28687
JMS_PROVIDER_PORT=27677
HTTP_LISTENER_PORT=28081
ASADMIN_LISTENER_PORT=24849
JAVA_DEBUGGER_PORT=29010
IIOP_SSL_LISTENER_PORT=23821
IIOP_LISTENER_PORT=23701
OSGI_SHELL_TELNET_PORT=26667
HTTP_SSL_LISTENER_PORT=28182
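Before the instances will report as running, the cluster has to be started; a single command run against the same DAS starts every instance in it:

```shell
# Start all instances in the cluster with one command:
bin/asadmin start-cluster cluster1
```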
$ bin/asadmin list-instances -l
NAME       HOST       PORT   PID    CLUSTER   STATE
instance1  localhost  24848  15421  cluster1  running
instance2  localhost  24849  15437  cluster1  running
Command list-instances executed successfully.

$ bin/asadmin collect-log-files --target cluster1
Log files are downloaded for instance1.
Log files are downloaded for instance2.
Created Zip file under /scratch/trm/test/glassfish3/glassfish/domains/domain1/collected-logs/log_2011-02-24_08-3225.zip.
Command collect-log-files executed successfully.
This last command collects the log files from the instances in the cluster. For complete information about the cluster, it is also recommended to look at the DAS log file. To stop the cluster, use the asadmin stop-cluster command.
Conclusion
The GlassFish version 3.1 Application Server provides a flexible clustering architecture composed of administrative domains, domain administration servers, server instances, and physical machines. The architecture combines ease of use with a high degree of administrative control to improve high availability and horizontal scalability.

High availability - Multiple server instances, capable of sharing state, minimize single points of failure, particularly when combined with load balancing schemes. In-memory replication of server session data minimizes disruption for users when a server instance fails.

Horizontal scalability - As user load increases, additional machines, server instances, and clusters can be added and easily configured to handle the increasing load. GMS eases the administrative burden of maintaining a high-availability cluster.
Acknowledgments
Thank you to Kedar Mhaswade, Prashanth Abbagani, and Rick Palkovic, who authored the original article about clustering in GlassFish 2 upon which this article was based.
References
Oracle GlassFish Server 3.1 High Availability Administration Guide
Oracle GlassFish Server 3.1 Collection of Guides
Download Page for GlassFish Community
The Aquarium - GlassFish Community Blog
Oracle GlassFish Server page with links to the support offerings
Java EE At a Glance - Overview of Java EE with downloads, documentation, and training
1 The node agent entity that was available in GlassFish 2 has been replaced by the node entity in GlassFish 3.1.
2 GlassFish 3.1 no longer has the concept of domain profiles, such as developer and cluster, from GlassFish 2. Any domain can support clustering or any other feature as long as the necessary software modules are installed.