Webmethods - Interview Questions


Some questions are project specific:

1. Name of the project (also give a summary)
2. Explain the key words in the name of the project
3. Why this project?
4. Your contribution
5. Advantages
6. Disadvantages
7. Explain by drawing a block diagram (if possible)
8. Explain the project.

Which webMethods components have you worked on?

Answer:

1) Designer

2) Developer

3) MWS

4) Native Brokers

5) Pub/Sub models

6) Integration Server

7) Triggers, adapters, and file polling ports

What Is the Pipeline?


Answer:

The pipeline is the general term for the data structure in which the input and
output values of a flow service are maintained. It allows the services in a
flow to share data.

How does a JDBC notification work? Explain.

Answer:
An adapter notification monitors a specified database table for changes, such
as an insert, update, or delete operation, so that the appropriate Java or flow
services can make use of the data, for example by sending an invoice or
publishing it to the Integration Server.

PUBLIC
JDBC Adapter notifications are polling-based. That is, the Integration Server
will invoke the notification periodically based on the polling interval that you
specify when you schedule the notification.

Enabled: The polling notification performs as scheduled.

Suspended: The polling notification is removed from the scheduler, but the
database trigger and buffer table are not dropped.

Disabled: The polling notification is removed from the scheduler and the
database trigger and buffer table are dropped.

Adapter Notification Templates


The JDBC Adapter provides the following adapter notification templates:

* Insert Notification – Publishes notification of insert operations on a database table.

* Update Notification – Publishes notification of update operations on a database table.

* Delete Notification – Publishes notification of delete operations on a database table.

* Basic Notification – Polls a database table for data using a SQL SELECT operation.

* Stored Procedure Notification – Publishes notification data by calling a stored procedure inside a database.

* Ordered Notification – Publishes notification data for multiple insert, update, or delete operations on multiple tables for a given database.

Most adapter notifications, such as Insert Notifications and Update
Notifications, can use the Exactly Once notification feature. This feature
ensures that notification data will not be duplicated even if a failure occurs
during processing. It works by assigning a unique ID to each publishable
document: after a processing failure, the Integration Server checks storage
for duplicate records and ignores any duplicate IDs.

Note: Stored Procedure Notifications do not support the Exactly Once
notification feature because they do not use publishable document unique IDs.
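The duplicate check described above can be sketched as follows. This is a conceptual illustration only, not the Integration Server's actual implementation; the class and method names are invented:

```java
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch of Exactly Once deduplication: each publishable document
// carries a unique ID, and redeliveries with an already-seen ID are ignored.
public class ExactlyOnceSketch {
    private final Set<String> processedIds = new HashSet<>();

    // Returns true if the document is processed, false if it is a duplicate.
    public boolean process(String documentUuid) {
        if (processedIds.contains(documentUuid)) {
            return false; // duplicate ID found in storage: ignore it
        }
        processedIds.add(documentUuid);
        return true;
    }

    public static void main(String[] args) {
        ExactlyOnceSketch store = new ExactlyOnceSketch();
        System.out.println(store.process("doc-1")); // first delivery
        System.out.println(store.process("doc-1")); // redelivery after failure
    }
}
```

This also shows why Stored Procedure Notifications cannot participate: without a unique ID per document, there is nothing to check against storage.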

Notification Types
There are six types of notifications: Insert, Update, Delete, Basic, Stored
Procedure, and Ordered Notifications. They vary in how they are structured
and operate, as described in the following sections.
Insert Notifications, Update Notifications, and Delete Notifications use a
combination of triggers and buffer tables to capture events that happen on
specific tables in a database. You configure the triggers and buffer tables
when you configure the notifications.
These types of notifications operate similarly, differing only in the type of
SQL operation (insert, update, or delete) that they monitor. The adapter
creates the trigger and buffer table when you enable a notification. The buffer
table, which you specified when you configured the notification, holds the data
selected by the trigger; there are no special size constraints for buffer
tables. The trigger monitors the database table you specified when you
configured the notification and inserts data into the buffer table. When you
disable a notification, the adapter drops the trigger and buffer table.

When you enable a notification, the database trigger monitors the table and
inserts the data into the buffer table. When the Integration Server invokes the
notification, it retrieves the rows of data from the buffer table, publishes
each row in the notification's publishable document, and then removes that row
from the buffer table.

After you enable these types of notifications, the trigger and buffer table
remain in the database when you:

* Shut down the Integration Server
* Disable the package containing the enabled notification
* Reload the package containing the enabled notification
* Suspend the notification
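The trigger-plus-buffer-table mechanism can be sketched as a small simulation. This is not the adapter's real implementation; the names are illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Conceptual sketch: a database trigger copies changed rows into a buffer
// table; each scheduled polling cycle drains the buffer, publishing one
// document per row and removing the row.
public class BufferTablePollSketch {
    private final Queue<String> bufferTable = new ArrayDeque<>();

    // Simulates the database trigger firing on an insert/update/delete.
    public void onTableChange(String row) {
        bufferTable.add(row);
    }

    // Simulates one polling cycle of the notification.
    public List<String> poll() {
        List<String> published = new ArrayList<>();
        String row;
        while ((row = bufferTable.poll()) != null) { // remove row from buffer
            published.add(row);                      // publish it as a document
        }
        return published;
    }

    public static void main(String[] args) {
        BufferTablePollSketch n = new BufferTablePollSketch();
        n.onTableChange("row1");
        System.out.println(n.poll()); // publishes the buffered row
    }
}
```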

Types of JDBC transactions and their working methods.

Answer:

NO_TRANSACTION – the connection automatically commits each operation.

LOCAL_TRANSACTION – the connection uses local database transactions, allowing
explicit commit or rollback.

XA_TRANSACTION – the connection takes part in XA (two-phase commit)
transactions that can span multiple resources.

When the Broker goes down, what happens to the documents?

Answer:

If the document is guaranteed, it is stored in the outbound document store; if
it is volatile, it is discarded.

Web services: how do you differentiate provider and consumer.

Answer:

1) By the Properties panel

2) By the icon (symbol)

What are RESTful services?

Answer:

REST stands for Representational State Transfer, an architectural style for
networked hypermedia applications. It is primarily used to build Web services
that are lightweight, maintainable, and scalable. A service based on REST is
called a RESTful service.

What is the difference between SOAP web services and REST?

Answer:

1) SOAP: An XML-based message protocol. REST: An architectural style.

2) SOAP: Uses WSDL for communication between consumer and provider. REST: Uses XML or JSON to send and receive data.

3) SOAP: Invokes a service by calling RPC methods. REST: Simply calls a service via its URL path.

4) SOAP: Does not return a human-readable format. REST: The result is readable, plain XML or JSON.

5) SOAP: Transfers over HTTP and also other protocols such as SMTP, FTP, etc. REST: Transfers over HTTP only.

6) SOAP: JavaScript can call SOAP, but it is difficult to implement. REST: Easy to call from JavaScript.

7) SOAP: Performance is not as good. REST: Performance is much better – less CPU use, leaner code, lightweight.

How do you get thread dumps?

Answer:

Following are the steps for generating a Java thread dump on Unix:

Note the process ID of the Java process (e.g. using top, or grep on ps output).

Example: ps -ef | grep -i java

Send a QUIT signal to the process with the kill -QUIT or kill -3 command.

Example: kill -3 <JAVA_PID>
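For comparison, a JVM can also capture its own thread dump programmatically via the standard `Thread.getAllStackTraces()` API. A minimal sketch (class name is illustrative):

```java
import java.util.Map;

// Prints every live thread's name and stack, similar in spirit to the
// output kill -3 sends to the JVM's stdout.
public class ThreadDumpSketch {
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e :
                Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\"\n");
            for (StackTraceElement frame : e.getValue()) {
                sb.append("\tat ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```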

When you are trying to send a transaction to a target system and the target
system is down, how do you ensure that no data is lost?

Answer:

Use a guaranteed document. The document will be stored in the client queue
until it can be delivered.

JMS triggers
Answer:

1) A JMS trigger is a trigger that receives messages from a destination (queue
or topic) on a JMS provider and then processes those messages.

2) A JMS message is a Broker document; its document type is related to that
of the topic or queue being published.

3) The webMethods Broker implementation of JMS supports the JMS
publish-subscribe (pub-sub) and point-to-point (PTP) messaging models.

4) Pub-Sub Messaging

In pub-sub messaging, message producers publish messages and message
subscribers consume those messages. If multiple message consumers
subscribe to the same topic, each consumer is given a copy of the message.

5) Point-to-Point Messaging

In JMS PTP messaging, a message producer sends messages to a message
queue, and a message consumer retrieves messages from that queue. A JMS
message queue is implemented as a guaranteed Broker client with the same
name.
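The difference between the two models can be sketched in a few lines. This is a conceptual simulation, not the JMS API; all names are invented:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Conceptual sketch: pub-sub copies each message into every subscriber's
// queue; point-to-point hands each queued message to exactly one consumer.
public class MessagingModelsSketch {
    private final Map<String, Queue<String>> subscriberQueues = new HashMap<>();
    private final Queue<String> ptpQueue = new ArrayDeque<>();

    public void subscribe(String subscriber) {
        subscriberQueues.put(subscriber, new ArrayDeque<>());
    }

    // Pub-sub: every subscriber receives its own copy of the message.
    public void publish(String message) {
        for (Queue<String> q : subscriberQueues.values()) {
            q.add(message);
        }
    }

    public String receive(String subscriber) {
        return subscriberQueues.get(subscriber).poll();
    }

    // Point-to-point: the sender queues, and one receiver dequeues.
    public void send(String message) { ptpQueue.add(message); }

    public String receiveFromQueue() { return ptpQueue.poll(); }

    public static void main(String[] args) {
        MessagingModelsSketch m = new MessagingModelsSketch();
        m.subscribe("A");
        m.publish("hello");
        System.out.println(m.receive("A"));
    }
}
```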

6)Durable Subscribers

The JMS pub-sub model supports durable subscribers. A durable subscriber's
subscription remains valid until explicitly removed by the client program;
thus, a durable subscriber survives both client connection failures and server
restarts.

What are JMS Connection Factories?

Answer:

A connection factory is the object a client uses to create a connection to a
provider. A connection factory encapsulates a set of connection configuration
parameters that has been defined by an administrator. Each connection
factory is an instance of the ConnectionFactory, QueueConnectionFactory,
or TopicConnectionFactory interface.

At the beginning of a JMS client program, you usually inject a connection
factory resource into a ConnectionFactory object. For example, the following
code fragment specifies a resource whose JNDI name is
jms/ConnectionFactory and assigns it to a ConnectionFactory object:

@Resource(lookup = "jms/ConnectionFactory")
private static ConnectionFactory connectionFactory;

In a Java EE application, JMS administered objects are normally placed in the
jms naming subcontext.

What is durable
subscription?
Answer:

To ensure that a pub/sub application receives all published messages, use
PERSISTENT delivery mode for the publishers. In addition, use durable
subscriptions for the subscribers.
A durable subscriber registers a durable subscription by specifying a unique
identity that is retained by the JMS provider. Subsequent subscriber objects
that have the same identity resume the subscription in the state in which it
was left by the preceding subscriber. If a durable subscription has no active
subscriber, the JMS provider retains the subscription’s messages until
they are received by the subscription or until they expire.

What are the ways to send documents from source to destination?

Answer:

(i) Unicast – one source and one known destination (point to point).

(ii) Multicast – one source and multiple, limited, known destinations. You
need to provide a range of IP addresses to send a message via multicast,
e.g. 192.168.35.200-255.

(iii) Publishing – one source and multiple destinations (one to many).

Have you been involved in any migration project? If yes, did you face any
issues at the server level?

Answer:

1) Jar files

2) Internal DB (scheduler missed, or next run time not visible)

3) Thread hung

4) JDBC adapter not suspending

How do you deploy code from one environment to another?

Answer:

1) Use Deployer (Define, Build, Map, Deploy)

2) Install Inbound Release

What is the default cache for webMethods?

Answer:

The default cache for webMethods is Ehcache.

What are the different settings of the REPEAT counter?

Answer:

The REPEAT step's Count property specifies the maximum number of times
the server re-executes the child steps in the REPEAT step.

If you set Count to:

* 0 – the REPEAT step does not re-execute the children.
* Any value > 0 – re-executes the children up to this number of times.
* -1 or blank – re-executes the children as long as the specified Repeat on
  condition is true.
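The Count semantics above can be sketched as follows. This is a conceptual model of the rule, not Integration Server internals; the method counts total child executions given how many times the Repeat-on condition would evaluate true:

```java
// Conceptual sketch of REPEAT Count semantics:
//   count == 0  -> children run once, never re-executed;
//   count > 0   -> up to `count` re-executions while the condition holds;
//   count == -1 -> re-executed for as long as the condition holds.
public class RepeatCountSketch {
    // Returns the total number of times the child steps execute.
    public static int executions(int count, int conditionTrueEvaluations) {
        int runs = 1;    // the children always execute at least once
        int retries = 0; // re-executions performed so far
        while ((count < 0 || retries < count)
                && retries < conditionTrueEvaluations) {
            runs++;
            retries++;
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(executions(0, 5));  // Count 0: no re-execution
        System.out.println(executions(3, 5));  // Count 3: capped re-executions
        System.out.println(executions(-1, 2)); // Count -1: condition-driven
    }
}
```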

How is a flow service saved in Integration Server?

Answer:

The flow service is saved as XML on disk, i.e. when it is not in use or if you
want a local copy. But when it is loaded into the Integration Server, it is
loaded as Java.

How do you check CPU usage?

Answer:

On AIX systems, you can also use the topas command.

How do you check memory usage?

Answer:

1) In the Administrator page

2) The top command

3) The free command (e.g. free -m)

Explain, end to end, how Trading Networks works when a transaction is
submitted.

Answer:

Trading Networks uses the information you specify at design time to process a
document at run time. It uses:

DESIGN TIME

* Sender's profiles, to ensure the user sending the document is an active
partner in your network.

* Receiver's profiles, to obtain information specific to the receiving partner
for processing the document (e.g., the partner's HTTP host name and port
number if delivering a document via HTTP).

* TN document types, to recognise the type of document that was sent and to
determine the document attributes to associate with the document.

* Processing rules, to determine the actions you want Trading Networks to
perform against the inbound document.

The run-time processing that Trading Networks performs for an inbound
document can be divided into four areas:

RUN TIME PROCESSING

* Recognition processing: determining the TN document type that matches the
inbound document using the identification information that you defined in TN
document types, and, after locating the matching TN document type, obtaining
the values of the document attributes that you specified in the TN document
type.

* Processing rule selection: determining the processing rule to use for the
inbound document based on the criteria that you defined in processing rules.

* Pre-processing actions: performing the pre-processing actions that you
defined in the TN document type and/or processing rule.

* Processing actions: performing the processing actions that you defined in
the processing rule.

Pre-processing actions include: verify digital signature, validate document
structure, check for duplicates, save.

Processing actions include: execute a service, send an alert e-mail, change
the user status, deliver the document to the receiver, respond with a message.

DELIVERY TIME PROCESSING

Trading Networks can deliver documents using one of the following delivery
options that you specify with the Deliver Document By processing action in a
processing rule:

* Immediate delivery. Trading Networks attempts to deliver a document
directly to the receiving partner. You can create immediate delivery methods
using the standard delivery methods such as HTTP and FTP. In addition, you
can create immediate delivery methods using custom delivery services.

* Scheduled delivery. Trading Networks queues documents to be delivered at
scheduled times. You define scheduled delivery queues in Trading Networks.
When you define the queue, you associate both a schedule and a scheduled
delivery service with the queue. At the time(s) the schedule indicates, Trading
Networks invokes the scheduled delivery service to act on the documents in
the queue and deliver them. Trading Networks provides one built-in scheduled
delivery service; you can add additional scheduled delivery services to meet
your needs.

* Queued for polling. Trading Networks places the document in an
internally-defined queue. The receiving partner later polls for documents, and
Trading Networks returns all the documents in the queue for which that partner
is the receiver.

* Receiver's preferred protocol. Trading Networks looks up the receiver's
profile and uses the delivery method identified there as the preferred delivery
method. The preferred delivery method can be any of the immediate delivery
methods, scheduled delivery, or queued for polling.

What are public queues
Answer:

A queue that you define to schedule the delivery of documents that are aimed
at multiple different receiving partners. When you define a public queue, the
name of the public queue is added to the list of queues you can select when
specifying a scheduled delivery method with the Deliver Document By
processing action.

What is the Scope property in a Branch step?

Answer:

The Scope property is used to specify the name of a document in the pipeline,
to restrict pipeline access to only the data in that document.

How do you access a flow service from a browser?

Answer:

Use a URL in the form:

http://servername:port/invoke/folder.subFolder.subsubFolder/serviceName

(the package name is not part of the URL in any way)

What is the Copy Condition property in webMethods?

Answer:

i) We can associate a condition with a link between two variables in the
pipeline tab of a MAP step (generally used with a Branch step).

If the condition is true, then the source variable's value is copied into the
other variable; otherwise it is not copied.

To accomplish this, set the Copy Condition property to TRUE and write the
condition you want to check in the copy condition text box in the Properties
panel.

The link appears in blue in the mapping.

ii) Copy condition allows linking two different input variables to one single
output variable. The value of the output variable is selected based on which
condition matches the criteria set in the Copy Condition property. When a copy
condition is set, the link appears blue in the pipeline.
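The behaviour can be sketched as a conditional copy over a pipeline map. This is a conceptual illustration only, not the mapping engine; the class and the `Predicate`-based condition are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

// Conceptual sketch of Copy Condition: the source value is copied to the
// target variable only when the condition evaluates to true.
public class CopyConditionSketch {
    // Copies pipeline[from] into pipeline[to] only if the condition matches.
    public static void link(Map<String, String> pipeline, String from,
                            String to, Predicate<Map<String, String>> condition) {
        if (condition.test(pipeline)) {
            pipeline.put(to, pipeline.get(from));
        }
    }

    public static void main(String[] args) {
        Map<String, String> pipeline = new HashMap<>();
        pipeline.put("type", "A");
        pipeline.put("in", "x");
        // copy only when type == "A"
        link(pipeline, "in", "out", p -> "A".equals(p.get("type")));
        System.out.println(pipeline.get("out"));
    }
}
```

Linking two inputs to one output (case ii above) is then just two `link` calls with mutually exclusive conditions.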

What are data and transient errors?

Answer:

Data error – an error that arises from invalid data, an illegal character, or
an invalid data format.

Transient Error – an error that arises from a condition that might be resolved
quickly, such as the unavailability of a resource due to network issues or
failure to connect to a database. You can use webMethods Monitor to find and
resubmit documents with a status of FAILED.
Ex. java.net.SocketTimeoutException.

What is a startup service?

Answer:

A startup service is one that Integration Server automatically executes when it
loads a package into memory.

How many types of clustering are there?

Answer:

(i) Stateless – if one server in the cluster goes down during request
processing, the data is lost.

(ii) Stateful – no data loss. It requires the following for clustering: the
same OS, database, ports, adapters, time zone, messaging, connections, and
prefix.

What is the difference between dropping and deleting a pipeline variable?

Answer:

Drop is an explicit cleanup: it is a request for the pipeline to remove a
variable from the available list of variables and make the object it refers to
available for garbage collection by the Java Virtual Machine.

Delete is purely a design-time operation that removes the variable from the
current view. It is only of use if you have created a variable that you didn't
mean to create. If you delete a variable that was there because it was
previously in the pipeline, the variable will appear again when you change the
view in Developer.


What is a lexical operator in a trigger filter condition?

Answer:

You can use the lexical relational operators to create filters that compare
string values.

How do you invoke a flow service inside a Java service?

Answer:

Use Service.doInvoke with the fully qualified service name, e.g.
Service.doInvoke(NSName.create("folder.subFolder:serviceName"), pipeline).

How do you publish documents to the Broker?

Answer:

Step 1: A publishing service on the Integration Server sends a document to
the dispatcher (or an adapter notification publishes a document when an event
occurs on the resource the adapter monitors). Before the Integration Server
sends the document to the dispatcher, it validates the document against its
publishable document type. If the document is not valid, the service returns
an exception specifying the validation error.

Step 2: The dispatcher obtains a connection from the connection pool. The
connection pool is a reserved set of connections that the Integration Server
uses to publish documents to the Broker. To publish a document to the
Broker, the Integration Server uses a connection for the default client.

Step 3: The dispatcher sends the document to the Broker.

Step 4: The Broker examines the storage type for the document to determine
how to store the document.

* If the document is volatile, the Broker stores the document in memory.

* If the document is guaranteed, the Broker stores the document in memory

and on disk.

Step 5: The Broker routes the document to subscribers by doing one of the
following:

* If the document was published (broadcast), the Broker identifies subscribers
and places a copy of the document in the client queue for each subscriber.

* If the document was delivered, the Broker places the document in the queue
for the client specified in the delivery request.

* If there are no subscribers for the document, the Broker returns an
acknowledgement to the publisher and then discards the document.

A document remains in the queue on the Broker until it is picked up by the
subscribing client. If the time-to-live for the document elapses, the Broker
discards the document.

Step 6: If the document is guaranteed, the Broker returns an
acknowledgement to the dispatcher to indicate successful receipt and storage
of the document. The dispatcher returns the connection to the connection pool.

Step 7: The Integration Server returns control to the publishing service, which
executes the next step.
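Step 4's storage decision can be sketched in a few lines. This is a conceptual model, not Broker internals; the class and field names are invented:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of the Broker's storage decision: every document is
// held in memory; a guaranteed document is additionally written to disk,
// which is why it survives a Broker restart while a volatile one does not.
public class BrokerStorageSketch {
    final List<String> memoryStore = new ArrayList<>();
    final List<String> diskStore = new ArrayList<>();

    public void store(String document, boolean guaranteed) {
        memoryStore.add(document);   // every document is held in memory
        if (guaranteed) {
            diskStore.add(document); // guaranteed documents also go to disk
        }
    }

    public static void main(String[] args) {
        BrokerStorageSketch broker = new BrokerStorageSketch();
        broker.store("volatileDoc", false);
        broker.store("guaranteedDoc", true);
        System.out.println(broker.diskStore); // only the guaranteed document
    }
}
```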

It is possible that a document could match more than one condition in a
trigger. However, the Integration Server executes only the service associated
with the first matched condition.
Answer:

The IS sorts the trigger conditions, but when two or more conditions are the
same it cannot decide which one to choose, so by default it uses the first
condition.

Trigger service processing mode (serial or concurrent)
Answer:

The processing mode for a trigger determines whether the Integration Server
processes documents in a trigger queue serially or concurrently. In serial
processing, the Integration Server processes the documents one at a time in
the order in which the documents were placed in the trigger queue. In
concurrent processing, the Integration Server processes as many documents
as it can at one time, but not necessarily in the same order in which the
documents were placed in the queue.
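The two modes can be sketched with a standard executor. This is a conceptual analogy, not IS internals: a single-threaded executor models serial processing (strict FIFO completion); swapping in a multi-thread pool would model concurrent processing, where completion order is no longer guaranteed:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Conceptual sketch: serial trigger processing as a single-threaded executor,
// so documents complete strictly in arrival order.
public class TriggerProcessingSketch {
    public static List<String> processSerially(List<String> queue) {
        List<String> done = Collections.synchronizedList(new ArrayList<>());
        ExecutorService serial = Executors.newSingleThreadExecutor();
        for (String doc : queue) {
            serial.submit(() -> done.add(doc)); // one document at a time, FIFO
        }
        serial.shutdown();
        try {
            serial.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done;
    }

    public static void main(String[] args) {
        System.out.println(processSerially(List.of("a", "b", "c")));
    }
}
```

Replacing `newSingleThreadExecutor()` with `newFixedThreadPool(n)` is the concurrent analogue: higher throughput, but no ordering guarantee.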

What are synchronous and asynchronous request/reply?

Answer:

In a synchronous request/reply, the publishing flow service stops executing
while it waits for a response. When the service receives a reply document from
the specified client, the service resumes execution.

In an asynchronous request/reply, the publishing flow service continues
executing after publishing the request document. That is, the publishing
service does not wait for a reply before executing the next step in the flow
service. The publishing flow service must invoke a separate service to retrieve
the reply document.
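The contrast can be sketched with `CompletableFuture`. This is a conceptual analogy, not the `pub.publish` implementation; `publishRequest` is an invented stand-in for a subscriber producing a reply document:

```java
import java.util.concurrent.CompletableFuture;

// Conceptual sketch: synchronous request/reply blocks the publisher until
// the reply arrives; asynchronous publishing returns immediately and the
// reply is retrieved later (cf. pub.publish:waitForReply).
public class RequestReplySketch {
    // Stand-in for the subscriber that eventually produces a reply document.
    static CompletableFuture<String> publishRequest(String request) {
        return CompletableFuture.supplyAsync(() -> "reply-to-" + request);
    }

    public static String synchronousRequestReply(String request) {
        return publishRequest(request).join(); // publisher blocks here
    }

    public static CompletableFuture<String> asynchronousRequest(String request) {
        return publishRequest(request);        // publisher keeps executing
    }

    public static void main(String[] args) {
        System.out.println(synchronousRequestReply("order-1"));
        CompletableFuture<String> pending = asynchronousRequest("order-2");
        // ... other flow steps run here while the reply is outstanding ...
        System.out.println(pending.join());    // separate retrieval step
    }
}
```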

What is the subscribe path for documents delivered to the default client?

Answer:

Step 1: The dispatcher on the Integration Server requests documents from
the default client's queue on the Broker.

Note: The default client is the Broker client created for the Integration
Server. The Broker places documents in the default client's Broker queue
only if the publisher delivered the document to the Integration Server's
client ID.

Step 2: The thread retrieves documents delivered to the default client in
batches. The number of documents the thread retrieves at one time is
determined by the capacity and refill level of the default document store and
the number of documents available for the default client on the Broker.

Step 3: The dispatcher places a copy of the documents in memory in the
default document store.

Step 4: The dispatcher identifies subscribers to the document and routes a
copy of the document to each subscriber's trigger queue. In the case of
delivered documents, the Integration Server saves the documents to a trigger
queue. The trigger queue is located within a trigger document store that is
saved on disk.

Step 5: The Integration Server removes the copy of the document from the
default document store and, if the document is guaranteed, returns an
acknowledgement to the Broker. The Broker removes the document from the
trigger client queue.

Step 6: The dispatcher obtains a thread from the server thread pool, pulls the
document from the trigger queue, and evaluates the document against the
conditions in the trigger.

Note: If exactly-once processing is configured for the trigger, the Integration
Server first determines whether the document is a duplicate of one already
processed by the trigger. The Integration Server continues processing the
document only if the document is new.

Step 7: If the document matches a trigger condition, the Integration Server
executes the trigger service associated with that condition. If the document
does not match a trigger condition, the Integration Server sends an
acknowledgement to the trigger queue, discards the document (removes it
from the trigger queue), and returns the server thread to the server thread
pool. The Integration Server also generates a journal log message stating that
the document did not match a condition.

Step 8: After the trigger service executes to completion (success or error),
one of the following occurs:

* If the trigger service executed successfully, the Integration Server returns
an acknowledgement to the trigger queue (if this is a guaranteed document),
removes the document from the trigger queue, and returns the server thread
to the thread pool.

* If a service exception occurs, the trigger service ends in error and the
Integration Server rejects the document, removes the document from the
trigger queue, returns the server thread to the thread pool, and sends an error
document to indicate that an error has occurred. If the document is
guaranteed, the Integration Server returns an acknowledgement to the trigger
queue. The trigger queue removes its copy of the guaranteed document from
storage.

* If a transient error occurs during trigger service execution and the service
catches the error, wraps it and re-throws it as a run-time exception, then the
Integration Server waits for the length of the retry interval and re-executes the
service using the original document as input. If the Integration Server reaches
the maximum number of retries and the trigger service still fails because of a
transient error, the Integration Server treats the last failure as a service error.

What is local publishing?

Answer:

Local publishing refers to the process of publishing a document within the
Integration Server. Only subscribers located on the same Integration Server
can receive and process the document. In local publishing, the document
remains within the Integration Server; there is no Broker involvement. Local
publishing occurs when the service that publishes the document specifies that
the document should be published locally, or when the Integration Server is
not configured to connect to a Broker.

Note: If the dispatcher detects that the document is a duplicate of one already
in the trigger queue, the dispatcher discards the document. The dispatcher
detects duplicate documents using the universally unique identifier (UUID) for
the document.

What is Publishable
Document Type?
Answer:

A publishable document type is a named, schema-like definition that describes
the structure and publication properties of a particular kind of document.
Essentially, a publishable document type is an IS document type with
specified publication properties, such as storage type and time-to-live.

What are adapter notifications?

Answer:

Adapter notifications determine whether an event has occurred on the
adapter's resource and then send the notification data to the Integration
Server in the form of a published document. There are two types of adapter
notifications: polling notifications, which poll the resource for events that
occur on the resource, and listener notifications, which work with listeners to
detect and process events that occur on the adapter resource. For example, if
you are using the JDBC Adapter and a change occurs in a database table that
an adapter notification is monitoring, the adapter notification publishes a
document containing data from the event and sends it to the Integration
Server.

What is Time-to-live for a
publishable document?
Answer:

The time-to-live value for a publishable document type determines how long
instances of that document type remain on the Broker. The time-to-live
commences when the Broker receives a document from a publishing
Integration Server. If the time-to-live expires before the Broker delivers the
document and receives an acknowledgement of document receipt, the Broker
discards the document. This happens for volatile as well as guaranteed
documents.
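The expiry rule can be sketched as a single check. This is a conceptual illustration, not Broker internals; treating a TTL of 0 as "never expires" is an assumption of this sketch:

```java
// Conceptual sketch of document time-to-live: the clock starts when the
// Broker receives the document; once the TTL elapses before delivery is
// acknowledged, the document is discarded (volatile or guaranteed alike).
public class TimeToLiveSketch {
    // Returns true if the Broker should discard the document.
    // A ttl of 0 is treated here as "live forever" (sketch assumption).
    public static boolean expired(long receivedAtMillis, long ttlMillis,
                                  long nowMillis) {
        return ttlMillis > 0 && nowMillis - receivedAtMillis >= ttlMillis;
    }

    public static void main(String[] args) {
        System.out.println(expired(0, 1000, 500));  // still within the TTL
        System.out.println(expired(0, 1000, 1500)); // TTL elapsed: discard
    }
}
```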

What is Publishing
Services?
Answer:

Using the publishing services, you can create services that publish or deliver
documents locally or to the Broker. The publishing services are located in the
WmPublic package.

Service – Description

pub.publish:deliver – Delivers a document to a specified destination.

pub.publish:deliverAndWait – Delivers a document to a specified destination and waits for a response.

pub.publish:publish – Publishes a document locally or to a configured Broker. Any clients (triggers) with subscriptions to documents of this type will receive the document.

pub.publish:publishAndWait – Publishes a document locally or to a configured Broker and waits for a response. Any clients (triggers) with subscriptions for the published document will receive the document.

pub.publish:reply – Delivers a reply document in answer to a document received by the client.

pub.publish:waitForReply – Retrieves the reply document for a request published asynchronously.

Explain triggers in detail.

Answer:

Triggers establish subscriptions to publishable document types and specify
how to process instances of those publishable document types. When you
build a trigger, you create one or more conditions. A condition associates one
or more publishable document types with a single service. The publishable
document type acts as the subscription piece of the trigger; the service is the
processing piece. When the trigger receives documents to which it subscribes,
the Integration Server processes the document by invoking the service
specified in the condition. Triggers can contain multiple conditions.

* The trigger contains at least one condition.

* Each condition in the trigger specifies a unique name.

* Each condition in the trigger specifies a service.

* Each condition in the trigger specifies one or more publishable document
types.

What is a join time-out?

Answer:

When you create a join condition (a condition with two or more publishable
document types), you need to specify a join time-out. A join time-out
specifies how long the Integration Server waits for the other documents in
the join condition. The Integration Server uses the join time-out period to
avoid deadlock situations (such as waiting for a document that never arrives)
and to avoid duplicate service invocation. The Integration Server starts the
join time-out period when it pulls the first document that satisfies the join
condition from the trigger queue.
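An AND join with a time-out can be sketched as a small state machine. This is a conceptual model, not IS internals; the class, return values, and explicit clock parameter are invented for the example:

```java
import java.util.HashSet;
import java.util.Set;

// Conceptual sketch of a join time-out: the clock starts when the first
// document of the join arrives; the join fires only if every required
// document type arrives before the time-out elapses.
public class JoinTimeoutSketch {
    private final Set<String> required;
    private final Set<String> received = new HashSet<>();
    private final long timeoutMillis;
    private long firstArrival = -1;

    public JoinTimeoutSketch(Set<String> requiredDocTypes, long timeoutMillis) {
        this.required = requiredDocTypes;
        this.timeoutMillis = timeoutMillis;
    }

    // Returns "FIRED" when the join completes, "TIMED_OUT" if a document
    // arrives after the window closed, or "WAITING" otherwise.
    public String onDocument(String docType, long nowMillis) {
        if (firstArrival < 0) firstArrival = nowMillis; // clock starts here
        if (nowMillis - firstArrival > timeoutMillis) return "TIMED_OUT";
        received.add(docType);
        return received.containsAll(required) ? "FIRED" : "WAITING";
    }

    public static void main(String[] args) {
        JoinTimeoutSketch join = new JoinTimeoutSketch(Set.of("A", "B"), 100);
        System.out.println(join.onDocument("A", 0));  // waiting for B
        System.out.println(join.onDocument("B", 50)); // both arrived in time
    }
}
```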

What are trigger queue Capacity and Refill level?

Answer:

The Integration Server contains a trigger document store in which it saves


documents waiting for processing. The Integration Server assigns each trigger
a queue in the trigger document store. A document remains in the trigger
queue until the server determines which trigger condition the document
satisfies and then executes the service specified in that condition. You can
determine the capacity of each trigger’s queue in the trigger queue. The
Capacity indicates the maximum number of documents that the Integration
Server can store for that trigger. You can also specify a Refill level to indicate
when the Integration Server should retrieve more documents for the trigger.
The difference between the Capacity and the Refill level determines up to how
many documents the Integration Server retrieves for the trigger from the
Broker. For example, if you assign the trigger queue a Capacity of 10 and a
Refill level of 4, the Integration Server initially retrieves 10 documents for the
trigger. When only 4 documents remain to be processed in the trigger queue,
the Integration Server retrieves up to 6 more documents. If 6 documents are
not available, the Integration Server retrieves as many as possible.

In the Properties panel, under Trigger queue, in the Capacity property, type
the maximum number of documents that the trigger queue can contain. The
default is 10. In the Refill level property, type the number of unprocessed
documents that must remain in this trigger queue before the Integration Server
retrieves more documents for the queue from the Broker. The default is 4. The
Refill level value must be less than or equal to the Capacity value.
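As a rough illustration, the capacity/refill arithmetic described above can be modeled with a small function (a sketch only, not Integration Server code; the function name is made up):

```python
def documents_to_retrieve(capacity, refill_level, in_queue):
    """Model of the refill arithmetic: fetch nothing while the queue is
    above the refill level, otherwise top it back up to capacity."""
    if in_queue > refill_level:
        return 0  # still above the refill level; no fetch yet
    return capacity - in_queue  # retrieve up to (Capacity - remaining)

# With Capacity=10 and Refill level=4: 10 documents initially,
# then up to 6 more once only 4 remain in the queue.
```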

What is a Trigger Service Retry Limit, and how do you implement it?
Answer:

When building a trigger, you can specify a retry limit for the trigger service.
The retry limit indicates the maximum number of times the Integration Server
re-executes the trigger service when the trigger service fails because of a
run-time exception. A run-time exception (specifically, an ISRuntimeException)
occurs when the trigger service catches and wraps a transient error and
rethrows it as a run-time exception.
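The retry behavior can be sketched in a few lines of Python (an illustrative analogue of the mechanism, not the Integration Server implementation; the names are made up):

```python
class TransientError(Exception):
    """Stands in for a wrapped transient error (ISRuntimeException)."""

def run_with_retries(trigger_service, retry_limit):
    # Re-execute the trigger service up to retry_limit additional times
    # when it fails with a transient (run-time) error.
    attempts = 0
    while True:
        try:
            return trigger_service()
        except TransientError:
            attempts += 1
            if attempts > retry_limit:
                raise  # retry limit exhausted; give up
```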

What Is Document
Processing?
Answer:

Within the publish-and-subscribe model, document processing is the process
of evaluating documents against trigger conditions and executing the
appropriate trigger services to act on those documents. The processing used
by the Integration Server depends on the document storage type and the
trigger settings. The Integration Server offers three types of document
processing.

* At-least-once processing indicates that a trigger processes a document one
or more times. The trigger might process duplicates of the document. The
Integration Server provides at-least-once processing for guaranteed
documents.

* At-most-once processing indicates that a trigger processes a document once
or not at all. Once the trigger receives the document, processing is attempted
but not guaranteed. The Integration Server provides at-most-once processing
for volatile documents (which are neither redelivered nor acknowledged). The
Integration Server might process multiple instances of a volatile document, but
only if the document was published more than once.

* Exactly-once processing indicates that a trigger processes a document once
and only once. The trigger does not process duplicates of the document. The
Integration Server provides exactly-once processing for guaranteed
documents received by triggers for which exactly-once properties are
configured.

Note: At-least-once processing and exactly-once processing are types of
guaranteed processing. In guaranteed processing, the Integration Server
ensures that the trigger processes the document once it arrives in the trigger
queue. The server provides guaranteed processing for documents with a
guaranteed storage type.

The following section provides more information about how the Integration
Server ensures exactly-once processing.

Note: Guaranteed document delivery and guaranteed document processing
are not the same thing. Guaranteed document delivery ensures that a
document, once published, is delivered at least once to the subscribing
triggers. Guaranteed document processing ensures that a trigger makes one
or more attempts to process the document.

What is Redelivery Count?


Answer:

The redelivery count indicates the number of times the transport (the Broker
or, for a local publish, the transient store) has redelivered a document to the
trigger. The transport that delivers the document to the trigger maintains the
document redelivery count. The transport updates the redelivery count
immediately after the trigger receives the document. A redelivery count other
than zero indicates that the trigger might have received and processed (or
partially processed) the document before.

Difference between direct service invocation and REST service?
Answer:

(i) Direct invocation supports only the HTTP GET method; other HTTP methods
are not possible. In a REST service, all HTTP methods (GET, PUT, POST, and
DELETE) can be used.
(ii) Direct invocation cannot take inputs such as a document, XML, or JSON.
In a REST service, all types of input variables can be passed.

What is the Document History Database?
Answer:

The document history database maintains a history of the guaranteed
documents processed by triggers. The Integration Server adds an entry to the
document history database when a trigger service begins executing and when
it executes to completion (whether it ends in success or failure). The document
history database contains document processing information only for triggers
for which the Use history property is set to true.

The database saves the following information about each document:

* Trigger ID. Universally unique identifier for the trigger processing the
document.

* Document UUID. Universally unique identifier for the document. The
publisher is responsible for generating and assigning this number. (The
Integration Server automatically assigns a UUID to all the documents that it
publishes.)

* Processing Status. Indicates whether the trigger service executed to
completion or is still processing the document. An entry in the document
history database has either a status of "processing" or a status of
"completed." The Integration Server adds an entry with a "processing" status
immediately before executing the trigger service. When the trigger service
executes to completion, the Integration Server adds an entry with a status of
"completed" to the document history database.

* Time. The time the trigger service began executing. The document history
database uses the same time for both entries it makes for a document. This
allows the Integration Server to remove both entries for a specific document at
the same time.

To determine whether a document is a duplicate of one already processed by
the trigger, the Integration Server checks for the document's UUID in the
document history database. The existence or absence of the document's
UUID can indicate whether the document is new or a duplicate.
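The duplicate check described above can be modeled with a toy in-memory store (illustrative only; the real document history store is a database table, and the class and method names here are invented):

```python
import time

class DocumentHistory:
    """Toy model of the exactly-once duplicate check: an entry keyed by
    (trigger ID, document UUID) with a processing/completed status."""

    def __init__(self):
        self.entries = {}  # (trigger_id, uuid) -> (status, start_time)

    def begin(self, trigger_id, uuid):
        key = (trigger_id, uuid)
        if key in self.entries:
            return False  # duplicate: already processing or completed
        self.entries[key] = ("processing", time.time())
        return True

    def complete(self, trigger_id, uuid):
        # Both entries share the start time, so they can be purged together.
        status, started = self.entries[(trigger_id, uuid)]
        self.entries[(trigger_id, uuid)] = ("completed", started)
```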

What are Join Conditions?


Answer:

Join conditions are conditions that associate two or more document types with
a single trigger service. Typically, join conditions are used to combine data
published by different sources and process it with one service.

What are Join Types?


Answer:

The join type that you specify for a join condition determines whether the
Integration Server needs to receive all, any, or only one of the documents to
execute the trigger service. The following table describes the join types that
you can specify for a condition.

Join Type Description

All (AND) The Integration Server invokes the associated trigger service when
the server receives an instance of each specified publishable document type
within the join time-out period. The instance documents must have the same
activation ID. This is the default join type. For example, suppose that a join
condition specifies document types A and B and C. Instances of all the
document types must be received to satisfy the join condition. Additionally, all
documents must have the same activation ID and must be received before the
specified join time-out elapses.

Any (OR) The Integration Server invokes the associated trigger service when
it receives an instance of any one of the specified publishable document
types.

For example, suppose that the join condition specifies document types A or B
or C. Only one of these documents is required to satisfy the join condition. The
Integration Server invokes the associated trigger service every time it receives
a document of type A, B, or C. The activation ID does not matter. No time-out
is necessary.

Only one (XOR) The Integration Server invokes the associated trigger service
when it receives an instance of any of the specified document types. For the
duration of the join time-out period, the Integration Server discards (blocks)
any instances of the specified publishable document types with the same
activation ID. For example, suppose that the join condition specifies document
types A or B or C. Only one of these documents is required to satisfy the join
condition. It does not matter which one. The Integration Server invokes the
associated trigger service after it receives an instance of one of the specified
document types. The Integration Server continues to discard instances of any
qualified document types with the same activation ID until the specified join
time-out elapses.
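An All (AND) join can be sketched as a correlation structure keyed by activation ID (a simplified illustration; it omits the join time-out and persistence, and the class name is made up):

```python
class AndJoin:
    """Collect one instance of each expected document type per
    activation ID; fire the trigger service once all have arrived."""

    def __init__(self, expected_types):
        self.expected = set(expected_types)
        self.pending = {}  # activation_id -> {doc_type: document}

    def receive(self, activation_id, doc_type, document):
        docs = self.pending.setdefault(activation_id, {})
        docs[doc_type] = document
        if self.expected.issubset(docs):
            # All document types received: hand the set to the service.
            return self.pending.pop(activation_id)
        return None  # keep waiting (until the join time-out would elapse)
```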

Explain try & catch block?


Answer:

The try-catch block works the same way as in Java. It consists of three
SEQUENCE steps: Main, Try, and Catch. You execute a sequence of steps; if
you face an unexpected exception or error, you skip the remaining processing
and execute the Catch block.

To define the Try-Catch block, follow these steps:

– Create a new flow service using Software AG Designer.
– Create a new SEQUENCE and call it 'Main'.
– Create another two SEQUENCE steps under the 'Main' sequence; call them
'Try' and 'Catch'.

Go to the properties of each sequence and configure the 'Exit on' property as
follows:

* 'Success' for the 'Main' sequence.
* 'Failure' for the 'Try' sequence.
* 'Done' for the 'Catch' sequence.

The Main sequence contains the Try and Catch sequences. By setting the
'Exit on' parameter of Main to 'Success', if the first child sequence (Try)
finishes successfully, the Main sequence exits and the Catch block/sequence
is not executed.

The ‘Try’ sequence is configured to ‘exit on’ = ‘failure’, which means if one
step failed, all the steps following the failed step in the ‘Try’ block will not be
executed, and the code will jump to execute the ‘Catch’ block/sequence.
The ‘Catch’ block is configured to ‘exit on’ = ‘done’ which means that each
step in the ‘Catch’ block must be executed regardless of the result of each
step.
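The Main/Try/Catch sequence layout behaves much like ordinary try/except, as this rough Python analogue shows (illustrative only; flow steps are not Python functions):

```python
def main_sequence(try_steps, catch_steps):
    # SEQUENCE 'Main' (exit on SUCCESS): leaves as soon as Try succeeds.
    try:
        for step in try_steps:        # SEQUENCE 'Try' (exit on FAILURE):
            step()                    # a failing step skips the rest
    except Exception as error:
        for step in catch_steps:      # SEQUENCE 'Catch' (exit on DONE):
            step(error)               # every step runs regardless
```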

How to write a flow service to insert data into a DB using a flat file?
Answer:

We can loop over the parsed flat file document's recordWithNoId list and map
each of the fields to the insert adapter service.

Difference between EAI & B2B?
Answer:

EAI – When you want to communicate between applications within one
company (XYZ Ltd), you go with EAI.

B2B – When the company (XYZ Ltd) wants to communicate with another
company (ABC Ltd), you go with B2B.

Communication, standards, and cooperation are managed differently
depending on whether you are connecting two applications within a business
or connecting two disparate businesses.

What are START TRANSACTION, COMMIT, and ROLLBACK, and how do you
use them? What do you mean by rolling back a transaction?
Answer:

These statements provide control over use of transactions

START TRANSACTION or BEGIN start a new transaction.

COMMIT – commits the current transaction, making its changes permanent.

ROLLBACK – rolls back the current transaction, canceling its changes.

ROLLBACK TRANSACTION rolls back an explicit or implicit transaction to the
beginning of the transaction, or to a savepoint inside the transaction. It also
frees resources held by the transaction.

Types of transaction? Explain Local, XA, and No transaction.
Answer:

1. NO_TXN: When you configure a JDBC connection with NO_TXN as the
transaction type, the adapter services that use this kind of connection do not
require the developer to call commit and rollback services explicitly, which
means all the transactions are auto-committed or auto-rolled back.

2. LOCAL_TXN: For adapter services that use a connection configured with
LOCAL_TXN as the transaction type, the developer has to use
pub.art.transaction:commitTransaction and
pub.art.transaction:rollbackTransaction explicitly in the code, which means the
transactions are not auto-committed and need explicit transaction control.
All the DB interactions within the transaction boundary are committed or
rolled back together. The transaction boundary starts when you invoke
pub.art.transaction:startTransaction; you can call as many adapter services as
you want and then end the transaction. Because you are using LOCAL_TXN,
all these adapter services will be committed or rolled back together.

Note: In LOCAL_TXN, all the adapter services you write between the start and
end of the transaction should talk to the same database (they can talk to
multiple tables, though).

3. XA_TXN: Same as a local transaction, but the adapter services you write
between start and end can talk to different databases, which means XA_TXN
supports distributed (two-phase commit) transactions. Use an XA transaction
when you want to, say, insert content into two different databases, one after
the other; if the insert into the second DB fails and you want to roll back in the
first DB as well, use an XA transaction.

No transaction means the DB manages the transaction (not the IS); this is
"auto-commit".
A local transaction means the IS manages the transaction; it can involve only
one local-transaction connection and zero or more no-transaction
connections.
An XA transaction means the IS manages the transaction and can handle
distributed transactions (operations over more than one database).
A local transaction means that in a single transaction we can do multiple
operations on a single database; all these operations use the same adapter
connection.
An XA transaction means that in a single transaction we can do multiple
operations on different databases; these operations use different adapter
connections.
Use no transaction for SELECT statements, and a local transaction for any
DML (insert, delete, and update) operations, to handle the transaction
boundary (via start, commit, and rollback).
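The local-transaction boundary behaves like an ordinary database transaction: everything between the start and the commit/rollback succeeds or fails as a unit. Here is an illustration using Python's sqlite3 (a stand-in only, not the webMethods transaction services):

```python
import sqlite3

# A single connection to a single database, mirroring LOCAL_TXN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, amount REAL)")
conn.commit()

def insert_invoices(rows):
    """Insert all rows as one transaction; a failure rolls back every
    insert made since the transaction started."""
    try:
        for row in rows:  # the implicit BEGIN plays startTransaction
            conn.execute("INSERT INTO invoice VALUES (?, ?)", row)
        conn.commit()     # plays commitTransaction
        return True
    except sqlite3.Error:
        conn.rollback()   # plays rollbackTransaction: undo partial work
        return False
```

A failed batch leaves the table exactly as it was before the batch started, which is the point of the explicit transaction boundary.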

Difference between Custom SQL & Dynamic SQL?
Answer:

* In Custom SQL, you can pass inputs to your SQL query at runtime. With
Dynamic SQL, you can pass your entire SQL statement, or part of your SQL
statement, at runtime, along with inputs to it; so basically you can build your
SQL dynamically at runtime.
* You use Custom SQL when the SQL query is fixed, with input variables that
are passed to the custom adapter service. You use Dynamic SQL if the SQL
query changes at runtime; in this case you prepare the SQL query and pass it
to the dynamic adapter service at runtime.
* With both Custom SQL and Dynamic SQL we have to write the queries
explicitly. The main difference is that in Custom SQL we give the SQL
statement at design time, while in Dynamic SQL we can supply it at run time.
* Custom SQL is faster than Dynamic SQL because Custom SQL is
pre-compiled (at design time), but Dynamic SQL is not (it is compiled at
runtime).
* Dynamic SQL is more versatile than Custom SQL.
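The distinction is easy to see with plain SQL (illustrated here with sqlite3, not the JDBC Adapter itself; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("ann", "hr"), ("bob", "it")])

# Custom-SQL style: the statement is fixed at design time,
# only the input values are bound at runtime.
FIXED_SQL = "SELECT name FROM emp WHERE dept = ?"

def custom_query(dept):
    return [r[0] for r in conn.execute(FIXED_SQL, (dept,))]

# Dynamic-SQL style: part of the statement itself (here the column
# name) is assembled at runtime, then inputs are bound as usual.
def dynamic_query(column, value):
    assert column in ("name", "dept")  # guard: never splice raw user input
    sql = "SELECT name FROM emp WHERE %s = ?" % column
    return [r[0] for r in conn.execute(sql, (value,))]
```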

Explain the types of Notification. What is a Basic Notification?
Answer:

There are seven types of notifications: Insert, Update, Delete, Basic, Stored
Procedure, Stored Procedure Notification With Signature, and Ordered
notifications. They vary in how they are structured and operate.

In contrast with Insert, Update, and Delete Notifications, Basic Notifications
require that you define a buffer table, and a database trigger or other means of
monitoring database changes, so that changes are written into the buffer
table.

What is Optimize? How to create a rule?
Answer:

webMethods Optimize for Process is the component responsible for
monitoring customer requests, generating statistics, calculating KPIs, and
notifying you of any violation of the rules you define to monitor your business
process.

How to recognize the TN document? What is a document gateway?
Answer:

The document gateway service adds "hints" to TN_parms that Trading
Networks uses when performing document recognition for a flat file document.

When a new flat file comes to TN, it has to be recognized and matched to a
document type. A flat file does not contain metadata the way an XML file
does, so it is our responsibility to provide metadata such as the sender, the
receiver, and the document type. We use a gateway service to provide all this
information to TN in the TN_parms pipeline variable. It is our responsibility to
invoke or provide the gateway service for each new flat file document type.
Once you have defined the gateway service for a specific flat file document
type, make sure you give the gateway service to whoever wants to send this
type of flat file to TN, so that the file can be processed in TN.


How to handle large documents in TN? What is batch processing?
Answer:

To configure large document handling

How to configure a scheduled delivery queue in TN? What are public and
private queues?
Answer:

Public Queue – a queue that you define to schedule the delivery of
documents that are aimed at multiple different receiving partners. When you
define a public queue, the name of the public queue is added to the list of
queues you can select when specifying a scheduled delivery method with the
Deliver Document By processing action.

Private Queue – a queue that contains only delivery tasks corresponding to
documents aimed at a specific receiving partner. You define private queues in
the profile of the receiving partner.


What types of protocol are used in TN? What is a delivery service?
Answer:

Protocol types: EDIINT, FTP, FTPS, HTTP, HTTPS, EMAIL, WEB
SERVICES, QUEUE.

A delivery service delivers a document to the receiving partner.

What are pre- and post-processing actions in TN?
Answer:

Preprocessing actions – Verify Digital Signature, Validate Structure of
Document, Check for Duplicate Document, Save Document.

Processing actions – Execute a Service, Send an Alert E-mail, Change the
User Status, Deliver the Document to the Receiver, Respond with a Message.

Both kinds of action are performed at run time.

What is a TPA? How to configure TN certificates?
Answer:

Trading Partner Agreement (TPA) – you can define trading partner
agreements for pairs of partners. Each TPA contains specific information for
two trading partners, where one partner represents a sender and the other
represents the receiver. You can create applications that use TPA information
to tailor how documents are exchanged. Other webMethods components
(e.g., webMethods EDI Module) use TPAs to perform processing.

Write a flow service using start & commit transaction applied to a JDBC
adapter service.
Answer:

You can use start and commit transaction steps to control a JDBC transaction
manually. Use a LOCAL_TRANSACTION connection, invoke
pub.art.transaction:startTransaction at the beginning of your service, and
when the work is done invoke pub.art.transaction:commitTransaction, or
pub.art.transaction:rollbackTransaction in case there is an error.


How to do batch processing in TN?
Answer:

Scheduled delivery is a way to batch multiple documents that are
acted on (delivered) at scheduled times. When the Deliver Document By
processing action indicates a scheduled delivery method, Trading Networks
creates a delivery task for the document and places the delivery task in the
queue identified with the Deliver Document By processing action. The queue
is associated with a schedule and a scheduled delivery service. At the times
the schedule indicates, Trading Networks invokes the scheduled delivery
service to act on the documents in the queue to deliver them.

What are the Broker & Broker Server?
Answer:

Broker – the Broker's role is to manage the routing of documents between
applications running on different Integration Servers. For an Integration Server
to join in this process, it must first be configured to connect to the Broker.

Broker Server – the server that hosts a Broker instance is called the Broker
Server.

What are the types of flow steps?
Answer:

Types of flow steps:

The INVOKE Step

The BRANCH Step

The REPEAT Step

The SEQUENCE Step

The LOOP Step

The EXIT Step

The MAP Step

Difference between LOOP & REPEAT?
Answer:

LOOP – The LOOP step repeats a sequence of child steps once for each
element in an array that you specify. For example, if your pipeline contains an
array of purchase‐order line items, you could use a LOOP step to process
each line item in the array. To specify the sequence of steps that make up the
body of the loop (that is, the set of steps you want the LOOP to repeat), you
indent those steps beneath the LOOP.

REPEAT – The REPEAT step allows you to conditionally repeat a sequence of
child steps based on the success or failure of those steps. You can use
REPEAT to:

Re-execute (retry) a set of steps if any step within the set fails.

Re-execute a set of steps until one of the steps within the set fails.

Explain the EXIT step. What properties are used in the EXIT step?
Answer:

The EXIT flow step allows you to exit the entire flow service or a single flow
step. You specify whether you want to exit from:

The nearest ancestor (parent) LOOP or REPEAT flow step to the EXIT flow
step.

The parent flow step of the EXIT flow step.

A specified ancestor flow step to the EXIT flow step.

The entire flow service.

Properties used in exit step

Exit from – The flow step from which you want to exit. Specify one of $loop,
$parent, $flow, or a label.

Signal – Whether the exit is to be considered a success or a failure. Specify
one of the following:

* SUCCESS – Exit the flow service or flow step with a success condition.

* FAILURE – Exit the flow service or flow step with a failure condition. An
exception is thrown after the exit. You specify the error message with the
Failure message property.

Failure message – The text of the exception message you want to display. If
you want to use the value of a pipeline variable for this property, type the
variable name between % symbols (for example, %mymessage%). This
property is not used when Signal is set to SUCCESS.

Explain the SEQUENCE step. What is the BRANCH step?
Answer:

SEQUENCE – You use the SEQUENCE step to build a set of steps that you
want to treat as a group. Steps in a group are executed in order, one after
another. By default, all steps in a flow service, except for children of a
BRANCH step, are executed as though they were members of an implicit
SEQUENCE step (that is, they execute in order, one after another). However,
there are times when it is useful to explicitly group a set of steps.

BRANCH – The BRANCH step allows you to conditionally execute a step
based on the value of a variable at run time. For example, you might use a
BRANCH step to process a purchase order one way if the PaymentType value
is "CREDIT CARD" and another way if it is "CORP ACCT".

Branch on a switch value. Use a variable to determine which child step
executes. At run time, the BRANCH step matches the value of the switch
variable to the Label property of each of its targets. It executes the child step
whose label matches the value of the switch.

Branch on an expression. Use an expression to determine which child step
executes. At run time, the BRANCH step evaluates the expression in the
Label property of each child step. It executes the first child step whose
expression evaluates to "true."

What is invoke step?


Answer:

INVOKE – Use the INVOKE step to request a service within a flow. You can
use the INVOKE step to:

* Invoke any type of service, including other flow services and web service
connectors.

* Invoke any service for which the caller of the current flow has access rights
on the local webMethods Integration Server.

* Invoke built-in services and services on other webMethods Integration
Servers.

* Invoke flow services recursively (that is, a flow service that calls itself). If
you use a flow service recursively, bear in mind that you must provide a
means to end the recursion.

* Invoke any service, validating its input and/or output.

What is the scope of a variable declared outside the Main sequence? Is it
accessible in the Try & Catch blocks?
Answer:

Yes, it is accessible if it is declared before the Main sequence as an individual
variable.

The scope of a variable is limited when it is declared inside a sequence for
which a Scope is specified.

What is the purpose of Scope in the BRANCH step?
Answer:

The Scope property is used to define the name of a document in the pipeline,
restricting the step's pipeline access to only the data in that document.

What is ordered
notification?
Answer:

Ordered Notification – An Ordered Notification publishes notification data for
multiple insert, update, or delete operations on multiple tables. You configure
notifications using Developer or Designer.

What are the ways to reload packages?
Answer:

We can reload a package in the following ways:

1) From the Integration Server Administrator page.

2) From Designer.

What is UDDI?
Answer:

A UDDI (Universal Description, Discovery, and Integration) registry is an
XML-based registry in which businesses worldwide can list themselves on the
Internet. It allows users to view, find, and share web services.

When working with a UDDI registry from Designer you can:

Discover the web services published in a UDDI registry.

Designer displays a list of the web services that are published in a UDDI
registry. By default, Designer displays all published services, but you can use
a filter to limit the number of services shown.

Incorporate a web service into Integration Server.

You can incorporate a web service in the UDDI registry into your integration
solution by creating a consumer web service descriptor from the web service.
Designer automatically generates a web service connector for each operation
in the web service, which can be invoked in the same way as any other IS
service.

Publish services to a UDDI registry.

You can make a service that resides on Integration Server (such as a flow
service, Java service, C service, or adapter service) available as an operation
of a web service and then publish the web service to a UDDI registry.

What are the components of WSDL?
Answer:

A WSDL document describes a web service. It specifies the location of the
service and the operations of the service, using these major elements:

* types – defines the (XML Schema) data types used by the web service.
* message – defines the data elements for each operation.
* portType – describes the operations that can be performed and the
messages involved.
* binding – defines the protocol and data format for each port type.

How to invoke a service from a browser?
Answer:

To Invoke service from browser Use a URL in the form:

http://servername:port/invoke/folder.subFolder.subsubFolder/serviceName

(the package name is not part of the URL in any way)
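A small helper can assemble that URL pattern (an illustrative sketch; the host, port, and folder/service names below are placeholders, not real services):

```python
from urllib.parse import urlencode

def invoke_url(server, port, folder_path, service, params=None):
    """Build the /invoke URL for a service at folder_path/service;
    optional service inputs are appended as query parameters."""
    url = "http://%s:%s/invoke/%s/%s" % (server, port, folder_path, service)
    if params:
        url += "?" + urlencode(params)
    return url
```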

What is the difference between web service usage in v6.5 & 7.1?
Answer:

No Discussion on this question yet!


Difference between Heap & Stack?
Answer:

Heap: Objects are stored in heap memory, which is allocated dynamically.

Stack: Method call frames and local variables are stored on the stack.

What is the purpose of the format service specification?

Answer:

pub.flatFile:FormatService – gives you the flexibility to change the values in
the flat file to the defined format.

Is it possible to sort data using a Select adapter service?
Answer:

Yes, we can sort data using a Select adapter service. While creating the
Select adapter service there is an option to set the Sort Order, with which you
can sort a column in ascending or descending order.

Difference between savePipelineToFile & savePipeline?
Answer:

pub.flow:savePipelineToFile – WmPublic. Saves the current pipeline to a file
on the machine running webMethods Integration Server. This service is
helpful in the interactive development or debugging of an application. In some
cases, however, using the Pipeline debug property for debugging is more
efficient.

pub.flow:savePipeline – WmPublic. Saves a pipeline into memory, for later
retrieval with pub.flow:restorePipeline. After a successful invocation of
savePipeline, a snapshot of pipeline fields is saved in memory under the key
provided by $name. Note that because the pipeline is saved to memory, it will
not be available after a server restart. This service is helpful in the interactive
development or debugging of an application.
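The memory-versus-file distinction can be illustrated with a toy analogue (not the WmPublic implementation; the function names mirror the services only loosely):

```python
import json

_memory_store = {}  # analogue of the in-memory pipeline store

def save_pipeline(name, pipeline):
    # Like pub.flow:savePipeline: a snapshot kept only in memory,
    # so it is lost when the process (server) restarts.
    _memory_store[name] = dict(pipeline)

def restore_pipeline(name):
    # Like pub.flow:restorePipeline: fetch the snapshot by its key.
    return _memory_store[name]

def save_pipeline_to_file(path, pipeline):
    # Like pub.flow:savePipelineToFile: written to disk, so the
    # snapshot survives a restart.
    with open(path, "w") as f:
        json.dump(pipeline, f)
```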

What is a tracePipeline?
Answer:

WmPublic. Writes the names and values of all fields in the pipeline to the
server log. Prior to Integration Server 7.1, Integration Server used a number-
based system to set the level of debug information written to the server log.
Integration Server maintains backward compatibility with this system.

What is the purpose of clearPipeline?
Answer:

WmPublic. Removes all fields from the pipeline. You may optionally specify
fields that should not be cleared by this service via the preserve input
parameter.

How to parse a flat file?


Answer:

Step 1: Create the Flat File Schema

Create a flat file schema to be used for conversion and validation of the
document. Here we first create a flat file dictionary and then reference it in the
schema.

Step 2: Receive the Flat File

Invoke the pub.file:getFile service to retrieve the file from the local file system.

Step 3: Parse the Flat File Data into an IS Document

To parse a flat file using a flat file schema and convert the data into IS
documents, call the pub.flatFile:convertToValues service.

Step 4: Process the IS Document

Now we can process the IS document and map the data as per the
requirements.

* In the Developer Navigation Panel, select the saved flat file schema, click
the Flat File Structure tab, and then click the Create Document Type icon.
This creates an IS document type in the same folder as the schema, with the
same name as the schema. On the Pipeline tab under Service In, in the
ffSchema variable, specify the location and name of the flat file schema.

* To add a document reference to the IS document type that is based on the
flat file schema: on the Pipeline tab under Pipeline Out, select Document
Reference to add a new document reference variable, or choose the IS
document type created above.

* On the Pipeline tab under Service Out, map the value of the ffValues
variable to the document reference variable created in this procedure, and
save the service.

What is trigger throttle?


Answer:

The Queue Capacity Throttle reduces the capacity and refill levels for all the
trigger queues by the same percentage. For example, if you set the Queue
Capacity Throttle to 50% of maximum, a trigger queue with a capacity of 10
and a refill level of 4 will have an adjusted capacity of 5 and an adjusted refill
level of 2.

The Integration Server Administrator provides an Execution Threads Throttle
that you can use to reduce the execution threads for all concurrent triggers by
the same percentage. For example, if you set the Execution Threads Throttle
to 50% of maximum, the Integration Server reduces the maximum execution
threads for all concurrent triggers by half. A concurrent trigger with a
maximum execution threads value of 6 has an adjusted maximum execution
threads value of 3.
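The throttle arithmetic described above is simple percentage scaling; a small sketch reproducing the examples from the text (integer rounding is an assumption):

```python
def apply_throttle(value, throttle_percent):
    """Scale a configured value by the throttle percentage, as the
    Integration Server does for queue capacity, refill level, and
    maximum execution threads (integer arithmetic is an assumption)."""
    return value * throttle_percent // 100

# Queue Capacity Throttle at 50%: capacity 10 -> 5, refill level 4 -> 2
print(apply_throttle(10, 50), apply_throttle(4, 50))  # 5 2
# Execution Threads Throttle at 50%: 6 threads -> 3
print(apply_throttle(6, 50))                          # 3
```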

What is a subscription trigger?
Answer:

Triggers, specifically webMethods messaging triggers, establish subscriptions
to publishable document types. Triggers also specify the services that will
process documents received by the subscription. Within a trigger, a condition
associates one or more publishable document types with a service.

What is the best way to append a document to a document list?
Answer:

Stay away from appendToDocumentList and use the PSUtilities service
addToList.

Performance is badly hit with appendToDocumentList because each call copies
the entire target array.
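The reason looped appendToDocumentList is slow is that each call returns a brand-new array containing a copy of every existing element, so building a list of n documents costs O(n²) element copies; a Python sketch of the effect (names are illustrative):

```python
def append_by_copy(doc_list, doc):
    """Mimics appendToDocumentList: returns a brand-new array each call,
    copying every existing element (why looped use is O(n^2))."""
    return list(doc_list) + [doc]

def build_by_copy(docs):
    out = []
    copies = 0
    for d in docs:
        copies += len(out)          # elements copied on this append
        out = append_by_copy(out, d)
    return out, copies

def build_by_accumulate(docs):
    out = []
    for d in docs:
        out.append(d)               # amortized O(1), no full copy
    return out

docs = [{"id": i} for i in range(100)]
copied, n_copies = build_by_copy(docs)
print(n_copies)  # 4950
```

For 100 appends the copying approach already moves 4,950 elements; a growable accumulator (what addToList effectively provides) moves each element once.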

What are the types of documents in wM?
Answer:

The IData type. This is the format that wM can process.

What will happen when a publishable document type is deleted?
Answer:

The trigger listening to that publishable document type won't be invoked.

What is the purpose of iterate in convertToValues?
Answer:

Whether you want to process the input all at one time.

false: Processes all input data at one time. This is the default.

true: Processes top-level records (children of the document root) in the flat
file schema one at a time. After all child records of the top-level record are
processed, the iterator moves to the next top-level record in the flat file,
until all records are processed.
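The two modes can be pictured as materializing every record up front versus yielding one top-level record at a time; a Python sketch (the comma-delimited sample is an assumption):

```python
def parse_all(lines):
    """iterate=false: materialize every record before processing."""
    return [line.split(",") for line in lines]

def parse_iter(lines):
    """iterate=true: yield one top-level record at a time, keeping
    only the current record in memory."""
    for line in lines:
        yield line.split(",")

data = ["a,1", "b,2", "c,3"]
first = next(parse_iter(iter(data)))
print(first)  # ['a', '1']
```

The iterating form is what makes very large flat files processable without holding the whole result set in memory.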

How to increase the performance of a trigger?
Answer:

The recommendations made here are generalized and suitable for most uses,
though it will be important to verify for each environment that the chosen
settings are appropriate by adequately testing the performance and behavior
profile.

Trigger Retries

If you are not using trigger retries then set the retry count to 0. This will
noticeably improve performance, especially as documents get larger and more
complex.

Trigger Processing Mode

Serial processing mode is used to enforce document order on consumption. In
a single-instance environment, the order of processing is the order in the
queue. In a clustered environment, the order of processing is based on
publisher order, i.e. an instance acquires ownership of documents from one
source and then exclusively processes these in a single-threaded fashion in
the order they appear in the queue. Other sources may be processed by other
IS instances in the cluster. For most general purposes, the processing mode
will be set to concurrent, and this gives far better performance.

Rough Guide:

Trigger Processing Mode = Concurrent, assuming order of processing is not
important

Trigger Threads

The number of threads should generally be no more than a small multiple of
the number of CPU cores available to the IS, also considering that all service
threads within the Integration Server must share CPU resources. The number
of threads may be increased further where the work done in the service has a
relatively low CPU content, for example where there is a lot of IO involved, or
where the service thread is waiting for external applications or resources.
Setting trigger threads too high will start to incur context-switching overheads
at the OS level and within the JVM.

Rough Guide:

Trigger Threads = 4 x CPU, except where order of processing is important and
Serial processing mode is used

Other Considerations

Consider the amount of work each thread must do, not just for one trigger but
for all thread consumers. If the trigger service is very short and lightweight,
it can support more threads than a computationally expensive service.
Document size plays a factor, but it is only one reason that threads become
computationally expensive. Review all the triggers in the context of the whole
system and not just the single trigger.

Trigger Cache Size and Refill Level

The trigger cache size defines the number of documents that may be held in
memory while documents are unacknowledged on the Broker. The cache is
filled with documents (in batches of up to 160 at a time) from the Broker, so a
larger cache size reduces the number of read activities performed on the
Broker. The IS goes back to the Broker for more documents when the number
of documents left in the cache falls below the Refill Level. The objective in
setting these parameters is to ensure that whenever a trigger thread becomes
available for use, there is a document already in the cache. The Cache Size
should be as small as it can be whilst still being effective, to minimize the use
of memory in the IS (note the size is specified in documents, not based on
total size held). If the processing of documents is generally very short, the
cache should be larger. As a rough guide, the cache size may be 5 to 10
times the number of trigger threads, and the refill level 30%-40% of that value
(or the refill should be twice the number of trigger threads).

Rough Guide:

Trigger Cache Size = 5 x Trigger Threads
Trigger Refill Level = 2 x Trigger Threads
Trigger Cache Memory Usage = Trigger Cache Size x Average Document Size
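The rough-guide formulas can be collected into a small calculator; a sketch applying the numbers from the text (the function name and inputs are assumptions):

```python
def rough_guide(cpu_cores, avg_doc_bytes):
    """Apply the rough-guide formulas from the text:
    threads = 4 x CPU, cache = 5 x threads, refill = 2 x threads,
    cache memory = cache size x average document size."""
    threads = 4 * cpu_cores
    cache_size = 5 * threads
    return {
        "trigger_threads": threads,
        "cache_size": cache_size,
        "refill_level": 2 * threads,
        "cache_memory_bytes": cache_size * avg_doc_bytes,
    }

# e.g. a 4-core IS handling ~10 KB documents
print(rough_guide(4, 10_000))
```

As the surrounding text stresses, these numbers are starting points to be validated by load testing, not final settings.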

Other Considerations

For small documents with lightweight services these setting could be too
conservative and for large documents it could be too aggressive.

Acknowledgement Queue Size

The AckQ is used to collect acknowledgements for documents processed by
the trigger threads when they complete. If set to a size of one, then the trigger
thread waits for the acknowledgement to be received by the Broker before it
completes. If the AckQ size is greater than one, then the trigger thread places
the acknowledgement in the AckQ and exits immediately. A separate
acknowledging thread polls the AckQ periodically to write acknowledgements
to the Broker. If the AckQ reaches capacity then it is immediately written out to
the Broker, with any trigger threads waiting to complete while this operation is
done. Setting the AckQ size greater than one enables the queue and reduces
the wait time in the trigger threads. If performance is important, then the AckQ
should be set to a size of one to two times the number of trigger threads.
Acknowledgements only affect guaranteed document types. Volatile
documents are acknowledged automatically upon reading them from the
Broker into the Trigger Cache.

Rough Guide:

Acknowledgement Queue Size = 2 x Trigger Threads
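The AckQ behaviour described above — trigger threads enqueue their acks and return, a separate thread flushes batches, and a full queue is flushed immediately — can be sketched single-threaded (the queue size and batching details are illustrative assumptions):

```python
import queue

ack_q = queue.Queue(maxsize=8)   # AckQ size > 1 enables batching
sent_to_broker = []              # batches "written to the Broker"

def acknowledge(doc_id):
    """Trigger thread: enqueue the ack and return immediately;
    if the queue is full, flush it synchronously, as the text describes."""
    try:
        ack_q.put_nowait(doc_id)
    except queue.Full:
        flush()
        ack_q.put_nowait(doc_id)

def flush():
    """Acknowledging thread: drain the queue and write one batch."""
    batch = []
    while not ack_q.empty():
        batch.append(ack_q.get_nowait())
    if batch:
        sent_to_broker.append(batch)

for i in range(10):
    acknowledge(i)
flush()
print(sent_to_broker)  # one full batch of 8, then the remainder
```

Batching trades a slightly larger redelivery window after a crash for far fewer round trips to the Broker.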

Other Considerations

The potential caveat to this setting is the number of documents that might
need to be reprocessed in the event of a server crash.

In-Memory Storage

Volatile documents are handled entirely in memory, and so the quality of
storage is propagated into the handling in the IS as well. Loss of memory
results in loss of a volatile document, whether it is held by the Broker or by the
IS. This is also why acknowledgements are returned to the Broker upon
reading a volatile document.

For guaranteed messages, in-memory storage about the state of a message
can exist in both the Trigger Cache and in the Acknowledgement Queue. If the
IS terminates abnormally, then this state is lost. However, for
unacknowledged, guaranteed documents, the redelivery flag will always be
set on the Broker as soon as the document is accessed by the IS. Therefore,
after an abrupt IS termination or disconnection, the unacknowledged
documents will be presented either to the same IS upon restart, or, once the
Broker determines that the IS has lost its session, to another IS in the same
cluster.

All these documents will have the redelivery flag set and may be managed
using the duplicate detection features, described in the Pub/Sub User Guide.

In such a failure scenario, the number of possible unacknowledged messages
will be a worst case of Trigger Cache Size plus Acknowledgement Queue
Size. The number of documents that had completed processing but were not
acknowledged will be a worst case of Trigger Threads plus Acknowledgement
Queue Size. The number of documents that were part way through processing
but hadn't completed will be a worst case of Trigger Threads. The number of
documents that will have the redelivery flag set but had actually undergone no
processing at all will be a worst case of Trigger Cache Size.

Other Considerations

If the trigger is subscribing to multiple document types (has multiple
subscription conditions defined), then the trigger threads are shared by all
document types. This may give rise to variations in the processing required for
each message and the size of each message in the cache. Where this
complicates the situation, it is better to use one condition per trigger.

If document joins are being used, refer to the user guide for information about
setting join timeouts. A trigger thread is only consumed when the join is
completed and the document(s) are passed to the service for processing.

In a JDBC connection, what is the purpose of Block Timeout and Expire
Timeout?
Answer:

Block Timeout: how long the IS should wait to get a connection from the
connection pool before throwing an exception.

Expire Timeout: how long a free (idle) connection stays in the pool before it
expires.
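The two timeouts can be sketched with a toy bounded pool (the class, sizes, and timeout values are assumptions for illustration, not the adapter's implementation):

```python
import queue
import time

class TinyPool:
    """Toy connection pool illustrating Block Timeout (how long a caller
    waits for a free connection) and Expire Timeout (how long an idle
    connection may sit before being considered expired)."""
    def __init__(self, size, block_timeout_s, expire_timeout_s):
        self.free = queue.Queue()
        self.block_timeout_s = block_timeout_s
        self.expire_timeout_s = expire_timeout_s
        for i in range(size):
            self.free.put((f"conn-{i}", time.monotonic()))

    def get(self):
        try:
            conn, idle_since = self.free.get(timeout=self.block_timeout_s)
        except queue.Empty:
            # Block Timeout elapsed with no free connection
            raise RuntimeError("Block Timeout: no connection available")
        if time.monotonic() - idle_since > self.expire_timeout_s:
            # Expire Timeout elapsed: a real pool would reopen it
            return ("fresh-" + conn, None)
        return (conn, idle_since)

pool = TinyPool(size=1, block_timeout_s=0.05, expire_timeout_s=60)
conn, _ = pool.get()
print(conn)            # conn-0
try:
    pool.get()         # pool empty -> waits 0.05s, then errors
except RuntimeError as e:
    print(e)
```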

What is a connection pool?

Answer:

JDBC Adapter connection pools (or "connection pooling") refers to the
capability to concurrently open and use several connections to a given
database instance for the sake of performance.

What will happen when a notification is enabled, disabled, or suspended?
Answer:

Enabled: the polling notification performs as scheduled.

Suspended: the database trigger and buffer table are not dropped. Documents
will be stored in the queue.

Disabled: the database trigger and buffer table are dropped. Documents are
discarded.

What is the event manager & event handler?
Answer:

The Event Manager is the Integration Server facility that watches for events
(such as alarm, audit, exception, and session events) and invokes the event
handlers subscribed to them. An event handler is a service that you register
to be invoked when a particular event type occurs.

Difference between insert and basic notification?
Answer:

An Insert adapter notification retrieves insert data from the buffer table and
publishes it for the trigger to use. Like the others, it is polling based. The
database trigger and buffer table are created automatically by the IS when the
notification is enabled and dropped when it is disabled (not suspended).

With a basic notification, you create the database trigger and buffer table
yourself, so they are not created and dropped automatically when the
notification is enabled or disabled.

Purpose of a canonical doc?

Answer:

A canonical document is a standardized representation that a document might
assume while it is passing through the webMethods integration platform. A
canonical document acts as the intermediary data format between resources.

When the IS is up and the Broker is down, what will happen when we publish
a document?
Answer:

The dispatcher will check whether the Broker is up or not. If not, it will check
the document storage type: if the storage type is guaranteed, it will store the
document in the outbound document store; if the storage type is volatile, the
document will be discarded and an exception will be thrown.

What is the difference between SOAP-HTTP and SOAP-RPC in a web service
call?
Answer:

Broadly, SOAP-RPC encodes a remote procedure call in the SOAP message
(the target service name and its inputs), so the receiving server can invoke
the service directly, while SOAP over HTTP with the default message
processor hands the whole SOAP envelope to a processor you designate,
which decides how to handle the document.

What is Deployer? Explain.

Answer:

Deployer is used for deployment of packages, services, and subservices from
one environment to another (e.g., a test environment to a production
environment), taking care of all the dependencies or dependent services. Four
steps are involved in deployment of a package: Create, Build, Map, Deploy.

What is IData?

Answer:

IData is the type of document that webMethods can process. Other document
formats can be converted to IData by using the built-in services provided by
wM.

Difference between a flat file schema & dictionary?
Answer:

Sequence: first the FF dictionary, then the FF schema.

The dictionary defines the data heads, the sequencing, whether records have
an ID or no ID, and whether to take the first line of the FF or multiple lines.

The schema deals with the value breaks and the delimiters used in the FF
and, most importantly, is used for creating the schema document type. A
schema depends on the dictionary for creating the schema document type.

How will you ensure whether the IS has started successfully?
Answer:

You will get a message like "Config file directory saved" in the server logs.

Where can you see installed jars?
Answer:

If it is a generic JAR (meaning it is used by many packages), then you can
see it inside the "IntegrationServer/lib/jars" folder.

– If it is only for a single package, then you can see it inside
"IntegrationServer/packages/<package name>/code/jars".

For more information, refer IS Admin Guide.

Place only the required JARs on the IS; too many JARs can slow down the IS.

How many Brokers can you connect to an IS?
Answer:

One IS can connect to only one Broker, but one Broker can be connected to
multiple ISs.

How is a flow service different from a Java service?
Answer:

A flow service is written in the graphical webMethods flow language and is
executed by the Integration Server's flow engine, while a Java service is
compiled Java code running directly on the server. Flow services are faster to
build and easier to maintain; Java services give finer control and suit
low-level or compute-heavy logic.

When an IS is started, where is the session id stored?
Answer:

No Discussion on this question yet!

What do we have to do if the REPEAT flow step has to repeat infinitely?
Answer:

Set the repeat count to -1.

In the EXIT flow step, what are the three parameters used? When do we use
them?
Answer:

Exit from, Signal, and Failure message are the three important parameters for
the EXIT flow step.

Exit from specifies what you want to exit: the flow, the parent, or the nearest
loop. Signal specifies whether the exit signals SUCCESS or FAILURE; on
FAILURE, the Failure message provides the error text for the resulting
exception.

We can accomplish the Broker's task of publish/subscribe using a Reverse
Invoke server, so why do we need the Broker?
Answer:

When we have a dependency on another server's response.

What is the use of $default and $null in the BRANCH flow step?
Answer:

$default is matched when no other conditions are met; it is generally placed
last among the child steps.

$null is matched when the switch variable has no value, e.g. the variable to
compare doesn't exist.

When the Broker is down, where will the published document be stored?
Answer:

If it is a guaranteed document then it will be stored in the outbound document
store, and if it is a volatile document then it gets discarded.

What are public & private queues?
Answer:

Public queue – a queue that you define to schedule the delivery of
documents that are aimed at multiple different receiving partners. When you
define a public queue, the name of the public queue is added to the list of
queues you can select when specifying a scheduled delivery method with the
Deliver Document By processing action.

Private queues – queues that contain only delivery tasks that correspond to
documents aimed at a specific receiving partner. You define private queues in
the profile of the receiving partner.

What is the difference between local, XA and no transactions?
Answer:

Local transaction – in a single transaction we can do multiple operations on a
single database; all these operations use the same adapter connection, and
an explicit commit is required.

XA transaction – allows operations across more than one transactional
resource (e.g., two different databases) to be committed or rolled back as a
single unit, using a two-phase commit.

No transaction – auto-commits every operation.

How to publish a document to the Broker?
Answer:

Step i: A publishing service on the Integration Server sends a document to
the dispatcher (or an adapter notification publishes a document when an event
occurs on the resource the adapter monitors). Before the Integration Server
sends the document to the dispatcher, it validates the document against its
publishable document type. If the document is not valid, the service returns an
exception specifying the validation error.

Step ii: The dispatcher obtains a connection from the connection pool. The
connection pool is a reserved set of connections that the Integration Server
uses to publish documents to the Broker. To publish a document to the
Broker, the Integration Server uses a connection for the default client.

Step iii: The dispatcher sends the document to the Broker.

Step iv: The Broker examines the storage type for the document to determine
how to store the document.
If the document is volatile, the Broker stores the document in memory.
If the document is guaranteed, the Broker stores the document in memory and
on disk.

Step v: The Broker routes the document to subscribers, placing a copy of the
document in the client queue for each subscribing trigger (or, for delivered
documents, in the queue of the specified client only).
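The storage-type decision in the publish path, together with the broker-down behaviour covered elsewhere in this document (outbound store for guaranteed documents, discard for volatile ones), can be sketched as follows (all names and structures are illustrative assumptions):

```python
class Broker:
    """Toy broker: volatile docs live in memory, guaranteed also on disk."""
    def __init__(self, up=True):
        self.up = up
        self.memory = []
        self.disk = []

outbound_store = []   # IS-side outbound document store

def publish(broker, doc, storage_type):
    """Sketch of the publish path: guaranteed documents survive a broker
    outage via the outbound document store; volatile ones do not."""
    if not broker.up:
        if storage_type == "guaranteed":
            outbound_store.append(doc)     # redelivered when broker returns
            return "stored-outbound"
        raise RuntimeError("volatile document discarded: broker unavailable")
    broker.memory.append(doc)
    if storage_type == "guaranteed":
        broker.disk.append(doc)            # memory + disk
    return "published"

b = Broker(up=False)
print(publish(b, {"id": 1}, "guaranteed"))  # stored-outbound
```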

What is Keystore and Truststore?
Answer:

KEYSTORE: Integration Server stores its private keys and SSL certificates in
keystore files.
TRUSTSTORE: Integration Server uses a truststore to store its trusted root
certificates, which are the public keys for the signing CAs. It simply functions
as a database containing all the public keys for CAs within a specified trusted
directory.

What is the meaning of the different kill commands in UNIX? What is TERM?
Answer:

kill sends a signal to a process; by default it sends SIGTERM (signal 15), a
catchable termination request that lets the process clean up. The '-9' is the
signal number and specifies that the kill message sent should be of the KILL
(non-catchable, non-ignorable) type.

What is Transformers?
Answer:

Transformers are the services you use to accomplish value transformations in
the Pipeline view. You can only insert a transformer into a MAP step. You can
use any service as a transformer. This includes any Java, C, or flow service
that you create and any built-in services in WmPublic, such as
the pub.date:getCurrentDateString and pub.string:concat services. By using
transformers, you can invoke multiple services in a single MAP step.

Difference between Any server and All servers in the Scheduler.
Answer:

Any server – The task runs on any server connected to the database. Use this
option if the task only needs to run on one server and it doesn't matter which
one. For example, in a clustered environment, if all servers in the cluster share
a single database for a parts inventory application, and a particular function
needs to run on that database once a day, any server in the cluster can
perform that function. The Any server option is the default setting when
clustering is enabled.
Note: The Any server option does not specify an order in which servers are
used to execute tasks. In other words, no load balancing is performed.
Instead, an instance of the scheduler runs on each server connected to the
database. Periodically, each instance checks the database in which
information about scheduled jobs is stored. The first scheduler instance to find
a task that is due to start runs it, then marks the task as complete.

What is API?
Answer:

An API is code that allows two software programs to communicate with each
other. The API spells out the proper way for a developer to write a program
requesting services from an operating system or other application.

What are the HTTP methods (verbs)?
Answer:

GET – to retrieve one or more resources.
POST – to create a new resource.
PUT – to update (replace) a resource.
PATCH – to partially update a resource.
DELETE – to delete a resource.
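These verbs map naturally onto CRUD operations on a resource store; a minimal Python sketch (the in-memory store and handler are illustrative assumptions, not any real API):

```python
store = {}

def handle(method, resource_id=None, body=None):
    """Map HTTP verbs to CRUD operations on an in-memory resource store."""
    if method == "GET":
        return store if resource_id is None else store.get(resource_id)
    if method == "POST":
        new_id = len(store) + 1        # naive id generation for the sketch
        store[new_id] = body
        return new_id
    if method == "PUT":
        store[resource_id] = body      # full replace
        return store[resource_id]
    if method == "PATCH":
        store[resource_id].update(body)  # partial update
        return store[resource_id]
    if method == "DELETE":
        return store.pop(resource_id, None)

rid = handle("POST", body={"name": "order", "qty": 1})
handle("PATCH", rid, body={"qty": 2})
print(handle("GET", rid))  # {'name': 'order', 'qty': 2}
```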

What is the difference between min pool value set to 0 and 1?
Answer:

Setting the min connection value to 0 permits the pool to close all connections
in the pool after the idle/expiration time has passed. This is the recommended
setting for production environments. This avoids keeping unused connections
too long and helps avoid stale connections (Stale connection : connections
that the adapter thinks are still good, but the resource has closed.)

Which communication protocol is used between IS and Terracotta?
Answer:

The TCP communication protocol is used to communicate between the IS and
Terracotta.

What are the types of certificate authentication?
Answer:

(i) Require certificate – a client certificate is mandatory for authentication.
(ii) Request certificate – if a certificate is not provided, the server will ask for
a username instead.

What is a trading partner agreement (TPA)?
Answer:

you can define trading partner agreements for pairs of partners. Each TPA
contains specific information for two trading partners, where one partner
represents a sender and the other represents the receiver. You can create
applications that use TPA information to tailor how documents are exchanged.
Other webMethods components (e.g., webMethods EDI Module) use TPAs to
perform processing.

What is Universal
Messaging (UM)?
Answer:

(i) Universal Messaging is a Message Orientated Middleware product that
guarantees message delivery across public, private and wireless
infrastructures.
(ii) Universal Messaging has been built from the ground up to overcome the
challenges of delivering data across different networks.
(iii) It provides its guaranteed messaging functionality without the use of a
web server or modifications to firewall policy.
(iv) Universal Messaging design supports both broker-based and umTransport
communication, and thus comprises client and server components.

What is the difference between Broker and UM?
Answer:

(i) UM supports multicast; Broker does not.
(ii) UM supports Active-Active clustering; Broker supports Active-Passive
clustering.
(iii) Broker has a default Broker Monitor; UM does not have such a thing.

What are Realms and Zones in UM?
Answer:

A Universal Messaging Realm is the name given to a single Universal
Messaging server. Universal Messaging realms can support multiple network
interfaces, each one supporting different Universal Messaging protocols. A
Universal Messaging Realm can contain many Channels or Message Queues.

Zones: Zones provide a logical grouping of one or more Realms which
maintain active connections to each other. Realms can be a member of zero
or one zone, but a realm cannot be a member of more than one zone.
Realms within the same zone will forward published channel messages to
other members of the same zone, if there is necessary interest on
corresponding nodes.

What is an RNAME in UM?

Answer:

An RNAME is used by Universal Messaging clients to specify how a
connection should be made to a Universal Messaging Realm Server.

What is JNDI and its uses?
Answer:

It stands for Java Naming and Directory Interface. JNDI allows distributed
applications to look up services in an abstract, resource-independent way. The
most common use case is to set up a database connection pool on a Java EE
application server. Any application that's deployed on that server can gain
access to the connections it needs using the JNDI name
java:comp/env/FooBarPool without having to know the details about the
connection.

What are Topics and Queues in UM?
Answer:

Topics: A JMS topic is the type of destination in a 1-to-many model of
distribution. The same published message is received by all consuming
subscribers. You can also call this the 'broadcast' model. You can think of a
topic as the equivalent of a Subject in an Observer design pattern for
distributed computing. Some JMS providers efficiently choose to implement
this as UDP instead of TCP. For topics the message delivery is
'fire-and-forget' – if no one listens, the message just disappears. If that's not
what you want, you can use 'durable subscriptions'.

Queue: A JMS queue is a 1-to-1 destination of messages. The message is
received by only one of the consuming receivers (note: consistently using
'subscribers' for topic clients and 'receivers' for queue clients avoids
confusion). Messages sent to a queue are stored on disk or in memory until
someone picks them up or they expire. So queues (and durable subscriptions)
need some active storage management, and you need to think about slow
consumers.
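The 1-to-many versus 1-to-1 delivery difference can be sketched with plain lists as inboxes (round-robin queue dispatch is an illustrative assumption; real brokers vary):

```python
from collections import defaultdict

topic_subscribers = defaultdict(list)   # every subscriber gets a copy
queue_consumers = defaultdict(list)     # exactly one consumer gets each msg
queue_rr = defaultdict(int)             # round-robin cursor per queue

def publish_topic(topic, msg):
    """Topic: broadcast - each subscriber receives its own copy."""
    for inbox in topic_subscribers[topic]:
        inbox.append(msg)

def send_queue(q, msg):
    """Queue: point-to-point - one consumer receives the message."""
    consumers = queue_consumers[q]
    consumers[queue_rr[q] % len(consumers)].append(msg)
    queue_rr[q] += 1

a, b = [], []
topic_subscribers["prices"] = [a, b]
publish_topic("prices", "tick-1")
print(len(a), len(b))   # 1 1  (both got a copy)

c, d = [], []
queue_consumers["orders"] = [c, d]
send_queue("orders", "o-1")
send_queue("orders", "o-2")
print(len(c), len(d))   # 1 1  (each message went to exactly one consumer)
```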

What are NHP and NSP in UM?
Answer:

They are communication protocols used between the IS and Universal
Messaging: nsp is the UM socket protocol and nhp is the UM HTTP protocol
(with nsps and nhps as their SSL-secured variants). They appear in RNAME
URLs such as nsp://host:9000.

webMethods Broker

Soudip Dutta, May 9, 2020
webMethods Broker:

 A Broker is where the client programs connect, where document types are stored, and
where client queues and subscriptions are monitored and stored.
 Each Broker Server has one or more entities, called Brokers that reside on it.
 When a Broker client publishes a document, the Broker determines which Broker clients
have subscribed to that document and places the document in the matching Broker client
queues.
 webMethods Broker is the primary component & facilitates asynchronous, message
based integration using the publish and subscribe model, in a WM integration
environment.

Broker Server Host:

 The system on which you install the webMethods Broker software is called the
webMethods Broker Server Host.

Broker Architecture and Components:

 webMethods Broker consists of two main components: Broker Server, the run‐time
component with which publishers and subscribers interact.
 Broker user interface, the administrative component that runs on My webMethods
Server. You use the Broker user interface to configure, monitor, and manage one or
more Broker Servers and the Brokers that they host.
 The Broker user interface is a plug‐in that executes on my webMethods Server. It
enables you to manage webMethods Broker from any browser‐equipped computer in
your organization’s network.
 Any machine that hosts a Broker Server will also host a Broker Monitor. The Broker
Monitor is automatically installed when you install Broker Server.
 Broker Monitor monitors all of the Broker Servers running on the machine where it is
installed. It will automatically attempt to restart any Broker Server that stops running.

Publish-and-Subscribe Model:

 publish‐and‐subscribe model is a specific type of message‐based solution in which
applications exchange messages through a third entity called a broker.
 Publishers: Applications that produce information & send the information to the broker
entity.
 Subscriber: Applications that require the information connect to the broker and retrieve
the information from the broker entity.
 In the pub‐sub model, information producers and consumers are de‐coupled, meaning
they do not interact with one another directly. Instead, each participant interacts only
with the message and the broker entity.
 Participants in a pub‐sub solution interact asynchronously. A program that produces
information does not have to wait for the consumer to acknowledge receipt of that
information. It simply publishes the information to the broker and continues processing.

Document Type:

 Documents are messages that travel over a network from a publisher to a subscriber,
through the Broker.

Two Document Types:

1. Guaranteed (at least once)
2. Volatile (at most once)

 If the Integration Server on which you have a session is connected to a Broker, when
you make an IS document type publishable, the Integration Server automatically creates
a Broker document type. The Integration Server automatically assigns the Broker
document type a name.

Territories:

 Brokers can share information about their document type definitions and client groups
by joining a territory.
 Documents can travel from clients on one Broker to clients on another Broker in the
same territory.

Gateways:

 Gateways are links that you establish between territories.
 A gateway enables clients in one territory to receive documents that are published in
another territory.

Publishing a document to broker:

Publishing Documents When the Broker Is Not Available:

 If the Broker is not connected, the Integration Server routes guaranteed documents to an
outbound document store. The documents remain in the outbound document store until
the connection to the Broker is re‐established.

Publishing Documents and Waiting for a Reply:

 In a publish‐and‐wait scenario, a service publishes a document (a request) and then waits
for a reply document. This is sometimes called the request/reply model. A request/reply
can be synchronous or asynchronous.
 In a synchronous request/reply, the publishing flow service stops executing while it
waits for a response. When the service receives a reply document from the specified
client, the service resumes execution.
 In an asynchronous request/reply, the publishing flow service continues executing after
publishing the request document. That is, the publishing service does not wait for a reply
before executing the next step in the flow service. The publishing flow service must
invoke a separate service to retrieve the reply document.

Subscribe Path for Published Documents:

 When a document is published or broadcast, the Broker places a copy of the document
in the client queue for each subscribing trigger. Each subscribing trigger will retrieve
and process the document.

The Subscribe Path for Delivered Documents:

 A publishing service can deliver a document by specifying the destination of the
document. That is, the publishing service specifies the Broker client that is to receive the
document. When the Broker receives a delivered document, it places a copy of the
document in the queue for the specified client only.

Triggers (Broker/Local Triggers):

 Triggers, specifically Broker/local triggers, establish subscriptions to publishable
document types.
 Triggers also specify the services that will process documents received by the
subscription. Within a trigger, a condition associates one or more publishable document
types with a service.

File polling in IS is for getting files from a filesystem the server can access directly (usually
local or shared discs). You can use a file poller to get a file from such a locally accessible
filesystem and then transfer it to another location using the built-in ftp or sftp services of
Integration Server (sftp is supported since 9.0). put, get and ls are available for both protocols
in the WmPublic package.
If you want to poll from a remote location without additional coding, you need Active
Transfer, which is an additional product on top of Integration Server.

Difference B/w ESB and API Gateway

An ESB (Enterprise Service Bus) provides a means for service-to-service communication. An
API gateway typically acts as a proxy for your web services and provides additional value,
such as logging, making SOAP services callable like REST services, debugging help,
tracing, etc.

webMethods.io Integration is a powerful integration platform as a service (iPaaS) that enables
you to automate tasks by connecting apps and services, such as Marketo, Salesforce, Evernote,
and Gmail.

My webMethods Server (MWS) is a run-time container for functions made available by
webMethods applications. The user interface in which you perform these functions is called My
webMethods. To sum up, My webMethods Server provides a Web container for hosting
custom Web, portlet, and BPM applications.

webMethods is an integration tool which can make integration between two heterogeneous
systems simple and fast.
