

JMS vs Kafka: Which Message Broker Is Better?


January 2, 2024

In the ever-evolving landscape of modern technology, where efficient
communication forms the backbone of success, the choice of a robust
message broker is paramount. For the leader of any team, the decision
between the JMS and Kafka message brokers looms large.

This blog aims to be your guiding light in deciphering the nuances
between JMS and Kafka, helping you make an informed decision that
aligns seamlessly with your organizational goals. Venture with us
through the intricacies of these two prominent message brokers as
we dissect their features, performance, and suitability for the dynamic
challenges faced by businesses in the contemporary technological
landscape. Join us on this exploration, where clarity is the compass
and informed decisions pave the way for success.

What is JMS?
Java Message Service (JMS) is a powerful and versatile messaging
standard that facilitates communication between distributed client
applications. Developed under the Java Community Process, JMS
provides a vendor-agnostic interface, fostering seamless interaction
between different components in a distributed architecture.

Key Features of JMS:


 Messaging Models: JMS supports both Point-to-Point (P2P) and
Publish/Subscribe (Pub/Sub) messaging models, allowing
flexibility in designing communication patterns.
 Reliability with Transactions: JMS ensures message delivery
reliability through its transactional support, enabling atomic
operations for sending or receiving messages.
 Asynchronous Communication: The asynchronous nature of
JMS enhances system efficiency by decoupling message
producers and consumers, promoting parallel processing.
 Message Selectors: JMS offers message selectors, empowering
consumers to filter messages based on specific criteria and
optimizing message processing (see the sketch after this list).
 Scalability: With JMS, systems can easily scale horizontally by
adding more instances, ensuring that the messaging
infrastructure grows seamlessly with the demands of the
application.
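To make these features concrete, here is a minimal sketch of the Pub/Sub model and a message selector using the JMS 2.0 simplified API. The broker, topic name, and message contents are illustrative assumptions; obtaining a ConnectionFactory is provider-specific (JNDI lookup or a vendor class such as ActiveMQ's), so it is left as a placeholder here.

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class JmsPubSubSketch {
    // Hypothetical helper: obtaining a ConnectionFactory is
    // provider-specific (JNDI or a vendor class).
    static ConnectionFactory lookupConnectionFactory() {
        throw new UnsupportedOperationException("provider-specific");
    }

    public static void main(String[] args) {
        ConnectionFactory factory = lookupConnectionFactory();
        try (JMSContext ctx = factory.createContext()) {
            Topic topic = ctx.createTopic("orders"); // assumed topic name

            // Pub/Sub: publish with a property consumers can filter on.
            ctx.createProducer()
               .setProperty("region", "EU")
               .send(topic, "order #42 created");

            // Message selector: this consumer only sees EU orders.
            JMSConsumer consumer = ctx.createConsumer(topic, "region = 'EU'");
            String body = consumer.receiveBody(String.class, 1000); // null on timeout
            System.out.println("Received: " + body);
        }
    }
}

The same JMSContext API serves the P2P model: swap the topic for a queue created with ctx.createQueue(...).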

Advantages of JMS:
 Interoperability: JMS’s vendor-neutral approach fosters
interoperability, allowing integration with diverse messaging
systems and technologies.
 Reliability: Transactional support ensures message delivery even
in the face of system failures, enhancing the overall reliability of
distributed applications.
 Flexibility: The support for different messaging models provides
flexibility in designing communication patterns tailored to specific
use cases.
 Standardization: Being a Java standard, JMS benefits from a
large community and well-defined specifications, ensuring a
stable and mature messaging solution.
 Robust Ecosystem: JMS integrates seamlessly with other Java-
based technologies, creating a robust ecosystem for building
complex, distributed systems.

Drawbacks of JMS:
 Complex Configuration: Setting up and configuring JMS can be
complex, especially for users new to messaging systems,
potentially leading to longer deployment times.
 Learning Curve: The learning curve for mastering JMS concepts
and best practices might be steep, requiring dedicated resources
for training and implementation.
 Scalability Challenges: While JMS supports horizontal scalability,
managing large-scale deployments may pose challenges,
demanding careful consideration of architecture and
configuration.
 Potential Latency: In certain scenarios, synchronous request-reply
built on point-to-point messaging may introduce latency, impacting
real-time communication requirements.
 Limited Language Support: Although primarily designed for Java,
JMS has limited native support for other programming
languages, which might pose integration challenges in a polyglot
environment.

Case Studies – JMS in Action:


 Global Banking System: A leading global bank implemented
JMS to streamline communication between its diverse banking
applications, resulting in improved transaction speeds and
enhanced reliability.
 E-commerce Platform: An e-commerce giant leveraged JMS to
manage order processing across multiple warehouses, ensuring
real-time inventory updates and reducing order fulfillment times.
 Telecommunications Network: A telecommunications
company integrated JMS to facilitate communication between
various network elements, optimizing the performance of its
distributed infrastructure.
 Healthcare Information Exchange: In a healthcare consortium,
JMS played a pivotal role in securely exchanging patient
information across different healthcare providers, ensuring data
integrity and compliance with regulatory standards.
 Supply Chain Management: A multinational logistics firm
utilized JMS to enhance visibility and coordination across its
supply chain, leading to improved inventory management and
reduced operational costs.

What is Kafka?
Apache Kafka stands as a distributed streaming platform that excels in
handling real-time data feeds and stream processing. Originally
developed by LinkedIn, Kafka has evolved into an open-source
powerhouse, providing a robust foundation for building scalable and
fault-tolerant data pipelines.

Key Features of Kafka:


 Fault Tolerance and Durability: Kafka ensures fault tolerance
by replicating data across multiple nodes and guarantees
durability by persisting messages to disk.
 Scalability: Kafka’s distributed architecture enables seamless
scalability, allowing organizations to handle a massive influx of
data by adding additional brokers to the cluster.
 High Throughput: With its efficient publish-subscribe model,
Kafka can handle high throughput, making it suitable for
scenarios with large volumes of real-time data.
 Stream Processing: Kafka enables real-time stream
processing, allowing users to process and analyze data in
motion, opening the door to a wide range of use cases, from
analytics to monitoring.
 Exactly-Once Semantics: Through idempotent producers and
transactions, Kafka supports exactly-once processing semantics,
ensuring that messages are delivered and processed only once, even
in the face of retries and failures (see the sketch after this list).
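As a rough, self-contained illustration of these features, the sketch below publishes and consumes a record with the Kafka Java client; the broker address, topic, and group id are assumptions. enable.idempotence guards against duplicates from producer retries (full end-to-end exactly-once additionally uses transactions), and acks=all waits for the in-sync replicas for durability.

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaDemo {
    public static void main(String[] args) {
        // Producer: acks=all waits for the in-sync replica set;
        // idempotence prevents duplicates caused by retries.
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // assumed broker
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        p.put("acks", "all");
        p.put("enable.idempotence", "true");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("events", "user-1", "clicked"));
        }

        // Consumer: one member of a consumer group; Kafka spreads
        // partitions across the group's members for scalability.
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "demo-group");
        c.put("key.deserializer", StringDeserializer.class.getName());
        c.put("value.deserializer", StringDeserializer.class.getName());
        c.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(List.of("events"));
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("%s -> %s%n", r.key(), r.value());
            }
        }
    }
}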

Advantages of Kafka:
 Real-time Data Processing: Kafka’s ability to handle real-time
data streams makes it an ideal choice for applications requiring
low-latency processing and analytics.
 Decoupling of Systems: Kafka provides a decoupled
architecture, allowing different components of a system to
operate independently, enhancing flexibility and resilience.
 Elasticity and Scalability: The distributed nature of Kafka
facilitates elasticity, enabling seamless scalability to adapt to
changing workloads and data volumes.
 Versatility: Kafka’s versatility extends beyond traditional
messaging, supporting use cases such as log aggregation, event
sourcing, and building data lakes.
 Community and Ecosystem: Kafka benefits from a vibrant
open-source community and an extensive ecosystem of
connectors, tools, and integrations, making it well-supported and
adaptable.

Drawbacks of Kafka:
 Complexity: Implementing and managing Kafka can be
complex, especially for users unfamiliar with distributed systems,
requiring a significant learning curve.
 Resource Intensive: Kafka may be resource-intensive,
particularly in scenarios with high data volume and velocity,
necessitating careful consideration of hardware and
infrastructure.
 Operational Overhead: Operating Kafka clusters involves
ongoing maintenance and monitoring, adding to the operational
overhead for organizations.
 Retention Policy Management: Defining and managing message
retention policies in Kafka can be challenging, requiring careful
planning to balance storage needs and data accessibility.
 Integration Challenges: Integrating Kafka with existing systems
might pose challenges, especially when migrating from
traditional messaging solutions or when dealing with diverse
technology stacks.

Case Studies – Kafka in Action:


 Social Media Platform: A major social media platform
implemented Kafka to handle real-time data streams, enabling
instant content updates and personalized user experiences.
 Financial Services: A global financial institution utilized Kafka
for real-time transaction processing, ensuring secure and
efficient communication across its banking systems.
 Retail Analytics: A leading e-commerce company employed
Kafka for stream processing to analyze customer behavior in
real-time, improving targeted marketing strategies and inventory
management.
 IoT Data Processing: An IoT solution provider leveraged Kafka
to ingest and process data from a vast network of connected
devices, ensuring real-time monitoring and analytics.
 Logistics and Supply Chain: A logistics company adopted
Kafka to streamline its supply chain operations, providing real-
time visibility into inventory movements and optimizing logistics
workflows.

JMS vs. Kafka: A Comprehensive Comparison


As business owners evaluate messaging solutions for their diverse
needs, understanding the distinctions between Java Message Service
(JMS) and Apache Kafka is crucial. Below is a side-by-side
comparison across various dimensions to aid in making an informed
choice.

1. Messaging Models:

 JMS: Supports both Point-to-Point (P2P) and Publish/Subscribe (Pub/Sub) models, providing flexibility in designing communication patterns.
 Kafka: Primarily follows a Publish/Subscribe model, emphasizing real-time stream processing and a log-based architecture.

2. Scalability:

 JMS: Horizontally scalable, but may face challenges in managing large-scale deployments.
 Kafka: Inherently designed for seamless horizontal scalability, allowing organizations to handle increasing data volumes by adding more brokers.

3. Reliability:

 JMS: Ensures reliability through transactional support, offering atomic operations for sending or receiving messages.
 Kafka: Achieves reliability by replicating data across multiple nodes, providing fault tolerance and durability even in the face of node failures.

4. Latency and Throughput:

 JMS: May introduce latency in certain scenarios, especially with synchronous request-reply over point-to-point messaging.
 Kafka: Excels in low-latency processing and high throughput, making it ideal for real-time data streaming and analytics.

5. Ease of Configuration and Learning Curve:

 JMS: Configuration can be complex, and there is a learning curve, especially for users new to messaging systems.
 Kafka: Configuration and setup may also be complex, requiring a learning curve for those unfamiliar with distributed systems.

6. Language Support:

 JMS: Primarily designed for Java, with limited native support for other programming languages.
 Kafka: Offers better language support, with official and community-maintained client libraries for Java, Python, Go, and more, catering to a polyglot environment.

7. Message Retention Policies:

 JMS: Provides varying options for message retention, but defining and managing policies can be challenging.
 Kafka: Offers flexible and granular control over message retention policies, allowing organizations to balance storage needs and data accessibility (see the sketch below).
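For illustration, topic-level retention in Kafka can be set when a topic is created; the sketch below uses the Kafka Java AdminClient, with the broker address, topic name, and limits being assumptions rather than recommendations.

import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class RetentionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Keep messages for 7 days, or until a partition exceeds
            // ~1 GiB, whichever limit is hit first.
            NewTopic topic = new NewTopic("clickstream", 3, (short) 1)
                .configs(Map.of(
                    "retention.ms", "604800000",
                    "retention.bytes", "1073741824"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}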

8. Use Cases:

 JMS: Well-suited for traditional enterprise messaging scenarios, interconnecting diverse applications and services.
 Kafka: Excels in use cases involving real-time stream processing, log aggregation, event sourcing, and building data lakes.

9. Community and Ecosystem:

 JMS: Benefits from a mature Java community, but may have a more limited ecosystem compared to Kafka.
 Kafka: Thrives in a vibrant open-source community with an extensive ecosystem of connectors, tools, and integrations.

10. Operational Overhead:

 JMS: Involves ongoing maintenance and monitoring, potentially adding to operational overhead.
 Kafka: Requires operational effort for cluster management and monitoring, but its distributed architecture allows for efficient handling of operational concerns.

Understanding the nuances and trade-offs in these dimensions will
empower decision-makers to align their choice of messaging solution
with the unique requirements of their organizations, ensuring
seamless integration into their technological landscape.

Popularity Analysis: JMS vs. Kafka


As of the latest available market data, both Java Message Service
(JMS) and Apache Kafka stand as prominent players in the
messaging landscape, each catering to distinct use cases and
preferences within the tech community.
JMS Popularity:

JMS, being a Java-based messaging standard, has been a stalwart in
the enterprise messaging domain for many years. Its popularity is
rooted in its maturity, reliability, and strong integration with
Java-centric environments. Many traditional enterprises, particularly
those with established Java-based systems, continue to leverage JMS
for their messaging needs. However, the adoption rate of JMS may vary
in newer, more dynamic environments that demand real-time data
processing and scalability.

Kafka Popularity:

Apache Kafka, on the other hand, has experienced a surge in
popularity in recent years, especially in contexts where real-time
data streaming, scalability, and fault tolerance are paramount.
Kafka's distributed, log-based architecture aligns well with modern
microservices and streaming-data architectures, contributing to its
widespread adoption. The versatility of Kafka, extending beyond
traditional messaging to include log aggregation, event sourcing, and
data lakes, has further fueled its popularity among organizations
seeking a robust and scalable solution.

Market Trends:

Current market trends suggest a growing preference for Kafka,
particularly in industries and applications that prioritize
low-latency processing, high throughput, and the ability to handle
massive data volumes in real time. Kafka's vibrant open-source
community, extensive ecosystem, and integration capabilities with
various programming languages have contributed to its popularity
among a diverse range of organizations, including startups and large
enterprises.

Consideration Factors:

The choice between JMS and Kafka often depends on the specific
requirements of the project, existing technology stack, and the nature
of data processing needs. While JMS remains a solid choice for
traditional enterprise messaging scenarios, Kafka’s popularity
continues to rise, driven by the demands of modern, data-intensive
applications.

In summary, both JMS and Kafka enjoy popularity in their respective
niches, and the decision between them should be guided by a careful
evaluation of the specific needs and goals of the organization,
taking into account factors such as scalability, real-time
processing, and ecosystem support.
Choosing Between JMS and Kafka: A Strategic Guide
The decision to use Java Message Service (JMS) or Apache Kafka
hinges on the specific requirements, architectural considerations, and
future scalability needs of an organization. Each messaging solution
excels in different scenarios, and understanding when to leverage
JMS or Kafka is pivotal for optimizing communication within a
technological ecosystem.
When to Use JMS:
Enterprise Legacy Systems: JMS remains a stalwart choice for
organizations with established Java-centric environments and legacy
systems. If your infrastructure heavily relies on Java applications and
you prioritize stability and proven solutions, JMS is a natural fit.

Interoperability within Java Ecosystems: When seamless integration
within Java ecosystems is crucial, JMS provides a standardized
messaging interface, ensuring interoperability and compatibility with
a wide array of Java applications.

Transaction-Intensive Applications: In scenarios where transactional
support is paramount, such as in financial applications or systems
where data integrity is critical, JMS's reliable and transactional
capabilities make it an ideal choice.
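As a minimal sketch of that transactional support (assuming a provider-supplied ConnectionFactory and an illustrative "payments" queue), a transacted JMSContext makes a group of sends atomic:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class AtomicTransferSketch {
    // factory is assumed to come from your JMS provider.
    static void transfer(ConnectionFactory factory) {
        try (JMSContext ctx =
                 factory.createContext(JMSContext.SESSION_TRANSACTED)) {
            Queue queue = ctx.createQueue("payments"); // assumed queue name
            ctx.createProducer().send(queue, "debit:acct-1:100");
            ctx.createProducer().send(queue, "credit:acct-2:100");
            ctx.commit(); // both messages become visible, or neither does
        } // closing a transacted context without commit rolls the work back
    }
}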

Established Messaging Patterns: For organizations adhering to
traditional messaging patterns, such as Point-to-Point (P2P) or
Publish/Subscribe (Pub/Sub), JMS offers a mature and well-established
framework for such communication models.
When to Use Kafka:
Real-Time Data Processing: Kafka excels in scenarios demanding
real-time data processing, making it an optimal choice for applications
where low-latency and high-throughput communication are critical,
such as in analytics, monitoring, and streaming data architectures.
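To give a flavor of this kind of processing, here is a small Kafka Streams sketch that continuously counts clicks per user as events arrive; the application id, topic names, and broker address are assumptions, not prescriptions.

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

public class ClickCountSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "click-counter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // Read clicks keyed by user, count per key, write counts out.
        builder.stream("clicks",
                Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey()
            .count()
            .toStream()
            .to("click-counts",
                Produced.with(Serdes.String(), Serdes.Long()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}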

Scalable and Fault-Tolerant Architectures: Organizations requiring
seamless scalability and fault tolerance, especially in distributed
and microservices architectures, find Kafka well suited for handling
large volumes of data across diverse nodes.

Log Aggregation and Event Sourcing: Kafka's log-based architecture
makes it an excellent choice for log aggregation, event sourcing, and
building data lakes. If your use case involves capturing and
processing events in chronological order, Kafka provides an efficient
solution.

Dynamic and Adaptive Environments: Startups, innovators, and
organizations entering dynamic and rapidly evolving environments
benefit from Kafka's adaptability. Its versatile ecosystem and
support for multiple programming languages make it suitable for a
variety of scenarios.
Strategic Considerations For JMS and Kafka
Project Requirements: Assess the specific requirements of your
project, considering factors such as data volume, processing speed,
and integration needs.

Existing Infrastructure: Evaluate your current technology stack and
infrastructure. JMS might be a seamless integration choice for
Java-centric environments, while Kafka offers adaptability to diverse
technology stacks.
Future Scalability: Consider future scalability needs. Kafka’s inherent
scalability may be advantageous for organizations anticipating growth
and dynamic workloads.

Learning Curve and Expertise: Factor in the learning curve and
expertise of your team. If familiarity with Java is key, JMS might be
the more straightforward choice; however, if your team is adaptable
and open to new paradigms, Kafka offers a broader range of
possibilities.

In conclusion, the decision between JMS and Kafka is not binary but
strategic, rooted in the unique characteristics and aspirations of your
organization. By aligning the strengths of each messaging solution
with your specific use cases, you can build a robust communication
infrastructure that propels your organization into the future of data
processing.
Cost Of JMS (Java Message Service):

 Apache ActiveMQ (Open Source):
o ActiveMQ, being an open-source JMS provider, is typically free to use, without any licensing costs.
o However, organizations may incur costs related to infrastructure, maintenance, and support.
 Commercial JMS Providers:
o Commercial JMS providers such as IBM MQ, Oracle WebLogic JMS, and TIBCO EMS often follow a licensing model.
o Licensing costs can range from thousands to tens of thousands of dollars, depending on factors like features, support, and deployment scale.
Cost of Kafka:

 Confluent Platform:
o Confluent, the company founded by the creators of Apache
Kafka, offers the Confluent Platform with additional
enterprise features and support.
o Confluent’s pricing typically involves a subscription model
based on factors like data volume, connectors, and level of
support.
o Costs can range from a few thousand dollars per month to
higher amounts based on the specific subscription tier.
 Managed Kafka Services:
o Cloud providers like AWS, Azure, and Google Cloud offer
managed Kafka services (e.g., Amazon MSK, Azure Event
Hubs for Kafka, and Google Cloud Pub/Sub with Kafka
interface).
o Pricing for managed services is usually based on factors
such as data transfer, storage, and the number of
operations.
o Costs can vary but are generally in the range of several
hundred to several thousand dollars per month, depending
on usage.
Considerations:

 Open Source vs. Commercial:
o JMS, with options like Apache ActiveMQ, provides open-source solutions with potentially lower initial costs.
o Kafka, especially with the Confluent Platform, may involve subscription fees for additional features and support.
 Cloud Provider Costs:
o When using managed Kafka services on cloud platforms,
organizations need to consider additional costs related to
data transfer, storage, and other cloud-specific services.
 Scaling and Usage Patterns:
o Costs for both JMS and Kafka can be influenced by factors
such as the scale of deployment, data volume, and the
level of support required.
 Support and Maintenance:
o Commercial JMS providers and Confluent often offer
different support tiers with varying costs, providing
organizations with options based on their support needs.
For accurate and current pricing information, it’s recommended to
directly consult the official websites of the respective vendors or
contact their sales teams for tailored quotes based on specific
requirements.

Conclusion
In the ever-evolving landscape of modern technology, the choice
between Java Message Service (JMS) and Apache Kafka is not just a
matter of preference but a strategic decision that shapes the efficiency
and scalability of communication within an organization. As CEOs,
CTOs, Hiring Managers, Project Managers, Entrepreneurs, and
Startup Founders navigate the intricate realm of messaging solutions,
understanding the nuances and distinctive features of JMS and Kafka
becomes paramount.

Reflecting on JMS:
JMS, with its solid foundation and maturity, has long been the stalwart
of traditional enterprise messaging. Its reliability, transactional support,
and established presence within Java-centric environments make it a
reliable choice, particularly for organizations with legacy systems and
a focus on stability. JMS continues to hold its ground in scenarios
where the emphasis is on proven solutions and interoperability within
Java ecosystems.

Embracing Kafka’s Rise:


In contrast, Apache Kafka emerges as a force to be reckoned with,
riding the wave of the industry’s shift towards real-time data
processing and scalable, fault-tolerant architectures. Kafka’s
distributed, log-based model, coupled with its versatility in supporting
diverse use cases beyond traditional messaging, positions it as a
frontrunner in the era of microservices, data streaming, and dynamic
scaling. Its vibrant community and extensive ecosystem contribute to
its popularity, especially in environments demanding adaptability and
performance.
The Path Forward:
As we conclude this exploration, it’s essential to recognize that the
choice between JMS and Kafka is not a one-size-fits-all decision.
Organizations must weigh the demands of their projects, existing
infrastructure, and future scalability requirements. JMS remains a
robust choice for established enterprises with a legacy Java stack,
ensuring a seamless integration of messaging capabilities.

On the other hand, Kafka's popularity is indicative of its alignment
with the demands of modern architectures, where real-time processing,
scalability, and adaptability are pivotal. For startups, innovators,
and those venturing into the realms of streaming data, Kafka presents
an exciting avenue to explore.

In the tables joined below, school_id is the foreign key.

There are two approaches to join three or more tables in SQL:

1. Using JOINS in SQL:

The same logic used to join two tables applies here, i.e., the
minimum number of join statements needed to join n tables is (n - 1).

select s_name, score, status, address_city, email_id, accomplishments
from student s
inner join mark m on s.s_id = m.s_id
inner join details d on d.school_id = m.school_id;

Query to find the highest salary:

SELECT emp_name AS Employee, salary AS Salary FROM employee ORDER BY salary DESC LIMIT 0,1;

Query to find the third-highest salary using DENSE_RANK():

SELECT * FROM (
    SELECT emp_name, salary,
           DENSE_RANK() OVER (ORDER BY salary DESC) AS ranking
    FROM employee
) AS k
WHERE ranking = 3;

DIFFERENCE BETWEEN DELETE AND TRUNCATE IN SQL
Difference Between DBMS and RDBMS

 DBMS stores data as files; RDBMS stores data in tabular form.
 In a DBMS, data elements must be accessed individually; in an RDBMS, multiple data elements can be accessed at the same time.
 In a DBMS there is no relationship between data; in an RDBMS, data is stored in the form of tables that are related to each other.
 Normalization is not present in a DBMS; normalization is present in an RDBMS.
 A DBMS does not support distributed databases; an RDBMS supports distributed databases.
 A DBMS stores data in either a navigational or hierarchical form; an RDBMS uses a tabular structure where the headers are the column names and the rows contain the corresponding values.
 A DBMS deals with small quantities of data; an RDBMS deals with large amounts of data.
 Data redundancy is common in the DBMS model; in an RDBMS, keys and indexes do not allow data redundancy.
 A DBMS is used by small organizations dealing with small amounts of data; an RDBMS is used to handle large amounts of data.
 A DBMS does not satisfy all of Codd's rules; an RDBMS satisfies all 12 Codd rules.
 A DBMS provides less security; an RDBMS provides more security measures.
 A DBMS supports a single user; an RDBMS supports multiple users.
 In a DBMS, data fetching is slower for large amounts of data; in an RDBMS, data fetching is fast because of the relational approach.
 Data in a DBMS is subject to low security levels with regard to data manipulation; an RDBMS has multiple levels of data security.
 A DBMS has low software and hardware requirements; an RDBMS has higher software and hardware requirements.
 DBMS examples: XML, Windows Registry, FoxPro, dBase III Plus, etc. RDBMS examples: MySQL, PostgreSQL, SQL Server, Oracle, Microsoft Access, etc.

Create a collection in MongoDB:

db.createCollection("posts")

Insert records into a collection:

db.students.insertMany([
  { id: 1, name: 'Ryan', gender: 'M' },
  { id: 2, name: 'Joanna', gender: 'F' }
]);

Find records in a collection:

db.students.find({ gender: 'F' });

db.posts.find({ name: "Joanna" })

Update a document:

db.posts.updateOne({ name: "Ryan" }, { $set: { likes: 2 } })

Delete a document:

db.posts.deleteOne({ title: "Post Title 5" })


db.createCollection("posts", {
validator: {
$jsonSchema: {
bsonType: "object",
required: [ "title", "body" ],
properties: {
title: {
bsonType: "string",
description: "Title of post - Required."
},
body: {
bsonType: "string",
description: "Body of post - Required."
},

MongoDB Update Operators


There are many update operators that can be used during document updates; a short Java example follows the lists below.

Fields
The following operators can be used to update fields:

 $currentDate: Sets the field value to the current date


 $inc: Increments the field value
 $rename: Renames the field
 $set: Sets the value of a field
 $unset: Removes the field from the document

Array
The following operators assist with updating arrays.

 $addToSet: Adds distinct elements to an array


 $pop: Removes the first or last element of an array
 $pull: Removes all elements from an array that match the query
 $push: Adds an element to an array
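Keeping with the Java examples used elsewhere in this document, the sketch below applies $set, $inc, and $push in a single update via the MongoDB Java (sync) driver; the connection string, database, collection, and field names are assumptions.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class UpdateOperatorsSketch {
    public static void main(String[] args) {
        try (MongoClient client =
                 MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> posts =
                client.getDatabase("blog").getCollection("posts");

            // $set, $inc, and $push applied in one update.
            posts.updateOne(Filters.eq("name", "Ryan"),
                Updates.combine(
                    Updates.set("status", "published"), // $set
                    Updates.inc("likes", 1),            // $inc
                    Updates.push("tags", "java")));     // $push
        }
    }
}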
Following are the main differences between functions and procedures
(a short JDBC illustration follows the list):

 A function has a return type and returns a value. A procedure does not have a return type, but it can return values using OUT parameters.
 You cannot use a function with data-manipulation queries; only SELECT queries are allowed in functions. With procedures, you can use DML statements such as INSERT, UPDATE, and DELETE.
 A function does not allow output parameters. A procedure allows both input and output parameters.
 You cannot manage transactions inside a function. You can manage transactions inside a procedure.
 You cannot call a stored procedure from a function. You can call a function from a stored procedure.
 You can call a function using a SELECT statement. You cannot call a procedure using a SELECT statement.
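A short JDBC sketch can make the calling-convention difference concrete. Here, bonus_for and raise_salary are hypothetical routines, and the JDBC URL, credentials, and table are placeholders.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Types;

public class FunctionVsProcedureSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/hr", "user", "pass")) {

            // A function returns a value and can sit inside a SELECT.
            try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT bonus_for(emp_id) FROM employee");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getBigDecimal(1));
                }
            }

            // A procedure is CALLed and hands results back via OUT params.
            try (CallableStatement cs =
                     conn.prepareCall("{call raise_salary(?, ?)}")) {
                cs.setInt(1, 42);                          // IN: employee id
                cs.registerOutParameter(2, Types.DECIMAL); // OUT: new salary
                cs.execute();
                System.out.println(cs.getBigDecimal(2));
            }
        }
    }
}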
The following are the key differences between triggers and stored
procedures:
1. Triggers cannot be manually invoked or executed.
2. Triggers cannot accept parameters.
3. A transaction cannot be committed or rolled back inside a trigger.
ALTER TABLE statement

The ALTER TABLE statement allows you to do the following (a short example follows the list):

 add a column to a table


 add a constraint to a table
 drop a column from a table
 drop an existing constraint from a table
 increase the width of a VARCHAR or VARCHAR FOR BIT DATA column
 override row-level locking for the table (or drop the override)
 change the increment value and start value of the identity column
 change the nullability constraint for a column
 change the default value for a column
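As a small illustration, DDL statements such as these can be issued like any other statement. Exact ALTER TABLE syntax varies by database; this sketch assumes a MySQL-style dialect, and the JDBC URL, credentials, table, and column names are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterTableSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost:3306/school", "user", "pass");
             Statement stmt = conn.createStatement()) {
            // Add a column to the table...
            stmt.executeUpdate(
                "ALTER TABLE student ADD COLUMN email VARCHAR(100)");
            // ...then change that column's default value.
            stmt.executeUpdate(
                "ALTER TABLE student ALTER COLUMN email SET DEFAULT ''");
        }
    }
}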

With a cursor FOR loop, explicit OPEN, FETCH, and CLOSE are not required.

Theoretically, using the OPEN-FETCH-CLOSE cursor method and the
cursor FOR loop, one can implement the same functionality. The only
difference is that the cursor FOR loop (implicit open, fetch, and
close) is a shortcut for explicitly opening, fetching from, and then
closing the cursor.

Implicit cursors are automatically created when SELECT statements are
executed. Explicit cursors need to be defined explicitly by the user
by providing a name. Implicit cursors are capable of fetching only a
single row at a time, while explicit cursors can fetch multiple rows.

Difference between Implicit and Explicit Cursors:

 Implicit cursors are automatically created when SELECT statements are executed. Explicit cursors need to be defined explicitly by the user by providing a name.
 Implicit cursors can fetch only a single row at a time. Explicit cursors can fetch multiple rows.
 Implicit cursors close automatically after execution. Explicit cursors need to be closed after execution.
 Implicit cursors are more vulnerable to errors (data errors, etc.). Explicit cursors are less vulnerable to such errors.
 Implicit cursors provide less programmatic control to the user. With explicit cursors, the user/programmer has the entire control.
 Implicit cursors are less efficient; comparatively, explicit cursors are more efficient.
 Implicit cursors require anonymous buffer memory for storage. Explicit cursors use user-defined memory space for storage.
 Implicit cursor attributes use the prefix SQL, with the structure SQL%attr_name; a few of them are SQL%FOUND, SQL%NOTFOUND, and SQL%ROWCOUNT. Explicit cursor attributes use the structure cur_name%attr_name; a few of them are cur_name%FOUND, cur_name%NOTFOUND, and cur_name%ROWCOUNT.

Implicit cursors are defined as, for example:

BEGIN
  SELECT attr_name FROM table_name WHERE condition;
END;

Explicit cursors are defined as, for example:

DECLARE
  CURSOR cur_name IS
    SELECT attr_name FROM table_name WHERE condition;
BEGIN
  ...

1. View:
A view is a virtual table that does not actually exist in the
database but can be produced upon request by a particular user. A
view is an object that gives the user a logical view of data from a
base table. We can restrict what a user can view by allowing them to
see only the necessary columns from the table and hiding the other
details. Views also permit users to access data according to their
requirements, so the same data can be accessed by different users in
different ways according to their needs.
2. Cursor:
A cursor is a temporary work area created in memory for processing
and storing the information related to an SQL statement when it is
executed. The temporary work area is used to store the data retrieved
from the database and to manipulate that data as needed. It contains
all the necessary information on the data accessed by the SELECT
statement. It can hold a set of rows, called the active set, but can
access only a single row at a time. There are two different types of
cursors:
1. Implicit Cursor
2. Explicit Cursor
Difference between View and Cursor in SQL:

1. Terminology: A view is a virtual table that gives a logical view of data from a base table. A CURSOR (CURrent Set Of Records) is a temporary work area created in the database server when an SQL statement is executed.

2. Nature: Views are dynamic in nature, which means any changes made in the base table are immediately reflected in the view. A cursor can be static as well as dynamic in nature.

3. Operations: We can perform CRUD operations on a view (create, insert, delete, and update). For an explicit cursor there are some steps: declare the cursor in the declaration section; open the cursor in the execution section; fetch the cursor to retrieve data into PL/SQL variables; and close the cursor to release the allocated memory.

4. Data retrieval: Views are used to fetch or update data. Using cursors, data retrieval from the result set takes place row by row.

5. Types: There are two types of views: the simple view (created from a single table) and the complex view (created from multiple tables). A simple view cannot use group functions like COUNT() and MIN(), whereas a complex view can. Cursors have two types: the implicit cursor (pre-defined; created automatically by Oracle whenever a DML operation or SQL statement is executed) and the explicit cursor (user-defined; the user needs to define it explicitly by giving it a name).

6. Usage: A view is a database object similar to a table, so it can be used with both SQL and PL/SQL. A cursor is defined and used within the block of a stored procedure, which means it can be used only with PL/SQL.

7. Syntax: General syntax for creating a view: CREATE VIEW view_name AS sql_statement; General syntax for creating an explicit cursor: CURSOR cursor_name IS select_statement;

1. Cursor in PL/SQL:
A cursor can basically be referred to as a pointer to the context
area. The context area is a memory area created by Oracle when an SQL
statement is processed. The cursor is thus responsible for holding
the rows that have been returned by an SQL statement; PL/SQL controls
the context area through the cursor. An active set is the set of rows
that the cursor holds. The cursor can be of two types: implicit
cursor and explicit cursor.
Advantages of Cursor:
 They are helpful in performing row-by-row processing and row-wise validation on each row.
 Better concurrency control can be achieved by using cursors.
 Cursors are faster than while loops.
Disadvantages of Cursor:
 They use more resources each time and thus may result in network round trips.
 A larger number of network round trips can degrade performance and reduce speed.
2. Trigger in PL/SQL:
A trigger is basically a program which gets executed automatically in
response to some event, such as a modification in the database. Some
of the events that cause their execution are DDL statements, DML
statements, or any database operation. Triggers are thus stored
within the database and come into action when specific conditions
match. Hence, they can be defined on any schema, table, view, etc.
There are six types of triggers: BEFORE INSERT, AFTER INSERT, BEFORE
UPDATE, AFTER UPDATE, BEFORE DELETE, and AFTER DELETE.
Advantages of Trigger:
 They are helpful in keeping track of all the changes within the database.
 They also help in maintaining integrity constraints.
Disadvantages of Trigger:
 They are very difficult to view, which makes debugging also difficult.
 Too much use of triggers, or writing complex code within a trigger, can slow down performance.
Difference between Cursor and Trigger:

1. A cursor is a pointer which is used to control the context area and to go through the records in the database. A trigger is a program which gets executed in response to the occurrence of some event.

2. A cursor can be created within a trigger by writing the declare statement inside the trigger. A trigger cannot be created within a cursor.

3. A cursor gets created in response to the execution of an SQL statement, and thus it is not previously stored. A trigger is a previously stored program.

4. The main function of a cursor is the retrieval of rows from the result set one at a time (row by row). The main function of a trigger is to maintain the integrity of the database.

5. A cursor is activated, and thus created, in response to any SQL statement. A trigger is executed in response to a DDL statement, a DML statement, or any database operation.

6. The main disadvantage of a cursor is that it uses more resources each time and thus results in network round trips. The main disadvantage of triggers is that they are hard to view, which makes debugging really difficult.

Types of joins:

o Inner Joins (Simple Join)
o Outer Joins
  o Left Outer Join (Left Join)
  o Right Outer Join (Right Join)
  o Full Outer Join (Full Join)
o Equijoins
o Self Joins
o Cross Joins (Cartesian Products)
o Antijoins
o Semijoins

Difference between Views and Materialized Views in SQL

The following points highlight the important differences between Views and Materialized Views:

 Definition: Technically, a view of a table is a logical virtual copy of the table created by a "select query", but the result is not stored anywhere on disk; whenever the data is needed, the query has to be fired, so the user always gets the updated or latest data from the original tables. Materialized views are also logical virtual copies of data driven by a "select query", but the result of the query is stored in a table or on disk.

 Storage: For views, the resulting tuples of the query expression are not stored on disk; only the query expression is stored. For materialized views, both the query expression and the resulting tuples of the query are stored on disk.

 Query execution: For a view, the query expression is stored on disk, not its result, so the query expression gets executed every time the user tries to fetch data, and the user gets the latest updated values every time. For a materialized view, the result of the query is stored on disk, so the query expression is not executed every time the user tries to fetch the data; as a result, the user may not get the latest updated values if the underlying data has changed in the database.

 Cost effectiveness: Views have no storage cost associated with them, so they also have no update cost. Materialized views have a storage cost and an update cost associated with them.

 Design: Views in SQL are designed with a fixed architecture approach, due to which there is an SQL standard for defining a view. Materialized views are designed with a generic architecture approach, so there is no SQL standard for defining them; their functionality is provided by some database systems as an extension.

 Usage: Views are generally used when data is to be accessed infrequently while the data in the table gets updated on a frequent basis. Materialized views are used when data is to be accessed frequently while the data in the table does not get updated on a frequent basis.

How to process large files in BusinessWorks and BusinessWorks Container Edition

Parsing a Large Number of Records


The input for this activity is placed in a process variable and takes up memory as it is
being processed. When reading a large number of records from a file, the process may
consume significant machine resources. To avoid using too much memory, you may want to
read the input in parts, parsing and processing a small set of records before moving on to
the next set of records.

This procedure is a general guideline for creating a loop group for parsing a large set of
input records in parts. You may want to modify the procedure to include additional
processing of the records, or you may want to change the XPath expressions to suit your
business process. If processing a large number of records, do the following.
Procedure

1. Select and drop the Parse Data activity on the process editor.
2. On the General tab, specify the fields and select the Manually Specify Start
Record check box.
3. Select the Parse Data activity and click the group icon on the tool bar to create a
group containing the Parse Data activity.
4. Specify Repeat Until True Loop as the Group action, and specify an index name
(for example, "i").

The loop must exit when the EOF output item for the Parse Data activity is set
to true. For example, the condition for the loop can be set to the
following: string($ParseData/Output/done) = string(true())

5. Set the noOfRecords input item for the Parse Data activity to the number of
records you want to process for each execution of the loop.

If you do not select the Manually Specify Start Record check box on
the General tab of the Parse Data activity, the loop processes the
specified noOfRecords with each iteration, until there are no more input records
to parse.

You can optionally select the Manually Specify Start Record check box to specify
the startRecord on the Input tab. If you do this, you must create an XPath
expression to properly specify the starting record to read with each iteration of
the loop. For example, the count of records in the input starts at zero, so
the startRecord input item could be set to the current value of the loop index
minus one. For example, $i - 1

How to enable Engine Memory Saving mode in BusinessWorks 6.X

Enabling Memory Saving Mode at Design Time

Use the following steps to enable memory saving mode at design time.

Procedure

1. To enable memory saving mode in TIBCO Business Studio™
for BusinessWorks™, navigate to
Windows > Preferences > BusinessWorks > Process Diagram
and select the option Save information to support memory saving
mode. This option is not selected by default.
2. To update memory saving mode for existing processes,
right-click on TIBCO BusinessWorks™ Container Edition Projects and
select Refactor > Repair BusinessWorks Projects. In the dialog box,
select the Update memory saving mode variables option. The variables
that can be freed from different activities are displayed on the
Preview page; select Ok.

The corresponding BX VariableDescriptor model is created and
serialized in the process file.

3. To remove memory saving mode for existing processes,
right-click on TIBCO BusinessWorks™ Container Edition Projects and
select Refactor > Repair BusinessWorks Projects. In the dialog box,
select the Remove memory saving mode variables option. The variables
that can be removed from different activities are displayed on the
Preview page; select Ok.

The corresponding BX VariableDescriptor model is removed from the
process file.

Configure the following bwengine property to enable or disable Memory
Saving Mode while running the application:

bw.engine.enable.memory.saving.mode=true

It can be defined in different ways, at the AppNode or AppSpace level:

. Set the property to true in the AppNode or AppSpace configuration:

bw.engine.enable.memory.saving.mode=true

. Alternatively, you can set it once for all AppNodes of a server by
commenting this property in both 'appnode_config.ini_template' and
'appspace_config.ini_template' (before AppNode/AppSpace creation) and
adding the following line at the end of the bwappnode.tra file:

java.property.bw.engine.enable.memory.saving.mode=true

. In a BWCE environment, this can be done by adding the following
parameter to the BW_JAVA_OPTS list of options (see the BWCE
documentation for details):

-Dbw.engine.enable.memory.saving.mode=true

Engine Memory Saving mode in BusinessWorks 5.X

Engine Memory Saving is also available for BusinessWorks 5.X. To
enable it, simply update the 'EnableMemorySavingMode' property in
Administrator (nothing to do at design time). Go to your application
configuration, and open the 'Advanced' tab of the process archive
configuration.
How to execute a Process at application start-up or shutdown in BusinessWorks and BusinessWorks Container Edition

This can be done in BusinessWorks and BusinessWorks Container Edition by using an Activator Process. Such a process can be created as follows:

1. Go to the Module Descriptor Overview panel of your application and click on the Create Activator Process button.

(Figure: Module Descriptor Overview - Creation of an Activator Process)

2. The Activator Process Creation panel is displayed; select the target Process Folder and Package, then enter a name for the Process and click Finish.

(Figure: Creation of an Activator Process)

3. The Activator Process is now created.

(Figure: An empty Activator Process)

4. Now you can implement your Activator Process as needed.

How to manage HTTP Flows in TIBCO BusinessWorks 5.X

How to adjust the configuration to manage incoming HTTP requests

This part is well known, as the main properties available to configure
the elements managing the input flows are exposed in TIBCO
Administrator.

Flow control in Administrator

'Max Jobs': this property is related to a mechanism, available since
the very first release of BusinessWorks, to control incoming flows for
any transport. The 'Max Jobs' property defines the number of process
instances that can be created for a given process starter. When this
limit is reached and new events are received, process instances are
created, serialized, and then written to disk without being executed;
when the number of process instances goes below the limit, a pending
process instance is read from disk, de-serialized, and instantiated
for execution.

When the value '0' is displayed in Administrator, the default value 75
is used at runtime.

This value can be defined in TIBCO Administrator and also in the XML
deployment file used by AppManage. The default value 75 can be changed
for all HTTP process starters of a given BusinessWorks engine at a
time with the following property: bw.plugin.http.server.maxProcessors

What is less known is that it is possible to have BusinessWorks reject
incoming HTTP requests once Flow Limit is reached or about to be
reached. This can be done using the
bw.plugin.http.server.acceptCount property.

Using OAuth 2.0 in BusinessWorks and BusinessWorks Container Edition


How can I overwrite a global variable value from the command line?

Resolution:
You can achieve this by passing global variables as BW command-line
arguments. After deploying a BW project, a .cmd file is created, and
in this file there is a command like this:

"C:/tibco/bw/5.3/bin/bwengine.exe" --run --propFile "…tra"

You can add some properties to it, like this:

"C:/tibco/bw/5.3/bin/bwengine.exe" -tibco.clientVar.dbpassword tibco -tibco.clientVar.dbuser tibco --run --propFile "….tra"

where "dbpassword" and "dbuser" are global variables used in the BW
project as the JDBC connection's password and user name.

Please also note that there is another way to do this: if you add the
GVs in bwengine.xml, you can modify these values during deployment.

Keywords: command line, global variables, properties file, bwengine.xml

How to change a global variable without redeploying the application?

Products: TIBCO ActiveMatrix BusinessWorks
Environment: ALL

Resolution:
1). Create a file with the name "Properties.cfg" in any location.

2). Add the entry tibco.clientVar.<Global Variable Name> <value>, e.g.
tibco.clientVar.password admin3

3). Open your application.tra

4). Search for tibco.env.APP_ARGS and add the path of the
Properties.cfg created in Step 1 with the parameter "-p", e.g.
tibco.env.APP_ARGS=-p C:/tibco/bw/5.9/bin/Properties.cfg

AND/OR

5). Manually add tibco.clientVar.<Global Variable Name> <value>, e.g.
tibco.clientVar.password admin3, in your application.tra

6). Restart the applications

Resolution:

You cannot change the global variable without stopping the process.
If you update the global variable, you will have to redeploy the
application.

TIBCO FTL® is a robust, high-performance messaging platform for
real-time enterprise communications that provides high-throughput
data distribution to virtually any device.

Optimized to leverage the latest advancements in hardware,
networking, and web technologies, TIBCO FTL can handle higher message
throughput with lower latencies, and a greater number of concurrent
connections, than other messaging products.

All this capability is delivered with a peer-to-peer architecture and
API libraries that run on commodity general-purpose systems, with no
specialized hardware or servers required.

Publishers, Subscribers and Message Streams

An application can send messages, receive messages, or both. The
terms publisher and subscriber refer to the TIBCO FTL objects that
perform these two roles within applications.

A message stream is a sequence of messages.

 A publisher is the source of a stream of messages.
 A subscriber expresses interest in a stream of messages.

One-to-Many Publishing

The primary data-transfer model in TIBCO FTL software is one-to-many
publishing. A publisher produces a stream of messages, and every
subscriber to that stream can receive every message in the stream.
This model is analogous to radio broadcast: a radio station broadcasts
a stream of audio data, and every radio receiver tuned to the same
frequency can receive that audio stream.
