
PLATFORM CORE

EVENTS

The Event System is a framework provided by the service layer allowing you to send and receive events
within SAP Hybris.
• One software component acts as a source and publishes an event that is received by registered listeners
• Event listeners are objects that are notified of events and perform business logic corresponding to the
event that occurred
• Events can be published locally or across cluster nodes
• SAP Hybris Event System is based on the Spring event system
To create your own event, you need to extend the class de.hybris.platform.servicelayer.event.events.AbstractEvent
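For example, a minimal event carrying a message payload (the HybhubEvent name, package, and field are illustrative, chosen to match the console output shown further below):

package com.hybhub.events;

import de.hybris.platform.servicelayer.event.events.AbstractEvent;

public class HybhubEvent extends AbstractEvent
{
    private final String message;

    public HybhubEvent(final String message)
    {
        this.message = message;
    }

    public String getMessage()
    {
        return message;
    }
}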

Then to create a listener you extend de.hybris.platform.servicelayer.event.impl.AbstractEventListener :
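A matching listener sketch that simply logs the received event:

package com.hybhub.events;

import de.hybris.platform.servicelayer.event.impl.AbstractEventListener;
import org.apache.log4j.Logger;

public class HybhubListener extends AbstractEventListener<HybhubEvent>
{
    private static final Logger LOG = Logger.getLogger(HybhubListener.class);

    @Override
    protected void onEvent(final HybhubEvent event)
    {
        LOG.info("Received event(Hybhub Event) : " + event.getMessage());
    }
}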

Finally, you need to create a Spring bean for your listener :
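For instance (bean id and package illustrative; abstractEventListener is the usual parent bean for listeners, but verify it exists in your version):

<bean id="hybhubListener" class="com.hybhub.events.HybhubListener" parent="abstractEventListener"/>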


To try your event, run this Groovy script :
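A sketch of such a script for the HAC scripting console, assuming the console-provided spring binding and the illustrative HybhubEvent class above:

def eventService = spring.getBean("eventService")
eventService.publishEvent(new com.hybhub.events.HybhubEvent("Event published from Groovy console !"))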

You should be able to read the following message on SAP Hybris console : INFO [hybrisHTTP8]
[HybhubListener] Received event(Hybhub Event) : Hybhub Event : Event published from Groovy console !
The only flaw here is that everything is executed synchronously by the same thread! To process events in an
asynchronous way you can :
• make your event cluster aware by implementing de.hybris.platform.servicelayer.event.ClusterAwareEvent;
when you implement this interface, your events are always processed asynchronously even if they are
executed on the same node where they were created (see the sketch after this list).

• update the Spring bean platformClusterEventSender with an executor :
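A minimal sketch of a cluster-aware event; note that depending on your platform version the interface exposes either publish(int sourceNodeId, int targetNodeId) or canPublish(PublishEventContext), so verify against your suite (this sketch assumes the former):

import de.hybris.platform.servicelayer.event.ClusterAwareEvent;
import de.hybris.platform.servicelayer.event.events.AbstractEvent;

public class HybhubClusterEvent extends AbstractEvent implements ClusterAwareEvent
{
    @Override
    public boolean publish(final int sourceNodeId, final int targetNodeId)
    {
        // deliver to every node, including the publishing one
        return true;
    }
}

And a sketch of the executor override; the sender bean's class and property names are assumptions to verify against your version's Spring configuration:

<bean id="platformClusterEventSender"
      class="de.hybris.platform.servicelayer.event.impl.DefaultClusterEventSender"> <!-- assumption: actual class may differ -->
    <property name="executor"> <!-- assumption: property name may differ -->
        <bean class="org.springframework.core.task.SimpleAsyncTaskExecutor"/>
    </property>
</bean>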

IMPEX
• SAP Hybris needs to exchange a lot of data with external systems, and this needs to be easily configurable
without having to write complex SQL queries. Impex offers a way to easily store and exchange data, meaning
import and export; it is an out-of-the-box CSV-based import framework.

• Hybris provides an extension called “Impex” inside the platform/ext folder which helps in inserting,
updating, deleting, and exporting data.
• Impex import always means data flows into Hybris system.
• Impex export always means data flows out of Hybris system.

Impex are useful for:


• initial data injection (store definitions, initial catalogs, CMS component creation, catalog creation…)
• update data during runtime
• test data/configurations during development
• migrate data between systems
• backups

An Impex query has :


• a header defining the mapping between an item type and the value lines
• value line(s) defining the data
• comments
• macro definitions
• BeanShell directives
• user rights definitions

Impex Header:

Header is a single line preceding the value lines, a header has :


• A mode which defines what kind of operation you are executing, it can be :
– INSERT, creates a new item
– UPDATE, update an existing item based on unique identifiers
– INSERT_UPDATE, update an existing item if it can find the unique identifier otherwise create a new
entry
– REMOVE, try to remove an item based on unique attributes, log a warning if it can’t find any
elements
• An item type, like User or Product
• Attributes of the related item type, like code, name
• Attribute modifiers, gives additional processing instructions of a given attribute
– alias for export ([alias=theAlias])
– allownull ([allownull=true]), not compatible with the service layer
– cellDecorator
([cellDecorator=de.hybris.platform.catalog.jalo.classification.eclass.EClassSuperCategoryDecorator])
– collection-delimiter, ; is the default delimiter ([collection-delimiter=;])
– dateformat ([dateformat=dd-MM-yyyy])
– default ([default='default value'])
– forceWrite ([forceWrite=true]), not compatible with the service layer
– ignoreKeyCase ([ignoreKeyCase=true])
– ignorenull ([ignorenull=true]), ignore null for collection type imports
– key2value-delimiter ([key2value-delimiter=->]), specifies the operator to delimit keys and values
– lang ([lang=en])
– map-delimiter ([map-delimiter=|])
– mode ([mode=append]) for collections import append the collections, replace or remove elements
– numberformat ([numberformat=#.###,##])
– path-delimiter ([path-delimiter=:])
– pos ([pos=3]) change positions of values, not recommended
– translator ([translator=de.hybris.platform.impex.jalo.translators.ItemPKTranslator])
– unique ([unique=true]) used to mark this attribute as unique; can be configured on multiple
attributes if needed
– virtual ([virtual=true,default=value]) needs to have a default modifier as well
• Header modifiers are configured with the item type, for example : INSERT MyType[headerModifier=something];…
– batchmode ([batchmode=true]), used with an update or remove, it allows modifying more than one
item that matches the query
– cacheUnique ([cacheUnique=true])
– processor ([processor=de.hybris.platform.impex.jalo.imp.DefaultImportProcessor])
– impex.legacy.mode ([impex.legacy.mode=true])

INSERT_UPDATE category;code[unique=true];name[lang=de];name[lang=en];$supercategories;$thumbnail;description[lang=de];order

Impex header syntax is case sensitive


Example with Title item type :
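For instance (values illustrative):

INSERT_UPDATE Title;code[unique=true];name[lang=en]
;mr;Mr.
;mrs;Mrs.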

You can also specify a subtype of the Header item type on the value line, for example :
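A sketch, where Customer and Employee are subtypes of User (uid values illustrative):

INSERT_UPDATE User;uid[unique=true]
Customer;john.doe@mystore.com
Employee;jane.admin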

The header item type can be abstract, but the value line item type obviously can't.
Setting an attribute value for atomic types is straightforward: a string attribute, for example, can be directly entered
within the value line (see the example with the User item type above). For reference attributes, however, Impex expects
you to enter the primary key of the referenced item; since it is not possible to know an item's primary key before it is
created, this is not the right way to create relations. To insert relations between items efficiently and without
external dependencies, you need to look items up based on their own attributes.
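For instance, looking a category up by its code when linking it to a product (values illustrative):

INSERT_UPDATE Product;code[unique=true];supercategories(code)
;HW2300-2356;topseller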

To localize attributes, you need to use a modifier within the header attributes, like [lang=en] to localize an
attribute in English :
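For instance (values illustrative):

INSERT_UPDATE Product;code[unique=true];name[lang=en];name[lang=de]
;HW2300-2356;Tent;Zelt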

en is defined by the Language items and corresponds to the unique ISO code of the language;
have a look at the internationalization tab in the HMC or HAC.

Value Lines:
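Each value line supplies one value per header attribute, separated by semicolons; an illustrative sketch:

INSERT_UPDATE Language;isocode[unique=true];active
;en;true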

Comment:
A commented line starts with a hash sign (#) and is completely ignored during import :
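For instance:

# this whole line is ignored during import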

Macro:
Impex files can easily be huge; it's not unusual to see Impex import files of more than a thousand lines! If you
need to update one common attribute for each of the value lines, macros come in handy. A macro
definition starts with a $ sign :
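For instance, the common catalog version macro (catalog id illustrative):

$catalogVersion=catalogversion(catalog(id[default='myProductCatalog']),version[default='Staged'])[unique=true]
INSERT_UPDATE Product;code[unique=true];$catalogVersion
;HW2300-2356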

Distributed Impex:
SAP Hybris V6 introduced a new concept of distributed Impex to share the Impex import between nodes.
It speeds up the execution of imports by splitting them into batches that are executed on different nodes;
this doesn't need to be configured since it's the new default Impex engine. Importing data using the new
distributed mode consists of three steps :
• prepare then split
• single import execution
• finishing

Access properties from Impex:


If you need to access your .properties configuration from an Impex file, there is a way to do it with the
ConfigPropertyImportProcessor class.
So, for example in your local.properties you have the following configuration :
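For instance, a hypothetical key:

website.banner.url=https://www.example.com/banner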

In your Impex you access it using the following syntax :
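A sketch of the usual pattern: declare the processor on a dummy UPDATE header, after which every property becomes available as a $config- macro (the Media usage is illustrative, and the processor's package should be verified for your version):

UPDATE GenericItem[processor=de.hybris.platform.commerceservices.impex.impl.ConfigPropertyImportProcessor];pk[unique=true]

INSERT_UPDATE Media;code[unique=true];internalURL
;banner;$config-website.banner.url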

Impex Cell Decorator


While importing Impex value lines, it is sometimes required to modify the value based on some business
logic rather than directly inserting the value.
Hybris has provided different ways to handle the same:
1) Using Cell Decorator
2) Using Translator
3) Using Special Translator

Cell Decorator is used to obtain the specific cell of value line after parsing but before translating it. We can
then decorate the cell value based on business logic. Decorating cell value means modifying the cell value based
on some business requirement.

Requirement
Assume we are updating Customer records through Impex.
We need to check which country a customer belongs to, using the customer id provided
in the Impex.
If the customer belongs to the US, then append _US to the customer id before saving.
If the customer belongs to Canada, then append _CA to the customer id before saving.

Let’s implement this requirement with the help of Cell Decorator


Step 1: We need to configure cell decorator by adding the modifier cellDecorator to a header attribute specifying
our decorator class.
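A sketch of the header (decorator package and class are illustrative):

UPDATE Customer;uid[unique=true];customerID[cellDecorator=org.training.core.decorator.CustomerIdCellDecorator]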

Step 2: Create the class which implements CSVCellDecorator and override the decorate method with our custom
logic.
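A sketch, assuming a hypothetical prefix-based country lookup:

package org.training.core.decorator;

import de.hybris.platform.util.CSVCellDecorator;
import java.util.Map;

public class CustomerIdCellDecorator implements CSVCellDecorator
{
    @Override
    public String decorate(final int position, final Map<Integer, String> srcLine)
    {
        // the raw cell value for this column, after parsing but before translation
        final String customerId = srcLine.get(Integer.valueOf(position));
        if (customerId == null)
        {
            return null;
        }
        if (isUsCustomer(customerId))
        {
            return customerId + "_US";
        }
        if (isCanadaCustomer(customerId))
        {
            return customerId + "_CA";
        }
        return customerId;
    }

    // hypothetical country checks; real logic would consult customer data
    private boolean isUsCustomer(final String id)
    {
        return id.startsWith("US");
    }

    private boolean isCanadaCustomer(final String id)
    {
        return id.startsWith("CA");
    }
}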

In the above class, we have written logic to return the customer id with “_US” appended for US
customers and “_CA” appended for Canada customers, and to return the customer id unchanged
if the customer is from neither the US nor Canada.
Step 3: Run the impex and check the updated customer record in HMC/BackOffice.

Impex Translator

Translator can be used to change the translation logic for a value.


Translator can be applied on any attribute as below:
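For instance (translator package and class illustrative):

UPDATE Product;code[unique=true];name[translator=org.training.core.translator.MyValueTranslator]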

The above configured translator class must extend the AbstractValueTranslator.


It defines 2 methods:
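The two abstract method signatures look like this (a sketch from the jalo translator API; verify against your version):

public abstract Object importValue(String valueExpr, Item toItem) throws JaloInvalidParameterException;

public abstract String exportValue(Object value) throws JaloInvalidParameterException;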

The valueExpr is the parsed cell value (with possible decorators already applied, if any); toItem is the resolved item.

Implement the translation logic for the export case in exportValue: the given object has to be translated to a
String, which is returned.

Requirement: Translator for Price Attribute to prevent importing negative prices.


Step 1:
We need to configure Translator by adding the modifier “translator” to a header attribute specifying our
Translator class.
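A sketch of the header (translator package and class illustrative):

INSERT_UPDATE PriceRow;product(code)[unique=true];currency(isocode)[unique=true];price[translator=org.training.core.translator.PriceTranslator]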

Step 2:
Create the class which extends AbstractValueTranslator and override
the importValue and exportValue methods with our custom logic.
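A sketch rejecting negative prices by throwing on import:

package org.training.core.translator;

import de.hybris.platform.impex.jalo.translators.AbstractValueTranslator;
import de.hybris.platform.jalo.Item;
import de.hybris.platform.jalo.JaloInvalidParameterException;

public class PriceTranslator extends AbstractValueTranslator
{
    @Override
    public Object importValue(final String valueExpr, final Item toItem) throws JaloInvalidParameterException
    {
        final Double price = Double.valueOf(valueExpr);
        if (price.doubleValue() < 0)
        {
            // abort this value line: negative prices must not be imported
            throw new JaloInvalidParameterException("Price must be greater than or equal to zero: " + valueExpr, 0);
        }
        return price;
    }

    @Override
    public String exportValue(final Object value) throws JaloInvalidParameterException
    {
        return value == null ? "" : value.toString();
    }
}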
Step 3:
Run the impex with a price value greater than or equal to zero; it will run successfully, otherwise it will throw an
error.

Batch Operations on Impex:


Sometimes it's required for us to update the value of a few attributes (columns in a table) for more than one instance (rows
in a table).
Example :
We need to update the maxOrderQty and minOrderQty of all products.
We can do this the traditional way by specifying each product's unique code and updating the values as below:
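A sketch with illustrative product codes:

UPDATE Product;code[unique=true];maxOrderQty;minOrderQty
;P-1001;10;1
;P-1002;10;1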
In this way we have to specify the unique attribute of Product for each instance, and this is quite manageable if we
have only a few products to update.

Let us assume we must update this to all products in our application:

Let’s say volume is around 2000 products.


It's very difficult to specify all 2000 products in the Impex to update these 2 attributes.
The solution to achieve this without specifying all products in the Impex is Batch Mode.
Batch Mode is a feature within Hybris Impex which allows updating more than one instance of an item type.
In other words, it allows us to do bulk updates.
We just need to use [batchmode=true] in the Impex header:
1. If batch mode is not specified (the default), Hybris throws an exception if more than one instance
matches a value line.
2. If batch mode is enabled, Hybris modifies all instances that match the value line.
The solution to the above problem (updating all products) with batch mode is as below:
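A sketch, assuming the powertools sample catalog:

$catalogVersion=catalogversion(catalog(id[default='powertoolsProductCatalog']),version[default='Staged'])[unique=true]
UPDATE Product[batchmode=true];$catalogVersion;maxOrderQty;minOrderQty
;;10;1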

In the above example, we specify $catalogVersion as unique column.


So all product instances belonging to the Staged version of powertoolsProductCatalog will be updated.
Example:
Update All customers without specifying any unique value:
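For instance (attribute choice illustrative):

UPDATE Customer[batchmode=true];itemtype(code)[unique=true];loginDisabled
;Customer;false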

We can also use Batch mode on Remove operation.

Example: If we want to remove all customers.


We can use Batchmode with remove as below:
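A sketch:

REMOVE Customer[batchmode=true];itemtype(code)[unique=true]
;Customer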

Some more examples follow; the type, attribute, and catalog values in the sketches below are illustrative:


Update with item type as unique column:
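UPDATE Product[batchmode=true];itemtype(code)[unique=true];maxOrderQty
;Product;10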

Update with unique column other than item type:
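UPDATE Product[batchmode=true];catalogVersion(catalog(id),version)[unique=true];maxOrderQty
;powertoolsProductCatalog:Staged;10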

Remove with item type as unique column:
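REMOVE Title[batchmode=true];itemtype(code)[unique=true]
;Title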


Remove with unique column other than item type:
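REMOVE Product[batchmode=true];catalogVersion(catalog(id),version)[unique=true]
;powertoolsProductCatalog:Staged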

Note:
We can use the item type as one of the unique values in the header if we want to update all instances of that type.
We can specify a unique value in the header other than the item type if we want to update only the instances matching
that unique value rather than all instances.

Escaping special characters in Impex file:


• When we insert data through Impex, it's sometimes necessary to insert special characters as part of the data.
• When we provide these values in an Impex, the Impex can't run successfully.
• It's mainly because Impex has its own definition for the semicolon and colon characters.
• The semicolon in Impex is the field separator and the colon is used for composite keys.

How do we escape special meaning of these characters and insert those characters as part of data?

We can achieve this by using special delimiter called “path-delimiter”.


We can use any character for path-delimiter to escape special characters.
Consider below examples:

Example 1:
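A sketch along the lines the text describes; the owning item type and attribute are illustrative, and the point is the path-delimiter modifier on the syncJob(code) column:

INSERT_UPDATE MyItem;code[unique=true];syncJob(code)[path-delimiter=!]
;item1;sync electronics-ukContentCatalog:Staged->Online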

Now when we run this impex, the value for syncJob(code) will be stored as:
sync electronics-ukContentCatalog:Staged->Online
This is because we are using path-delimiter=!.
If we don't use this path-delimiter, then we would get an error.

Example 2:
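A sketch (the catalog id itself contains a colon):

INSERT_UPDATE CatalogVersion;catalog(id)[unique=true,path-delimiter=!];version[unique=true]
;myStore:catalog;Staged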

Now when we run this impex, Hybris searches for a catalog id with the value “myStore:catalog” exactly in the table.
If we don't use the path-delimiter, then we will get an error.

Ignore and Update in Impex:


• We know that we use Impex to update existing records.
• When we are updating multiple records through Impex, assume some of the records don't have a
corresponding row in the database.
• Hybris throws the below exception in this case:
“no existing item found for update”.

Example :
If we are updating the name of 1000 products, obviously we will specify the product code as unique for each value
line and provide the new name to update. Assume that out of 1000 products, 1 product is not in the DB.

This example leads to the above exception and stops further execution.
Look at the below impex, which has an invalid product in the 3rd value line:
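An illustrative sketch; assume P-1003 does not exist in the DB:

UPDATE Product;code[unique=true];name[lang=en]
;P-1001;New name 1
;P-1002;New name 2
;P-1003;New name 3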

The solution to the above problem is to ignore those records which do not exist in the DB and update the other
records which are valid.
This can be done by adding the below line at the top of the impex:
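The usual way is to switch the import into relaxed validation mode via a BeanShell directive (verify the exact mode string for your version):

#% impex.setValidationMode("import_relaxed");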

So the complete impex is the directive above followed by the same header and value lines.
Now rows which contain errors will be skipped and they will be placed in a dump file.

SPRING CONTEXT
SAP Hybris is built heavily using Spring, it offers developers the flexibility to implement their work on top of the out
of the box extensions. Spring offers the ability to contain beans within different contexts, SAP Hybris provides
different application contexts :
• a global static application context shared by all tenants

– to configure it you need to set an extra property within your extension configuration :
extname.global-context=spring.xml

• a core application context for each tenant


– its parent is the global application context
– to configure it within your extension
– use the convention resources/extname-spring.xml
– configure a different path extname.application-context=spring.xml

• a web application context for each web application


– configured inside the web.xml file of your extension (your extension needs to have a web module
defined within its extension.xml configuration file).
– the listener org.springframework.web.context.ContextLoaderListener configured within the web.xml
automatically configures the core application context as the parent of the web application context.
– the web application context can use core beans but cannot modify them.

The order of loading Spring configuration is the same as in the build process, so to override a Spring bean
defined in extension A from extension B, add a dependency in extension B for extension A. You can configure
more than one Spring configuration file, simply separate them with a comma, for example : extname.application-
context=spring1.xml,spring2.xml.
To manually load a bean, it is recommended to use Registry.getApplicationContext() as this will first try to load it
from a web application context then from the core context and finally from the global context.
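For instance (bean id illustrative):

import de.hybris.platform.core.Registry;
import de.hybris.platform.product.ProductService;

final ProductService productService =
        (ProductService) Registry.getApplicationContext().getBean("productService");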

CRONJOBS
A Cronjob is an automated task performed at a certain time (every day at 2 am for example), or in fixed intervals
(every hour for example), it can be used for:
• data backups
• Catalog synchronization
• importing or exporting data

A Cronjob is made of :
• a job to do
• a trigger to start the job (not mandatory)
• a Cronjob item which links the job and the trigger and configures the environment the job will be performed in.

To create a new Cronjob, first you need to create a new item type which extends the Cronjob item type. We will create a
Cronjob that deactivates products that haven't been updated since a given time :
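A minimal items.xml sketch (type, package, and attribute names illustrative):

<itemtype code="ProductsDeactivationCronJob" extends="CronJob"
          jaloclass="org.training.jalo.ProductsDeactivationCronJob" autocreate="true" generate="true">
    <attributes>
        <attribute qualifier="minLastUpdate" type="java.util.Date">
            <persistence type="property"/>
        </attribute>
    </attributes>
</itemtype>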

Then we need to create the actual job that deactivates products based on the minLastUpdate attribute :
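A sketch of the performable (the deactivation lookup itself is left as a hypothetical service call):

import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;

public class ProductsDeactivationJob extends AbstractJobPerformable<ProductsDeactivationCronJobModel>
{
    @Override
    public PerformResult perform(final ProductsDeactivationCronJobModel cronJob)
    {
        // hypothetical: look products up via cronJob.getMinLastUpdate() and deactivate them
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }
}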
You need to register your class as a Spring Bean.
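For instance (id and class illustrative; abstractJobPerformable is the usual parent bean providing modelService and friends):

<bean id="productsDeactivationJob" class="org.training.jobs.ProductsDeactivationJob" parent="abstractJobPerformable"/>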

You need to update your system for SAP Hybris to find the new job.
Don't forget to switch the commit mode on; if you execute this more than once it will fail, since Cronjob codes
need to be unique!

NOTE:

Cron Job:
• It’s an Item Type we define in the Items.xml with all the attributes required for the Job to be executed. So,
we can say that Cronjob provides the configuration details for the Job.
• Hybris out of the box (OOTB) provides the Cronjob item type, which already has many attributes defined
inside it like code, active, status, logText, startTime, endTime etc.
• Check Cronjob itemtype under below file for more details
/platform/ext/processing/resources/processing-items.xml
• All these attributes are considered as a configuration for the Job.
• In case our Job needs some additional configuration parameters other than what is provided
with Cronjob Item Type, then we should define a new Cronjob Item Type in the *items.xml by extending
the Cronjob Item Type.

Job:
• Job defines the logic to be executed.
• We create a new class and extend AbstractJobPerformable class or
implement JobPerformable interface.
• Inside this newly created class, we provide the definition for the perform(CronJobModel
cronJobModel) method. What we define inside this method is called the business logic of the cron job.
• And this class which contains the business logic is called the Job.
Note: This class should be defined as a Spring bean in order to be considered a Job.

• Job will have 2 main things:


Status:
o It is of type CronJobStatus enum.
o It gives the status of a running cron job, which indicates whether it's
in progress or has completed the execution.
o Possible values for the same can be checked in the CronJobStatus.java enum.

Result:
o It is of type CronJobResult enum.
o It gives information about the last execution of the cron job, like whether the cron job
ended with success or failure.
o Possible values for the same can be checked in the CronJobResult.java enum.
We can say that a cron job has completed successfully with the below combination of CronJobStatus and
CronJobResult:
CronJobStatus.FINISHED and CronJobResult.SUCCESS

So a PerformResult is returned by the perform() method after the Job completes its business logic execution,
which can then be used to determine the overall result and status of the Job.
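For instance:

// inside perform(): report a successful, finished execution
return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);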
Trigger:
• It defines the scheduling of a Job which helps to decide when the Job has to be executed.

• It defines scheduling using the attributes like seconds, minutes, hours etc. or it can define it
using Cron Expression.

• Trigger is actually linked to the Job indirectly, which means a Trigger will always be defined
for a Cronjob, and the Cronjob holds the actual Job to be executed.

• So, whenever we define a Trigger, we define it for the Cronjob, not for the Job.
The Cronjob in turn identifies the corresponding Job and makes sure it gets executed.

In Simple words,
Job defines what should be done, a Trigger says when it has to be done, and a Cronjob defines the
setup/configuration required for a Job.

Cron Job Real Time Example:

Requirement:
We need to run a task which removes products whose price is not defined and which are X days
old in the system. So our business logic is to remove a product if its price is not defined and it is X
days old.

Steps for Business Logic:


Step 1: Since we need to add a new configuration property, the number of days X, as a dynamic value on the cron job,
we need to define a new Cronjob item type. Define the new cron job item type in items.xml as below:
hybris\bin\custom\training\trainingcore\resources\trainingcore-items.xml.
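A sketch, assuming the X-days configuration is modelled as an Integer attribute named xDays:

<itemtype code="ProductsRemovalCronjob" extends="CronJob"
          jaloclass="org.training.core.jalo.ProductsRemovalCronjob" autocreate="true" generate="true">
    <attributes>
        <attribute qualifier="xDays" type="java.lang.Integer">
            <persistence type="property"/>
        </attribute>
    </attributes>
</itemtype>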

We defined a new cron job item type ProductsRemovalCronjob which extends the Cronjob item type,
so all the attributes of the Cronjob item type are also inherited by this new cron job item type.

Do ant all and refresh the platform folder to check that ProductsRemovalCronjobModel.java is generated.

Step 2: Create a class which acts as a Job by extending AbstractJobPerformable class and override
the perform() method to have our business logic.
Below is a skeleton which does not yet have any business logic written for the Job to perform:
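A sketch of that skeleton:

public class ProductsRemovalJob extends AbstractJobPerformable<ProductsRemovalCronjobModel>
{
    @Override
    public PerformResult perform(final ProductsRemovalCronjobModel cronJob)
    {
        // business logic will go here
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }
}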

Let’s define the Business logic for the same as below:


hybris\hybris\bin\custom\training\trainingcore\src\org\training\core\jobs\ProductsRemovalJob.java.
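A sketch of the full job; the DAO is shown further below, and the Europe1 price-row check and the omitted SOLR call are assumptions:

package org.training.core.jobs;

import de.hybris.platform.core.model.product.ProductModel;
import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;
import java.util.ArrayList;
import java.util.List;
import org.training.core.dao.CustomProductsDAO;

public class ProductsRemovalJob extends AbstractJobPerformable<ProductsRemovalCronjobModel>
{
    private CustomProductsDAO customProductsDAO; // injected via Spring

    @Override
    public PerformResult perform(final ProductsRemovalCronjobModel cronJob)
    {
        // 1. all products older than the configured number of days
        final List<ProductModel> oldProducts =
                customProductsDAO.findProductsOlderThanDays(cronJob.getXDays().intValue());

        // 2. keep only the ones without any price row
        final List<ProductModel> toRemove = new ArrayList<>();
        for (final ProductModel product : oldProducts)
        {
            if (product.getEurope1Prices() == null || product.getEurope1Prices().isEmpty())
            {
                toRemove.add(product);
            }
        }

        // 3. remove them; modelService is inherited from AbstractJobPerformable
        modelService.removeAll(toRemove);

        // 4. trigger SOLR full indexing here so only priced products remain indexed (call omitted)

        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }

    public void setCustomProductsDAO(final CustomProductsDAO customProductsDAO)
    {
        this.customProductsDAO = customProductsDAO;
    }
}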
We have written the business logic which gets all the products older than the specified number of days.

Then we iterate the product list, checking whether each product has a price row; if the
price row is not defined, we add such products to a list of products to delete.
Then, we remove all such products.

• After that we call the SOLR full indexing to index only those products which have a price.
• Since we need to define the query to get all the products older than the specified days, we will create a
new DAO interface and implementation class as below:
CustomProductsDAO.java
hybris\hybris\bin\custom\training\trainingcore\src\org\training\core\dao\CustomProductsDAO.java

CustomProductsDAOImpl.java
hybris\hybris\bin\custom\training\trainingcore\src\org\training\core\dao\impl\CustomProductsDAOImpl.java
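Sketches of the DAO pair; the FlexibleSearch query keys off creationtime, which is an assumption about what “older” means here:

// CustomProductsDAO.java
package org.training.core.dao;

import de.hybris.platform.core.model.product.ProductModel;
import java.util.List;

public interface CustomProductsDAO
{
    List<ProductModel> findProductsOlderThanDays(int days);
}

// CustomProductsDAOImpl.java
package org.training.core.dao.impl;

import de.hybris.platform.core.model.product.ProductModel;
import de.hybris.platform.servicelayer.search.FlexibleSearchQuery;
import de.hybris.platform.servicelayer.search.FlexibleSearchService;
import de.hybris.platform.servicelayer.search.SearchResult;
import java.util.Calendar;
import java.util.List;
import org.training.core.dao.CustomProductsDAO;

public class CustomProductsDAOImpl implements CustomProductsDAO
{
    private FlexibleSearchService flexibleSearchService; // injected via Spring

    @Override
    public List<ProductModel> findProductsOlderThanDays(final int days)
    {
        final Calendar limit = Calendar.getInstance();
        limit.add(Calendar.DAY_OF_MONTH, -days);

        final FlexibleSearchQuery query = new FlexibleSearchQuery(
                "SELECT {pk} FROM {Product} WHERE {creationtime} <= ?limit");
        query.addQueryParameter("limit", limit.getTime());

        final SearchResult<ProductModel> result = flexibleSearchService.search(query);
        return result.getResult();
    }

    public void setFlexibleSearchService(final FlexibleSearchService flexibleSearchService)
    {
        this.flexibleSearchService = flexibleSearchService;
    }
}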
Step 3: Define the above classes as spring beans in the *-spring.xml file as below
hybris\bin\custom\training\trainingcore\resources\trainingcore-spring.xml:
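A sketch of the bean definitions (ids illustrative):

<bean id="customProductsDAO" class="org.training.core.dao.impl.CustomProductsDAOImpl">
    <property name="flexibleSearchService" ref="flexibleSearchService"/>
</bean>

<bean id="productsRemovalJob" class="org.training.core.jobs.ProductsRemovalJob" parent="abstractJobPerformable">
    <property name="customProductsDAO" ref="customProductsDAO"/>
</bean>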

Step 4: Write an impex to create a cron job instance as below


hybris\bin\custom\training\trainingcore\resources\impex\essentialdataJobs.impex:
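A sketch (codes illustrative; job(code) must match the ServicelayerJob created for our spring bean):

INSERT_UPDATE ProductsRemovalCronjob;code[unique=true];job(code);xDays
;productsRemovalCronjob;productsRemovalJob;5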
We specify the old-days value as 5 so that products older than 5 days will be considered.

Step 5: Write an impex into the same file to create a Trigger instance as below:
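A sketch; the cron expression 0 15 10 * * ? fires every day at 10:15 AM:

INSERT_UPDATE Trigger;cronJob(code)[unique=true];cronExpression
;productsRemovalCronjob;0 15 10 * * ?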

Look at the relation between Cronjob, Job and Trigger here.


While inserting the ProductsRemovalCronjob cron job, we give the reference of the Job, which should be the
same as the Spring bean id of the Job.
While inserting the Trigger, we give the reference of the ProductsRemovalCronjob, which should be the same
as the code defined while inserting the ProductsRemovalCronjob.
Also, we have used a Cron expression in the Trigger which schedules it to run every day at 10:15 AM.

Step 6: Do an Update System, selecting the extension where we defined our job.
Now the cron job will be executed at the scheduled time automatically.

Note:
After updating the system, we can run the below query to check the ServicelayerJob created for the Job
that we defined as a spring bean:
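For instance:

SELECT * FROM {ServicelayerJob} WHERE {code} = 'productsRemovalJob'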

We should be able to see the Job details if everything is fine.

Step 7: Check the products in HMC whose price is not defined and which are X days old, before the Job runs.
Search for the same products after the Job runs successfully. We should not be able to see those products
now.

How to configure the Cronjob manually?

• Log in to HMC.
• Go to System -> Cronjobs.
• Type productsRemovalCronjob in Value Text Box for code; then click on search,
open productsRemovalCronjob.
• Go to Time Schedule tab ->set time in (Trigger text box) then save it.
If we want to start it immediately, click “Start Cronjob now”; our job will then run and display a
popup box with the cronjob's result.
How to execute the Cronjob through code rather than through trigger?

We can also execute the Cron Job through code rather than running it through Triggers as below:
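A sketch of such a fragment, assuming it lives in a Spring-managed class:

import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.cronjob.CronJobService;
import javax.annotation.Resource;

@Resource
private CronJobService cronJobService;

public void runProductsRemoval()
{
    // look the cron job up by its code, then start it (second argument: run synchronously)
    final CronJobModel cronJob = cronJobService.getCronJob("productsRemovalCronjob");
    cronJobService.performCronJob(cronJob, true);
}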

In the above code, we call the cron job through the CronJobService's
performCronJob() method.

Cron Jobs Subsidiary Information:

How Trigger gets Triggered?


The execution of a Cronjob is taken care of by the Trigger, as per the scheduling we define in
that Trigger using a Cron expression.
So, who is responsible for invoking the Trigger?
Who evaluates the Cron expression?
The TaskEngine: for every trigger there is always one Task item created in Hybris.

The TaskEngine keeps polling the Tasks every X seconds, which we configure in the local.properties file
as below:
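For instance (treating the property name as an assumption to verify for your version):

# legacy trigger-polling interval, in seconds
cronjob.trigger.interval=30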

By default, the timer is set to 30 seconds, so if we want to change this value, we can define it with a new
value in the local.properties file as mentioned above.

Here X is 30 seconds, so every 30 seconds the timer task fires a DB query to check for any triggers to be
fired; if any trigger matches the current time or is overdue, then the Trigger is fired immediately.

NOTE:
Since the timer task polls the DB every X seconds as we specify in the local.properties file, we should not
set its value too low unless it is really essential, otherwise more DB calls will be made
unnecessarily.

So, this Task polled by the TaskEngine will check whether the cron expression matches or has exceeded the
current time; if so, it will immediately execute the cron job.

How to abort a Cronjob?

Sometimes, when the jobs you are performing are time consuming, you need to be able to abort a running
Cronjob. By default, a Cronjob will run until it's done, but you can implement your own logic to give it a
chance to abort itself. First you need to make the job aware of this new functionality; to do so, two
steps are needed:

1. The isAbortable() method in the Cronjob performable class should be overridden to return true, or the
corresponding property should be defined in the *-spring.xml file where we defined our Job's bean definition.

2. You need to add an exit door like this within your perform method implementation:
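A sketch of both steps; clearAbortRequestedIfNeeded is provided by AbstractJobPerformable in recent versions, so verify it exists in yours:

@Override
public boolean isAbortable()
{
    return true;
}

// inside perform(), between processing steps:
if (clearAbortRequestedIfNeeded(cronJob))
{
    return new PerformResult(CronJobResult.ERROR, CronJobStatus.ABORTED);
}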

Setting session related attributes to the Cronjob:


Sometimes we write a Cronjob whose logic requires some session attributes like user,
sessionLanguage and sessionCurrency.

So, how do we set these attributes on the Cronjob so that we can access them while writing the logic of
the Cronjob?

It can be done in any one of the 2 ways listed below:

• Set the session attributes through impex
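A sketch (codes and values illustrative):

INSERT_UPDATE CronJob;code[unique=true];job(code);sessionUser(uid);sessionLanguage(isocode);sessionCurrency(isocode)
;myCronJob;myJob;admin;en;USD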

Here, when we define the instance of the cron job, we also specify the session attributes like user,
currency and language.
The above session values can be used while writing the logic inside the perform() method of the Job.

• Set the session attributes through code
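A sketch of such a fragment, assuming modelService, cronJobService, userService, and commonI18NService are injected, and serviceLayerJob has been looked up for our performable bean:

final CronJobModel cronJob = modelService.create(CronJobModel.class);
cronJob.setCode("myCronJob");
cronJob.setJob(serviceLayerJob);
cronJob.setSessionUser(userService.getUserForUID("admin"));
cronJob.setSessionLanguage(commonI18NService.getLanguage("en"));
cronJob.setSessionCurrency(commonI18NService.getCurrency("USD"));
modelService.save(cronJob);
cronJobService.performCronJob(cronJob);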


Here we have created an instance of CronJobModel, set all the session attributes, and then saved the
cron job model to the DB.
Then we call the performCronJob() method for the saved cron job.

Cronjobs in Clustered Systems:


How do cronjobs run in a clustered Hybris suite: will a job run on all the nodes or only on a single node?
Remember that cron jobs always execute on a single node only.
We can specify the node identifier for the cron job to indicate on which node it has to run, using the
below code:
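For instance:

// restrict execution to cluster node 3
cronJob.setNodeID(Integer.valueOf(3));
modelService.save(cronJob);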

Now the above Cronjob runs only on node 3 of the clustered environment.
If we don't set any node id for the cron job, then the cron job can be executed by any node within the
cluster, but only by one node at a time.

Running Cronjob through Ant?


We can also run a Cronjob using an Ant command as below:

The above command runs the Cronjob called myCronjob in the master tenant.

Cronjob execution during server startup


Whenever we start the Hybris server, each trigger is evaluated first; if the trigger's scheduled time
is overdue or matches the current time, then the Trigger is fired, which means the cron job is executed
immediately.
How much overdue time can we allow for the triggers to be fired during server startup?
This can be done by setting the maxAcceptableDelay attribute on the Trigger.
This attribute should have its value in seconds.
If its value is set to 300, then a delay of 5 minutes is allowed for the overdue triggers to be fired at
server startup.
So, if the trigger defines a Cron expression to run the job at 5 PM and the server starts at 5:05 PM, then the
trigger will be fired (as 5 minutes of delay is accepted).
If the server starts at 5:06 PM, then the trigger will not be fired, as it does not allow a delay of more than
5 minutes.
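A sketch (trigger values illustrative; 0 0 17 * * ? fires at 5 PM daily):

INSERT_UPDATE Trigger;cronJob(code)[unique=true];cronExpression;maxAcceptableDelay
;myCronJob;0 0 17 * * ?;300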

How to start the cronjob after server startup automatically?

We can also override the contextInitialized() method inside the existing Hybris context
listener HybrisStorefrontContextLoaderListener instead of creating a new one, but we should not forget
to call super.contextInitialized(event);
Cluster:
When you deploy SAP Hybris into production you would need to configure a cluster of SAP Hybris nodes,
a cluster offers :
• node specific configuration
• Cronjobs / events for specific nodes
• session fail over
• load balancing compatibility (load balancing needs to be done by a third-party component)
• choice of UDP (unicast or multicast) or TCP (JGroups) for node communication
All nodes are using the same database.
Cache Invalidation:
Each SAP Hybris node has its own local cache. When one node updates an item in the cache,
it needs to tell the other nodes to invalidate the entry in their local cache for consistency
purposes; to do so, the node sends a request (TCP or UDP depending on your cluster
configuration) so the other nodes discard the item and reload it from the database the next
time they need to access that object.

Configuration:
Node specific configuration can be done by using:

Nodes need to have a unique identifier; prior to SAP Hybris V6 you had to manually configure an
id for each node:
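For instance:

clustermode=true
cluster.id=0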

This could make deployment more complex, as you needed to provision a unique identifier for
each node. Since SAP Hybris V6 you can activate auto discovery mode, which will keep track of each
node within the database:
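For instance (treating the property name as an assumption to verify for your version):

cluster.nodes.autodiscovery=true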

Cluster configuration is available under your platform project.properties file. If you want to
change the out-of-the-box configuration, remember to do it under your config local.properties
file.

Testing:

SAP Hybris uses JUnit to run unit tests and integration tests; tests are located within the testsrc and
web/testsrc folders. You can execute tests from your IDE or from ant.
There are other ant commands (manual tests, performance tests…); to see all available test commands
execute “ant -p”.

Unit Tests:
Unit tests are simple tests that do not need any access to the SAP Hybris platform (database, services…). They
focus on testing the correct behavior of a single Java class. When needed you can isolate the class by using
Mockito to mock anything you need (interfaces, POJOs…). Example of a unit test class generated by SAP
Hybris:
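Along these lines (class name illustrative):

import static org.junit.Assert.assertTrue;

import de.hybris.bootstrap.annotations.UnitTest;
import org.junit.Test;

@UnitTest
public class TrainingcoreUnitTest
{
    @Test
    public void testSomething()
    {
        assertTrue(true); // generated placeholder assertion
    }
}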

Example of a unit test using Mockito:
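A sketch:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import de.hybris.bootstrap.annotations.UnitTest;
import de.hybris.platform.core.model.product.ProductModel;
import org.junit.Test;

@UnitTest
public class ProductNameUnitTest
{
    @Test
    public void shouldReturnMockedName()
    {
        final ProductModel product = mock(ProductModel.class);
        when(product.getName()).thenReturn("Tent");
        assertEquals("Tent", product.getName());
    }
}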


Integration Tests:
Integration tests let you interact with SAP Hybris (database, services…) using a dedicated tenant junit.
The easiest way to create integration tests is to extend:
• ServicelayerTest, this way you can import any service using the @Resource annotation and gain
access to methods to create basic data (createDefaultUsers, createHardwareCatalog,
createDefaultCatalog…).

• ServiceLayerTransactionalTest additionally adds transaction-based isolation logic to your tests: when
your integration test is done, the transaction is rolled back.
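A minimal integration test sketch:

import static org.junit.Assert.assertNotNull;

import de.hybris.bootstrap.annotations.IntegrationTest;
import de.hybris.platform.servicelayer.ServicelayerTest;
import de.hybris.platform.servicelayer.model.ModelService;
import javax.annotation.Resource;
import org.junit.Test;

@IntegrationTest
public class ModelServiceIntegrationTest extends ServicelayerTest
{
    @Resource
    private ModelService modelService;

    @Test
    public void shouldWireModelService()
    {
        assertNotNull(modelService);
    }
}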

Other Tests:
• @DemoTest
• @PerformanceTest
• @ManualTest
