4 Foundation Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.1 Importing Foundation Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 Foundation Data Translation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Importing Data Translations for Legacy Foundation Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Importing Data Translations for Metadata Framework Foundation Objects. . . . . . . . . . . . . . . . . . . . 33
Everything about the different methods in Employee Central to perform mass transactions on employee and/or
company data.
The scenarios for performing mass changes can be many, ranging from a system-wide data migration to ad hoc data updates.
SAP SuccessFactors provides dedicated solutions to help achieve your objective. This guide elaborates on the
different options available in Employee Central for mass data changes.
• Mass Data Management [page 5]: Use this solution to perform bulk updates to data objects from the UI. Applies to: Position data objects. Intended for: System Administrators, HR Representatives.
• Foundation Data [page 23]: Use this solution to import foundation data, and perform mass changes to foundation data. Applies to: Foundation data objects, including MDF (Generic) data objects. Intended for: System Administrators.
• Employee Data Imports [page 39]: Use this solution to import employee data, and perform mass changes to employee data. Applies to: All HRIS entities. Intended for: System Administrators.
• Mass Changes to Job Information and Job Relationship [page 14]: Use this solution to perform mass changes to Job Information and Job Relationship data. This solution performs mass changes on the basis of business rules. Applies to: Job Information and Job Relationship entities. Intended for: System Administrators.
• Mass Changes to Positions [page 15]: Use this solution to perform mass changes to Position data. This solution performs mass changes on the basis of business rules. Applies to: Position data objects. Intended for: System Administrators.
Mass Data Management is a UI-based solution to perform bulk changes to data in Employee Central.
Mass Data Management is a simple and effective solution for performing data modifications in bulk. It moves away from the traditional approach of configuring rules and a backend job for performing mass changes, to a UI-based approach where you can exercise much better control over the data you want to modify.
Whether you are an administrator with a strong technical background or an HR representative with a functional
(non-technical) background, you can use this solution to address your mass change requirements. Currently, Mass
Data Management supports changes to Position data objects only.
• A Fiori based UI, providing a simple, adaptive, and intuitive user experience.
• Comparatively flat learning curve, requires little to no prior technical knowledge.
• Readily extensible to support a wide range of data objects in Employee Central.
To perform mass changes and related tasks, you must have the following permissions.
Under Administrator Permissions, Manage Mass Data Management, the permission Enable Mass Data Management provides access to the Mass Data Management page in the Admin Center.
Before you create your first mass change request, you must configure the UI to include data object fields that can
be filterable, editable, and visible.
Context
The mass change UI configuration acts as a canvas for all your mass data transactions. The different sections of
the mass change UI aren't preconfigured by default. As the administrator, you can configure these sections by
adding fields from the target data object to:
• Search data
• Display data, and
• Modify data.
Therefore, creating a mass change UI configuration is necessary to be able to create change requests.
Note
You can only have one mass change UI configuration for each instance.
Procedure
Based on this value, your mass change request is processed synchronously (instantly) or asynchronously (in the background). If the number of records you’ve modified exceeds this value, the mass change request is processed asynchronously. The data to process includes your changes and any changes resulting from associated business rule execution. Changes resulting from business rule execution are always processed asynchronously.
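Example
Suppose, purely as an illustration, that the configured threshold is 100. If you modify 80 position records and no business rules add further changes, the request is processed synchronously and the results are available immediately. If you modify 250 records, or if business rules generate additional changes, the request is processed asynchronously in the background.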
Note
5. Add fields from the position data object to the configuration, by selecting from the fieldName dropdown.
Note
• The <effectiveStatusStr> field is added to the configuration by default, and can’t be removed.
6. To configure a field to be filterable by default on the mass change UI, set the corresponding defaultFilter value to Yes.
7. To configure a field as a filter, select Details and set it as filterable.
Note
If the field has a parent, ensure that you add the parent field to the configuration and set it as filterable.
8. To configure a field to be editable as part of a mass change request, select Details and set it as editable.
Note
• If the field has a parent, ensure that you add the parent field to the configuration and set it as editable.
• Except for <externalCode>, all other fields can be configured as editable.
9. Optional: Add matrix relationship to the configuration by selecting from the associationName dropdown.
Note
By default, matrix relationship is editable as part of a mass change request. You can’t configure the
association to be filterable as it isn't supported.
Results
You’ve created a request job. You can perform either of the following actions from Take Action:
• Modify a configuration.
Note
You can't modify a configuration if there are mass change requests in the Processing Draft status. You can
edit once the requests are moved to the Draft status.
Caution
All drafts associated with a mass change configuration are deleted if you:
• Modify or delete fields from the job configuration.
• Delete the MassChangesJobConfiguration object.
Related Information
Prerequisites
Context
You can create a mass change request to update position records only.
Procedure
Note
Effective Date is the only filter criterion that's mandatory. Though it's initially preselected to show the current date, you can select any other date in the past or future, as applicable.
Note
You can retain filter values while viewing the Draft screen in Mass Data Management. By hovering over Other Filters, you can also view the list of filter values applicable for that search result. Currently, a filter group can contain a maximum of 8 filters (excluding Effective Date).
To filter based on the matrix relationship for a position, you can select a new filter, Matrix Position, in Adapt Filters. You're required to configure this field in Manage Data and Configure Object Definition for it to be available under New Position Mass Change, Adapt Filters, Matrix Position. This helps you update positions in bulk with Matrix Position as a filter category.
Records matching the filter criteria are fetched. Position and Start Date columns are added to the table
configuration by default.
Note
If you initiate a search with no criteria in the search box, an empty page appears with no data. You need to
refresh the page to load the data again.
To change the table configuration, select (Columns). All fields added to your mass change UI configuration
are displayed here.
Recommendation
For a better user experience, we recommend adding not more than 15 columns.
Note
• In the Matrix Relationship for Position section, select the values in sequential order so that the values remain editable; otherwise, the values are in read-only mode.
For example, you configure 5 fields for Matrix Relationship For Position, such as Type, Related Position, Custom Field 1, Custom Field 2, and Custom Field 3. Type and Related Position values must be provided to make Custom Field 1 editable. Once the Custom Field 1 value is provided, the system enables Custom Field 2, and so on.
• When you modify a matrix relationship for a position, you can view the changes in the Matrix
Relationship for Position dialog box. The old matrix relationship record is crossed out and the new
value is displayed.
• You can't add a new matrix relationship by selecting a relationship type that already exists for any
of the selected positions.
• You can't delete existing matrix relationships.
• On incremental changes to the draft for Matrix Relationships, the edit page defaults to the position's original value rather than to the last updated value. All other fields on the edit page default to the last updated values.
Modified records are shown with a clickable Modified link. Select the link to see your edits and the changes resulting from business rule execution.
11. Choose Apply and then OK to apply the changes to the selected records.
The Modified page appears. You can view a list of all modified records on this page.
Note
The onChange rules configured on the modified fields are triggered on applying the changes. All changes are applied to the selected records, and your mass change request is saved as a draft. Depending on the number of records you’ve modified, your mass change request is processed synchronously or asynchronously. During asynchronous processing, your changes are processed in the background and the results become available once processing completes. The status of your mass change request is temporarily set to DRAFT PROCESSING. You cannot make any changes to the mass change request until the status changes back to DRAFT. You can have at most 10 drafts active at a given point in time. Drafts that remain unchanged for 30 days from the date of creation are automatically deleted.
12. Optional: If you want to edit other records from the list, choose the All tab and make the necessary changes.
13. If you want to review records containing errors, choose the Containing Errors tab.
You can also cancel the mass change request anytime as long as the request is in the draft status.
Note
16. Optional: At this point, you can share the mass change request with other users. To do so, go to the Mass
Actions Overview page and select the checkbox against your mass change request. Then select Share.
Note
A mass change request is successfully created, and you’re redirected to the Mass Actions Overview page. The
change request is saved in the PROCESSING status.
Results
You’ve successfully initiated a mass change request. You'll receive an email notification after the request is
processed successfully. A log file is also created for reference purposes, which is available for 6 months from
its creation date.
Mass change requests failing to complete successfully or containing any errors are classified with the status
ERROR. You can review the change request or notify someone to resolve the errors.
Related Information
After creating and submitting a mass change request, the following changes take effect:
A mass change request is successfully created, and you’re redirected to the Mass Actions Overview page. The
change request is saved in the PROCESSING status.
Note
Searches for mass change requests are case-insensitive; uppercase and lowercase letters are treated as the same. For example, ABC, Abc, and aBc are considered the same string value.
• In the Mass Actions Overview page, by default you can view the number of associated requests on the All Requests, My Requests, and Requests Shared With Me tabs.
After submitting the mass change request, you can also view the updates to the position data from Manage Data. Go to Manage Data and select Position in the search drop-down list. Then, in the search drop-down list next to Position, search for and choose the position that has the updates from the submitted job. You can view all the details of the position here.
You can delete mass change requests that are in the draft status from Mass Data Management.
Context
Deleting mass change requests permanently deletes all the draft data associated with them.
Procedure
All mass change requests in the draft status appear on the All Requests tab.
3. If you want to delete a single mass change request or multiple requests, perform the following steps.
Otherwise, skip the step.
a. Choose requests that you want to discard.
b. Choose Delete.
Remember
You can use the Delete option only for draft records. It is disabled if records of different statuses are
chosen for deletion.
c. Choose OK.
Mass Changes allows HR Admins to create and run mass changes for employees. HR Admins can manage changes to employees’ job information and job relationships based on an Employee Group. Additionally, with this feature, HR Admins can efficiently execute changes to employee data such as manager reassignments or reorganizations affecting a large number of employees.
Furthermore, HR Admins can manage mass changes to Position objects using business rules. However, we recommend using the Mass Data Management tool, which provides a better user experience, with filter capabilities, a table view of results, a draft option, and multiple other features, giving HR Admins better control over managing the data.
Related Information
Simultaneously make changes to a lot of job information and job relationship records.
Prerequisites
You have view and edit permissions for mass changes: Administrator Permissions Manage Mass Changes
Context
You can make effective-dated changes to employee records in 1 batch job. A common example of a mass change is to move a large number of employees to a new manager, a new building, or a new project.
Procedure
Employee Group: Select a group from the list or create a new group. If you select an existing group, you can select View to review the members of that group.
Effective Date: Select the date the change comes into effect.
Mass Change Area: Select Job Information or Job Relationship, or fields in those areas.
Event Reason: Select the relevant event reason, for example, Data Change.
4. Select Save to save this as a draft version. Select Save and Initiate to run the batch immediately.
In the Mass Changes list, you can see all mass changes created in the system or filter to see those created by
you. For mass changes with the status Draft, select the mass change to open it again.
Results
Once the batch is run, the status changes to Completed Successfully or Completed with Errors. For batches with
errors, you cannot reinitiate them. However, you can copy them using Actions to create a new batch.
For successful batches, new records are created in the Job History for the affected employees.
You can see the results of the batch job in a CSV where necessary.
To view all the jobs created in Manage Mass Change, go to Manage Data and choose MassChangeDefinition in
the search field. All the created jobs are available under the search drop-down list, next to MassChangeDefinition.
There is a mass change feature you can use to make changes simultaneously to a large number of positions.
Overview
• You can use a single rule to define the position target population and the change attributes.
• Changes are effective dated.
Note
For more information about Mass Data Management, please refer to the Related Information section.
Prerequisites
• The Mass Change Run object is RBP-secured by default. You grant access to it by going to the Admin Center
and choosing Manage Permission Roles . You can find the permission under Miscellaneous Permissions.
• You grant access to Manage Mass Changes for Metadata Objects also in Manage Permission Roles, this time
under Metadata Framework.
Restrictions
• If a pending position is valid for the change, but the effective start date is before the change date, the relevant
record can't be updated.
• No role-based permissions are applied to selecting and changing the positions.
• Only Set statements are allowed in the Select And Update Rule THEN condition.
• If any records in the mass change run contain an optimistic locking exception, then the whole batch will be
rolled back. For more information, see the Related Links section below.
You create a mass change run by going to the Admin Center and choosing Manage Mass Changes for Metadata
Objects. Here's an example, showing the sort of entries you can make:
• Code
Unique code for the new mass change run.
• Name
Translatable name for the new mass change run.
• Object Type To Be Changed
Indicates whether the object is a position or a time object.
• Change Date
This is the date on which the changes take effect. All records (active, inactive, pending) valid from this date that
match the IF condition of the Select And Update Rule are included.
• Synchronize To Incumbents
This indicates whether the changes in the position objects should be synchronized to the incumbents.
• Select And Update Rule
Enter a rule that defines which objects are selected and what is updated. Use IF conditions to restrict the
number of objects to be changed by this mass change run. Use SET statements in the THEN condition to
define the new values of objects.
Note
Only rules created with the Update Rule for Mass Change Run rule scenario can be selected. Only SET
statements are supported in the THEN condition.
• Execution Mode
You can choose Run or Simulate. When you choose Simulate, the mass changes aren't saved, but you can see
the result in the log. When you choose Run, the mass changes are executed and saved. You can monitor the
progress and check results of the job in the Scheduled Job Manager page.
Note
The status of these jobs will continue to be available in the legacy Monitor Jobs page.
• Execution Status
This field shows the status of the mass change run:
• Scheduled means that the mass change run is scheduled and will run soon
• In Progress means that the mass change run is still running.
• Executed means that the mass change run was successful.
• Failed means that there were errors in the last mass change run.
• Log
The log shows information about the executed mass change run. The information includes the number of
updated and failed objects/records and CSV field with detailed information about each record.
Note
If you select Simulate or Run as the execution mode and then save the mass change run, the system triggers
a QUARTZ Job (Name MassChangeRun_<Code><UniqueNumber>) that processes the change. So, it may take
a while before the run starts. You can review the status of the QUARTZ Job if you reload the mass change run in
Execution Status.
After the mass change run has finished, the user who started the mass change run receives an email. The
details can be found in the Log section of the mass change run.
In order to ensure that the mass change runs execute as smoothly as possible, we recommend that you enable
the rule cache.
The result is that the mass change run is much quicker than before, and you're significantly less likely to
encounter a timeout.
Once the mass change request is created, you can check the log in Scheduled Job Manager in the Admin Center. When the mass change is completed, the execution status changes to Executed and the logs are generated.
To view the submitted jobs for the position object in Manage Data, go to Manage Data and choose Mass Change Run. All the submitted jobs are available under the search drop-down list, next to Mass Change Run. You can also view the execution status of the job here.
Processing Details
Let's assume the mass change run discussed above is executed. The assigned rule "PosJobTitleChange" would
look like this:
Before the run, the positions data in the system looks like this:
It is also possible to modify the data of a composite object, such as Matrix Relationship. The rule shown in the
picture below is an example of such a rule.
The IF condition returns all positions that are assigned to company = SAP and that already have a matrix
relationship with Type = HR Manager Position and Related Positions is not equal to Expert Developer. The SET
statement updates this existing matrix relationship and sets the related positions to Expert Developer.
The SET statement on a composite object creates a new record if no record specified by the Select-Statement
in the SET-Condition was found. So, it's important to restrict this in the IF condition as shown in the screenshot
above.
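Purely as an illustration (the exact field names and rule editor layout in your instance may differ), such a rule could read roughly as follows:
IF Company is equal to SAP
AND Matrix Relationship.Type is equal to HR Manager Position
AND Matrix Relationship.Related Position is not equal to Expert Developer
THEN Set Matrix Relationship (where Type is equal to HR Manager Position).Related Position to be equal to Expert Developer
Because the IF condition only selects positions that already have a matrix relationship of that type, the SET statement updates the existing record instead of creating a new one.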
Related Information
Optimistic Locking
Foundation Data consists of information that defines your company, and everything else related to it such as the
organization, pay, job structures, and so on.
Foundation Data is commonly shared across the company. Therefore, it’s important to import foundation data before importing employee data. You can import foundation data and also translate it to ensure that your company's proprietary information is available in different languages.
There are preconfigured templates available in Employee Central to import Foundation Data. Each template corresponds to a data object, also known as a Foundation Object (FO). Each Foundation Object stores a specific type of foundation data.
Foundation Objects can be broadly classified into the following types, which include legacy Foundation Objects as
well as Metadata Framework Foundation Objects (Generic Objects):
Remember
Ensure that you have the Import Translations permission (under Administrator Permissions, Manage Foundation Objects) to import data and perform translations for selected foundation objects.
Related Information
Prerequisites
Corporate Data Model must be configured with the required Foundation Object definitions.
Importing Foundation Data is one of the first steps in the data import process. You can import Foundation Data
using preconfigured file templates available in Employee Central for each Foundation Object.
Tip
To ensure you’re using the up-to-date version of the CSV template, always download a copy from the system.
This topic covers the process of importing Foundation Data with legacy Foundation Objects. For information about
importing Foundation Data and Translations with MDF Objects (also known as Generic Objects), refer to the
Related Information section.
• Legal Entity
• Local Legal Entity
• Pay Group
• Department
• Business Unit
• Job Function
• Job Classification
• Local Job Classification
• Cost Center
• Pay Calendar
• Division
Procedure
Before importing Foundation Data, it’s important that you keep note of the associations established between
Foundation Objects while setting up the Corporate Data Model. To ensure that the import is successful, you
must import data of dependent and referenced objects before importing the Corporate Data Model.
Example
Let's suppose that you’ve created an association between Legal Entity and Department Foundation Objects,
where Legal Entity is associated with Department Foundation Object. As a result, the import file template
For more information about associations between different Foundation Objects, refer to the Related
Information section.
7. Choose a method of data import.
• Select Full Purge, if you’re importing the data for the first time, or you want to completely overwrite
existing data with information included in the template. Existing records not included in your import file are
unaffected.
Example
You add a new record for Cost Center B by importing data in Full Purge mode.
RESULT:
• Existing records with Cost Center ID as A remain unchanged.
• There is a single record for Cost Center ID as B.
Note
This option is selectable for all Foundation Objects except Dynamic Role and Position Dynamic Role, which
support data imports in Full Purge mode only.
Note
• If the number of records is greater than 10, the data is imported in the background.
Next Steps
Repeat the process to import data with other Foundation Objects as required.
Related Information
By translating Foundation Data, you can ensure that proprietary information of your company is available in
different languages.
Since foundation data includes fundamental information about your organization, pay, and job structure, which is
reflected globally, you can create multilingual versions to facilitate users across the globe to view this information
in their preferred language. Foundation data is translatable only if the user's preferred language is supported by the
system. To find the list of supported languages, go to Options Change Language .
Note
Foundation data isn’t translated in the People Profile. The People Profile shows basic foundation data, such as organizational information (division, department, location, and so on).
Related Information
Prerequisites
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Context
In terms of classification, the following data objects are identified as legacy Foundation Objects.
• Location Group
• Geo Zone
• Location
• Pay Range
• Pay Grade
• Pay Component
• Pay Component Group
• Frequency
• Dynamic Role
• Workflow Config
• Event Reason
You can translate the name and description attributes in all these Foundation Objects.
You can also translate custom fields in the Foundation Objects, provided they are of type translatable.
Note
The following procedure shows how you can perform a mass translation of Foundation Object data. If you
want to add or update translations to specific instances of legacy Foundation Objects, refer to the Related
Information section.
Procedure
Field Value
Note
A business key for an MDF object is a set of fields of
the MDF object that can be used as a unique key.
Hide External Code (visible only when Key Preference is set to Business Key): Select one of the following:
• Yes
• No (default value)
3. Select Download.
A popup window appears prompting you to select a location on your computer to download the zip file. Select
an appropriate location and save the zip file on your computer.
4. Prepare the import file template with data to import.
Title Description
foType: This column must contain the type of the legacy Foundation Object.
Example
businessUnit, jobFunction, company, to name a few.
value.<locale ID>: There's one column for each language pack selected in Provisioning. These columns must contain translations of the data that you’ve entered for the foField of a given foType respectively.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Example
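Purely as a hypothetical illustration (the exact columns, key fields, and locale columns depend on your configuration and the language packs enabled in Provisioning, so always use the template downloaded from your system), the import file could contain rows along these lines:
foType,foField,value.en_US,value.de_DE
businessUnit,name,Corporate Services,Zentrale Dienste
department,description,Sales Department,Vertriebsabteilung
Actual templates typically also contain a column identifying the specific record to translate; the layout shown here is an assumption.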
5. To upload the template, select Import Data from the Select an action to perform dropdown on the Import and
Export Data page.
6. Select one of the following options:
• CSV File, if you’ve downloaded the template excluding any related dependencies. On select, a form appears
with the following fields:
Field Value
File Encoding Choose the correct file encoding for your data.
Note
A business key for an MDF object is a set of
fields of the MDF object that can be used as a
unique key.
• ZIP File, if you’ve downloaded the template including related dependencies. On select, a form appears with
the following fields:
Field Value
If successful, data translations for legacy Foundation Objects are imported into the system.
Results
After the import, the system determines the language in which foundation data is shown to users based on the
following order of precedence:
1. Log on language of the user: The user's preferred language as selected in the Options Change Language
section.
2. Default company language: Used when no translation is available in the logon language of the user. The default company language is based on the configuration setting made in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Prerequisites
Data translations for legacy foundation objects are imported in the system.
Context
After importing data translations for legacy foundation objects through the mass import method, you can modify
translations in an existing instance of a legacy foundation object if required. To do so, you must edit the respective
foundation object instance and modify the translations by adding or updating data as necessary.
Recommendation
Follow this method only if you want to add or change a few terms in the legacy foundation object instance. Example: Revising terms that haven't been translated appropriately.
For information about mass importing translations for legacy foundation objects, refer to the Related Information
section.
Procedure
Example
Location Group
3. Select a corresponding instance of legacy foundation object type from the second Search dropdown.
The selected data object instance is displayed on screen.
Note
Translatable fields of the data object are associated with the icon.
A popup window appears with a list of supported languages and free text fields for entering translations. The list of languages corresponds to the language packs activated in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
6. Click Finished.
7. Enter the translations for other fields in a similar manner.
8. Click Save.
The foundation object instance is updated with the new translations.
Related Information
Create multilingual versions of data imported with Metadata Framework foundation objects.
Prerequisites
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Context
You can import data as well as translations for Metadata Framework Objects at the same time. In Employee Central, the following data objects are identified as Metadata Framework foundation objects (also known as Generic Objects).
• Legal Entity
• Business Unit
• Cost Center
• Division
• Department
• Job Classification
• Job Function
• Pay Group
• Pay Calendar
You can translate the name and description attributes in all these foundation objects.
Note
You can also translate custom fields in the foundation objects only if they are of type translatable.
Note
The following procedure shows how you can perform a mass import of Generic Object data and translations. If
you want to add or update translations in specific Generic Object instances, refer to the Related Information
section.
Procedure
Field Value
Example
Legal Entity
Note
A business key for an MDF object is a set of fields of
the MDF object that can be used as a unique key.
Hide External Code (visible only when Key Preference is set to Business Key): Select one of the following:
• Yes
• No (default value)
3. Select Download.
A popup window appears prompting you to select a location on your computer to download the zip file. Select
an appropriate location and save the zip file on your computer.
4. Prepare the import file template with the required data.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
5. To upload the template, select Import Data from the Select an action to perform dropdown on the Import and
Export Data page.
6. Select one of the following options:
• CSV File, if you’ve downloaded the template excluding any related dependencies. On select, a form appears
with the following fields:
Field Value
Select Generic Object Select the respective foundation object for which you
want to import translations.
File Encoding Choose the correct file encoding for your data.
Note
A business key for an MDF object is a set of
fields of the MDF object that can be used as a
unique key.
• ZIP File, if you’ve downloaded the template including related dependencies. On select, a form appears with
the following fields:
Results
1. Log on language of the user: The user's preferred language as selected in the Options Change Language
section.
2. Default company language: Used when no translation is available in the logon language of the user. The default company language is based on the configuration setting made in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Related Information
Prerequisites
Data translations for Metadata Framework foundation objects (also known as generic objects) are imported in the
system.
After importing data translations for generic foundation objects through the mass import method, you can modify
translations in an existing instance of a generic foundation object if required. To do so, you must edit the respective
foundation object instance and modify the translations by adding or updating data as necessary.
Recommendation
Follow this method only if you want to add or change a few terms in the generic foundation object instance. Example: Revising terms that haven't been translated appropriately.
For information about mass importing translations for generic foundation objects, refer to the Related Information
section.
Procedure
Example
Legal Entity
3. Select a corresponding instance of generic foundation object type from the second Search dropdown.
The selected data object instance is displayed on screen.
Note
Translatable fields of the data object are associated with the icon.
A popup window appears with a list of supported languages and free text fields for entering translations. There is also an entry that shows the default value currently set for the selected field. The list of languages corresponds to the language packs activated in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
6. Click Finished.
7. Enter the translations for other fields in a similar manner.
8. Click Save.
The foundation object instance is updated with the new translations.
A solution within SAP SuccessFactors Employee Central to perform mass changes to employee data.
Employee Data Imports is a dedicated method to help you execute changes to employee data in bulk. Using CSV templates corresponding to each HRIS element in Employee Central, Employee Data Imports enables you to manage changes to employee data efficiently.
Employee Data Imports can be used to manage data for different types of users in your company such as
employees, contingent workers, and so on.
The following are some of the prominent scenarios where mass data transactions can be administered using Employee Data Imports:
Currently, Centralized services support several participating entities to bring data consistency and a host of
performance-related improvements to the data import process. To know more about Centralized services, refer to
the Related Information section.
Related Information
Centralized services is an umbrella term for a collection of specialized services governing different processes in
Employee Central.
Centralized services is a framework that acts as a common platform supporting various features in Employee Central, with the aim of bringing consistency without compromising on quality.
Centralized services are enabled by default, and are applicable to data imports initiated from the Import Employee
Data page or OData APIs.
Note
You have the option to deselect settings that are Admin Opt-out, in which case the respective entity exhibits
legacy behavior without Centralized services.
The following HRIS entities are currently supported by Centralized services on Imports:
• Address (Universal)
The following HRIS entities are currently not supported by Centralized services on Imports:
• Basic Import
• Background Import
• Consolidated Dependents
• Extended Import
Before beginning the data import process, you must prepare your system to meet certain prerequisites.
Key Considerations
• To ensure that data is imported successfully, it’s imperative that you have a thorough knowledge of the HR data
to be migrated as well as the data models across both the applications.
• You must account for dependencies or constraints that can impact the data import process, if any. If there’s
Personally Identifiable Information or confidential data present, you must be aware of the parties who can
review and validate the data.
Note
At any given time, you can execute a maximum of three Employee Data Import jobs, simultaneously. If a new
job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has completed
its execution.
Related Information
Review the existing system settings to ensure an optimal performance during the data import process.
Context
Before you begin the data import process, be it foundation data or employee data, there are a few important
system settings you must review. Depending on the volume of data to be imported, you can configure these
settings accordingly. Based on established performance benchmarks, we also provide recommended values for
your convenience.
Procedure
3. Enter a value in the Set a batch size for employee and foundation data imports. (Enter a value between 1-100)
field.
The system divides and groups the records in your import file into smaller units called 'batches'. The batch size
is the number of records that the system processes in each batch. It's recommended that you set the batch
size value as 100.
Note
A value greater than 100 is not considered, and is automatically truncated to the maximum value of 100.
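For example, with the recommended batch size of 100, an import file containing 1,050 records is processed in 11 batches: 10 batches of 100 records each and a final batch of 50 records.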
This value corresponds to the number of days to retain details of the scheduled import jobs, for monitoring and
reporting purposes.
Note
The maximum number of records supported by Employee Central Imports is 30,000. If your import file has
more than 30,000 records, we recommend that you distribute the data into multiple files and upload each file
separately.
Note
• For Employment based entities, the maximum number of records supported by Employee Central
Imports is 30,000 without rules execution. If there are large number of rules configured or if there are
cross-entity rules, then we recommend that you distribute the data into multiple files with a maximum
of 20,000 records and upload each file separately.
• For Person based entities, the maximum number of records supported by Employee Central Imports is
30,000. If your import file has more than 30,000 records, we recommend that you distribute the data
into multiple files and upload each file separately.
• At any given time, you can execute a maximum of three Employee Data Import jobs, simultaneously. If
a new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs
has completed its execution.
• If an upload of an import file takes more than 2 hours, resulting in interruption, we recommend that you split the file into two, as illustrated in the sketch below. Splitting your file ensures that you use system resources optimally. For example, in the table below for Job Information import with rules, we recommend a maximum upload of 20,000 records. If the time taken to complete this upload is more than 2 hours, then in the next job split the upload into two files of 10,000 records each.
Addresses: 30,000
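If you need to split a large file before uploading, the following is a minimal sketch, not part of the product: a plain Python script that assumes a local CSV file with a single header row (some Employee Central templates include an additional label row; copy both rows into each part if so). It writes output files of at most 20,000 data rows each, repeating the header in every part.

import csv
import sys

def split_csv(path, chunk_size=20000):
    # Split 'path' into files of at most chunk_size data rows,
    # repeating the header row in every output file.
    with open(path, newline="", encoding="utf-8") as source:
        reader = csv.reader(source)
        header = next(reader)
        part, out, writer = 0, None, None
        for index, row in enumerate(reader):
            if index % chunk_size == 0:
                if out:
                    out.close()
                part += 1
                out = open(f"{path}.part{part}.csv", "w", newline="", encoding="utf-8")
                writer = csv.writer(out)
                writer.writerow(header)
            writer.writerow(row)
        if out:
            out.close()

if __name__ == "__main__":
    split_csv(sys.argv[1])

For example, running the script against a 45,000-record Job Information file produces three part files of at most 20,000 records each, which you can then upload as separate import jobs.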
Results
Related Information
To import employee data and perform related tasks, you must have the required permissions.
You can define permissions for different roles before assigning them to users, so that they can perform tasks
permitted by their role. It’s useful when there are many users who can access the Import Employee Data page, and
you want to define the level of access for each user.
Under Administrator Permissions, Employee Central Import Settings, the permission Enable Workflows for selected entities: workflows attached with the selected Employee Central Import entities are triggered when you’re importing employee data.
Under Administrator Permissions, Manage Hires, the permission Rehire Inactive Employee: you can rehire employees who have previously worked at your company.
Under Administrator Permissions, Manage User, the permission Allow users to view all the jobs: you can monitor jobs submitted by all users.
Under Admin Center Permissions, the permission Monitor Scheduled Jobs: allows you to monitor your import jobs using the Scheduled Job Monitor tool.
Under User Permissions, Employee Central Import Entities, the permission Employee Central Import Entities: allows you to import employee data with the selected Employee Central Import entities. Supported Import Entities:
For HRIS fields that are configured with Visibility as View in combination with the Allow Imports status in the Manage Business Configuration page, you can choose to allow or disallow data uploads.
Visibility = View, Allow Imports = Yes: With imports, you can upload field values for this configuration. If you have these fields as part of the template, values from the template are imported.
Visibility = View, Allow Imports = No: With imports, you cannot upload field values for this configuration. These fields are not part of the template. However, values from the previous record are copied over to the record being imported.
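For example, suppose a Job Information field is configured with Visibility set to View. With Allow Imports set to Yes, the field appears in the import template and the value you provide in the file is imported. With Allow Imports set to No, the field isn't part of the template, and the value from the previous record is copied over to the newly imported record.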
Configure the system to scan the data in the import files and take appropriate actions.
Context
As a security measure, you can set up your system to scan the data included in your import file and reject malicious content, if any.
Procedure
The Platform Feature Settings page appears where you can enable or disable platform features.
2. Select Security Scan of User Inputs.
Performance benchmarks are standards established with respect to the amount of time taken to import a set of
records with each employee data import entity.
Note
If you encounter issues when performing mass uploads of more than 10k records, you may choose to:
• Disable rule processing by disabling execution of rules from Employee Central Import Settings, Enable Business Rules for selected entities.
• Trigger rules selectively by applying a rule context when configuring them in the Manage Business Configuration page.
The benchmarks for each entity have been established in a test environment under the following conditions:
• The maximum number of records in the import file template for each entity applied was 10,000, unless
specified otherwise.
• Business rules are not applied while importing these records. If you have business rules applicable, there can
be a difference in the reported benchmark data and the actual data.
• Centralized services for Employee Central Imports is enabled, unless specified otherwise.
Note
The time required to import is proportional to the number of records in your import file.
(Benchmark table: import durations per entity. Values recoverable from this copy include Basic Import: 18; Employment Details: 1, Incremental Load: 2; Personal Documents Information: 1; Emergency Contact: 1, Incremental Load: 2; Recurring, Incremental Load: 6; Termination Details: 4.)
Tip
Suffix the ID with a “d” to distinguish between employees and their corresponding dependents.
Note
The duration for import remains unchanged as these dependent entities are not yet available on Centralized
services.
Related Information
A standard process to follow for performing various transactions on employee data through imports.
The Employee Data Import process involves 3 main steps: downloading the import file template, preparing the data to import, and importing the data. You must execute these 3 steps to perform any kind of bulk transaction on employee data.
The following use cases describe the data import sequence for regular or full-time employees.
Note
To know about the data import sequence for contingent workers, refer to the Related Information section.
Note
1. Basic User Information This step involves the following data imports:
Example
The social security number (SSN) in the United
States, the Permanent Account Number (PAN) in In-
dia.
To modify existing user data, follow the data import process to import the corresponding entity data. There’s no
recommended import sequence in this case.
Example
To modify employment information of users, import data for Employment Details entity.
Deleting user data is a sensitive case as it can cause downstream implications, and must be carried out carefully.
You can import data to delete information associated with a single entity or multiple entities together. For more
information, see Employee Data Deletion with Imports [page 124]
Note
Data deletion through Imports is not supported for non-effective dated entities like Biographical Information,
Employment Information, and so on.
Rehire Users
Rehire Users with new User ID
To rehire users with a new user ID, the recommended import sequence is as follows:
1. Employment Details
2. Job History
To rehire users with an existing user ID, follow the data import process to import Job History details.
For more information, see Rehire Employees with Data Imports [page 157].
Related Information
Begin the data import process by downloading an up-to-date version of the preconfigured import file template.
Prerequisites
• The HRIS elements corresponding to the data you want to import must be configured to include the required
data fields.
• The HRIS elements that support country/region-specific templates must be configured, to download country/
region-specific import templates.
• If Centralized services is not enabled for Person Relationship imports, to include country/region-specific
address information in Person Relationship imports, enable the Enable Country/Region-Specific support for
Address Information in Personal Relationships Imports (Not applicable for Centralized Services) setting on the
Company System and Logo Settings page.
Note
By enabling this setting, fields and country/region-specific fields configured for the homeAddress HRIS
element will be available for selection while downloading import templates for Person Relationship. If the
setting isn't enabled and the field is-address-same-as-person is enabled, 9 hard-coded address fields
are available for you to include in the template, no matter whether the fields are configured in any data
model.
Downloading the import file template from the server is recommended to ensure that you have the latest and up-
to-date version of the template. You can also choose to download a country/region-specific template, or customize
your template by selecting from the list of available fields.
Procedure
Choose this option to download the template for the required country/region. This option appears only if the
entity you've selected supports country/region-specific information under the Available Data Fields column.
You can select multiple countries/regions if required.
Note
For country/region-specific entities, you have an option to download the base model fields by deselecting all the countries/regions in the Select country/region list.
5. (Optional) If you want to encode the template in a format other than Unicode (UTF-8), select an encoding
format from the File encoding dropdown.
6. Select the data fields to include in the import template from Available Data Fields.
This option appears only if the entity you've selected supports data field selection.
The selected fields appear under the Selected Data Fields column.
Note
To remove a selected data field, choose (Delete) against the respective field.
Mandatory fields, or 'Business Keys', are automatically included under the Selected Data Fields and can't be removed. In SAP SuccessFactors, each record is identified by a set of unique identifiers known as Business Keys. Based on the business keys, employee records are created or updated.
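For example, in a Job History import, a record is typically identified by the combination of User ID, Event Date, and Sequence Number; if a record with the same key values already exists, the import updates it, otherwise a new record is created.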
Note
If you have selected multiple countries/regions, ensure that you select only the required fields as
custom data fields configured for other countries/regions that you haven’t selected can also appear in
the list.
Example: While downloading a Job Information template for Canada, custom fields configured for USA
also appear in the list of Available Data Fields.
Related Information
SAP SuccessFactors provides preconfigured templates for importing different types of employee data.
An import file template is a CSV file intended to serve the purpose of importing a specific type of employee data.
Since Employee Central identifies and stores employee data in what are known as HRIS elements, there is an
import file template corresponding to every HRIS element involved in the data import process.
For each import template, the following table lists the corresponding HRIS element as present in the Business Configuration UI (BCUI), whether it supports country/region-specific templates, whether it supports manual data field selection, and whether it requires the role-based permission Employee Central Import Entities.
Extended Import No No No
Background Import No No No
Compound Delete No No No
• Biographical Information
• Employment Details
• National ID Information
• Email Information
• Phone Information
This template supports the import of multiple national IDs, phone numbers, and email IDs.
Related Information
After downloading the import file template, the next step in the data import process is to prepare the data to import
in the required format.
Procedure
1. Ensure that all the required fields are present in the import file template.
2. Understand the format, the permitted values, and other attributes of each field in your import file template.
Refer to the corresponding subtopic of the entity for which you're importing data for more information about the business keys, Data Object tables, additional supported configurations, and related use cases.
3. Ensure that your spreadsheet management application supports the file encoding format you selected while
downloading the import file template.
Unsupported file encoding formats can cause special characters in your import file to become corrupted. If the field labels or data contain a comma (,), ensure that the corresponding values are enclosed in quotes.
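For example, a value such as Sales, EMEA must appear in the file as "Sales, EMEA" so that the comma isn't interpreted as a column separator.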
Note
4. To retain leading zeros in your import file template, change the format of the required cell. To do so:
a. Right-click the cell, and select Format Cells.
b. On the Number tab, select the Text option.
c. Save your changes.
We've described the process in reference to the Microsoft Excel application, but the same logic can be
applied to other popular spreadsheet management applications.
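For example, a cost center code such as 000123 entered in a cell with the default General format is stored as 123; formatting the cell as Text first preserves 000123.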
5. Pay special attention while mapping data with fields of the following data types:
(Table: data types requiring special attention, with columns Data Type, Description, Sample Field, Expected input in import file, and Sample Value. Recoverable details: for picklist fields, remember to provide a value corresponding to your locale; the system always considers active picklist values. For date fields, the expected input follows the format shown in the template, for example DD/MM/YYYY.)
6. If your import file contains data of new employees, ensure that the <start-date> value in the initial records of
Job Information, Personal Information, and Compensation Information (with the Hire event) matches with
the Hire Date in their Employment Details.
With Centralized services enabled, the Employment Information <start-date> is updated with the effective
start date of the most recently processed Hire/Rehire record. This sync happens whenever a Hire/Rehire
record is modified or inserted.
Since Employee Central is an effective-dated system, any mismatch in the date values leads to data
inconsistencies.
7. If you're importing data for an entity other than Addresses and Job Relationship (Work Permit) imports, you
don't need to enter the <end-date> value in the import files.
Even if you specify a value, the end dates are calculated automatically based on the corresponding start dates.
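For example, if a user has a Job Information record effective January 1, 2023 and you import another record effective July 1, 2023, the end date of the first record is automatically set to June 30, 2023, regardless of any end date you supply.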
8. Follow the recommended sequence for importing data for different roles.
Before you import employee data, the employee's manager and the HR representative data must be imported.
Related Information
Notable points to help you prepare data for importing Basic User Information.
General Information
Data fields (see Basic User Information: Data Field Definitions): STATUS, USERID, USERNAME, FIRSTNAME, LASTNAME, GENDER, EMAIL, MANAGER, HR, DEPARTMENT, TIMEZONE. The Basic User entity is currently not supported by Centralized services.
• Assignment ID Definition
For more information about each configuration/process, refer to the Related Information section.
New passwords generated with basic user imports will serve for basic authentication and token-based SSO only.
No emails are generated for resetting passwords. To reset passwords, go to your identity provider's administrator
panel.
Related Information
Notable points to help you prepare data for importing Biographical Information.
General Information
Note
User ID is a case sensitive field. Ensure that you enter the correct value to avoid validation errors.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Employment Details.
General Information
Note
User ID is a case sensitive field. Ensure that you enter the correct value to avoid validation errors.
Remember
You cannot hire a user and set the new hire as a manager in the same import. You must first hire the user and
then update the manager in a second import.
• Assignment ID Definition
• Business Rule Configuration
• Identical Record Suppression
• Rehire Former Employees (with a new User ID)
For more information about each configuration/process, refer to the Related Information section.
You can rehire previous employees either with their old user ID or give them a new ID to start a new employment record. While rehiring inactive employees with a new employment, the <Original Start Date> field located under the Employment Information section of the Employee Profile highlights the effective start date of the employee. Since the <Original Start Date> of the new employment is different from that of the old employment, when you import the rehire data, the system automatically updates the <Original Start Date> field with the value provided in your import file.
You can configure your system to keep the Original Start Date of employees unaffected by data imports. To do so,
you must:
As a result, when the data is imported, even if the <Original Start Date> is updated, the custom date field will
always show the start date of the old employment.
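For example, suppose an employee was originally hired on January 1, 2015 and is rehired with a new user ID effective June 1, 2023. After the rehire import, <Original Start Date> shows June 1, 2023, while a custom date field configured to retain the old value continues to show January 1, 2015.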
Note
To rehire a user with their old employment information and existing user ID, use the Job History import.
Related Information
Notable points to help you prepare data for importing Global Assignment information.
General Information
• Assignment ID Definition
With Centralized services enabled, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
For more information about each configuration/process, refer to the Related Information section.
Data Validation
• When importing the Global Assignment record, the system checks whether there is an active Job Information record as of the global assignment start date, and either successfully imports the global assignment record or raises a validation error, meaning that the import fails.
The status of the Job Information record, as of the global assignment start date, cannot be any of the following: Termination, Retired, or No Show.
Example
Existing data: a future termination record effective September 30th. Action: import a global assignment with start date September 15th. Result: Successful, because the Job Information record is still active as of the global assignment start date.
Existing data: a future termination record effective September 30th. Action: import a global assignment with start date October 10th. Result: Failed with validations, because the employment status as of the global assignment start date is Termination.
Related Information
Notable points to help you prepare data for importing Job History.
General Information
Data fields (see Job History: Data Fields Definition): User ID, Event Date, Sequence Number. Job (Information) History imports are universally supported by Centralized services.
• Business Rules
• Document Attachment
• Employee Data Deletion
• Identical Record Suppression
• Forward Propagation of Data
• Workflow Configuration
Points to note for workflow configuration:
• Workflows are triggered only if your import file contains 1 record for each user. If there are multiple records
for the same user with at least 1 record requiring approval, the data for this user isn’t imported.
• Workflows are triggered only by the entity used in the import, for example, Job History. If no workflow is
triggered by the entity in the import, all changes are immediately saved to the database, including derived
entities, such as cross-entity rule results.
• Workflows are triggered only when a new Job History record is inserted in the import.
• Workflows aren't supported for the Update and Delete operations. This means that a workflow isn't
triggered when a Job History record with a configured workflow is updated, with or without derived entities
involved, such as Compensation Information.
• Workflows aren't triggered for a new hire record, with an exception for internal hires.
• Carbon copy (Cc) role notifications are supported.
• Cross-entity rules are supported.
For more information, refer to the linked topics listed in the Related Information section.
Note
You must import Employment Details before importing Job History for Hire, Rehire, or Termination events,
because the start and/or end dates of the employment are always adjusted with these events.
Data Validation
Centralized services introduce several validations to ensure data consistency. Job History imports are validated for
the following scenarios:
• To ensure that active employees don't have inactive managers on the start date of the newly imported Job
Information record, an error message is raised. To ensure that active employees don't have inactive managers
during the validity of the newly imported Job Information record, a warning message is raised. The validation
runs for active employees only.
• To ensure that exactly 1 hire record exists for each user, which is also the first record.
• To ensure the employment status doesn't change when the event or event reason is changed.
The validation checks whether your changes are compatible when you edit an existing record. You can change
a Location Change record and make it a Department Change by selecting another event reason, or you can
change 'New Hire' into 'Hire from Affiliate' - meaning it can be the same event or a different event, so long as
neither the old nor the new event introduces a status change.
It is possible that termination information changes the employment status, however, follow-on processes, such
as for Position Management, are not triggered.
• To ensure that the number of working days per week is greater than or equal to 0 and less than or equal to 7.
• To ensure that only the Set action is used in rules where Job Information or Job Information Model is the target
object.
• To ensure that a user can't be their own manager and can't be their manager's manager.
• To ensure that duplicate records are not created from cross-entity rules, the system checks onSave rules for
Job Information. The existing hire record is updated, however, forward propagation is stopped for the record.
If the rule result contains a 'special event', you can use a fallback event reason. This fallback event reason and
Position to Job Sync Validation: If you have Position Management enabled, you can execute position-specific
processes with Job History imports to validate the modified data. For more information, refer to the Related
Information section.
Higher Duty: During the Job History import of higher duty or temporary assignment records, the validations
related to higher duty or temporary assignment are executed and the reference salaries and higher duty allowance
are calculated and updated, where required.
Employment Information: Derivation to ensure that the start and/or end date of Employment Information is
always adjusted whenever a Job Information Hire/Termination/Rehire record is modified.
Global Assignment: Derivation to ensure that the Global Assignment start and end dates on Global Assignment
records are always adjusted when the corresponding records are imported using the Job History import.
Pension Payouts: Derivation to ensure that the obsolete pension payout dates are always adjusted when the
corresponding records are imported using the Job History import.
• Update fields other than the event reason field for a leave of absence.
Note
A leave of absence can only be added in Time Off. The system does not allow Job Information events to be
created in the import.
• Update the expected return date field, when enabled in Time Management.
Related Information
Notable points to help you prepare data for importing Job Relationships information
General Information
Business Keys: User ID, Event Date, Relationship Type
Data Fields Reference: Job Relationships: Data Fields Definition
Centralized Services: The Job Relationships entity is universally supported by Centralized services.
With Centralized services enabled, several supported configurations/processes are enhanced and a few new
configurations/processes are supported, which are as follows:
• Business Rules
• Employee Data Deletion
• End Date Correction
• Identical Record Suppression
• Forward Propagation of Data
For more information about each configuration/process, refer to the Related Information section.
Data Validation
Centralized services introduce several validations to ensure data consistency. Job Relationships imports are
validated for the following scenarios:
• To ensure that the start date is not before the first Job Information Hire record.
• To ensure that active employees don't have inactive related users on the start date of the newly imported Job
Relationships record, an error message is raised. To ensure that active employees don't have inactive related
users during the validity of the newly imported Job Relationships record, a warning message is raised.
Notable points to help you prepare data for importing Compensation Information.
General Information
For more information about each configuration/process, refer to the Related Information section.
Data Validation
Centralized services introduce several validations to ensure data consistency. Compensation Information imports
are validated for the following scenarios:
• When Compensation Information is imported, the system checks that the associated Pay Component
Recurring is imported and validated.
• If the Allow Retroactive Employee Data Changes permission setting is activated, the system validates changes
prior to the earliest retroactive date but only shows a warning message. If the Allow Retroactive Employee Data
Changes permission setting is deactivated, an error message is shown.
Related Information
Notable points to help you prepare data for importing Recurring Pay Component Information.
General Information
Business Keys: User ID, Event Date, Sequence Number
Data Fields Reference: Recurring Pay Component: Data Fields Definition
Centralized Services: The Recurring Pay Component import is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Data Validation
Centralized services introduce several validations to ensure data consistency. Recurring Pay Component imports
are validated for the following scenarios:
• To ensure that the Recurring Pay Component is validated along with the corresponding Compensation
Information when the import is executed.
• To ensure that a business rule with recurring pay components as the base object can't create or delete other
recurring pay components.
• To ensure that only pay components defined as recurring can be created or updated for Compensation
Information.
Pay Component Value: For pay components of type Amount and Percentage, the system ensures that a null value
is not accepted. However, a value of 0.0 is accepted. For pay components of type Number, usually the value is not
provided by the user since it is calculated and set by the system. Any value is accepted, including null.
Related Information
Notable points to help you prepare data for importing non-recurring pay component information.
General Information
Business Keys: Issue Date, Sequence Number (if enabled), Type, User ID
Data Fields Reference: Non-Recurring Pay Component Data Fields
Centralized Services: The non-recurring pay component import is universally supported by Centralized services.
Note
In the Incremental Load mode, the business key depends on the sequence number field:
• <allow-import>=false: user_id + pay_date + pay_comp_code is the business key.
• <allow-import>=true: user_id + sequence_number (when the sequence number is enabled) is the business key.
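The business-key logic described in the note above can be pictured with a small, purely illustrative sketch. The field names (user_id, pay_date, pay_comp_code, sequence_number) mirror the note; this is not an official API.

# Illustrative sketch of the business-key logic described above; not an official API.
def business_key(record, allow_import_sequence_number):
    if allow_import_sequence_number and record.get("sequence_number"):
        # <allow-import>=true: user_id + sequence_number
        return (record["user_id"], record["sequence_number"])
    # <allow-import>=false: user_id + pay_date + pay_comp_code
    return (record["user_id"], record["pay_date"], record["pay_comp_code"])

record = {"user_id": "emp_01", "pay_date": "2023-08-01",
          "pay_comp_code": "BONUS", "sequence_number": "1"}
print(business_key(record, allow_import_sequence_number=True))   # ('emp_01', '1')
print(business_key(record, allow_import_sequence_number=False))  # ('emp_01', '2023-08-01', 'BONUS')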
Sequence Number
For the non-recurring pay component data import process, the <sequence-number> field is a required field; however,
it is not a business key if it is disabled. This field accepts an alphanumeric value with a maximum of 38 characters.
With a unique sequence number, you can import multiple records having the same pay component on the same
day, without the risk of existing data being overwritten.
Mode of Input: Depending on the field, values are either user input or system generated.
Based on your business requirements, there are two cases related to sequence number configuration. To check
your existing configuration, go to Admin Center Manage Business Configuration Employee Central HRIS
Elements payComponentNonRecurring .
The system automatically generates a sequence number for each user record. However, the value isn’t displayed on
the UI.
• In the Full Purge mode, the sequence number is automatically generated if it doesn't exist in the import record.
• In the Incremental Load mode, ensure that there are unique values in the <sequence-number> field for each
user record.
Centralized services introduce several validations to ensure data consistency. Non-Recurring Pay Component
imports are validated for these scenarios:
• A non-recurring pay component record that refers to a time account payout record, can't be deleted.
• The payment date can't be changed for a non-recurring pay component record that refers to a time account
payout record.
• A non-recurring pay component record is imported only if a user's Job Information record exists.
• The pay component associated with a non-recurring pay component record is active and non-recurring.
• An import file doesn't have a non-recurring pay component record with multiple data operations for the same
business key. For example, an import record doesn't have an entry for a Create operation and then a Delete
operation with the same business key.
• Duplicate sequence numbers do not exist in an import record for a user.
Related Information
Prerequisites
You've enabled Admin Center Company System and Logo Settings Enable Centralized Services for Non-
Recurring Pay Component Information (Applicable only for data imports from UI and API)
For this scenario, the sequence number field is enabled with the attribute allow-import=true.
You add these records by importing data in the Full Purge mode.
Result
The user record that's effective on August 1st is deleted from the database and the two records with the sequence
number and amount provided for each are imported.
For this scenario, the sequence number field is either enabled with the attribute allow-import=false or the
sequence number field is disabled.
You add these records by importing data in the Full Purge mode.
The user record that's effective on August 1st is deleted from the database and the two records with the amount
provided for each are added.
Note
In the Full Purge mode, the sequence number is automatically generated if it doesn't exist in the import record.
However, we recommend that you provide the sequence number in the import file.
For this scenario, the sequence number field is enabled with the attribute allow-import=true.
Use case 1
Result
With the new issue date and pay component but with the same sequence number, the user record that matches
the business key (User ID + Sequence Number) is automatically updated, keeping the second record as is.
Result
With the new issue date, pay component, and sequence number, a new user record is added to the existing records
as there isn't a record that matches the business key (User ID + Sequence Number).
For this scenario, the sequence number field is either enabled with the attribute allow-import=false or the
sequence number field is disabled.
Use case 1
Sample User Data
Issue Date (dd/mm/yyyy) User ID Pay Component Amount
Result
Use case 2
Sample User Data
Issue Date (dd/mm/yyyy) User ID Pay Component Amount
Result
With the same issue date and a different pay component, a new record is created in addition to the existing records
as there isn't a record that matches the business key (User ID + Issue Date + Pay Component).
Use case 3
Sample User Data
Issue Date (dd/mm/yyyy) User ID Pay Component Amount
Result
With a different issue date and same pay component, a new record is created in addition to the existing records as
there isn't a record that matches the business key (User ID + Issue Date + Pay Component).
Use case 4
Sample User Data
Issue Date (dd/mm/yyyy) User ID Pay Component Amount
Result
With a different issue date and pay component, a new record is created in addition to the existing records as there
isn't a record that matches the business key (User ID + Issue Date + Pay Component).
Notable points to help you prepare data for importing Personal Information.
General Information
Related Information
Notable points to help you prepare data for importing Global Information.
General Information
Business Keys: Country, Event Date, Person ID External
Data Fields Reference: Global Information: Data Fields Definition
Centralized Services: The Global Information entity is universally supported by Centralized services.
Global Information supports updating existing country/region-specific information, even when you're importing
data in Full Purge mode.
To do so, you must have the Support cumulative update of country-specific data for global information import in full
purge mode permission.
Example
You add new records by importing Global Information in Full Purge mode.
If you have the Support cumulative update of country-specific data for global information import in full purge mode
permission, the existing country/region-specific record of the user is updated.
If you don't have the Support cumulative update of country-specific data for global information import in full purge
mode permission, the first record is updated with the new import data since the business keys match and the
second record is deleted.
Notable points to help you prepare data for importing Phone Information.
General Information
Business Keys: Person ID External, Phone Type
Data Fields Reference: Phone Information: Data Fields Definition
Centralized Services: The Phone Information entity is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Email Information.
General Information
Business Keys: Person ID External, Email Type
Data Fields Reference: Email Information: Data Fields Definition
Centralized Services: The Email Information entity is universally supported by Centralized services.
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing Social Account Information.
General Information
Business Keys: Person ID External, Domain
Data Fields Reference: Social Account Information: Data Fields Definition
Centralized Services: Social Accounts Information is universally supported by Centralized services.
With Centralized services enabled, Social Accounts Information entity supports the following configurations/
processes:
For more information about each configuration/process, refer to the Related Information section.
Related Information
Notable points to help you prepare data for importing National ID Information.
General Information
Related Information
Provide a temporary national ID in the import template for employees who don't have a valid national ID during the
hiring process.
Prerequisites
Note
For information about a complete list of fields available for nationalIdCard element, refer to the Related
Information section.
Procedure
Note
This is optional. You can also leave this column blank.
On a successful import, the employee's temporary national ID is stored in the system. You can view the national ID
information on the Employee Profile page.
Notable points to help you prepare data for importing Address information.
General Information
Providing an end date in the address record for an employee isn't a mandatory requirement as it is automatically
calculated based on the respective start date. But if you intend to provide an end date, note the following.
For multiple records with the same Person ID External, it's recommended that you provide End Date values in all such
records, or don't provide any value at all. Providing an End Date value in some records and not in others results in
validation errors.
End dates explicitly provided in the import template will be considered only with Full Purge imports. If you are
importing data in Incremental Load mode, end dates are calculated automatically.
End date correction is a method by which the system determines whether or not there can be gaps (time frame
between the end date of one record and the start date of the next record) between different address records of an
employee. While preparing import data, you can choose to specify an end date value along with other information
in your import file template. Based on whether you import the data in Incremental Load or Full Purge mode, the end
dates are adjusted automatically.
Example
Let's suppose you import the following records in Incremental Load mode.
Any date gaps between the address records are automatically adjusted. The end date of the last record is also
adjusted.
Now, you add a new record that is active anywhere between the start date of the first record and the end date
of the last record.
The end date of the first record is corrected in accordance with the start date of the newly added record. Also,
the end date of the new record is corrected in accordance with the start date of the third record. The reason
is that the end date of any record can't be greater than the start date of the subsequent record. This process of end
date calculation holds true even if you don't provide an end date value in your import file template.
If you're importing data in Full Purge mode for the same address type with end dates provided, gaps in between
records are considered and end dates aren't corrected.
Let's suppose your import file template has the following records ready to be imported in Full Purge mode.
Related Information
Notable points to help you prepare data for importing Emergency Contact Information.
General Information
Business Keys: Person ID External, Name, Relationship
Data Fields Reference: Emergency Contact Information: Data Fields Definition
Centralized Services: The Emergency Contact Information entity is universally supported by Centralized services.
Data Validation
Data validations on Emergency Contact Import are introduced for the following:
You can't delete the record of a primary emergency contact for employees through data import. You must either
update an existing record as the primary record or import a new record (that will act as the primary record) before
deleting the existing primary record. However, if you want to delete only the primary record, you can do so manually
in People Profile.
Notable points to help you prepare data for importing Person Relationship Information.
General Information
Business Keys: Person ID External, Event Date
Data Fields Reference: Person Relationship: Data Fields Definition
Centralized Services: The Person Relationship entity is supported by Centralized services by default through the
setting Admin Center Company System and Logo Settings Enable Centralized Services for Dependents
(Applicable for data imports from UI and API and saving changes on the Editing UI).
With Centralized services enabled, several configurations and processes are supported as follows:
Tip
To edit address information of dependents by import, set the is-address-same-as-person field to Yes
or import another template of address information for dependents.
• After you update the person relationship information for a dependent, the dependent's data, for example,
personal information and address information, is aligned automatically. This also applies to the employee's
other dependents.
If the dependent is also an employee in the same company, you can't delete the dependent's data except
for the person relationship through a DELIMIT operation, because the data is shared by the employee
records.
Data Validation
• A data validation process is introduced to ensure that the person ID external of employees is not reused in the
related-person-id-external field for dependents.
For separate data management of employees and dependents, employees and dependents must have different
person IDs external, even if in reality they are the same person.
• The setting Company System and Logo Settings Enable Address Validations applies to addresses of
dependents as well.
• Employee addresses aren't copied for dependents if any incorrect or invalid address information is found.
• When you import person relationship information, the dependents' other data will be validated.
You can assign a dependent as a beneficiary only for Australian employees. To do so, you must enter the details of
the dependent beneficiary in the Is-Beneficiary column of your import file template.
Note
If you've assigned a dependent beneficiary for non-Australian employees, the system skips importing such
data. However, if the non-Australian employees have a pension payout, you can assign as many dependent
beneficiaries as required.
When preparing to import work permit data with Centralized services, understand and comply with the following
requirements.
General Information
Note
• Importing work permits with Centralized services is now based on user ID; previously, it was based on
person ID. Because one person can have multiple user accounts, you could end up purging work permit
information of multiple users when you chose to purge the data of one person. With a user ID-based import
process, you only purge work permit information for the particular user ID you specify.
• Personal Documents Information supports data imports in Full Purge and Incremental Load modes.
• Document Attachment
• Identical Record Suppression
For more information about each configuration/process, refer to the Related Information section.
Use Cases
Example
Data imports are validated differently before and after Issue Date is included in business keys.
Record existing in system — User ID: Logged-in user's ID; Country: United States; Document Type: Work Permit; Document Number: 2283D2FBC2021; Issue Date: 2021-10-01
Record in import file — User ID: Logged-in user's ID; Country: United States; Document Type: Work Permit; Document Number: 2283D2FBC2021; Issue Date: 2022-04-09
Before: Business keys consisted of User ID, Country, Document Type, and Document Number, but not Issue
Date. So, as the two records have identical business keys and they're considered as the same record, the
existing record would be simply updated to have a new issue date.
After: Business keys consist of all the five fields. So, as the two records don't have identical business keys, the
record in the import file will be added to the system as a new record.
Record 1 in import file — User ID: Logged-in user's ID; Country: United States; Document Type: Work Permit; Document Number: 2283D2FBC2021; Issue Date: 2021-10-01
Record 2 in import file — User ID: Logged-in user's ID; Country: United States; Document Type: Work Permit; Document Number: 2283D2FBC2021; Issue Date: 2022-04-09
Let's assume that, at the time of importing, no other work permit record existing in the system has a business
key identical to that of record 1 or record 2, whether the business key is a combination of the four or the five
fields.
Before: Business keys consisted of User ID, Country, Document Type, and Document Number, but not Issue
Date. So, as the two records have identical business keys, neither would be imported.
After: Business keys consist of all the five fields. So, as the two records don't have identical business keys,
they'll both be added to the system as new records.
Related Information
Notable points to help you prepare data for importing employee termination details.
General Information
When you import Termination Details data, the system creates both Termination Details and the corresponding Job
Information record with the termination date +1 day and the same event reason as was used in the Termination
Details import. You can terminate no-show new hires using the termination import as well.
Import Termination Details first and if needed, you can import Job Information to update any other details.
If data is imported in the wrong order (meaning, Job Information before Termination Details), the system still
creates a new Job Information termination record corresponding with the data in the Termination Details import.
It does this with sequence number = null, which results in the system determining the next sequence number and
using it. For example, if you import the Job Information termination record first, it gets the sequence number "1".
Then, when you import Termination Details, the system creates the Job Information record to correspond with the
Termination Details import, but since a Job Information record already exists on that date, the system creates
the new Job Information record with the sequence number "2".
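To illustrate the sequence-number behavior described above, here is a minimal sketch; the data structure is hypothetical and only shows how the next sequence number for a given effective date is determined.

# Hypothetical sketch: pick the next sequence number for a Job Information
# record on a given effective date, as described above.
def next_sequence_number(existing_records, effective_date):
    used = [r["seq"] for r in existing_records if r["date"] == effective_date]
    return max(used, default=0) + 1

records = [{"date": "2023-06-30", "seq": 1}]          # Job Information termination imported first
print(next_sequence_number(records, "2023-06-30"))    # 2 -> used for the record created by the
                                                      #      Termination Details import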
Note
Do not terminate managers and their direct reports in the same import. This leads to data issues and errors.
When a manager with direct reports is terminated, the system processes the termination of the manager
immediately. Then the system initiates two scheduled jobs, one to process the changes to direct reports and
another to process the changes to Job Relationships. A notification is sent to the user who performed the
action once the job has run successfully.
• Document Attachment
• Workflow Configuration
• Business Rules
• onSave rules with Job Information as source element and Job Information as the target element are
supported with the Trigger onSave Rules for HRIS Elements rule scenario.
• onChange rules for Job Information as source element are supported only with the Trigger onChange Rules
for HRIS Elements rule scenario.
For more information about each configuration/process, refer to the Related Information section.
The system allows for the termination of multiple employments for an employee in the same import file.
For terminating the main employment of employees with concurrent employment, you must ensure that you also
set one of their active employments as their new main employment.
The effective start date of concurrent employment 1 is July 1, 2019 whereas the effective start date of
concurrent employment 2 is December 1, 2019.
You can terminate the main employment of John only if you assign him a new main employment. In this case,
you must assign concurrent employment 1 as the new main employment as its effective start date is before
that of concurrent employment 2.
To assign a new main employment, you must enter the user ID of an active concurrent employment under the <New
Main Employment> column in your import file template.
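As a rough illustration of this rule, the following sketch picks the active concurrent employment with the earliest effective start date as the new main employment; the record structure and user IDs are hypothetical.

# Hypothetical sketch: choose the new main employment among active concurrent employments.
from datetime import date

employments = [
    {"user_id": "john_ce1", "start": date(2019, 7, 1),  "active": True},
    {"user_id": "john_ce2", "start": date(2019, 12, 1), "active": True},
]

new_main = min((e for e in employments if e["active"]), key=lambda e: e["start"])
print(new_main["user_id"])  # john_ce1 -> value to enter in the <New Main Employment> column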
Position Management
• For cases where transferring direct reports or the to-be-hired adaptation is required, we recommend using the
Termination Details import instead of the Job Information import since event reasons that change employment
status are not triggered in Job Information imports.
• For some events such as Furlough and Suspension, no transferring of direct reports or to-be-hired adaptation
is triggered when used in imports or the Job History UI.
Data Validation
There are several validations to ensure data consistency. Termination Details imports are validated for the following
scenarios:
Related Information
Context
After you’ve downloaded the import file template and prepared the import data, you must upload the file to
Employee Central for creating or updating employee records as applicable. Once the file is successfully uploaded,
data is added or updated in the system accordingly.
Note
Association validation for effective-dated entities and non-effective-dated entities is performed during
employee data import.
Depending on the selected entity, you can configure how the data must be imported based on your requirement.
Employee Central provides two options:
Full Purge: Adding new data, or replacing existing data with new data.
Incremental Load: Updating employee data already existing in the system.
Note
These options are also available while importing data using OData APIs.
Overwrite existing employee data with information in your import file by uploading data in full purge mode.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for
the employee's manager and HR representative must already exist in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field
in the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a
non-mandatory field for Job History data imports. If such fields are present in your import file without a
value, or not present in your import file at all, a null value is automatically added to them.
Context
Full purge imports are recommended when you’re importing data for the first time, or want to completely overwrite
existing information in the employee data files. You also have the option to upload multiple files together. For more
details, refer to the Related Information section.
Procedure
By selecting a proper file encoding format, you can represent data in different languages as necessary.
8. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
The options appearing in the dropdown correspond to the language packs selected during Employee Central
configuration. File Locale allows you to choose the language and locale for the data you’re importing, which
is especially important for date, number, and picklist fields, as the format can change based on the locale.
Selecting a file locale wouldn’t affect your default logon language or the language of your UI components.
9. (Optional) Depending on the selected entity, the Real-Time Threshold field is shown. Enter an appropriate
value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-
defined threshold value can’t exceed the default value. A default value is automatically displayed for the
selected entity. If the number of records in your import file is greater than the Real-Time Threshold value, data
imports happen asynchronously. Otherwise, data is imported synchronously.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined depending on the
number of records in your import file.
10. (Optional) Select Validate Import File Data to check if there are any discrepancies in the data provided in the
template.
Run this check to ensure that there are matching import headers, and the CSV file you're uploading contains
valid data. If the entity for which you’re importing data supports Centralized services, and has a parent or child
entity, then data associated with the parent or child entity is also validated.
Example
When importing Personal Information, the associated Global Information is also validated and vice versa.
When importing Compensation Information, the associated Pay Component Recurring information is also
validated and vice versa.
Results
The data is validated and the import process is initiated. After the import process is complete, email notifications
are generated. The email notifications also include details about any errors encountered during the process.
If you are importing data for an entity that has related entities, the related entity records are implicitly copied
over. For instance, the Personal Information entity is related to Global Information, and they share a parent-child
relationship.
Note
Next Steps
Tip
If some batches fail because of picklist unavailability, reimport the failed records.
If a job isn’t showing any progress, it can be because the UI isn’t displaying updated results. In such cases,
note the time stamp and report a case.
Recommendation
At any given time, you can execute a maximum of three Employee Data Import jobs simultaneously. If a
new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has
completed its execution.
Related Information
Selectively update existing employee data by uploading data in incremental load mode.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for
the employee's manager and HR representative must already exist in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field
in the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a
non-mandatory field for Job History data imports. If such fields are present in your import file without a
value, or not present in your import file at all, a null value is automatically added to them.
Context
Importing data in the Incremental Load mode is beneficial when you want to update specific information while
retaining a majority of existing employee data. You can also perform what is known as a Partial Import.
To perform a partial import, you must enter &&NO_OVERWRITE&& against fields in your import file, whose value you
want to retain.
Note
• You cannot enter &&NO_OVERWRITE&& against fields that are business keys.
• Business rules consider updating fields marked as not to be overwritten, if you have the Enable execution of
rules against NO_OVERWRITE permission.
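As a sketch of how an import file for a partial import might be prepared, the snippet below writes &&NO_OVERWRITE&& into every non-key column whose value should be retained. The column names and file name are placeholders, not the actual template headers.

# Illustrative only: column names are placeholders, not the real template headers.
import csv

BUSINESS_KEYS = {"user-id", "start-date"}          # business keys must keep real values
columns = ["user-id", "start-date", "department", "division", "location"]
changes = {"user-id": "emp_01", "start-date": "01/01/2024", "location": "Berlin"}

row = {col: changes.get(col, "&&NO_OVERWRITE&&") for col in columns}
for key in BUSINESS_KEYS:
    row[key] = changes[key]                        # &&NO_OVERWRITE&& isn't allowed for business keys

with open("job_history_partial.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=columns)
    writer.writeheader()
    writer.writerow(row)
# Result: department and division keep their existing values; only location is updated.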
Procedure
Based on your selection, additional options appear on the page. If you’ve selected Basic Import, select More
Options to further customize your import. For information about configuring advanced options with basic
imports, refer to the Related Information section.
By selecting a proper file encoding format, you can represent data in different languages as necessary.
7. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
The options appearing in the dropdown correspond to the language packs selected during Employee Central
configuration. File Locale allows you to choose the language and locale for the data you’re importing, which
is especially important for date, number, and picklist fields, as the format can change based on the locale.
Selecting a file locale wouldn’t affect your default logon language or the language of your UI components.
8. (Conditional) Define the Real-Time Threshold value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-
defined threshold value can’t exceed the default value. A default value is automatically displayed when you
select an entity. If the number of records in your import file is greater than the Real-Time Threshold value,
the system imports the data asynchronously. Otherwise, data is imported synchronously. Partial imports are
always asynchronous.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined depending on the
number of records in your import file.
9. (Optional) Select Validate Import File Data to check if there are any discrepancies in the data provided in the
template.
Run this check to ensure that there are matching import headers, and the CSV file you're uploading contains
valid data. If the entity for which you’re importing data supports Centralized services, and has a parent entity,
then data associated with the parent entity is also validated.
Example: Global Information supports Centralized services, and is the child of Personal Information. Hence,
when data is validated as part of Global Information imports, the corresponding Personal Information is also
validated.
10. Select Import.
Results
Fields not having any values in your import file are substituted with default values. The default value can be:
If you are importing data for an entity that has related entities, the related entity records are implicitly copied
over. For instance, Personal Information entity is related to Global Information, and they share a parent-child
relationship. When you're importing Personal Information on a future date, the Global Information associated with
the existing Personal Information is implicitly copied over and vice-versa.
After the import process is complete, email notifications are generated. The email notifications also include details
about any errors encountered during the process.
Next Steps
Tip
If some batches fail because of picklist unavailability, reimporting the failed records resolves the problem. If
a job is taking an unusually long time to complete, it’s possible that the UI isn’t displaying updated results. In
such cases, capture the time stamp and report the case.
• Repeat the procedure to upload another import file. Generally, you upload a single file to import data
corresponding to a respective entity. However, you also have the option to upload multiple files together. For
more details, refer to the Related Information section.
Recommendation
At any given time, you can execute a maximum of three Employee Data Import jobs simultaneously. If a
new job is submitted during this time, the new job is added to the queue until one of the ongoing jobs has
completed its execution.
Related Information
Only certain fields in your import file template support Partial Imports.
If you want to perform a Partial Import while importing data in Incremental Load mode, you cannot update
information for:
• ADDRESS
• The default Address entity does not support Partial Import.
• Location and Emergency Contact Foundation Objects also refer to this address. Therefore, these two
columns do not support Partial Import either.
• WORK_PERMIT_INFO
• PAY_CALENDAR
• JOB_RELATIONSHIPS
• JOB_FAMILY
• DYNAMIC_ROLE*
• WF_CONFIG*
• WF_CONFIG_CONTRIBUTOR*
• WF_CONFIG_CC*
Note
Prerequisites
You have selected More Options while importing basic employee data.
Context
You can configure specific tasks to be performed with basic user imports such as manager transfer, document
removal, and so on.
Automatic Completed Document Copy to New Manager: Move all the documents from the old manager's Completed folder to the new manager's Completed folder.
Automatic Inbox Document Transfer To New Manager: Move all the documents from the old manager's Inbox to the new manager's Inbox.
Automatic En Route Document Transfer To New Manager: Move all the documents from the old manager's En-Route folder to the new manager's En-Route folder.
Automatic Insertion of New Manager as Next Document Recipient (if not already): Make the new manager a part of the review process and remove the old manager from accountability henceforth.
Remove Inactive Employees' In-Progress Documents: Remove all in-progress documents of inactive employees.
Remove Inactive Employees' Completed Documents: Remove all completed documents of inactive employees.
Note
To find options related to Automatic Manager Transfer and Automatic Document Removal, the Effective
Dated fields in Basic Import setting must be enabled in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
Next Steps
Compress multiple import files together and upload them simultaneously as a Zip file.
Prerequisites
• You've updated the import file with the required data to upload.
• There are no dependencies on the import data. For instance, before importing data for an employee, data for
the employee's manager and HR representative must already exist in the system.
• The data import settings are properly configured.
• You've reviewed the configuration of the respective HRIS element on the Business Configuration UI.
Note
During data imports, the base configuration of the HRIS element takes precedence over its country/region-
specific and person type configurations. For instance, if <custom-string 1> is not a mandatory field
in the jobInfo HRIS element but mandatory in its contingent worker person type configuration, it is a
non-mandatory field for Job History data imports. If such fields are present in your import file without a
value, or not present in your import file at all, a null value is automatically added to them.
Context
Generally, you upload a single file to import data corresponding to a respective entity. However, if you have multiple
files to upload, you can compress them together into a Zip file and upload the single compressed file. The order in
which data is imported is as follows:
• Basic Import
• Biographical Information
• Employment Details
• Personal Information Import
• Job History
If you’re creating new employee records, ensure that your Zip file contains the following import files:
• Basic Import
• Biographical Information
• Employment Details
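A minimal sketch for bundling the files into a single Zip archive follows; the file names are placeholders for your prepared templates, and only the Python standard library is used.

# Illustrative sketch: compress the import files into a single Zip file for upload.
# File names are placeholders; use the names of your prepared templates.
import zipfile

import_files = [
    "basic_import.csv",
    "biographical_information.csv",
    "employment_details.csv",
]

with zipfile.ZipFile("new_hires_import.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    for name in import_files:
        archive.write(name)   # each CSV is added to the Zip file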
• Full Purge, if you’re importing data for the first time or you want to completely overwrite existing data with
information included in the import file.
Caution
With this option, all existing employee data (historic data included) corresponding to the selected
entity is removed. To keep existing information, you must include the same in the import file or create a
data backup, as required.
While importing data, if the system encounters fields without any value, null values are added to them.
• Incremental Load, to update employee data already existing in the system.
You can also use this option to perform a Partial Import, where only the required employee data is updated
and all the other details remain unchanged. To partially import data, you need to enter &&NO_OVERWRITE&&
against those fields in your import file whose values you don't want to modify. As a result, after you upload
the file in the system, except for business keys and fields having the value &&NO_OVERWRITE&&, all other
values will be updated with the values provided in your import file.
While importing data, if the system encounters a field without any value, &&NO_OVERWRITE&& or a null
value is added to it, depending on whether it supports partial import or not.
5. Select Browse to attach the Zip file from your computer.
6. (Conditional) Depending on the entity you’ve selected, the Import Description field appears. Enter a brief job
description.
7. (Optional) To select a data encoding format other than Unicode (UTF-8), choose from the File encoding
dropdown.
By selecting a proper file encoding format, you can represent data in different languages as necessary.
8. (Optional) To select the file locale other than English (United States), choose from the File Locale dropdown.
Selecting a file locale wouldn’t affect your default logon language or the language of your UI components. It
corresponds to the locale for which data is being imported. The options appearing in the dropdown are based
on the language packs selected during Employee Central configuration.
9. (Conditional) Define the Real-Time Threshold value.
You can change the Real-Time Threshold to specify a value lower than the default value shown. The user-
defined threshold value can’t exceed the default value. A default value is automatically displayed when you
select an entity. If the number of records in your import file is greater than the Real-Time Threshold value,
the system imports the data asynchronously. Otherwise, data is imported synchronously. Partial imports are
always asynchronous.
Note
The mode of importing data (synchronous or asynchronous) is automatically determined by the system
depending on the number of records in your import file. During an asynchronous import, the system
validates the first 10 records. Discrepancies, if any, results in errors. After you rectify the errors and upload
the file again, the import process will continue.
Tip
When importing data for new employees, validations like 'Person-ID-External is invalid' can appear. You can
ignore these validations and proceed with the import.
Results
The data is validated, and the import process is initiated. After the import process is complete, email notifications
are generated. The email notifications also include details about any errors encountered during the process.
Next Steps
After uploading the Zip file, you can monitor the import job by selecting Monitor Job.
Related Information
Monitor execution details and other statistics of each submitted job associated with employee data import.
Prerequisites
To monitor jobs submitted by you, you need the Administrator Permissions Admin Center Permissions
Monitor Scheduled Jobs permission.
To monitor jobs submitted by others, you need the Administrator Permissions Manage User Allow users to
view all the jobs permission.
With this permission enabled, in addition to viewing the jobs, you are able to access the Download Report link
from View Details in the Scheduled Job Manager.
Context
If the number of records in your import file is greater than the specified real-time threshold value, data imports
happen asynchronously. A job is submitted for further processing.
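The decision between synchronous and asynchronous processing can be pictured with a small sketch; the threshold and record counts below are made-up example values, not product defaults.

# Illustrative sketch of the threshold check described above; the numbers are made up.
DEFAULT_THRESHOLD = 100            # default shown for the selected entity (example value)
user_threshold = 50                # user-defined value; can't exceed the default

threshold = min(user_threshold, DEFAULT_THRESHOLD)
record_count = 250                 # records in the import file

mode = "asynchronous (job submitted)" if record_count > threshold else "synchronous"
print(mode)                        # asynchronous (job submitted)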
Procedure
The Scheduled Job Manager page opens in a tab in the browser. This page lists all the jobs (of type Employee
Data Import) that you have submitted.
3. Identify the job that you want to monitor.
4. Choose Actions View Details Download Status corresponding to your job, if you want to download a
copy of the job execution report.
The status file in a CSV format is automatically downloaded to your local system.
5. Choose Close to return to the Scheduled Job Manager page.
Related Information
5.3.6 Troubleshooting
Information about typical error or warning messages that you may encounter during the data import process along
with their cause and resolution.
Error/Warning Message: Internal system error encountered while importing record.
While Importing: 1. Manager's data, 2. HR Representative data, 3. Employee data
Cause: The data in your import file is invalid/incorrect, or the existing data is invalid.
Resolution: Verify that the data you're importing is correct/valid.

Error/Warning Message: The Date of Birth does not correspond to the National ID
While Importing: Biographical Information
Cause: For some countries, the National ID information includes the date of birth record.
Resolution: Verify the import data, make corrections as necessary, and import the data again.

Error/Warning Message: The Gender does not correspond to the National ID
While Importing: Personal Information
Cause: For some countries, the National ID information includes gender data of a person.
Resolution: Verify the import data, make corrections as necessary, and import the data again.

Error/Warning Message: Personal information does not exist
While Importing: Personal Information
Cause: This message most likely appears when a user's <person-id-external> or a dependent's <person-id-external> is invalid. This can happen when:
• The <person-id-external> record for a dependent does not exist.
• The <person-id-external> for a dependent exists, but is not associated with an employee.
• You're importing personal information for a user whose <person-id-external> does not exist.
Resolution:
• When the <person-id-external> record for a dependent does not exist, import biographical information to define a new person in the system. Thereafter, import person relationship information to associate the dependent with the respective employee.
• To import personal information of a user whose <person-id-external> does not exist, you must import their basic user information to create a <person-id-external>, and then import personal information.

Error/Warning Message: Failed to perform country specific validation. Please ensure this record is associated with a valid Company record for given dates
While Importing: Job History
Cause: This error message most likely appears when you're importing job history of employees, and the dates do not match the date format associated with the file locale selected during import. For example, your file locale is set to English (United States) and you'd like to import job history for employees from Germany. While Germany supports the date format DD.MM.YYYY, the data you upload is in the format MM/DD/YYYY (corresponding to the English (United States) file locale). If you specify dates in the German date format, the system will generate an error.
Resolution: Depending on the cause of the error, you can change the file locale or associate a country with the legal entity and try importing the data again.

Error/Warning Message: Country is a required field and cannot be blank
While Importing: Emergency Contact, Consolidated Dependents
Cause: This error message most likely appears when you leave the <Country> field blank while filling out the <Address> field when importing emergency contact or dependents information. <Country> is now a mandatory field when the visibility of the is-address-same-as-person HRIS field is set to No.
Resolution: Update the import file template with an appropriate value in the <Country> field and try importing the data again.

Error/Warning Message: User has already been rehired. You can delete the existing rehire record or terminate the employee to proceed with this rehire record
While Importing: Termination Details
Cause: This error message most likely appears when:
• You're importing a rehire record for an active employee.
• A rehire record already exists for the user.
Resolution:
• You must terminate the active employee before initiating the rehire process.
• You must delete the existing rehire record before importing the new rehire record.
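One of the errors above is caused by dates that don't match the selected file locale. A small sketch of that check follows; the locale-to-format mapping is illustrative and not a complete or official list.

# Illustrative check: do the dates in the file match the format implied by the file locale?
from datetime import datetime

LOCALE_DATE_FORMATS = {            # illustrative mapping, not the full list
    "en_US": "%m/%d/%Y",           # MM/DD/YYYY
    "de_DE": "%d.%m.%Y",           # DD.MM.YYYY
}

def date_matches_locale(value, file_locale):
    try:
        datetime.strptime(value, LOCALE_DATE_FORMATS[file_locale])
        return True
    except ValueError:
        return False

print(date_matches_locale("31.12.2023", "en_US"))  # False -> would fail the import validation
print(date_matches_locale("12/31/2023", "en_US"))  # True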
• While investigating the failed records in your import file, check if any records have the <End Date> field. If
the <End Date> in your import file does not correspond to the <End Date> value present in the system, the
import process will not be successful. Because the system calculates the end dates based on the hierarchy of
event start dates, you can remove this field value and try importing the file again.
• Ensure all the effective-dated records are in a single CSV file, when importing multiple batches in parallel.
• After a Basic Import, if you are unable to find users in your system, it is most likely that the list of users in the
“Everyone” group isn't refreshed. To fix this, re-import basic user information for a few users (not all). This will
refresh the list and you should be able to find the users now.
• After a Biographical Information import, if you face an error after clicking on Employee Information in an
employee profile to review the Biographical information, it is most likely that the import hasn't been completed.
Ensure that the import process is complete before reviewing the data in employee profiles.
• After importing dependents information, if you find that:
• A single dependent is shared by two employees, it might be due to an incorrect <Person-ID-External>
association. To resolve this, create one <Person-ID-External> for each dependent respectively.
Thereafter, associate this information with the corresponding employees using the Person Relationship
template.
• Dependents are converted to employees, remove the National ID of the dependent and re-import data with
a new <Person-ID-External>. This will avoid the validation check on the National ID.
Assignment ID can be used to define a relationship between the personnel (temporary or permanent) and your
company.
With Employee Central imports, there are two ways to define the assignment ID for new employees who have multiple
employment records:
Prerequisite: The required HRIS elements must be configured to add the <assignment_id_external> field.
To check the existing configuration, go to Admin Center Manage Business Configuration Employee Central
HRIS Elements .
Note
While importing data, if the system finds records where you haven’t entered an Assignment ID, a value is
automatically assigned from the corresponding <users_sys_id> field.
2. Configuring the system to auto-generate an assignment ID: Create an employee setting configuration to
generate assignment ID during the data import process.
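A rough sketch of the fallback described in the note above follows: if a record has no Assignment ID, the value of the corresponding <users_sys_id> field is used. The record structure and values are illustrative assumptions.

# Illustrative sketch of the fallback: if no Assignment ID is provided in a record,
# the value of the corresponding <users_sys_id> field is used instead.
def effective_assignment_id(record):
    return record.get("assignment_id_external") or record["users_sys_id"]

print(effective_assignment_id({"users_sys_id": "100045", "assignment_id_external": ""}))    # 100045
print(effective_assignment_id({"users_sys_id": "100046", "assignment_id_external": "A-7"})) # A-7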
Related Information
Auto-generate assignment IDs while importing data by creating an employee setting configuration.
Context
To automate the process of generating the Assignment IDs for new employees, you must create an employee
settings configuration and attach a business rule to auto-generate assignment IDs. The business rule is executed
during the data import process.
Procedure
Note
Ensure that you create a business rule instance belonging to the Generate Assignment ID External scenario
in Employee Central Core rule scenario category.
8. After creating the business rule, select the new rule from the Rule to generate Assignment ID External
dropdown.
9. Save the changes.
Results
Your employee settings configuration is activated. With this configuration in place, the system generates an
assignment ID while importing Employment Details or Global Assignment information of employees.
Note
• The assignment ID generated by the system can be different from the user ID of employees.
• Assignment ID from the import file takes precedence over the assignment ID generated by the system.
Related Information
Data validation and error management are part of Centralized services to help ensure data imports and the
resultant changes are accurate and consistent.
General Validations
Data imports for all entities supported by Centralized services are validated. The validations include:
• Association validation: To check the associations between the various Foundation Objects (FOs) and Generic
Objects (GOs), and verify if the data you’ve entered in the import file is correct.
Note
Inactive FOs and GOs are not considered during imports. During an import, if an event reason status is
specified as Inactive in the import file, then the Event Reason is considered for the import. However, if the
Event Reason status is configured as Inactive in Manage Organization, Pay and Job Structures, then the
event reason is not considered for import.
• Derived entity validation: To check if there are any issues arising as a result of cross-entity transactions with
data imports. Cross-entity transactions can arise out of cases such as rule execution, forward propagation, and
so on.
Example
While importing Job Information, if a cross-entity rule is executed to update Compensation Information,
the corresponding record is validated. Compensation Information is the derived entity in this case.
Note
The Job History import fails in Initial Load Mode if there is a cross-entity rule from Job Information to
Recurring Pay Component and there is no corresponding Compensation Information record.
• Duplicate record validation: To check if there are multiple records in your import file with same data.
• Forward data propagation validation: To validate existing data in all the target records qualifying for forward
propagation.
Example
Personal Information and Global Information share a parent-child relationship. When you're importing
Personal Information on a future date, the Global Information associated with the existing Personal
Information is implicitly copied over. If the existing Global Information has inconsistencies, they must be
resolved before importing Personal Information.
Similarly, with Global Information imports, the associated Personal Information record is validated for
consistency.
Example
When a new record for Compensation Information is imported, the recurring pay component is also copied
over from the previous record. During the copy, validation is completed for both Compensation Information
and the recurring pay component, to reduce data inconsistency.
• Sequence number validation: To check if a sequence number is entered for all the records having same
business keys. Applicable for data imports with entities that support addition or modification of multiple
records (having same business keys) on the basis of a sequence number, such as Job Information.
• Earliest date validation: To check if the effective date of changes is before the earliest date of the record and
that the logged in user has permission to make those changes. This validation is for all effective-dated entities
supported on Centralized services.
Error Management
Raise Message: While configuring business rules, you can add conditions to handle exceptional scenarios with a
warning or an error message using the Raise Message action. For more information about creating a rule that raises
a message, refer to the Related Information section.
Related Information
End date correction is a part of Centralized services to manage end dates of effective-dated employee records.
When you’re importing data for effective-dated entities, end date correction is required to keep data consistent in
the following scenarios:
For employees who have multiple records, end date correction is a method to determine whether there can be date
gaps between different records or not.
Note
A 'date gap' is the time frame between the end date of one record and the start date of the next record.
Based on whether you import the data in Incremental Load or Full Purge mode, the end dates are adjusted
automatically.
Note
Only for data imports in Full Purge mode.
Adding new records without providing end date values in the import file
Prerequisites: You’re importing data in Incremental Load mode.
Example
Result: End dates are automatically calculated. Date gaps, if any, between records are adjusted accordingly.
The end date of the last record is also adjusted.
Adding new records with end date values provided in the import file
Example
Let's suppose you're importing address information of an employee, with end dates provided for all records.
Result: Existing address information of employees is deleted and replaced with data from your import file. If
there are any date gaps between records, they are not corrected.
Example
The start date here means before the start date of an existing record.
You add two new records without providing an end date in the import file:
Result: The end date of the existing record is corrected in accordance with the new records added. The end dates of
the new records are also calculated.
Adding new records with start dates coming before the start date of the initial record
Example
You add a new record without providing an end date in the import file:
Result: The end date of the new record is corrected in accordance with the existing record.
Adding a new record with start date between the start dates of existing records
Example
Now, you add a new record between the start date of the first record and the end date of the last record.
Result: The end date of the first record is corrected according to the start date of the new record. Also, the end
date of the new record (second record) is corrected according to the start date of the third record. This is because
the end date of any record can't be greater than the start date of the subsequent record. This process of end date
calculation holds true even if you don't provide an end date value in your import file template.
The principles of end date correction demonstrated in these use cases are also applicable to other effective-
dated entities.
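The end date behavior in these use cases can be pictured with a small sketch. This is only an illustration of the rule that a record ends the day before the next record starts, with the last record ending on December 31, 9999; it is not the actual Employee Central implementation.
Sample Code
# Minimal sketch of end date correction for an effective-dated entity in
# Incremental Load mode: each record ends one day before the next record starts,
# and the last record runs until 31/12/9999. Not the actual EC implementation.
from datetime import date, timedelta

END_OF_TIME = date(9999, 12, 31)

def correct_end_dates(records):
    """records: list of dicts with 'start_date' (date). Returns records with 'end_date' set."""
    ordered = sorted(records, key=lambda r: r["start_date"])
    for current, nxt in zip(ordered, ordered[1:]):
        # No date gaps: an end date can't reach or pass the next record's start date.
        current["end_date"] = nxt["start_date"] - timedelta(days=1)
    if ordered:
        ordered[-1]["end_date"] = END_OF_TIME
    return ordered

# Usage:
# correct_end_dates([{"start_date": date(2020, 1, 1)}, {"start_date": date(2020, 5, 1)}])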
Deleting employee data is a consequential process and must be handled with utmost care. Data, once deleted,
can be difficult or impossible to recover.
Using data imports, you can choose to delete employee data associated with a single entity, or multiple entities as
applicable.
You can also create a DRTM purge request to purge a specific type of data rather than the entire set of user accounts
with all their records. However, if you want to do a full purge of inactive users, use the DRTM Master Data purge
instead. For more information, refer to the Related Information section.
Related Information
Prerequisites
• To delete country/region-specific information, the corresponding HRIS elements must be properly configured.
To review your existing configuration, go to the Admin Center Manage Business Configuration page.
• To delete Job History, ensure that the Company field is available in your import file template.
• To delete a leave of absence Job History record, ensure that Enable Leave of Absence Editing is enabled in the Time
Management Configuration MDF object.
Context
Employee data can be deleted in Incremental Load mode by choosing either the DELETE or DELIMIT operation
type. Which operation type applies depends on the entity you select for importing data (see the sketch after the key considerations below).
You cannot delete and add data at the same time. To do so, you must create two separate jobs.
Note
Business key changes are considered as a 'Delete' of existing business keys and 'Insert' of new business keys.
This applies whether the business key is changed or if the same business key is deleted and re-added.
Key considerations with Job Information, Recurring Pay Component and Compensation
Information deletion:
• You must provide the sequence number with the records in your import file template.
• You can delete data associated with multiple employee records.
• For a given employee, if your import file contains data to be deleted and added at the same time, Recurring Pay
Component associated with the Compensation Information (to be deleted) is automatically linked with the new
Compensation Information (to be added).
• Deleting Compensation Information results in the deletion of the corresponding Recurring Pay Component
information also.
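As an illustration only, the following sketch assembles a deletion row for an Incremental Load import. The column names (user-id, event-date, seq-number, operation) and the file name are assumptions; always start from the template downloaded for the entity, which defines the actual columns and whether DELETE or DELIMIT applies.
Sample Code
# Hypothetical sketch of building a deletion row for an Incremental Load import.
# Column names are illustrative; the downloaded template defines the real layout.
import csv

def write_delete_rows(csv_path, fieldnames, rows):
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

write_delete_rows(
    "job_history_delete.csv",
    ["user-id", "event-date", "seq-number", "operation"],
    [{"user-id": "jdoe", "event-date": "01/01/2021", "seq-number": "1", "operation": "DELETE"}],
)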
Procedure
For example, the Addresses entity uses the DELIMIT operation type.
Results
Related Information
Consolidate and import employee data with multiple entities for deletion.
Prerequisites
The Admin Center Manage Employee Central Settings Enable Compound Deletion setting is enabled. This
setting is required to download the corresponding import file template.
• Addresses
• Biographical Information
Note
You can only delete the Biographical Information of employees if there is no existing Employment
Information and/or Dependent information for them.
• Email Information
• Job Relationships
• National ID Information
• Pay Component Non-Recurring
• Phone Information
Context
You can use a dedicated import template to group together employee data belonging to different entities for
deletion.
Note
Data deletion with multiple entities always happens in Full Purge mode.
Procedure
To find the HRIS element identifier, go to Admin Center Manage Business Configuration page.
Results
Related Information
Forward propagation is supported for data deletion of all effective-dated entities. If forward propagation is
applicable, data changes are cascaded to subsequent records of the employee.
Note
When you delete data with imports, make sure that the start date falls within the date range of records existing
in the system.
Examples
Example
The following example shows how data changes are forward propagated when you delete data.
Result: The Billing records with the start date on January 1, 2013 and January 1, 2015 are deleted.
Example
The following example shows how data changes are forward propagated until there's a data gap when you
delete address information.
Existing address information of the employee (not fully reproduced here) contains home address records
effective from May 5, 2020 and from October 4, 2020, with a date gap between them.
After the import, the home address record effective from May 5, 2020 is deleted. The home address record
effective from Oct 4, 2020 is not deleted because of a data gap before it. The forward propagation stops when
there's a gap.
Example
The following example shows how the data change is forward propagated until the data in a future record is
different from the original value of the previous record.
104003 | Home | January 1, 2020 | April 30, 2020 | 278 Shadow Brook Street, Ada, OK 74820 | USA | 924
104003 | Home | May 1, 2020 | September 30, 2020 | 278 Shadow Brook Street, Ada, OK 74820 | USA | 924
104003 | Home | October 1, 2020 | December 31, 2020 | 111 Brook Street | USA | 333
104003 | Home | January 1, 2021 | December 31, 9999 | 111 Brook Street | USA | 333
After the import, the home address record effective from January 1, 2020 is deleted. The home address record
effective from October 1, 2020 is not deleted because the address record is different from the original data of
the previous record.
104003 | Home | October 1, 2020 | December 31, 2020 | 111 Brook Street | USA | 333
104003 | Home | January 1, 2021 | December 31, 9999 | 111 Brook Street | USA | 333
Related Information
A method to exclude duplicate records and redundant information while importing data.
Multiple records with the same data for a person indicate inaccurate or stale data, which can cause issues with
reporting and metrics analysis, and lead to performance inefficiencies. Identical record suppression enables you to
keep data consistent and prevents duplicate records from being imported.
Entities that support identical record suppression with data imports are:
• Addresses
• Biographical Information
• Employment Details
• Email Information
• Emergency Contacts
• Personal Information
• Global Information
• Phone Information
• Social Account Information
• Job History
• Job Relationships
• National ID Information
• Compensation Information
• Personal Documents Information
• Entities supported only with Centralized Services
• Compensation Information
• Recurring Pay Component
• Person Relationship Information
• Termination Details
Example
For effective-dated entities, if you're importing data that matches existing data for an employee on a given date,
the record isn’t imported. However, if you're importing data that matches existing data but on a different date,
the record is imported.
For non-effective dated entities, if you're importing data that matches existing data for an employee, the record
isn’t imported.
If you import Job History and Compensation Information with or without providing a seq-number in the import file,
the import data is validated against the existing employee data on the same effective date. If there’s no information
available as of that effective date, the system validates the data you’re importing with the previous effective dated
record. If there are still no changes, the data isn't updated.
Identification of records to suppress isn't a part of the validation process. However, when you do an import, the
business rules are executed to check whether there are any changes to the import data. If there are changes, the
record is saved. If the records are still identical even after rule execution, the record is suppressed.
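A minimal sketch of that decision, with illustrative names only: rules run on the imported row first, and the row is suppressed only if it still matches the existing record afterwards.
Sample Code
# Minimal sketch of identical record suppression: business rules run on the
# imported row first; only if the resulting data still matches the existing
# record is the row suppressed. Names are illustrative, not an EC API.
def import_record(imported, existing, rules):
    candidate = dict(imported)
    for rule in rules:                 # rules may change the data being imported
        candidate = rule(candidate)
    if existing is not None and candidate == existing:
        return "suppressed"            # identical even after rule execution
    return "saved"                     # data differs, so the record is saved

# Usage:
# import_record({"job-title": "Analyst"}, {"job-title": "Analyst"}, rules=[])  -> "suppressed"
# import_record({"job-title": "Analyst"}, {"job-title": "Manager"}, rules=[])  -> "saved"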
Related Information
Notable scenarios related to identical record suppression with employee data imports.
Case 1: You add a new record by importing data in Full Purge mode.
Result: All existing records are deleted and replaced with the new record. The audit details are updated.
Case 2: You add a new record identical to one of the existing records by importing data in Full Purge mode.
Result: The first record is not imported as the data is identical to an existing record, however, the audit details are
updated. The second record is deleted.
Case 3: You add a new record by importing the data in Incremental Load mode.
Result: The new record is added to the existing set of employee records. The audit details are updated.
Case 4: You add a new record identical to an existing record by importing the data in Incremental Load mode.
Result: The record is not imported. However, the audit logs are updated.
Case 1: You add a new record by importing data in Full Purge mode.
Result: The first record is imported as there is a change to Job Title. However, the second record is deleted and the
audit details are updated.
Case 2: You add a new record identical to an existing record by importing data in Full Purge mode.
Result: The first record is not imported as the data is identical to an existing record (suppressed) and the audit
details are not updated. However, the second record is deleted.
Case 3: You add a new record identical to an existing record by importing the data in Incremental Load mode.
Result: The record is not imported ('suppressed'). The audit details are not updated.
The following use cases are from the perspective of Compensation Information imports, but the behavior is the
same with Recurring Pay Component imports.
Note
For Compensation Information imports, if identical records are included with the import, the system executes
rules for Compensation Information to check if there are any changes to both Compensation Information and
Recurring Pay Component data. If there are changes to the data, the updated data is saved in the system.
However, for Recurring Pay Component imports, the system checks only for changes between the existing
Recurring Pay Component data and the Recurring Pay Component data included in the import file.
Importing Compensation Information with Centralized services enabled and Identical record
suppression not enabled
Let's suppose an employee has existing Compensation Information as follows:
You add the following records by importing data in Incremental Load mode.
Result:
• The first record is not imported ('suppressed') as the data is identical to an existing record. The audit details
are updated.
• The second record is imported successfully. But its sequence number is corrected to match the order.
Result: The record is not imported ('suppressed') as the data is identical to an existing record, and a warning
message is displayed on the UI. The audit details are not updated.
Forward propagation is a process by which data changes are cascaded or "propagated" to future dated records.
Forward propagation is supported only in Incremental Load mode.
Forward propagation of data is useful when modifications to employee data are applicable to corresponding
effective dated records in the future.
For records where forward propagation of data occurs, the system runs the validation checks against all the fields.
Currently, forward propagation of data is supported with the following data imports on Centralized services:
• Addresses
• Personal Information
• Global Information
• Job History
• Job Relationships
• Compensation Information
• Recurring Pay Component
• Termination Details (Job Information)
The following sections describe the behavior of these supported entities on Centralized services.
Note
Forward propagation of data when field values in the records subsequent to the newly added record match with the
values in the first record.
Example
These examples of forward propagation apply to Job History and Personal Information imports as well.
Example
Existing data:
Start Date | Custom String 1 | Custom String 2 | Custom String 3 | Custom String 4
April 1, 2019 | X | A | O | -
June 1, 2019 | X | B | O | -
September 1, 2019 | Y | C | O | -
Imported data:
Start Date | Custom String 1 | Custom String 2 | Custom String 3 | Custom String 4
May 1, 2019 | Z | B | Q | E
Result:
Start Date | Custom String 1 | Custom String 2 | Custom String 3 | Custom String 4
April 1, 2019 | X | A | O | -
May 1, 2019 | Z | B | Q | E
June 1, 2019 | Z | B | Q | E
September 1, 2019 | Y | C | Q | E
For the record with <Start Date> June 1, 2019, the values in Custom String 1, Custom String 3, and
Custom String 4 match the values in the record with <Start Date> April 1, 2019, but the value in
Custom String 2 doesn't match. Hence, data is propagated only to Custom String 1, Custom String 3, and
Custom String 4. Also, for the record with <Start Date> September 1, 2019, forward propagation happens
on Custom String 3 and Custom String 4 because the values were the same as in the previous record. However, for
Custom String 1 the propagation stops because the value differs from the previous record.
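The field-by-field behavior can be sketched as follows. This is only an illustration of the propagation rule described above, not the actual import logic; field names are placeholders.
Sample Code
# Minimal sketch of field-level forward propagation: a newly inserted value is
# copied to future records only while each future record still holds the same
# value as the record that preceded the insert; propagation stops for a field
# at the first future record whose value already differs. Illustrative only.
def propagate_forward(previous, inserted, future_records):
    """previous, inserted: dicts of field -> value; future_records: list of dicts, in date order."""
    for field, new_value in inserted.items():
        baseline = previous.get(field)
        for record in future_records:
            if record.get(field) == baseline:
                record[field] = new_value      # value unchanged so far: propagate
            else:
                break                          # a deliberate change exists: stop for this field
    return future_records

# Usage (mirrors the Custom String example above):
# previous = {"cs1": "X", "cs2": "A", "cs3": "O", "cs4": None}
# inserted = {"cs1": "Z", "cs2": "B", "cs3": "Q", "cs4": "E"}
# future   = [{"cs1": "X", "cs2": "B", "cs3": "O", "cs4": None},
#             {"cs1": "Y", "cs2": "C", "cs3": "O", "cs4": None}]
# propagate_forward(previous, inserted, future)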
Restriction
Example
These examples of forward data propagation apply to both Recurring Pay Component and Global Information
imports.
Note
When you add records of a child entity to a record without existing child entity records, forward propagation of
data happens if the field value in the future records is empty.
Example
In continuation with the above sample database output, if you delete an existing record by importing the
following data:
In this case, only the pay component data is deleted while all the other data remains intact.
Example
Result:
No forward propagation happens, because the number of children starting from January 1, 2017 is "0" and is
different from the original value of the previous record, which is empty.
Example
Result:
The updated field value "1" is forward propagated and stops on December 31, 2018, because there's no record
of global information from January 1, 2016 to December 31, 2016 and the value of <Number of Children> in
the future record is empty.
Result:
Forward propagation works when you add or modify (update or delete) data. The following examples apply to Job
Relationship imports as well.
For use cases about forward propagation in data deletion, see related information.
The following examples show how data change is propagated when you add address information.
Example
Example
The updated field value "30000" is forward propagated to the future record that starts from January 1, 2021
because before import, the future record has the same postal code as that of the previous record.
The updated field value "30000" is forward propagated and stops on December 31, 2020 because there's no
record from January 1, 2021 to April 30, 2021.
Currently, if you insert the same entity type within an existing record, the data isn't propagated forward. Only the
insertion of a different entity type within an existing record is propagated forward.
Example
The newly added home address information is propagated to future dated records with other details in the
import file template like Address 1 and Attachment ID.
Example
Person ID External Address Type Start Date End Date Address 1 Country/Region
Here, when a same entity type (in this example, entity Address Type is Home), within an existing record is
imported, forward propagation of data doesn't take place.
Example
For the same Sample Address Information mentioned in the previous example, import data with Address Type
as Others.
Person ID External Address Type Start Date End Date Address 1 Country/Region
Here, when a different entity type (in this example, entity Address Type is Others), within an existing record is
imported, forward propagation of data takes place.
Example
Forward propagation is supported only when a new type of import is inserted between the records. See the
following example for Job Relationship imports.
1 1.1.2000 HR Manager
Now, insert a record with 1.3.2000 and Custom Manager between record 1 and 2. The result is that forward
propagation doesn't take place because Custom Manager exists in the future records already.
1 1.1.2000 HR Manager
3 1.3.2000 HR Manager
Now, in the following example, insert a record with 1.3.2000 and Matrix Manager between record 1 and 2.
1 1.1.2000 HR Manager
1 1.1.2000 HR Manager
3 1.3.2000 HR Manager
Related Information
Sequence number generation and correction is a part of Centralized services to differentiate between employee
records having identical information.
Example 1
Example
Let's suppose you’re importing four new records for a user without providing a sequence number in your import
file.
These records are not imported because the system is unable to determine the first record of the sequence.
Example 2
Example
Let's suppose there are three records existing for a user as follows:
You add a new record by importing data in Incremental Load mode, without providing a sequence number in
your import file.
After the data is imported, the new record is assigned a sequence number to match the order of the existing
records.
Let's suppose you import another record on the same date in Incremental Load mode. This time you provide a
sequence number in your import file.
Since the business keys (Event Date, User ID, and Sequence Number) match with an existing record, the
corresponding record is updated.
Database Output
Event Date (dd/mm/yyyy) | User ID | Country/Region | Supervisor | Job Title | Sequence Number
Remember
Sequence number is a part of the key that identifies an employee record. Therefore, to update an existing
record rather than add a new one, you must provide the sequence number of the respective record.
Example
Result: The sequence numbers of the subsequent records are automatically corrected to keep the order.
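A minimal sketch of this sequence number handling, using illustrative field names: a missing sequence number is generated as the next number for the same Event Date and User ID, and a matching business key updates the existing record.
Sample Code
# Minimal sketch of sequence number handling for records sharing the same
# Event Date and User ID. Illustrative only, not the actual import logic.
def apply_import(existing, new_record):
    """existing: list of dicts with 'event_date', 'user_id', 'seq'; new_record may have 'seq' as None."""
    same_day = [r for r in existing
                if r["event_date"] == new_record["event_date"]
                and r["user_id"] == new_record["user_id"]]
    if new_record.get("seq") is None:
        # Generate the next sequence number to keep the order.
        new_record["seq"] = max((r["seq"] for r in same_day), default=0) + 1
    for record in same_day:
        if record["seq"] == new_record["seq"]:
            record.update(new_record)          # business keys match: update in place
            return existing
    existing.append(new_record)                # otherwise insert as a new record
    return existing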
Sequence number generation and correction for records created by business rules
Example
Let's suppose an employee has the following Job Information, Compensation Information, and Recurring Pay
Component Information:
You add a new record by importing data in Incremental Load mode. You also have a business rule associated
with the Job Information entity to create new Compensation and Recurring Pay Component records.
Result: When the new records created by the rule are associated with the user's Compensation Information and
Recurring Pay Component Information, the sequence numbers are adjusted automatically.
Example
You add three new records by importing Job Information in Incremental Load mode, without providing a
sequence number:
Result: These records are not imported because the system is unable to determine the first record of the
sequence.
• Example
Case 1: You add a new record by importing data in Full Purge mode, without providing a sequence number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) don't match with the existing record, the existing record is updated. The audit logs are
updated as well.
Case 2: You add a new record identical to the existing record by importing data in Full Purge mode, without
providing a sequence number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) match with the existing record, the data is not imported ('suppressed'). The audit logs
are not updated.
• Example
Case 1: You add a new record by importing data in Full Purge mode, without providing a sequence number.
Result: During the import process, a sequence number "4" is generated and assigned to the new record.
Case 2: You add a new record by importing data in Incremental Load mode, without providing a sequence
number.
Result: During the import process, a sequence number "1" is generated and assigned to the new record.
Data in the new record is matched against existing data. Since the business keys (Event Date, User ID,
Sequence Number) match with an existing record, the existing record is updated.
Case 3: You add multiple records by importing data in Full Purge mode, without providing a sequence
number.
Result: The records are not imported as the system is unable to determine the first record of the sequence.
• Example
01/01/2021 jdoe 1 4 1
01/01/2021 jdoe 2 4 2
You add recurring pay component information by importing data in Incremental Load mode, without
providing a sequence number.
Result: The Compensation Information record with sequence number 1 doesn't have any recurring pay
component records linked to it, whereas, the Compensation Information record with sequence number 2
has two recurring pay component records linked.
01/01/2021 jdoe 1 4 1
01/01/2021 jdoe 2 4 2
Persons who have previously worked at your company can be rehired if the required qualifications are met.
If a terminated employee is eligible to be rehired in your company, you can initiate the rehire process in Employee
Central. However, if a significant number of persons are to be rehired, you can use Employee Central Imports to
save time and manual effort.
There are two ways of rehiring former employees using employee data imports:
1. Rehire with the existing user ID of their previous employment. In this case, the previous employment data is
visible in the system.
2. Rehire with a new user ID. In this case, the previous employment data isn’t visible in the system. For more
information about rehiring users with a new user ID, refer to the Related Information section.
Rehiring Former Employees with an Existing User ID (New Employment) [page 158]
Create a fresh employment record by rehiring former employees with a new user ID.
Retain and refer to the old employment data by rehiring former employees using their existing user ID.
Prerequisites
Context
When you rehire former employees using their existing user ID, their old employment data is visible in the system
and is referred to directly wherever possible. It is also possible to rehire former no-shows.
1. To rehire users with the same user ID, download a copy of the Job History import file template.
2. Prepare data to import.
Along with other information, enter the user ID of their previous employment under <User ID> column and
the event reason for rehiring users (as configured for your company) under <Event Reason> column. For
information about other notable points with Job History imports, refer to the Related Information section.
3. Upload the file to import the data.
Results
If successful, users are rehired on a new employment with their existing user ID.
During the import, most of the termination-specific fields are removed. Fields such as Benefits End Date as well as
custom strings are deleted. Only the Ok to Rehire, Eligible for Salary Continuation, and Regret Termination fields are
kept.
If the user had a global assignment, then all specific fields for global assignment details such as Payroll End Date
and custom fields are also removed.
Related Information
Create a fresh employment record by rehiring former employees with a new user ID.
Prerequisites
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
• You must have the Manage Hires Rehire Inactive Employee with New Employment permission.
Context
When you rehire former employees by assigning a new user ID with their new employment, their old employment
data isn’t visible in the system. It is also possible to rehire former no-shows.
Note
If a user being rehired already has an active employment, a concurrent employment is created. This means
that concurrent employment must be enabled in your system, since the rehire process would fail if concurrent
employment isn’t enabled.
Procedure
Along with other information, enter the person ID of the user in the <Person ID External> column, and the
new user ID in the <User ID> column. The <Is Rehire> field must have a value equivalent to Yes.
Note
The <isRehire> field is a transient field used during import when a user is created for the Rehire with New
Employment scenario. This value is not saved to the database; it is only needed for the import.
To set the value to Yes, you can enter: Y, Yes, T, True, or 1. By default, the system considers the field value as
1.
For information about other notable points with Employment Details imports, refer to the Related Information
section.
3. Upload the file to import the data.
Along with other information, enter the new employment's user ID under <User ID> column and the event
reason for rehiring users (as configured for your company) under <Event Reason> column. For information
about other notable points with Job History imports, refer to the Related Information section.
If successful, the HRIS sync job is scheduled. After the job is completed, the new user accounts are activated.
Related Information
Update employee records with supporting information by attaching a digital copy of documents with import file
templates.
Prerequisites
• The <attachment-id> field must be enabled and configured as visible in the HRIS elements of Emergency
Contact and Person Relationship entities.
Note
The <attachment-id> field appears by default in the import templates of all other supported entities.
• To download country/region-specific import file templates, the corresponding HRIS element must be
configured.
To check the existing configuration, go to Admin Center Manage Business Configuration page.
Context
You can attach various types of documents such as Address Proof, Job Info Letter, and so on, with employee
records as a source of verification to claims made by employees. Document attachment is supported with the
following data imports:
Documents in the following formats are supported: *.doc, *.docx, *.pdf, *.txt, *.htm, *.ppt, *.pptx, *.xls, *.xlsx, *.gif,
*.png, *.jpg, *.jpeg, *.html, *.rtf, *.bmp, and *.msg.
Procedure
Example: mhoff_visa.pdf
4. Create a new file in a text editor program and save the file as import.properties.
5. Edit the import.properties file, and add the parameter <importFileName> to hold the name of your import
file template.
Example: Let's suppose that you’ve downloaded the template to import Job History. Then the value for
<importFileName> must be the same as the actual file name of the CSV import file.
6. Compress the import file, the required documents, and the import.properties file into a zip file. (A scripted sketch of steps 4 to 6 follows the Results section.)
7. Upload the file to import the data.
Results
If successful, the documents included in your zip file are linked with corresponding employee records.
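The file preparation in steps 4 to 6 can be scripted. The following sketch uses example file names (job_history_import.csv, mhoff_visa.pdf), which are assumptions; only the importFileName parameter and the zip packaging reflect the procedure above.
Sample Code
# Minimal sketch of steps 4-6: write an import.properties file that names the
# CSV template, then zip the CSV, the documents, and import.properties together.
# File names are examples only.
import zipfile
from pathlib import Path

import_csv = Path("job_history_import.csv")     # your filled-in import template
documents = [Path("mhoff_visa.pdf")]            # attachments referenced via <attachment-id>

# The importFileName parameter must match the actual CSV file name.
Path("import.properties").write_text(f"importFileName={import_csv.name}\n", encoding="utf-8")

with zipfile.ZipFile("job_history_with_attachments.zip", "w") as bundle:
    bundle.write(import_csv, arcname=import_csv.name)
    bundle.write("import.properties", arcname="import.properties")
    for doc in documents:
        bundle.write(doc, arcname=doc.name)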
Related Information
Use business rules to perform predefined tasks with different events during data imports.
Prerequisites
• You have set the Permission Settings Administrator Permissions Employee Central Import Settings
Enable Business Rules for selected entities permission.
• You have set the Permission Settings Administrator Permissions Employee Central Import Settings
Enable execution of rules against NO_OVERWRITE permission.
Context
You can apply business rules with data imports to execute supplementary tasks when data is changed or saved. All
HRIS entities are supported.
Business rules can be linked to an HRIS element or any of its fields. Rules are triggered based on the event type
selected. For imports, HRIS elements support the onSave and onPostSave event types, whereas HRIS fields support
only the onChange event type.
Note
When deleting records, business rules are not triggered, except for onSave rules for workflows. For those workflow
rules to be triggered, only the SET operation on the same record type being deleted is supported.
Restriction
onChange business rules attached to fields of an HRIS entity are triggered during the import process,
irrespective of whether there are any changes or not. However, this restriction isn't applicable to HRIS entities
supported by Centralized Services as onChange rules attached with such entities are triggered only when there
are changes to the corresponding field values.
In cross-entity rules, forward propagation on a target entity (for example, Compensation Information) is applied if
the source entity (for example, Job Information) has permission for forward propagation.
Procedure
The Rule ID automatically picks up the value entered in the Rule Name field. However, it can be changed.
5. Select a Base Object.
The base object must correspond to the name of the HRIS element. Here are a few suggestions:
You can handle exceptional conditions with the Raise Message action. However, this feature is supported
only for specific entities supporting Centralized Services. For all other entities, you can handle exceptional
conditions by:
1. Assigning the error or warning message to a custom string in the corresponding HRIS element.
2. Monitoring the custom string to catch any exceptions.
7. Save the configuration.
Next Steps
Assign the business rule to the desired HRIS element or field. You can do this either from the Configure Business
Rules UI or navigate to the Business Configuration UI to assign it to the HRIS element.
Related Information
Apply the business rule by assigning it to the corresponding Employee Central entity.
Prerequisites
If you’re assigning a business rule to an Employee Central entity optionally supported by Centralized services,
and you want to apply a rule context criteria with the rule, ensure that the Centralized services settings for the
optionally supported entities are enabled from Company System and Logo Settings page.
Context
You can apply a business rule to an HRIS entity or one of its fields. You can also define a rule context to prevent
unnecessary rule triggers.
Note
Rule contexts are applicable for onChange and onSave business rules only.
Procedure
Note
If you have Centralized services enabled, onSave business rules are executed with data imports only if
Imports setting is enabled.
Note
If you have Centralized services enabled, onChange business rules are executed with data imports only
if Imports setting is enabled.
Results
Notable scenarios related to business rule execution with different data imports.
Cross-entity rules configured to create new records are now regulated by Centralized services. Newly created
records are added only if they’re unique and don't match with existing records. Otherwise, the existing record is
updated.
The following example explains the behavior from the perspective of Job Information, but the logic is applicable to
cross-entity rules between Compensation Information and Recurring Pay Component entities.
You add a new Job Information record by importing data in Incremental Load mode.
An onSave rule is attached to the Job Information entity that creates a new Job Relationship record.
Rule-Created Data
Event Date (dd/mm/
User ID Relationship Type yyyy) Related User ID Notes
Result: This record is not added as the business keys (User ID, Start Date, Relationship Type) match with an
existing Job Relationship record.
Add a new Job Information record by importing data in Incremental Load mode.
Rule-Created Data
Event Date (dd/mm/
User ID Relationship Type yyyy) Related User ID Notes
Since the business keys of the new record match with an existing record, the existing record is updated with
new information instead.
onChange rules configured at the field level of an HRIS entity can update the value of other fields in your import file
only if the field value is null or empty.
Example
You add new Personal Information by importing data in Incremental Load mode.
Now, if there are onChange rules configured on <Custom String 1>, <Custom String 2> such that,
Sample Code
//Rule 1
if(Personal Information.custom-string1 == Personal Information Type C) THEN
SET Personal Information.custom-string2 == Personal Information Type E
//Rule 2
if(Personal Information.custom-string2 == Personal Information Type E) THEN
SET Personal Information.custom-string3 == Personal Information Type F
During the data import process, Rule 1 is evaluated as true. But since <Custom String 2> has a value in the
import file, it takes precedence over the rule evaluation. As a result, Rule 2 is evaluated as false.
Result:
Example
You add new Personal Information by importing data in Incremental Load mode.
Now, if there are onChange rules configured on <Custom String 1>, <Custom String 2> such that,
Sample Code
//Rule 1
if(Personal Information.custom-string1 == Personal Information Type C) THEN
SET Personal Information.custom-string2 == Personal Information Type E
//Rule 2
if(Personal Information.custom-string2 == Personal Information Type E) THEN
SET Personal Information.custom-string3 == Personal Information Type F
//Rule 3
if(Personal Information.custom-string3 == Personal Information Type F) THEN
SET Personal Information.custom-string1 == Personal Information Type G
During the data import process, Rule 1 and Rule 2 are evaluated as true. Therefore, the values of <Custom
String 2> and <Custom String 3> are updated accordingly. But since <Custom String 1> has a value in
the import file, it takes precedence over the rule evaluation. As a result, Rule 3 is evaluated as false.
Result:
ecsl | Custom String 1: Personal Information Type C | Custom String 2: Personal Information Type E | Custom String 3: Personal Information Type F
onChange rules configured on the field level of an HRIS entity don't update fields excluded from your import file or
marked as not to be overwritten. But if you have the Enable execution of rules against NO_OVERWRITE permission,
the rules can update fields not present in your import file, or fields marked as not to be overwritten.
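A minimal sketch of that precedence, with an illustrative helper name only: a value supplied in the import file wins, and otherwise the rule may overwrite an existing value only when the permission is granted or the existing value is empty.
Sample Code
# Minimal sketch of how a rule-set value is applied to a field that is missing
# from the import file or marked as not to be overwritten. Illustrative only.
def resolve_field(import_value, existing_value, rule_value, has_no_overwrite_permission):
    """Decide the stored value of a rule's target field after import."""
    if import_value is not None:
        return import_value                    # value in the import file wins over the rule
    if has_no_overwrite_permission:
        return rule_value                      # permission lets the rule overwrite existing values
    # Without the permission, the rule can only fill an empty existing value.
    return rule_value if existing_value in (None, "") else existing_value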
Example
Prerequisites: You don't have the Enable execution of rules against NO_OVERWRITE permission.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. But <Custom String 2> is not
present in the import file, and has a definite value in the existing record. Therefore, its value isn’t updated.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. But <Custom String 2> is
not present in the import file, and doesn’t have a definite value in the existing record. Therefore, its value is
updated.
Example
Prerequisites: You have the Enable execution of rules against NO_OVERWRITE permission.
You add new Personal Information by importing data in Incremental Load mode.
Sample Code
Result: When the data import job is in progress, Rule 1 is evaluated to be true. <Custom String 2> is not
present in the import file, and has a definite value in the existing record. But since you have the Enable execution
of rules against NO_OVERWRITE permission, its value is updated.
You add new Personal Information by importing data in Incremental Load mode.
Result: When the data import job is in progress, Rule 1 is evaluated to be true. <Custom String 2> is
not present in the import file, and doesn’t have a definite value in the existing record. Therefore, its value is
updated.
When you're adding multiple records for a user in chronological order, rules configured to evaluate based on the
previous value of a field (in correspondence with a given date) refer to the existing records of the user, and not to
the records in your import file. If there are no existing records present, the rule evaluates against a null value.
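A minimal sketch of how the previous value is resolved, assuming date objects and an optional sequence number; it only illustrates that the lookup goes against records already saved in the system, not against other rows of the import file.
Sample Code
# Minimal sketch of previous-value resolution for a rule: read the latest record
# saved in the system that is dated before the record being imported; with no
# existing records the previous value is null (None). Illustrative only.
def previous_value(existing_records, field, event_date, seq=1):
    """existing_records: dicts with 'event_date' (date), optional 'seq', and field values."""
    earlier = [r for r in existing_records
               if (r["event_date"], r.get("seq", 1)) < (event_date, seq)]
    if not earlier:
        return None                            # no existing record: rule sees a null previous value
    latest = max(earlier, key=lambda r: (r["event_date"], r.get("seq", 1)))
    return latest.get(field)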
Example
You add new Job Information for the same user by importing data in Full Purge mode.
You have the following rule configured with the Job Information entity for an onSave event.
Sample Code
When this rule is triggered for the record with event date (01/02/2020), the previous value of the <Business
Unit> is obtained as Business Unit DB, and not Business Unit Import.
For example, suppose there is an onSave rule that calculates the ratio between the previous value of the Salary
Amount field and its current value, and copies the result into the Salary Change % custom field.
01/01/2022 1 A
01/02/2022 1 B
You insert a new Compensation Information record for the same user on an existing date by importing data in
Incremental Load mode. Without a sequence number specified, the import is treated as an insert, the same as if
you had specified it to be the second record.
This new Compensation Information record contains C as the Salary Amount value on the 01/02/2022 with
sequence number 2.
01/02/2022 2 C
When the rule is triggered for the record with event date (01/02/2022), the previous value of the Salary
Amount is obtained as B. As a result, the value of Salary Change % is a ratio between B and C.
01/01/2022 1 A -
01/02/2022 1 B -
01/02/2022 2 C B
Example
For example, suppose there is an onSave rule that calculates the ratio between the previous value of the Salary
Amount field and its current value, and copies the result into the Salary Change % custom field.
01/01/2022 1 A
01/02/2022 1 B
This new Compensation Information contains C as the Salary Amount value on the 01/02/2022.
01/02/2022 1 C
When the rule is triggered for the record with event date (01/02/2022), the previous value of the Salary
Amount is obtained as B, and not A. As a result, the value of Salary Change % is a ratio between B and C.
01/01/2022 1 A -
01/02/2022 1 C B
A rule that uses the CREATE command to update an existing pay component recurring (PCR) record causes the
PCR to be updated with the rule result. The values of any fields that are not filled explicitly by the rule are taken
from the existing record. The audit fields (creation date and user) show that the PCR was updated.
Example
Let's suppose a user has the following Recurring Pay Component saved in the system:
Sample Output
Start Date (dd/mm/yyyy) | Pay Component | Amount | Currency | Frequency | Custom String | Created By | Created On | Changed By | Changed On
01/09/2022 | Base Salary | 1000 | USD | Monthly | existingValue | adminUser1 | 2022-09-01 12:00:00 | adminUser2 | <time at execution>
When another user inserts a new Compensation Information using the createPCR rule, the following result is
seen:
Sample Output
Start Date (dd/mm/yyyy) | Pay Component | Amount | Currency | Frequency | Custom String | Created By | Created On | Changed By | Changed On
01/09/2022 | Base Salary | 1000 | USD | Monthly | existingValue | adminUser2 | <time at execution> | adminUser2 | <time at execution>
Prerequisites
• You've enabled Enable Business Rules for Workflow Derivation setting in Provisioning.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact
your implementation partner or Account Executive. For any non-implementation tasks, contact Product
Support.
A workflow is a sequence of connected steps that allows you to notify and seek approval from the concerned
stakeholders before importing data. You can configure a workflow with the following data imports:
• Job History
Remember
All workflows are triggered when data is imported in Incremental Load mode only.
Remember
With Centralized services enabled, a workflow is created for each record, so multiple records can be processed
for a user. Without the Centralized services setting enabled, a workflow would be created for only one user record;
the remaining records for the user will not be processed.
• Termination Details
When terminating the employment of a user on a global assignment, the system now uses the End Global
Assignment workflow rather than the Termination workflow as was previously done.
Procedure
Tip
Ensure that the base object of your business rule corresponds to the Employee Central entity with which
you want to attach the workflow.
You've successfully configured and assigned a workflow. When the workflow is triggered, a task to review the import
data is created and assigned to each recipient defined in the workflow. Email notifications are generated to inform
the recipients about the actions. Based on their approval, data is imported.
Related Information
Govern who can and can't import basic user information by managing access to basic user import.
Context
Previously, admins having access to the Import Employee Data page could import all sorts of employee data,
including basic user information. With Q4 2019, you have the option to restrict access to basic user information
imports with the help of role-based permissions. You can configure your system so that admins having the Basic
User Import permission can import basic user information.
Note
If you choose not to configure role-based access to basic user information imports, admins can import basic
user information as long as they can access the Import Employee Data page.
Procedure
Next Steps
Grant the Basic User Import permission to the required user roles.
Related Information
Prevent complications arising out of legal entity-related changes with Job Information imports.
Context
While importing Job Information of employees rehired on a new employment, you must ensure that the legal
entity associated with their new employment doesn’t match with the legal entity associated with their previous
employment. Because employees can only be associated with one legal entity, a new employment must be
associated with a new legal entity.
Procedure
Create a recurring job to import a specific type of employee data to your Employee Central instance.
Prerequisites
You have an SFTP account. For security reasons, we recommend that you use the SAP SuccessFactors-hosted SFTP
server. If you don't have an SFTP account, contact your Partner or SAP Product Support.
Context
If you want to automate the employee data import process through an SFTP/FTP Job Scheduler, you can create a
recurring job request in Admin Center Scheduled Job Manager Job Scheduler Create Job Request .
For import jobs that contain sensitive data, if you download the file from the Scheduled Job Manager log, the
download is added to the Read Audit log. For more information, refer to the Scenarios Supporting Entering Download
Reasons topic.
Procedure
1. In the Admin Center, go to Scheduled Job Manager Job Scheduler Create Job Request .
2. Create a job definition by providing a Job Name and assigning a Job Owner.
3. From the Job Type dropdown, select Employee Data Import (For Employee Central Only).
4. Define the job parameters by selecting the type of data you want to import.
If you don’t have the up-to-date import template, select Download a blank CSV template.
For Basic Import, there are additional options available. Select the options as applicable.
Some import types provide the option of choosing how you want to import data. Select:
• Full Purge, if you want to delete or overwrite existing employee data with the data in your import template.
• Incremental Load, if you want to retain existing employee data and update only the required data present in
your import template.
If you want to encode the template in a format other than Unicode (UTF-8), select an encoding format from
the File encoding dropdown.
Note
The options appearing in the dropdown correspond to the language packs selected during Employee
Central configuration. File Locale allows you to choose the language and locale for the data you’re
importing, which is especially important for date, number, and picklist fields, as the format can change
based on the locale. Selecting a file locale wouldn’t affect your default logon language or the language of
your UI components.
Tip
7. Select FTP Passive Mode if applicable, only after confirming with the Operations team.
8. Keep the SFTP Protocol checkbox selected and test the connection. Also test whether the file put permission is
working as required.
9. Provide the file path where your import file can be located by the job scheduler.
10. Enter the name of your import file template, and select a date format and the file encryption method of your
choice.
If the job scheduler is unable to find your import file on the server, your scheduled job is not executed and
is marked with the status "Skipped".
11. Set the job recurrence criteria as applicable.
12. Enter the date and time when you would like the job to start.
13. Enter the email addresses of users who must be kept notified about events related to the job execution.
14. Select the checkbox to send an email notification when the job starts, if necessary.
15. Complete the job request creation by selecting Submit.
Results
If successful, your job request is created and listed under the Manage Scheduled Jobs page.
Next Steps
• Manually initiate the job for the first time. To do so, select Actions Run It Now against your job on the
Manage Scheduled Jobs page.
• Create a new job request for importing another type of employee data.
From a performance and resource-utilization standpoint, actively running data import jobs are interrupted or
canceled if they fail to meet the required criteria.
Data import jobs are now internally monitored against metrics such as time taken since inception, memory
consumption, and so on. If a job fails to meet a certain predefined threshold, it's interrupted or canceled as
applicable. The intention is to reduce the overhead cost of scheduling and running jobs with errors, and to enable
job owners to manage failed jobs more effectively.
This process of automatic interruption or cancellation applies to import jobs initiated from the UI or API, as well as
scheduled jobs in Provisioning.
If a job is canceled, the respective job owners are notified accordingly with the help of email notifications.
All interrupted jobs are recovered after some time with the same job request ID, and their status is set to
"Recovered".
A data import job is interrupted and recovered only twice. Thereafter, the job runs uninterrupted until completion.
You can monitor the progress of your jobs at Admin Center Monitor Jobs page.
Learn how you can keep the personal data of your employees secure and private with SAP SuccessFactors.
Data protection and privacy features work best when implemented suite-wide, and not product-by-product. For this
reason, they’re documented centrally.
The Implementing and Managing Data Protection and Privacy guide provides instructions for setting up and using
data protection and privacy features throughout the SAP SuccessFactors HCM suite. Please refer to the central
guide for details.
Note
SAP SuccessFactors values data protection as essential and is fully committed to helping customers comply
with applicable regulations, including the requirements imposed by the General Data Protection Regulation
(GDPR).
By delivering features and functionalities that are designed to strengthen data protection and security,
customers get valuable support in their compliance efforts. However, it remains each customer’s responsibility
to evaluate legal requirements and implement, configure, and use the features provided by SAP SuccessFactors
in compliance with all applicable regulations.
Related Information
Identify which data purge function in the Data Retention Management tool meets your data protection and privacy
requirements.
The Data Retention Management tool supports two different data purge functions: the newer data retention time
management (DRTM) function and legacy non-DRTM function.
Remember
We encourage all customers to stop using the legacy purge function and start using data retention time
management (DRTM) instead. To get started using this and other data protection and privacy features, refer to
the Data Protection and Privacy guide.
If you already use the legacy data purge function as part of your current business process and you are sure that it
meets your company's data protection and privacy requirements, you can continue to use it, as long as you are
aware of the differences between the two.
Note
If you are using the legacy data purge function, you can only purge a calibration session when there is at least
one facilitator assigned to the session.
Restriction
Be aware that the legacy data purge function may not meet your data protection and privacy requirements. It
doesn't cover the entire HCM suite and it doesn't permit you to configure retention times for different countries
or legal entities.
In the longer term, we recommend that you also consider adopting the newer solution. In the meantime, to use
legacy data purge, please refer to the guide here.
Related Information
Getting an Off Cycle Event Batch up and running is a three-step process.
Creating a Business Rule for an Off Cycle Event Batch [page 190]
Identify the employee records to be processed by the Off Cycle Event Batch by creating a business rule.
Creating a User or Group of Users for an Off Cycle Event Batch Job [page 192]
You can create user groups for an off cycle event batch job. This allows you to run the rule in off cycle event
batch, only for the users of that off cycle event batch user group.
Adding Off Cycle Event Batch Object to a Transport Bundle [page 194]
Adding Off Cycle Event Batch Object to a transport bundle.
Off Cycle Event Batch is a native Employee Central feature that offers key automation capabilities to help you
manage your employee data better.
An Off Cycle Event Batch is a Metadata Framework (MDF) object that you can use to create an automated process
for modifying the following employee data:
• Job Information
With the help of an Off Cycle Event Batch, you can configure your system to execute a customized set of
instructions in the background for transactions that are recurring. As a result, the periodic requirement for
manually updating the employee records is practically eliminated.
What are these "recurring transactions" an Off Cycle Event Batch can handle?
Recurring transactions are events that occur periodically in your company, and can therefore vary accordingly.
Some of them include, but are not limited to:
A fully configured Off Cycle Event Batch typically performs the following set of tasks:
Note
This task is executed only if you've configured your Off Cycle Event Batch to generate a list of employee
records to update. Otherwise, all the employee records in the system are considered.
2. Apply filter criteria as defined in your Off Cycle Event Batch object.
3. Execute the attached business rule to update the corresponding user records.
4. Import the data into the system.
Note
Job Information data is forward propagated to future effective dated records with Off Cycle Event Batch
Job if the Enable Forward Propagation during Incremental Import permission is enabled.
Related Information
Before you schedule a job to execute on a periodic basis, you must have an instance of Off Cycle Event Batch
configured in place.
Prerequisites
A business rule applicable to the process is created. For more information about creating a business rule, refer to
the Related Information section.
Context
The primary requirement to schedule a rule processing job in Provisioning is to have an Off Cycle Event Batch
object. This helps the system identify the batch and execute it according to schedule. The job picks up all the
records that match the filter criteria defined in the Off Cycle object.
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product Support.
Procedure
Field | Action
Code | Enter a label to identify your Off Cycle Event Batch job.
Base Object | Select a target base object or HRIS element from the dropdown.
Note
If you create an Off Cycle job for the base object Job Information, the job considers only the current active records.
Note
For executing rules on Job Information or Employment Information, you are required to enable Administrator Permissions Employee Central Import Settings Enable Business Rules for selected entities.
Off Cycle Event Batch User Group (Applicable for Job Information and Employment Details base objects.) | Select a user group from the dropdown to specify a target population.
Remember
Select a user group only if you want to run the off cycle batch for a particular group of employees. To include all employees, skip this field.
Include All Matched Records in Every Run (Applicable for Job Information, Employment Details, and Work Order base objects.) | Select Yes if you want to update user records with recurring event information. To update non-recurring event information, select No.
Example
• Events like changes to an employee's pay scale are recurring and happen periodically. Select Yes in such cases.
• Events like employee eligibility for company assets like a car are non-recurring and happen only once. Select No in such cases.
Include Inactive Records (Applicable for Job Information and Employment Details base objects.) | Select Yes if you want to process inactive user records.
Note
This schedule will take precedence over the schedule defined in Provisioning.
Day of Execution (Appears when Frequency is selected as Weekly or Monthly) | Select the preferred day of the week or month, as applicable, for executing the job.
4. OPTIONAL: Set filter criteria to further streamline the list of user records to process.
If you create multiple filters, the system filters records matching each filter criterion and adds them to the final
list of matching records.
Field | Description
Filter Field | Select the field from the dropdown. Example: Start Date.
Note
Applicable for Job Information and Employment Details base objects.
Offset | Enter a number appropriate for the offset unit. This means how much time after the begin date the batch should run.
Offset Unit | Depending on the selected base object, the offset may differ or may not be supported at all.
For the Filter Field, you can filter on all the configured date fields, both standard and custom, in the Business
Configuration for the selected base object in the Off Cycle Event Batch.
The filter lists all the active date fields for jobInfo (Job Information base object) and employmentInfo
(Employment Details base object). The HRIS field type should be Date and the field should be enabled (Yes) in the
Business Configuration. Additionally, the filter also lists fields that are preconfigured for jobInfo (Job Information
base object) and employmentInfo (Employment Details base object) and that don't need to be configured in
Business Configuration.
jobInfo (Job Information): startDate, positionEntryDate, jobEntryDate, companyEntryDate, locationEntryDate, departmentEntryDate, payScaleLevelEntryDate, hireDate, terminationDate, leaveOfAbsenceStartDate, leaveOfAbsenceReturnDate
employmentInfo (Employment Details): startDate, serviceDate, seniorityDate, benefitsEligibilityStartDate
Note
If the date field isn’t part of the Business Configuration and the additional filters list, you can’t save
the Off Cycle Object. We’ve included a validation message to notify you to either update the Business
Configuration or delete the filter fields in Off Cycle Event Batch.
Example
The Off Cycle Event Batch also filters records by Last successful run date, which is configured in the job details
and is recommended for jobs meant for daily execution. The Off Cycle Event Batch only picks up records where
[last successful run date minus offset] < date field. The last successful run date can be the last successful job
run date or a specified date.
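A minimal sketch of that filter, assuming the offset unit is days and using illustrative names; only records whose date field lies after (last successful run date minus offset) are picked up.
Sample Code
# Minimal sketch of the daily-run filter described above: a record qualifies
# when its filter date field is later than (last successful run date - offset).
# Dates and the offset unit (days here) are illustrative.
from datetime import date, timedelta

def records_to_process(records, date_field, last_successful_run, offset_days):
    cutoff = last_successful_run - timedelta(days=offset_days)
    # Only records whose date field lies after the cutoff are picked up.
    return [r for r in records if r.get(date_field) is not None and cutoff < r[date_field]]

# Usage:
# records_to_process([{"startDate": date(2024, 6, 1)}], "startDate",
#                    last_successful_run=date(2024, 6, 2), offset_days=3)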
To check the progress of the job, go to Admin Center Monitor Jobs page.
Next Steps
Remember
As a customer, you don't have access to Provisioning. To complete tasks in Provisioning, contact your
implementation partner or Account Executive. For any non-implementation tasks, contact Product Support.
Related Information
Identify the employee records to be processed by the Off Cycle Event Batch by creating a business rule.
Prerequisites
If you have existing Basic business rules in place for Off Cycle Event Batch, use the Check Tool checks
under Application Employee Central Core to identify, and in some cases migrate, your existing Basic rules to
Off Cycle rule scenario-specific rules.
With business rules, you can implement comprehensive sets of instructions to identify and process exactly the
employee records you want.
Procedure
Note
• New rules created for the Off Cycle Event Batch Processing Job should use the new Trigger Rule for Off Cycle
Event Batch rule scenario.
• New rules created for the Work Order Off Cycle job should use the new Trigger Notification for Work Order
Expiration rule scenario.
3. Under Employee Central Core, select Trigger Rules for Off Cycle Event Batch.
The Trigger Rule for Off Cycle Event Batch rule scenario is preconfigured.
4. Enter the Rule Name, Rule ID, and select a Start Date.
The Rule ID automatically picks up the value entered in the Rule Name field. However, it can be changed.
5. Select a Base Object.
A Base Object corresponds to the data objects available in the system, and provides you with inputs for
defining the rule. You must select the same Base Object while creating an Off Cycle Event Batch event
definition.
6. Click Continue.
7. Set up the business rule according to your requirement.
Note
• The SET action is supported only for HRIS employee entities (not MDF entities), and a rule modifies only a
single type of employee entity; updating multiple types of entities in a single rule is not supported. The
effective date of the modified record is set to the date on which the Off Cycle Event Batch processed the
record. Providing the effective date as user input is not supported (see the sketch after these notes).
• Don't add 'CREATE' conditions to your rule, because the Off Cycle Event Batch doesn't support rules with
'CREATE' conditions.
Note
The rules created with the Basic rule scenarios will continue to work. However, we recommend that you move
your rules to the Off Cycle rule scenario, which has more guardrails and helps avoid manual configuration errors.
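As a rough mental model of the constraints described in the notes above, the following sketch (Python, purely illustrative and not the actual rule engine) applies a trigger rule to one Job Information record: the If condition is checked, a SET-style change is made to a single entity type only, the effective date becomes the processing date, and there is no CREATE path. All field and function names are assumptions.
Sample Code
from datetime import date

def apply_trigger_rule(job_info, processing_date):
    # Illustrative If condition: the employee has been on the current
    # pay scale level for at least a year.
    entry = job_info.get("payScaleLevelEntryDate")
    if entry is None or (processing_date - entry).days < 365:
        return None  # no match; there is no CREATE path
    # SET-style change on a single entity type (Job Information only).
    updated = dict(job_info)
    updated["payScaleLevel"] = job_info["nextPayScaleLevel"]
    # The effective date is the date the batch processed the record,
    # not a user-supplied value.
    updated["effectiveStartDate"] = processing_date
    return updated

record = {"payScaleLevelEntryDate": date(2023, 1, 1),
          "payScaleLevel": "PS-02",
          "nextPayScaleLevel": "PS-03"}
print(apply_trigger_rule(record, date(2024, 6, 1)))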
You’ve successfully created a business rule to work with your Off Cycle Event Batch.
Next Steps
Related Information
You can create user groups for an off cycle event batch job. This allows you to run the rule in the off cycle event
batch only for the users in that off cycle event batch user group.
Prerequisites
You have the Administrator Permissions Manage Workflow Manage Workflow Groups permission.
Procedure
Related Information
After creating a business rule and an off cycle event batch object, setting up a scheduled job is the final step in the
process.
Prerequisites
Context
By setting up a job schedule, you can configure your system to process the off cycle event batch object on
a periodic basis. The Off Cycle Event Batch Processing Job is the primary handler for all the jobs scheduled in your
application. It picks up all active off cycle event batch records and executes them. However, you can't selectively
choose which off cycle event batches are picked up by the job.
You can use the Scheduled Job Manager in Admin Center to create, manage, and monitor jobs of the Off Cycle
Event Batch Processing Job type.
Note
The BizX Daily Rule Processing Batch job has been renamed to Off Cycle Event Batch Processing Job.
Related Information
Prerequisites
You have the Administrator Configuration Transport Center Access to Transport Configurations permission.
Context
Bundles are artifacts in the Configuration Transport Center that contain configurations of your system. You can
use bundles to transport the configuration of a source system to a paired target system so that you don't need to
manually configure it.
Procedure
1. Go to the Manage Data page and select Off Cycle Event Batch in the Search dropdown.
2. Choose Take Action Add to Transport Bundle to add the current object instance to the bundle. A list of
available transport bundles is displayed.
• Cannot exclude references: The entire Off Cycle Event Batch object will be transported. You cannot
exclude references (like filters) during transport.
• Only allowed in Full Purge mode: If you are transporting an Off Cycle Event Batch object named
oceb_obj from one instance to another, and the destination already has an Off Cycle object with the
same name oceb_obj, then the existing object in the destination instance is overwritten with the one
from the source during the transport process, ensuring that the destination instance has the source
version of the Off Cycle Event Batch object (see the sketch after this procedure).
3. Select the bundle you want to add the configuration to and choose Save.
Your configuration is successfully added to the transport bundle. A success message is displayed.
4. Choose Close.
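The following sketch (Python, with hypothetical names and structures) only illustrates the Full Purge behavior described above: the object in the destination instance is replaced wholesale by the source version with the same name.
Sample Code
def transport_full_purge(source_instance, target_instance, object_name):
    # The destination's object is overwritten with the source's object
    # of the same name; nothing from the old destination object is kept.
    updated = dict(target_instance)
    updated[object_name] = source_instance[object_name]
    return updated

source = {"oceb_obj": {"filters": ["startDate"], "frequency": "Daily"}}
target = {"oceb_obj": {"filters": [], "frequency": "Weekly"}}
print(transport_full_purge(source, target, "oceb_obj"))
# {'oceb_obj': {'filters': ['startDate'], 'frequency': 'Daily'}}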
Results
Related Information
Here are some important actions and settings that you can perform based on your requirement.
Related Information
Improve the performance and reduce the job execution time by modifying your off cycle event batch object
settings.
If your off cycle event batch takes a long time to complete, the cause is most likely your existing configuration.
Here are a few checks and configuration changes that you can make to improve performance.
• User Record Filtering: Verify that your off cycle event batch is filtering user records properly. If user records
aren't filtered appropriately, it's possible that the batch is executing business rules against all the records in
the system.
• Dynamic User Group: If your requirement is to process a specific group of user records, you can create a
dynamic user group. A dynamic user group ensures that only the required user records are processed and
invalid user records are filtered out accordingly.
• Include all matched records in every run: Use this option very carefully. Enable this option while configuring
your off cycle event batch object only if you want to process all the records in every run. If not, disable this
option so that only newly matched records are processed during each run.
• Add inactive records: Set this option to No while configuring your off cycle event batch object, unless you want
to process inactive user records for a specific use case.
• Business Rules: Business rules can be a major contributing factor when a job is reporting a longer execution
time. Please consider the volume of rules and the complexity of the rule configuration while configuring your off
cycle event batch.
Tip
Set up your business rule so that the If condition has the most probable matching criteria first, followed by the
least probable matching criteria. This reduces the rule iteration time for each user record (see the sketch after
this list).
• Scheduling Off Cycle Event Batch Processing Job: Find a suitable time slot to schedule your job when the
server load is low. For optimum performance, avoid scheduling all the jobs in the instance at the same time,
unless it is specifically required.
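The following sketch (Python, with hypothetical condition names) illustrates the tip about ordering the If conditions: with short-circuit evaluation, the condition evaluated first settles the outcome for most records, so the later, more expensive checks are rarely run.
Sample Code
def is_in_target_company(rec):        # cheap check that settles most records
    return rec.get("company") == "ACE_CORP"

def has_expiring_work_order(rec):     # assumed to be a more expensive check
    return rec.get("workOrderEndDate") is not None

def rule_matches(rec):
    # `and` short-circuits: when the first check is False,
    # the second check is never evaluated for that record.
    return is_in_target_company(rec) and has_expiring_work_order(rec)

records = [{"company": "OTHER"}] * 3 + [
    {"company": "ACE_CORP", "workOrderEndDate": "2024-12-31"}]
print(sum(rule_matches(r) for r in records))  # 1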
Check the execution details and other statistics of each job associated with your primary batch.
Context
After you have created and initialized your primary batch, you can monitor each job to track its status and
download the job execution report if required.
Procedure
The Scheduled Job Manager page appears, which lists all the jobs (of type Off Cycle Event Batch Processing
Job) that you have submitted. For each product in your SAP SuccessFactors HCM suite, there will be a job
available.
Example
This job picks up all active Off Cycle Event Batch records and executes them. You cannot select which off cycle
event batches should be picked up by the job; they are all picked up collectively and executed in the order of
creation. The system triggers the rule for each Off Cycle Event Batch record and logs the last successful run
for each rule triggered.
2. Identify the job that you want to monitor.
3. Click Download Status corresponding to your job, if you want to download a copy of the job execution report.
A popup window appears prompting you to save the status file in a CSV format.
4. Save the file on your computer.
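If you want to inspect the downloaded report programmatically, a minimal sketch using Python's standard csv module is shown below. The file name is whatever you chose when saving, and the column names vary by release, so both are treated as unknowns here.
Sample Code
import csv

# Hypothetical file name; use the path under which you saved the report.
with open("off_cycle_event_batch_status.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Column names depend on your release, so print each row as-is.
        print(row)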
Related Information
Here are some possible use cases for Off Cycle Event Batch processing.
The following examples illustrate how you can configure your Off Cycle Event Batch to address some of the most
common business scenarios.
Example
Anniversary
Example
Manager Change
Configuring a rule to execute changes related to Job Information of employees. In this case, we are considering a
change of an employee's manager.
Example
New Hire
Configuring a rule to execute changes in the Job Information for new hires.
Example
Wage Progression
Configuring a rule to execute changes related to the automatic transition of employees to the next higher pay level.
Example
Restriction
As of Q3 2019, updating termination details of employees using the Off Cycle Event Batch is not supported.
1. Creating a business rule. The business rule has to be created using the Job Information element, and in the
If condition you can check for the termination event reason and set the fields accordingly. For more information
about creating business rules, refer to the Implementing Business Rules in SAP SuccessFactors guide on the
SAP Help Portal.
2. Associating the rule with the Job Information HRIS element with an onSave event. For more information about
configuring HRIS elements, refer to the Setting Up and Using Business Configuration UI (BCUI) guide on the
SAP Help Portal.
As a result, when the Off Cycle batch is in progress and the Job Information entity is accessed, the business rule
is executed.
Learn about changes to the documentation for Mass Changes in Employee Central in recent releases.
1H 2024
• Changed: Dependents Import is universally supported by Centralized Services. See Centralized Services for
Employee Data Imports [page 39].
• Changed: We have changed the text "Consolidated Dependents (to be deprecated)" to "Consolidated
Dependents" in the Unsupported Entities list in the related topic. See Centralized Services for Employee Data
Imports [page 39].
• Changed: Created a new topic about adding an Off Cycle Event Batch Object to a Transport Bundle. See Adding
Off Cycle Event Batch Object to a Transport Bundle [page 194].
2H 2023
• New: Dependents import now supports business rules. See Configuring Business Rules for Data Imports
[page 162].
• Changed: We moved the Change History to the end of the guide. See Overview of Mass Changes in Employee
Central [page 4].
• Changed: Updated the topic about the Off Cycle Event Batch Processing Job. See Creating a Business Rule for
an Off Cycle Event Batch [page 190].
• New: Created a new topic about creating a user or group of users for an Off Cycle Event Batch Job. See
Creating a User or Group of Users for an Off Cycle Event Batch Job [page 192].
• Changed: Updated the topic about the new job name. The BizX Daily Rule Processing Batch job has been
renamed to Off Cycle Event Batch Processing Job. See Setting Up an Off Cycle Event Batch Job [page 193].
• Changed: Updated the note about the new rule scenario Trigger Rule for Off Cycle Event Batch. See Creating a
Business Rule for an Off Cycle Event Batch [page 190].
Hyperlinks
Some links are classified by an icon and/or a mouseover text. These links provide additional information.
About the icons:
• Links with the icon : You are entering a Web site that is not hosted by SAP. By using such links, you agree (unless expressly stated otherwise in your agreements
with SAP) to this:
• The content of the linked-to site is not SAP documentation. You may not infer any product claims against SAP based on this information.
• SAP does not agree or disagree with the content on the linked-to site, nor does SAP warrant the availability and correctness. SAP shall not be liable for any
damages caused by the use of such content unless damages have been caused by SAP's gross negligence or willful misconduct.
• Links with the icon : You are leaving the documentation for that particular SAP product or service and are entering an SAP-hosted Web site. By using such links,
you agree that (unless expressly stated otherwise in your agreements with SAP) you may not infer any product claims against SAP based on this information.
Example Code
Any software coding and/or code snippets are examples. They are not for productive use. The example code is only intended to better explain and visualize the syntax and
phrasing rules. SAP does not warrant the correctness and completeness of the example code. SAP shall not be liable for errors or damages caused by the use of example
code unless damages have been caused by SAP's gross negligence or willful misconduct.
Bias-Free Language
SAP supports a culture of diversity and inclusion. Whenever possible, we use unbiased language in our documentation to refer to people of all cultures, ethnicities, genders,
and abilities.
SAP and other SAP products and services mentioned herein as well as
their respective logos are trademarks or registered trademarks of SAP
SE (or an SAP affiliate company) in Germany and other countries. All
other product and service names mentioned are the trademarks of their
respective companies.