On How To Improve The Initial Load by Row Id Approach
NUMREC: the number of records to be processed per access plan work process. Let's say we have 108,000,000 records and want to use 5 access plan (ACC*) jobs on SLT, which correspond to 5 access plan (MWB*) jobs on ECC. Entering 20,000,000 in NUMREC results in 6 jobs: 5 of them handling 20M records each, and the last one dealing with the remaining 8M records. You could therefore specify 22,000,000 instead, in order to get a more even distribution across only 5 jobs.
Cluster tables: NUMREC refers to the record count of the physical cluster table (RFBLG, for example), not the logical cluster table (BSEG, for example).
NUMREC should be chosen so that, if the delimitation must be done via the primary key (a database other than Oracle, or a cluster or pool table), no more than around 8 subsets (access plans) are created. Keep in mind that in this case each job performs a full table scan of the complete table, which is not very efficient when done by a large number of jobs. With Oracle ROWID parallelization, on the other hand, you can allow more parallel jobs, because each job only processes its specific ROWID range rather than doing a real full table scan.
This table also includes 3 KEYFIELD columns; these can be ignored.
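The NUMREC arithmetic above can be sketched in a few lines (Python used purely for illustration; SLT does this split internally):

```python
def access_plan_sizes(total_records: int, numrec: int) -> list[int]:
    """Split total_records into access-plan subsets of at most numrec
    records each; the last subset takes whatever remainder is left."""
    sizes = [numrec] * (total_records // numrec)
    remainder = total_records % numrec
    if remainder:
        sizes.append(remainder)
    return sizes

# 108M records with NUMREC = 20M -> 6 jobs, the last one handling only 8M
print(access_plan_sizes(108_000_000, 20_000_000))
# NUMREC = 22M -> 5 jobs with a more even distribution
print(access_plan_sizes(108_000_000, 22_000_000))
```

This makes the trade-off visible: a NUMREC that divides the table unevenly leaves one short-running straggler job, while a slightly larger value keeps all jobs roughly the same size.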
SLT: Add an entry to table IUUC_PERF_OPTION
- parall jobs: number of load jobs. This is the maximum number of jobs that may run in parallel for this table, both in the access plan calculation and in the load.
- Sequence number: controls the priority of the jobs (lower number = higher priority). Usually 20 is a good value.
- Reading Type:
  - Cluster table: Type 4 -> INDX CLUSTER (IMPORT FROM DB)
  - Transparent table: Type 5 -> INDX CLUSTER with FULL TABLE SCAN
NOTE: The labels change between DMIS releases/patches.
HANA: Select table for REPLICATION in Data Provisioning
ECC: Review Transaction SM37
Job DMC_DL_<TABNAME> should be running (Oracle transparent tables only). The job name for all other tables/databases starts with /1CADMC/CALC_, followed by the respective table name. Once this job has finished, a job MWBACPLCALC_Z_<table name>_<mt id> should be running.
Starting from DMIS 2011 SP5, the job names start with /1LT/IUC_CALCSACP and also include the mass transfer ID and table name.
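As a quick reference, the job-name conventions just described can be collected in a small helper. This is a sketch only: the exact separators between prefix, mass transfer ID and table name in the SP5+ names are assumptions, not verified against a DMIS system.

```python
def ecc_calc_job_patterns(table: str, mt_id: str,
                          dmis_2011_sp5_or_later: bool,
                          oracle_transparent: bool) -> list[str]:
    """Return the SM37 job names to look for on ECC during access plan
    calculation, per the conventions described in the text above."""
    if dmis_2011_sp5_or_later:
        # From SP5 on, names start with /1LT/IUC_CALCSACP plus MT ID and
        # table name (separator assumed to be '_' here).
        return [f"/1LT/IUC_CALCSACP_{mt_id}_{table}"]
    patterns = []
    if oracle_transparent:
        patterns.append(f"DMC_DL_{table}")        # precalculation job
    else:
        patterns.append(f"/1CADMC/CALC_{table}")  # all other tables/databases
    # Runs once the first job has finished:
    patterns.append(f"MWBACPLCALC_Z_{table}_{mt_id}")
    return patterns

print(ecc_calc_job_patterns("ADRC", "001",
                            dmis_2011_sp5_or_later=False,
                            oracle_transparent=True))
```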
SLT: Review Transaction SM37
Job ACC_PLAN_CALC_001_01 should be running (starting from DMIS 2011 SP5, the job names are /1LT/MWB_CALCCACP_<mass transfer ID>).
Ideally, more than one ACC_PLAN_CALC job should run as soon as the first job preparing the precalculation (DMC_DL_<TABNAME>) has finished. However, only the number of access plan jobs specified in the configuration is started automatically; starting more, up to the intended parallelization value, has to be done manually.
Assuming that one ACC job is already running, the screen below shows how you would run a total of 6 jobs concurrently. In older Support Packages, we normally provide only one job for the access plan calculation step. Starting with DMIS 2011 SP4 or DMIS 2010 SP9, more flexible control of the calculation jobs is available: you can use transaction MWBMON -> Steps -> Calculate Access Plan to schedule more jobs. In the ADRC example, with five parallel jobs, the screen would look as below. However, you need to make sure that the value in field TABLE_MAX_PARALL in table DMC_MT_TABLES is set accordingly to allow this degree of parallelization, by providing this value in field parall_jobs when maintaining IUUC_PERF_OPTION, as shown above. You can check which access plans are being processed in table DMC_ACS_PLAN_HDR (the OWNER guid field of the corresponding records points to field GUID of DMC_PREC_HDR, whose OWNER guid in turn points to DMC_MT_TABLES-COBJ_GUID).
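The GUID chain just described can be illustrated with a toy lookup. The rows below are hypothetical in-memory stand-ins, not an SAP API; only the pointer relationships (DMC_ACS_PLAN_HDR.OWNER -> DMC_PREC_HDR.GUID -> DMC_PREC_HDR.OWNER -> DMC_MT_TABLES.COBJ_GUID) come from the text above.

```python
# Hypothetical rows keyed by GUID, mimicking the three tables:
dmc_mt_tables = {"COBJ-1": {"tabname": "ADRC", "table_max_parall": 5}}
dmc_prec_hdr = {"PREC-1": {"owner": "COBJ-1"}}        # OWNER -> COBJ_GUID
dmc_acs_plan_hdr = [
    {"guid": "ACP-1", "owner": "PREC-1"},             # OWNER -> DMC_PREC_HDR.GUID
    {"guid": "ACP-2", "owner": "PREC-1"},
]

def plans_for_table(cobj_guid: str) -> list[str]:
    """Walk the chain backwards: find all access plan headers whose
    precalculation header belongs to the given conversion object."""
    prec_guids = {guid for guid, row in dmc_prec_hdr.items()
                  if row["owner"] == cobj_guid}
    return [plan["guid"] for plan in dmc_acs_plan_hdr
            if plan["owner"] in prec_guids]

print(plans_for_table("COBJ-1"))  # -> ['ACP-1', 'ACP-2']
```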
SE38 -> DMC_DELETE_CLUSTER_POINTER
The conversion object is the same as listed in DMC_COBJ->IDENT on the SLT server. The access plan ID is listed in the select statement above; run this report for 00001-00005.
Select 'delete cluster data'.
This is a technical expert guide provided by development. Please feel free to share your feedback.
Best,
Tobias
Kalyan Dittakavi
Nov 8, 2015 8:43 PM
Hi Tobias
Post TechEd 2015, we upgraded our SLT systems to SP9 (DMIS 2011_1_731 SP09).
One of the main reasons for doing this was to get the filter option in the initial load. But surprisingly, what I notice is that the fields available for filters are only the key fields of the table.
In my scenario, I am trying to use the MARD table, where we have 600+ million records, and I want to replicate only from 2014 onwards using LFGJA (fiscal year of current period). But as mentioned above, I am not seeing LFGJA in the filter screen.
Is there any special configuration I need to do to replicate the data based on non-key fields of a table?
Mohamed Youssef
Jul 23, 2015 4:47 PM
On SP8 I am getting "precalc is not supported for cluster tables".
Charlie Gil
May 6, 2015 3:37 PM
Hello Tobias
I have a question about improving the initial load.
In the post below, the initial load can be done with the 'Performance optimized' option with additional batch jobs:
How to enable parallel replication before DMIS 2011 SP6
Does it use parallel processing with the 'Performance optimized' option without any other manual steps? If not, what other manual steps are involved? The same as the old method?
Our DMIS version is DMIS 2011_1_731 SP8, which is the latest.
The following post shows how to configure parallel processing for the initial load, and it seems to still be valid. Is it also valid on DMIS 2011 SP8?
http://scn.sap.com/thread/3689563
Thanks
Mahesh Shetty
Feb 5, 2015 4:58 PM
Tobias
I have defined parallel processing for the tables and started the replication. The table is very big and is taking time to load. We still have some WPs available and would like to increase the parallel jobs.
1. How do we increase the jobs while the table is in the process of the initial load?
2. Do we need to pause the load, make the changes, and then resume the load?
Mahesh Shetty
Sachin Bhatt
Dec 6, 2014 6:32 PM
Tobias,
I am going to attempt this next week for one of the big transparent tables (800 GB) in the source (DB2).