BW PPT
Infosys Limited
Bangalore
Declaration
We hereby declare that this document is based on our personal experiences. To the
best of our knowledge, this document does not contain any material that infringes the
copyrights of any other individual or organization including the customers of Infosys.
Authors:
Haritha Molaka, Lakshmi Muni Nallu
Project Details:
- Project involved : PNABIHYP
- S/W Environment : SAP BI (7.0)
- Project Type : Development
2
Summary
• This document describes BW Data Archiving and its methods. It shows the
implementation of the ADK method of archiving for a standard DSO in detail, as
well as the archiving of a write-optimized DSO. It also explains the various
issues faced while archiving the data and the relevant SAP Notes implemented.
• References:
http://help.sap.com
http://scn.sap.com/welcome
3
Table of contents
4
BW Data Archiving
• Below are the challenges that created the need for data archiving.
5
Benefits of Data Archiving
• Reduced main memory and CPU consumption, which lowers the cost of
administration.
6
Methods of Data archiving
Three methods are available for removing data from the SAP BI database.
7
Methods of Data archiving
ADK method :
• Using the ArchiveLink interface, these ADK files can be passed to any
storage medium or content repository.
• ADK files are not readable by BW queries. For query access, they have to
be reloaded.
8
Methods of Data archiving
NLS Method:
• BW data from DSOs and InfoCubes can be transferred to an external nearline
system via the SAP nearline storage interface.
• Queries can access the archived data directly, without reloading the data.
ADK + NLS Method:
• This is usually used while migrating from the ADK method to the NLS method.
• First, the ADK archive is created during the write phase of the data archiving
process. Then the archive data is transferred to the external nearline system
in the subsequent verification phase.
9
Methods of Data archiving
• Archiving methods in SAP BI 7.0: ADK, NLS, and ADK + NLS. In SAP BW 3.x, only ADK-based archiving is possible.
[Figure: the three archiving methods — (1) ADK, (2) NLS, (3) ADK + NLS — with the BI data archiving process (DAP) moving data from the SAP BI database via the NLS interface to a third-party nearline system.]
10
Methods of Data archiving
11
PBS, a Third Party Solution Provider
• PBS provides an add-on named CBW for the data archiving solution.
• PBS offers two nearline storage interface archiving solutions for
SAP NetWeaver Business Warehouse 7.x.
12
PBS, a Third Party Solution Provider
• To provide access to the archived data, the solution operates with special
archive indexes and archive aggregates that are stored in the archive system
along with the archived data.
• Queries can access the archived data through a virtual provider and a
multiprovider on top of it, known as nearline providers.
13
PBS, a Third Party Solution Provider
• The only prerequisite for using PBS CBW NLS IQ is that SAP's purely
nearline-based data archiving process is successfully in place. The archive
data is then generated directly in Sybase IQ.
14
PBS, a Third Party Solution Provider
[Figure: PBS CBW architecture — the PBS NLS interface connects the SAP BW database (DB + archive) and the SAP archive to Sybase IQ, which stores the PBS indexes and PBS aggregates.]
15
Archiving procedure for ADK method
• Storage of archive files: the newly created archive files can then be moved
to a storage system.
• Deletion from the database: in the delete phase, the delete program reads the
data from the archive files and then deletes it from the database.
16
Archiving procedure for ADK method
Here are the initial steps to be completed before creating a data archiving process:
• A logical path and a logical file have to be created to store the generated
ADK files in the file system.
17
Initial settings to be maintained in T code SARA
• Click Customizing
18
Initial settings to be maintained in T code SARA
• Technical Settings under “Cross Archiving Object Customizing”
19
Check & Delete Settings
20
Initial settings to be maintained in T code SARA
21
Logical Path and Logical file creation in T code ‘FILE’
22
Logical Path and Logical file creation in T code ‘FILE’
23
Logical Path and Logical file creation in T code ‘FILE’
• The physical file format is
<PARAM_1>_<PARAM_3>_<DATE>_<TIME>_<PARAM_2>.ARCHIVE
where:
• PARAM_1: two-digit application abbreviation for classifying archive data in the system.
This value is taken from the definition of the associated archiving object.
• PARAM_2: single-digit alphanumeric numbering (0-9, A-Z). The value is assigned by the
ADK when creating a new archive file, and numbers are assigned consecutively in
ascending order.
• PARAM_3: multi-digit alphanumeric character string. The name of the archiving
object is entered as the value at runtime. In archive management, this indicates the
nature of the data content and also enables storage of archive files by
archiving object.
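As an illustration, the naming pattern above can be sketched in Python. The concrete DATE/TIME formats and the sample parameter values here are assumptions made for the sketch, not taken from SAP documentation:

```python
import string
from datetime import datetime

# Consecutive single-digit numbering for PARAM_2: 0-9, then A-Z, as described above.
PARAM_2_SEQUENCE = string.digits + string.ascii_uppercase

def adk_file_name(param_1: str, param_3: str, file_index: int,
                  timestamp: datetime) -> str:
    """Build a name following <PARAM_1>_<PARAM_3>_<DATE>_<TIME>_<PARAM_2>.ARCHIVE.

    param_1: two-digit application abbreviation (from the archiving object).
    param_3: name of the archiving object, filled in at runtime.
    file_index: 0-based position of the file in the session, mapped to 0-9, A-Z.
    The YYYYMMDD/HHMMSS date and time formats are assumptions for this sketch.
    """
    param_2 = PARAM_2_SEQUENCE[file_index]
    date_part = timestamp.strftime("%Y%m%d")
    time_part = timestamp.strftime("%H%M%S")
    return f"{param_1}_{param_3}_{date_part}_{time_part}_{param_2}.ARCHIVE"

# Hypothetical values: "BW" as application abbreviation, "BWREQARCH" as object name.
name = adk_file_name("BW", "BWREQARCH", 10, datetime(2014, 1, 15, 10, 30, 0))
```

Note how an index of 10 rolls over from the digits into the letters, matching the consecutive 0-9, A-Z numbering of PARAM_2.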
24
Content Repository creation in T code OAC0
25
Content Repository creation in T code OAC0
26
Content Repository creation in T code OAC0
27
Archiving procedure for ADK method
28
Creating an archiving object
29
Creating an archiving object
• The ‘ADK-Based Archiving’ option is checked by default, and the archiving
object name for the DSO is generated by the system.
• For a standard DSO, ‘Time Slice Archiving’ is selected by default. For a
write-optimized DSO, ‘Request-Based Archiving’ is selected by default.
30
Creating an archiving object
• In the Selection Profile tab, the
Primary Partitioning Characteristic and
Additional Partitioning Characteristics
have to be selected.
• The drop-down for ‘Characteristic for
Time Slices’ lists all date fields
present in the DSO. These date fields
may be key or non-key fields of the DSO.
• If the selected time characteristic is
not among the key fields of the DSO,
there are two options:
- use it as the primary partitioning
characteristic anyway, or
- use a time-correlated key characteristic
for partitioning instead.
• Under ‘Additional Partitioning
Characteristics’, only the key fields
present in the DSO are listed on the right
side. You can drag them to the left.
31
Creating an archiving object
• In the ‘Semantic Group’ tab, all
fields of the DSO appear on the
right side. You can drag the
required fields to the left side.
• You can form semantic groups to
archive the data in sorted order.
• The system reads from the
database after sorting by the
grouping characteristics (the
order is important). Records with
the same values in the grouping
characteristics are written to the
archive as one data object.
• If no characteristic is selected,
storage in the archive is unsorted
and technical criteria (a fixed
size) are used to split the data
into data objects.
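The grouping behaviour described above can be sketched in Python. The field names and records are hypothetical, chosen only to show how sorted records with equal grouping values would form one data object:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical DSO records; COMP_CODE and FISCPER stand in for the
# chosen grouping characteristics (their order matters).
records = [
    {"COMP_CODE": "2000", "FISCPER": "2011001", "AMOUNT": 5},
    {"COMP_CODE": "1000", "FISCPER": "2011001", "AMOUNT": 10},
    {"COMP_CODE": "1000", "FISCPER": "2011001", "AMOUNT": 20},
]

grouping = ("COMP_CODE", "FISCPER")
key = itemgetter(*grouping)

# Sort by the grouping characteristics, then treat each run of records
# with identical grouping values as one data object written to the archive.
data_objects = [list(group)
                for _, group in groupby(sorted(records, key=key), key=key)]
```

Here the two COMP_CODE 1000 records end up in one data object and the COMP_CODE 2000 record in another, mirroring the sorted, grouped write described on the slide.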
32
Creating an archiving object
33
Creating an archiving object
34
Creating an archiving object
• Click Customizing
35
Creating an archiving object
36
Scheduling the archiving session
• After creating and maintaining an archiving object for a DSO, the next step is to
archive the data.
• Archive Administration (transaction SARA) is the central starting point for most user
activities in data archiving, such as scheduling write and delete jobs and
storing and retrieving archive files.
37
Scheduling the archiving session
• Go to SARA
• Click write
38
Scheduling the archiving session
39
Scheduling the archiving session
40
Scheduling the archiving session
• Save the variant and execute the write job, then
open the job overview.
• The write job archives the data from the active table
(for a DSO) or from the fact table (for an InfoCube)
based on the given restrictions and generates the
archive files.
• Once the write job is done, you can see in AL11 the
archive files generated in the archive
directory, e.g. /archive/<SYSID>/pbsarchive
41
Monitoring and statistics of the archiving session
• In the job log, you can see the restrictions and the number of records archived.
42
Monitoring and statistics of the archiving session
43
Monitoring and statistics of the archiving session
44
Monitoring and statistics of the archiving session
• Go to SARA
• Click Delete
45
Monitoring and statistics of the archiving session
• The delete job first verifies the stored files in the storage system and then deletes
the data from the DSO.
• If ‘n’ archive files are generated by the write job, ‘n’ parallel delete
processes are triggered.
• Of the n parallel delete processes running in the background, n-1 complete the
delete phase with 0 deleted data records. The process that verifies its data
package last starts the collective delete phase for all data records, in test
mode or in production mode.
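The coordination described above — n verifiers, of which only the last one to finish runs the collective delete — can be modelled with a small Python sketch. This is a simplified illustration of the pattern, not SAP's actual implementation:

```python
import threading

class DeleteCoordinator:
    """Simplified model: each of n delete processes verifies its own
    archive file; the process that finishes verification last runs the
    collective delete phase for all records."""

    def __init__(self, n_files: int):
        self.remaining = n_files
        self.lock = threading.Lock()
        self.deleted_by = None  # which worker ran the collective delete

    def verify_and_maybe_delete(self, worker_id: int):
        # ... verification of this worker's archive file would happen here ...
        with self.lock:
            self.remaining -= 1
            if self.remaining == 0:
                # The last verifier starts the collective delete phase;
                # the other n-1 workers finish with 0 deleted records.
                self.deleted_by = worker_id

coord = DeleteCoordinator(n_files=4)
threads = [threading.Thread(target=coord.verify_and_maybe_delete, args=(i,))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Exactly one worker — whichever decrements the counter to zero — performs the collective delete; the others report zero deletions, matching the n-1 behaviour on the slide.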
46
Monitoring and statistics of the archiving session
• Job log and spool
information for the delete
job are shown here.
47
Monitoring and statistics of the archiving session
• In RSA1, in the Manage view of the DSO, you can see that an Archiving tab has been added.
48
Monitoring and statistics of the archiving session
49
Reloading archived data
• Go To SARA
50
Reloading archived data
51
Reloading archived data
52
Reporting Archived Data
53
Reporting archived data
• PBS provides a solution, PBS CBW NLS, which operates with special
archive indexes and archive aggregates and enables queries to
access the archived data along with the online data.
• Using the CBW Administration Cockpit, you can maintain and generate the
indexes and aggregates on the archived data.
• You can generate a virtual InfoProvider (VXXXX) that contains the
archived data.
• You can also generate a MultiProvider (MXXXX) that consists of both the virtual
InfoProvider and the actual InfoProvider, so the MultiProvider contains both
archived and online data.
54
Reporting archived data
• In this way, queries get access to both archived and online data.
• PBS also provides an SAP archive file browser through which you can browse the
archived data.
55
Archiving of Write optimized DSO
Archived Requests
56
Archiving of Write optimized DSO
57
Archiving of Write optimized DSO
• When the property is set for the DSO (i.e. uniqueness need not be checked)
and the request loaded date is used in the DAP, the selection profile of the
DAP looks like below.
58
Archiving of Write optimized DSO
59
Archiving of Write optimized DSO
• When the property is not set for the DSO (i.e. uniqueness needs to be checked),
the selection profile of the DAP looks like below.
• An absolute time characteristic or a time-correlated partitioning characteristic
from the semantic key must be chosen.
60
Archiving of Write optimized DSO
61
Archiving of Write optimized DSO
62
Using process chain to schedule archiving
• Below is the process chain created to schedule the archiving and delete
processes using the process type ‘Archive Data from an InfoProvider’.
63
Using process chain to schedule archiving
• Below is the archiving process variant maintained using the process type
‘Archive Data from an InfoProvider’.
64
Using process chain to schedule archiving
• Below is the delete process variant maintained using the same process type
‘Archive Data from an InfoProvider’.
65
Using administration view of the Infoprovider for scheduling
archiving
66
Using administration view of the Infoprovider for scheduling
archiving
• To start the delete
job, double click on
the request.
67
Deleting/Changing Archiving Object
68
Deleting/Changing Archiving Object
• So if you want to change the primary/additional partitioning characteristic in the DAP for
a DSO that is already transported and archived in the target system, the following has to
be done to migrate the new changes through transports:
• Make sure all the archiving sessions for the DSO are cleaned up in both the source
and the target systems.
- If there are completed archiving sessions (green requests), reload the archived
data into the DSO.
- If there are incomplete archiving sessions (red/yellow requests), invalidate the
requests so that there are no locked data areas.
• Delete the old DAP in the source system, collect it in a transport request, and migrate
it to the target system so that the old DAP is deleted in the target system as well.
69
SAP Notes Implemented
Below are the SAP Notes that we implemented as and when we faced issues
while archiving the data.
70
Issues encountered while archiving the data
Issue:
• If there are ‘n’ archive files generated by the write job, ‘n’ parallel delete processes are
triggered.
• In production mode, n-1 of the parallel delete processes terminate with message RSDA 213,
and only the process that successfully verifies its data package last starts the delete
phase.
Resolution:
• The SAP Note below was applied to resolve this issue.
• After the note, for n parallel delete processes in the background, n-1 delete processes
complete the delete phase with 0 deleted data records, and the process that verifies its
data package last starts the collective delete phase for all data records, in test mode
or in production mode.
71
Issues encountered while archiving the data
72
Issues encountered while archiving the data
Issue:
• While running the process chain for archiving, the archiving process completes fine but
the delete process fails with the error message below. When the failed process is
repeated, it runs fine.
73
Issues encountered while archiving the data
Issue:
• When requests of a write-optimized DataStore object (DSO) are archived, you can enter a
date as a selection; all requests with a loading date earlier than or equal to this
selection should be archived.
However, only requests with an earlier loading date are archived; the
requests loaded on the selection day itself are not archived.
Resolution:
• SAP Note 1484606 - P25: DSO: Archiving WO DSO and datamart update
74
Issues encountered while archiving the data
75
Observations while archiving the data
• It is not recommended to start a new archiving session while an incomplete
archiving session exists.
• As long as there are incomplete archiving sessions, there is a risk that data is
archived more than once, because not all archived data has been processed by the
relevant delete program.
• Here, request 2,990,501 is the incomplete archiving session for the selection
condition FISCVARNT = 'Z1' AND FISCPER = '2011012'.
• The data area is locked once the write job is completed.
76
Observations while archiving the data
• Once the data is archived for a particular selection, the data area is locked against any
further changes. You cannot archive the same data area again.
• The write job log below shows ‘selected area already completely archived’ when the same
data area is archived again (FISCPER = '2011012' in this case).
77
Observations while archiving the data
• You also have the option of ‘Request Invalidation’ for an incomplete archiving session,
if you have archived the wrong data or if you want to archive the same data area again.
• Once the request is invalidated, the request turns red and the particular data area is
unlocked so that the same data area can be archived again.
• Select the request and double-click on it. Select the ‘Set Invalid’ option and execute
the job in dialog or in the background.
78
Observations while archiving the data
• Once the job is done, request 2,990,501 turns red and the lock is removed for
the selection condition FISCVARNT = 'Z1' AND FISCPER = '2011012', so that this
particular data area can be archived again.
• The ‘Request Invalidation’ option is available only for incomplete archiving sessions
(yellow requests), not for completed sessions (green requests).
• To resolve this, the archiving request has to be invalidated so that data can be
loaded into the InfoProvider again.
79
Observations while archiving the data
80
Observations while archiving the data
• Depending on the size of the InfoProvider, the selection conditions, the number of
records, and the memory allocated to the user ID with which archiving is done, the jobs
may get cancelled after running for a long time, throwing memory dumps such as
'EXPORT_TOO_MUCH_DATA'.
• In such a case, you may have to reduce the selection criteria/number of records for
archiving and run it in chunks.
• For example, for a huge DSO (containing a large amount of data in each period), if
0FISCPER is used as the primary characteristic for archiving, the write job for even one
period may get cancelled with a memory dump.
• In such a scenario, we may have to change the primary partitioning characteristic from
0FISCPER to some other time characteristic (e.g. posting date), as the system is not
able to accommodate archiving one whole fiscal period at a time.
• Alternatively, a secondary index can be created on the DSO for the primary partitioning
characteristic so that the performance of the write and delete jobs improves.
• Using the other time characteristic (e.g. posting date), the data can then be archived in
chunks (a smaller number of records) by reducing the selection criteria (e.g. 7 to 15
days).
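Splitting the selection into smaller posting-date ranges, as suggested above, could look like the following sketch; the 10-day chunk size is an arbitrary illustration:

```python
from datetime import date, timedelta

def date_chunks(start: date, end: date, days: int):
    """Yield (from, to) posting-date ranges of at most `days` days,
    covering [start, end] without overlap — one archiving write job
    would then be scheduled per chunk."""
    current = start
    while current <= end:
        chunk_end = min(current + timedelta(days=days - 1), end)
        yield current, chunk_end
        current = chunk_end + timedelta(days=1)

# Archive December 2011 in 10-day chunks instead of one fiscal period.
chunks = list(date_chunks(date(2011, 12, 1), date(2011, 12, 31), 10))
```

Each chunk keeps the write job's data volume small enough to avoid the memory dump, at the cost of running more archiving sessions.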
81
Observations while archiving the data
• As mentioned earlier, write-optimized DSOs follow request-based archiving, as opposed
to the time-slice archiving of standard DSOs. This means that partial-request archiving
is not possible; only complete requests can be archived.
• Based on the selection conditions, only the complete requests that fall within the
selected data area are archived. A request that falls only partially within the selected
data area is not archived.
• You can archive only requests that have been retrieved by all data targets (via the data
mart interface). If a request has no data mart status, you cannot archive it.
82
Thank You