Development Guidelines BW on HANA V2


INTERNAL BI Team 1 / 107

Our date: 2016-07-06 (Rev.: Draft)
Your date: 30 June 2016

To: All BW (HANA) modellers/developers

BI modelling governance document (V2)


Background and basic principles

BW modelling guidelines

HANA modelling guidelines

Naming convention

HANA and ABAP development guidelines

Authorizations

Yara International ASA

Document owner: BI Architect


Yara International ASA
Yara Finance
Postal address: P.O. Box 343, Skøyen, N-0213 Oslo, Norway
Visiting address: Drammensveien 131, N-0277 Oslo, Norway
Telephone: +47 24 15 70 00
Telefax: +47 24 15 70 01
Registration no.: 986228608
www.yara.com

Contents

1 Background / history
1.1 Basic information
2 BW (on HANA) modelling standards
2.1 Basic guidelines
2.2 Handling historical data
2.3 Block concept
2.4 Local versus Global reporting
2.5 Business content
3 Transport Governance
3.1 General
3.1.1 Current BW (DW7/TW7/WPR)
3.1.2 New BW (DB1/QB1/PB1)
3.1.3 YSAP (D07/T07/PRO)
3.2 Checks before moving transports
3.2.1 Compounding (Program needs to be adjusted to also cover LYP_SOURS etc.?)
3.2.2 Texts
3.2.3 Texts of Nav Attr
3.2.4 Obsolete and irrelevant attributes
3.2.5 0UPD_DATE
3.2.6 Avoid duplicated header and item data
4 Masterdata
4.1 General
4.2 Specific objects
5 Authorization (Whole chapter needs review – what authorization do we still need in BW/HANA? To be replaced with HANA DB authorization?)
5.1 Analytical Block Authorization
5.2 Data content Authorization
5.3 Menu / Role authorization (= composite roles)
5.4 User Type (technical) Authorization
5.5 Special Authorization
6 Data acquisition (and SLT usage in Native HANA)
7 Data maintenance
8 Reporting / Mapping tables
9 SAP Finance Project – Profit Center (YSAP relevant only)
10 Modelling & naming convention
10.1 InfoAreas
10.2 InfoObjects
10.3 InfoObject Catalogs
10.4 InfoProviders
10.4.1 InfoProvider in a PSA block
10.4.2 InfoProvider in a corporate memory block
10.4.3 InfoProvider in a data block
10.4.4 InfoProvider in an Information/Analytical block
10.5 Non-nameable objects (Transformations, DTPs and IPs)
10.6 DataSources (& source systems)
10.7 Transformations
10.8 InfoSources
10.9 Process chains and process variants
10.10 Planning objects
10.11 Queries (Whole subchapter not needed any more = Alteryx)
10.11.1 Queries – Key figures
10.11.2 Queries – Structure
10.11.3 Check list
10.11.4 General Settings
10.11.5 Filters
10.11.6 Variables
10.11.7 Rows/Columns/Free Characteristics
10.11.8 Queries – Characteristics
10.12 Variables
10.13 Conversions (on Key Figures)
10.13.1 Quantity Conversions (RSUOM)
10.13.2 Currency Conversion types (RSCUR)
10.14 Transports
11 HANA modelling standards
11.1 Basics
11.2 Packages
11.3 Calculation views
11.3.1 Naming convention
11.3.2 Development principles
12 Processing Event triggers (Transaction SM64)
13 APPENDIX A – Information source (module) and Aris Process
14 APPENDIX B – Role Naming convention
15 Appendix C – Source System abbreviations
16 Appendix D – Common Input Parameters

1 Background / history

In 2016 Yara decided to start building a new greenfield BW system (on HANA), the main reasons
being:

Future proofing:

- A stable SAP reporting environment; the current hardware does not meet the business requirements.

- Prepare the system for the future (enable multiple SAP source systems); more requests are in the
pipeline to combine data from non-YSAP sources such as PSAP and BRASAP.


Performance improvements:

- Improve end-user reporting performance, resulting in shorter waiting times.

- Reduce the time needed for data uploads into the SAP BW data warehouse, ensuring that we
get more time during the night to upload new/additional information areas.

In BW projects on versions before 7.5, significant effort often had to be spent to implement the
data warehouse layered architecture correctly, in particular to combine/merge data. As of BW 7.5
(on HANA), a simplified layered architecture is proposed, which is expected to result in lower
maintenance costs in the long term.

Performance optimization is costly; with HANA it can also be assumed that fewer resources need to
be spent on activities such as building specific KPI cubes, aggregates, indexes, etc.

The greenfield BW system will act as the Yara global central data warehouse and will initially be
installed as BW 7.5, edition for HANA, which is basically equivalent to the 'BW/4HANA Starter add-on'.

In this way we make the switch to BW/4HANA easy (as no non-HANA objects will be allowed).

The reporting strategy shall be closely aligned with the future direction of the GSCC (Global SAP
Competence Centre) with regard to ERP. Once ERP runs on HANA (or we move to S/4HANA), the
strategy has to be re-evaluated; operational reporting as we currently have in the sales area can
probably move back to the transactional platform. Tools such as SAP HANA Live (business content
information views) can already be installed 'out of the box' today.

However, as Yara needs a global (multi-source-system) reporting environment, there will still be a
need for a central reporting platform.


The aim is to keep developments simple and flexible. Instead of staging the (same) data several
times for data cleansing and structuring, the aim shall be to stage it only (once or) twice, and build
virtual data models.

Yara’s HANA runtime licence forces us to use BW extractors to get data into the BW (on HANA)
environment.

This is a version 2 (V2) document, adjusted to serve the new Yara BI 'self-service' reporting tools,
Alteryx and Power BI, which are being introduced in Q1 2018. The high-level architecture would be:

1.1 Basic information

All modelling (changes) and developments in Yara’s BW (on HANA) platform shall follow the
guidelines of this document, and have to be approved by the Yara BI architect.

If anything is missing or incomplete, please contact the owner of this document.


2 BW (on HANA) modelling standards

2.1 Basic guidelines

In SAP BW (on HANA), only the following HANA-optimized (InfoProvider) objects may be used:

- InfoObjects

- Advanced DSO objects

- Composite providers (only in combination with Open ODS view)

- Datasources

- Open ODS view (which will be used for SLT, moving replicated tables in BW-HANA)

- InfoSources (only if needed)

The aim is to permanently store the same data at most twice; data combinations shall, where
possible, be done virtually.

If possible, build the virtual data model using BW application technology (composite provider). If
the data combination is complex, requires business logic and is thus impossible to do in BW, we
should do this in (native) HANA. See the chapter on HANA modelling standards for those details.


LSA++

The new Layered Scalable Architecture LSA++ concept is to be used as described in this chapter.

The key principle of LSA++ is equal to that of its predecessor, LSA. However, it is optimized for
running BW on HANA.


Data Acquisition Layer: this is the PSA layer; data is only stored temporarily, mainly to
evaluate (data load) errors and issues.

The data is to be cleaned up as follows:

- Transaction data, delta loads, delete after 3 months

- Transaction data, full loads, delete after 1 week

- Master data, delta loads, delete after 1 month

- Master data, full loads, delete after 2 days.
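The retention rules above can be sketched as a small lookup, e.g. in Python (an illustrative sketch only; the actual cleanup is scheduled via BW process chains, and the day counts are approximations of the month/week periods stated above):

```python
from datetime import date, timedelta

# Retention period per (data type, load mode), per the PSA cleanup rules.
# Day counts approximate the "3 months / 1 week / 1 month / 2 days" rules.
RETENTION_DAYS = {
    ("transaction", "delta"): 90,   # delete after 3 months
    ("transaction", "full"):  7,    # delete after 1 week
    ("master",      "delta"): 30,   # delete after 1 month
    ("master",      "full"):  2,    # delete after 2 days
}

def deletion_date(load_date: date, data_type: str, load_mode: str) -> date:
    """Earliest date on which a PSA request may be deleted."""
    return load_date + timedelta(days=RETENTION_DAYS[(data_type, load_mode)])
```

For example, a full transaction-data request loaded on 2016-07-06 becomes deletable one week later.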

Note that in LSA++ this layer is also known as the 'Open Operational Data Store Layer'. Here we
plan to use classical staging technology, i.e. a DataSource with traditional PSA objects, loaded via
InfoPackages ('BW managed'). With HANA there are now other options, such as using replication
servers to push data directly into HANA, but our current licenses do not allow that.

DataSources should not have append structures. For example, all data from LIS structures shall be
loaded into aDSO 'A'; if this needs to be combined with data from any other table, there needs to be
a separate DataSource and aDSO 'B' for the new data.

DataSources should be normalized:

- The same data should not come from multiple DataSources


- Avoid duplicate data

Corporate memory: this layer can be used as an archive:

- Data shall be stored exactly as retrieved from the source system; no business logic shall be applied.

- Shall be based on field-based DSOs.

- Has to be put in the 'warm' data area. Check the impact on data loads and activation.

It should be evaluated for each model whether corporate memory is of added value. We see usage
mainly in delta setups, like the logistics cockpit, but also for long-running full loads (with different
selections).

Corporate memory can be used to reload data from the 'archive' into the regular or any new dataflows.

Corporate memory is only to be used for delta loads, and not applicable for full loads.

EDW (enterprise data warehouse) propagation layer: in our setup this would be loaded from the
corporate memory, which allows us to put some business logic between the corporate memory
and the EDW propagation layer without having to stage the data one more time. It can be useful
for logic such as the determination of profit center.

This layer serves the different business applications. The idea is to keep data flexible and
separated (silo-based), so it can be re-used again and shared amongst different ‘applications’.

Data shall only be combined in the business transformation layer.

The EDW transformation layer can have logic to harmonize the data.

Business transformation layer: this layer prepares/structures the data for reporting
purposes/applications.

The layer is optional; if data from different sources only needs to be combined, this can be
done directly in the virtual reporting layer.

To transform the data, two options are possible:

- Virtual: this can be done using composite providers or HANA information views. In the latter case
the data from the data propagation layer has to be made available in HANA, and logic can be
applied with HANA methodology; the advantage here is that the data isn't staged in an extra
layer.

- In case of virtual data combining, we shall think about (and document) the consequences if one
area is loaded successfully but the other area failed; the impact shall be described.

- For complex business logic, it can be decided to use architected data marts (aDSO), and thus
stage the data physically in an extra layer.


Virtual reporting layer: this layer is mandatory and is the only way for reporting users
to retrieve data for reports. The only object allowed here is the calculation view.

2.2 Handling historical data

During business discussions it shall be documented what the requirements are with regard to data
age. This should open the discussion on how to handle data lifecycle management.

What data do we really need in our reporting?

- 5 years history?

- Based on what date?

In the previous chapter it was explained that we use a corporate memory to keep full history, so
that the regular dataflow object only contains the information which is necessary for reporting.

From the start, the process chain should already include which data from the regular dataflow
objects can be (selectively) deleted, and at which point in time.

Historical data can ‘always’ be retrieved again from the corporate memory.

2.3 Block concept

For all modelling, a conceptual building block shall be used across BI. Each block has its own
function. The block concept is derived from and aligned with SAP's LSA++ concept.

Blocks are used to:

- Handle authorizations

- Guide on naming convention of objects inside the block.


- Group objects together

Each block is uniquely identified by a 5-letter abbreviation.

Format xxxxx

For PSA, Corporate Memory and Data Block (5 Characters):

- First character can be P (for PSA), C (for Corporate memory) or D (for Data block)

- 2nd and 3rd character represent the source (module) (see appendix A)

- 4th and 5th character represent the submodule or data type (see appendix A)

Example DFIAR (= Data Block, Finance, Account Receivable)

For Information block (5 Characters):

- First character always I (for information block)

- 2nd till 5th character depends:

o If data from the same area is combined (for example order header/item), we follow:

 2nd and 3rd character represent the source (module) (see appendix A)

 4th and 5th character represent the submodule or data type (see appendix A)


o If data from different areas is combined (for example Billing and FI data), we would follow
the process based approach:

 2nd till 4th character represent process (Aim is Aris 2nd level) (see appendix A)

 5th character is a free character to differentiate different block in the same process

For Analytical Block (5 Characters):

- First character always A (for analytical block)

- 2nd till 4th character represent process (Aim is Aris 2nd level) (see appendix A)

- 5th character is a free character to differentiate different block in the same process

Example ASOP1 (= Analytical Block, Sales Order Processing for all ‘SOP’ users), or ASOP2 (=
Analytical Block Sales Order Processing for KPI reporting, dedicated users)
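The block-naming rules above can be summarized in a small validation sketch (illustrative Python only; the module/submodule/process codes are defined in appendix A, and the process-based variant of I-blocks is not distinguished here):

```python
# Illustrative parser for the 5-character block IDs described above.
# Type letters come from this chapter; module/submodule/process codes are
# defined in appendix A. Note that I-blocks can also be process-based; that
# variant is not distinguished in this sketch.
BLOCK_TYPES = {"P": "PSA", "C": "Corporate memory", "D": "Data",
               "I": "Information", "A": "Analytical"}

def parse_block_id(block_id: str) -> dict:
    """Split a block ID such as 'DFIAR' or 'ASOP1' into its parts."""
    if len(block_id) != 5 or block_id[0] not in BLOCK_TYPES:
        raise ValueError(f"not a valid block ID: {block_id!r}")
    kind = BLOCK_TYPES[block_id[0]]
    if block_id[0] == "A":
        # Analytical block: 3-character process code + 1 free character
        return {"type": kind, "process": block_id[1:4], "suffix": block_id[4]}
    # P/C/D (and module-based I) blocks: 2-char module + 2-char submodule
    return {"type": kind, "module": block_id[1:3], "submodule": block_id[3:5]}
```

For example, DFIAR parses as a data block for module FI, submodule AR, and ASOP1 as an analytical block for process SOP with free character '1'.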

If there are composite providers which can be used in several analytical blocks (like QM), they shall
go to the most logical place. For example, QM is used in the SOP and YOP Aris processes, but
belongs most naturally to QSC.

Reporting users will only have access to the analytical block.

All blocks must have an owner.

Documentation is also created per block, including documentation on how to reconcile the data in
the block.

With regard to change management, it is best to avoid having objects from different blocks in
the same transport.

For each block, an InfoArea has to be created.

2.4 Local versus Global reporting

There are 3 different scenarios regarding local vs global, when it comes to reporting on data from
source systems.

1. Reporting on data from only one source system = ‘Local Reporting’


2. Reporting on data from multiple source systems of which the masterdata is somehow harmonized
or at least handled in a controlled manner = 'Cross Application reporting'

3. Reporting in a certain area on global Yara data. Here some activities need to be performed for
masterdata harmonization – making it global = 'Global Reporting'.

Local reporting means that ‘non harmonized’ data from one or more source systems can be
stored to be used in reporting. Therefore the data is compounded with the source system id
(0SOURSYSTEM for SAP, or a reference object of 0SOURSYSTEM for non-SAP systems). So
that all related data (master and transactional) is unique for the respective source system(s). For
master data we would use the typical 0-objects (or L-objects in case of non-standard SAP fields).

Source system can only be 2 characters, and it is agreed to take the first two characters, for
example:

- YSAP (YaraSAP) = ‘YS’

- PSAP (PSAP) = ‘PS’

(SAP source systems always end on ‘S’).

See appendix C.
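As a sketch, the agreed two-character code can be derived as follows (illustrative only; appendix C remains the authoritative list):

```python
def source_system_code(name: str) -> str:
    """First two characters of the source system name, upper-cased
    (e.g. YSAP -> 'YS', PSAP -> 'PS'). Appendix C holds the agreed list."""
    return name[:2].upper()
```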

It has been evaluated whether data from all SAP systems could go into the same aDSOs. On the
positive side, data from SAP systems would always have the same structure, and the number of
BW objects would be reduced (simplification); on the negative side, it would make the handling of
load errors more complicated and interdependent (for example, a PSAP load error would mean that
YSAP data can't be loaded). It was therefore concluded that we will have separate aDSOs for each
source system; this also keeps us flexible (for example, if we want PSAP development done from
Colombia in the future).

The same goes for non-SAP systems: separate aDSOs for each source system.

The challenge for global reporting is that there is currently no masterdata harmonization
between Yara's different source systems.

When the need for global reporting arises, there needs to be a process to harmonize the data at a
certain (to be agreed) level, from the different sources into global objects (with common
masterdata). We would build new InfoObjects (G-objects).

For translating a local object to a global object, there are two possibilities:



1. Each local object should have a 'G-object' as a (navigational) attribute. This attribute of the local
object can be filled during loading, based on a conversion table (it is not yet defined where
these conversion tables shall be maintained). For masterdata delta loads, this gives the historical
view.

2. The G-object can be filled based on a mapping table (current view); here it would be required to
work with calculation views.

The (navigational) G-objects can then be combined virtually.

See chapter 10 for more details on the naming convention and usage of 0-, L- and G-objects.

For example, 0MATERIAL should have a navigational attribute like 'GMATERIAL'; in this case there
should be a harmonization table for GMATERIAL for each SAP source system, in which the local
material is converted to the global material.

The main question remains who will maintain this.


Ideally, global reporting can be done virtually, as described above. In case of complex scenarios
or non-equal key figures, staging is possible; it should then be documented why.

This would then look as follows:

In some cases there can be a need for reporting across multiple source systems which cannot be
categorized as global reporting; this we call cross-application reporting.

One example could be combining data from a leading system (for example YPPM), to which some
data from a second system needs to be joined (for example YSAP).

In this case in YPPM there is an object which can be joined with the WBS element of YSAP data.

YPPM is built of L-objects, while YSAP consists of 0-objects. But here there are no common
objects (apart from the WBS element).


As explained above, all (masterdata-related) SAP InfoObjects will be compounded with
0SOURSYSTEM, but the non-SAP (L-)InfoObjects need to be compounded with a reference object
of 0SOURSYSTEM.

(This is needed because 0SOURSYSTEM itself can only exist once in a composite provider.)

Therefore, for each non-SAP source system there shall be an L-source-system object with the
following naming convention: L<xx>_SOURS, in which xx is the 2-letter source system
abbreviation as per appendix C. For example LYP_SOURS for YPPM.
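A minimal sketch of this naming rule (illustrative; appendix C holds the real abbreviations):

```python
def sours_object(source_abbrev: str) -> str:
    """Name of the L-source-system InfoObject for a non-SAP source,
    following L<xx>_SOURS (e.g. 'YP' for YPPM gives 'LYP_SOURS')."""
    if len(source_abbrev) != 2:
        raise ValueError("source system abbreviation must be 2 characters")
    return f"L{source_abbrev.upper()}_SOURS"
```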

This cross application reporting can be used in some exceptional cases.

2.5 Business content

For all new developments, SAP BW Business content shall be evaluated (new SAP BW for HANA
optimized content).

The business content DataSources in particular are powerful and should be used. However, if the
DataSources do not include logic, we shall evaluate SLT.

Only DataSources and InfoObjects (local reporting) are to be used directly from business content.

Other business content can be activated (migrated) and copied.

If InfoObjects are activated from business content, all attributes that are not applicable for Yara
can be removed (and the attribute InfoObject deleted, if not used elsewhere).

For transactional data related business content, we keep the business content only in the
development system, not to be transported.

If you would also like to use queries from business content, remember that authorization is based
on the naming convention; in such a case the business content has to be copied.


For local reporting (see also chapter on local/global reporting), we shall use the 0-objects. In case
they hold masterdata they have to be compounded with the source system id.

For global reporting new G-objects have to be created.

3 Transport Governance

3.1 General

The governance of moving transports differs per landscape; see the following chapters.

The naming convention can be found in the later chapter on naming conventions.

3.1.1 New BW (DB1/QB1/PB1)

Before moving any transport, some checks must be executed – see chapter 3.2.

Transports in DB1 can be released; subsequently they are imported into QB1 by means of a
background job. Transports into PB1 need to be requested from Capgemini so that the CCB
process is followed.

Transports should always be based on change requests (currently NSSRs, or Service Requests in
ServiceNow). Naming convention: "AMS:BW:W:<change request number>:<free text>", example:
"AMS:BW:W:NSSR0008397:GC ISAP – HOTO".

3.1.2 YSAP (D07/T07/PRO)

IBM moves the transports through the landscape. Here we use the 'SAP Request Control'
Notes database. For all transports there must be a CQ ticket (note that in CQ it is important to
categorize tickets as SAP BW, even if they contain a DataSource), technical documentation in
UDDs, and a test document in a UNE. For transports into the productive environment there is the
CCB process managed by IBM.


3.2 Checks before moving transports

Before transporting to the test environment, the following checks need to be executed:

3.2.1 Compounding (Program needs to be adjusted to also cover LYP_SOURS etc?)

For the compounding check, the query ACSI1_GTR1_MST_INFOOBJECT_COMP can be executed
in the BW on HANA environment. This query checks for transports containing InfoObjects that are
not correctly compounded (with 0SOURSYSTEM, or L<xx>_SOURS).

The query gives an on-the-fly list of transports & InfoObjects where the compounding is not set,
taking into account the exceptions in the masterdata of InfoObject GIOBJEXC.

This InfoObject GIOBJEXC contains the InfoObjects that do not need to be compounded.

When an InfoObject is added to the masterdata via Edit Masterdata of GIOBJEXC, it will no longer
pop up in the report.

Some Local InfoObjects do not need to be compounded, for example:

- Date/Time objects

- Document number (if they don’t have attributes, like 0DOC_NUM)

If there is any doubt, the object shall be compounded, because once loaded it becomes very hard
to compound.

After executing the query, the following columns appear:

- AS4USER: user ID in the task of your transport request

- TRREQ: workbench transport request (this is not the task, but the higher level)

- AS4TEXT: description of the transport request

- IOBNM: InfoObject

The following free characteristic is available:

- MDCHECK: 'X' = the masterdata check in RSD1/Eclipse is checked; '#' = the masterdata check in RSD1/Eclipse is not checked

3.2.2 Texts
Every word in an InfoObject description should start with a capital letter, except for binding words like of, from, …
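This rule could be sketched as a hypothetical helper (the set of binding words below is an assumption for illustration, not an official list):

```python
# Hypothetical check mirroring the rule above: every word in an InfoObject
# description starts with a capital, except short binding words.
# NOTE: BINDING_WORDS is an illustrative assumption, not an agreed list.
BINDING_WORDS = {"of", "from", "in", "on", "to", "and", "for", "the", "a", "an"}

def description_ok(text: str) -> bool:
    """Return True if the description follows the capitalization rule."""
    words = text.split()
    if not words:
        return False
    for i, word in enumerate(words):
        if i > 0 and word.lower() in BINDING_WORDS:
            continue  # binding words may stay lowercase
        if not word[0].isupper():
            return False
    return True
```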

3.2.3 Texts of Nav Attr


According to the masterdata attribute naming convention, it must be clear to which object an
attribute belongs.

Therefore 'MD' should be included at the end of the navigational attribute description, e.g.
'Country of Company Code MD'.

Table RSDATRNAVT can be used for checking.

3.2.4 Obsolete and irrelevant attributes


InfoObjects and navigational attributes starting with 0ACN, 0CRM, 0GN_, 0APO, 0RT, 0RF or 0CM_
should be removed from aDSO definitions and InfoObject attributes.

But also objects such as 'Length', 'Color', etc. can be removed if they are not relevant for Yara.
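A sketch of such a check (illustrative; the prefix list is taken from the rule above):

```python
# Sketch of the obsolete-attribute check: flag InfoObjects/attributes whose
# technical name starts with one of the prefixes listed above.
OBSOLETE_PREFIXES = ("0ACN", "0CRM", "0GN_", "0APO", "0RT", "0RF", "0CM_")

def obsolete_attributes(names):
    """Return the attribute names that should be removed."""
    # str.startswith accepts a tuple of candidate prefixes
    return [n for n in names if n.startswith(OBSOLETE_PREFIXES)]
```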

3.2.5 0UPD_DATE
This InfoObject is added to the PSA aDSO, Corporate Memory aDSO and Data Block aDSO.

0UPD_DATE is always filled with a formula returning the current date:

- in the transformation from DataSource to PSA aDSO,

- in the transformation from PSA aDSO to Corporate Memory aDSO,

- in the transformation from Corporate Memory aDSO to PSA aDSO,

- in the transformation from PSA aDSO to Data Block aDSO.

3.2.6 Avoid duplicated header and item data


When creating or activating DataSources related to header and item data of the same domain,
extracting the same data at both header and item level must be avoided. It must also be avoided
to extract a field at header level when it originates from item data, or vice versa.

For example, the field VBTYP is available in both the header and the item DataSource of sales
orders. As this field originates from table VBAK (which contains sales order header data), it
should only be transferred from the header DataSource.

The same logic applies for combinations with schedule lines, conditions, etc.

Of course, in exceptional cases where the same field is needed in different flows (for example
units, or fields needed for derivations at a higher level, like company code for the derivation of
profit center), duplication is allowed.

4 Masterdata

4.1 General
Masterdata can be grouped in one or several analytical blocks.

In the naming of masterdata navigational attributes it shall be clear:

- of which object it is an attribute;

- the description ends with the indicator 'MD'.

For example 'Country of Company Code MD'.

For hierarchies: in case we are planning to use time dependency, we should use the option
'entire hierarchy is time dependent', because it is easier to maintain. The global profit center
hierarchy shall not have time dependency, as it is used in authorizations.

4.2 Specific objects

GPROF_CTR is authorization relevant. It is loaded from the 0PROFIT_CTR DataSources from YSAP.

There will be an (entirely) time-dependent hierarchy which is maintained (and also loaded) from
YSAP (AGRIYARABU).

It is agreed that profit center shall be present in each DataSource.

GCOST_CTR is compounded with source system and should have a length equal to the longest cost
center (SUN versus SAP).

The hierarchy will be based on attributes (Segment/Country/DHSite), as the same nodes exist
multiple times with different leaves. Technically this is not possible with a classic hierarchy. The
attributes do need to be maintained with the WebDynpro solution.

A hierarchical view can be obtained in the Query Designer.

5 Authorization (whole chapter needs review – which authorizations do we still need in
BW/HANA? To be replaced with HANA DB authorization?)

Authorization is block based. End users shall only have access to analytical blocks.

End-users need a combination of following roles:

- Analytical Block Authorization (which InfoProviders; analysis authorization)

- Data Content Authorization (which data the user can see)

- Menu/Role Authorization

- User Type Authorization (technical; what the user can do)

- Optionally some special authorization (maintain masterdata / IP / etc.)

All (end-user) authorization shall reside in BW. In the chapter describing the BW modelling, it is
mentioned that we may use HANA information views to 'virtually' combine data. However, HANA
(and Design Studio) shall only be used by a select group of developers. From native HANA, the
data has to be brought back to the reporting layer (analytical block) on which the authorization is
built.

In short, no end user shall have access to HANA information views, in order to keep the
authorization maintainable and simple, residing in BW only.

BO folder authorization is also to be controlled from the BW end, by means of (dummy) menu
roles. In this way we maintain all authorizations from the BW viewpoint.

0PROFIT_CTR and 0COST_CENTER will be flagged as authorization-relevant objects (discussed in
the meeting of 9 May 2017; cost center no longer, as decided 28/29 September 2017).

SSO based on Active Directory (Kerberos) will be implemented for all our end users to access the
reporting (BO) environment.

5.1 Analytical Block Authorization

End users will only get access to so-called 'Analytical Blocks', for their respective source system (or global).
These blocks consist only of composite providers. Each analytical block represents an (authorization)
area, like:

- Sales General (holding contract/order/delivery/billing/QM notification composite providers)

- Sales KPIs (holding specific KPIs which require stricter authorization = managers only)

- etc.

In Analytical block authorization (roles) users are given authorization to a (group of) composite provider(s).

In the role setup we have to take local/global reporting into consideration, which is part of the naming
convention of the composite providers (and thus of the queries).

Naming convention (see appendix B), for analytical block role:

YRBA_<block_(SAP_source_system)>_<free text><_W (optional for write access)>;

Example YRBA_ASOP1_Y_GENERAL and YRBA_ASOP1_Y_KPI

Note that SAP source system is optional, depending on needs. See appendix C for abbreviation.

Naming convention linked analytical authorization:

 Analysis authorisations (12 characters):

o Example: AASOP1_Y_R_01

o Character 1: A (analysis authorisation)

o Characters 2-7: ASOP1_Y (block_source system)

o Character 9: R (read/write)

o Characters 11-12: 01 (sequence number)

It seems that so far they have been created as AAASOP1_YR01, so we need to follow:

o Characters 1-2: AA (analysis authorisation)

o Characters 3-9: ASOP1_Y (block_source system)

o Character 10: R (read/write)

o Characters 11-12: 01 (sequence number)
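The 12-character pattern actually in use can be sketched as a small name parser. This is a hypothetical helper for illustration only, not part of any SAP tooling; it encodes the AA + block_source (7 characters) + R/W + two-digit sequence breakdown described above:

```python
import re

# Hypothetical check of the analysis authorisation naming pattern in use:
# AA + <block>_<source> (7 chars) + R/W + two-digit sequence number.
AUTH_NAME = re.compile(r"^AA(?P<block>[A-Z0-9_]{7})(?P<access>[RW])(?P<seq>\d{2})$")

def parse_auth_name(name: str) -> dict:
    """Split an analysis authorisation name into its parts, or raise ValueError."""
    match = AUTH_NAME.match(name)
    if match is None:
        raise ValueError(f"not a valid AA<block><R/W><nn> name: {name}")
    return match.groupdict()

print(parse_auth_name("AAASOP1_YR01"))
# {'block': 'ASOP1_Y', 'access': 'R', 'seq': '01'}
```

Such a check could, for example, be run against an exported list of authorisation names to spot deviations from the convention.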

For each block there has to be an analysis authorization for reading (this is to be part of the 'normal'
analytical block role, such as 'YRBA_ASOP1_Y_GENERAL'). If there is a need for write-enabled providers
in the block, then there has to be a second analysis authorization for writing, and also a separate single
role, for example YRBA_ASOP1_Y_GENERAL_W (write). The roles for writing will be assigned separately as
single roles to users. See also chapter 4.5.

5.2 Data content Authorization

Data content authorization identifies which data a certain user is allowed to see.

For this we are planning to use an aDSO (enabled for planning), as we have in our current BW
environment (PAUXXX99).

Take into account the following guidance :

 Black cells: key of the masterdata object

 Yellow cells: text or attribute of the masterdata object

 The InfoObject names above are dummy names; the correct names according to the naming convention
should be used.

It is agreed (in the meeting of 30 Aug. 2017) that we keep the approach simple. For each user we generate a
line for object 0PROFIT_CTR (and if needed another line for 0COST_CENTER) and fill in 'XXXXX' for
AnalyticalBlock, meaning that the user has the same access for all reporting areas (sales, procurement,
etc). Only for users having access to contribution is a second line needed, with 'ACOM2', to set a possibly
stricter authorization for that area (there could be more areas in the future).

Users who can also write data (through IP) need double lines: one with the IP field unflagged for their 'read'
access, and another with the IP field flagged for their 'write' access.
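The line-generation rule above can be sketched as follows. This is a hypothetical illustration; the field and block names are invented placeholders, not the real aDSO InfoObject names:

```python
# Hypothetical sketch of the data-content authorisation lines described above.
def build_auth_rows(user, has_contribution=False, can_write=False):
    """Generate the 0PROFIT_CTR authorisation lines for one user."""
    blocks = ["XXXXX"]            # 'XXXXX' = same access for all reporting areas
    if has_contribution:
        blocks.append("ACOM2")    # stricter authorisation for the contribution area
    rows = []
    for block in blocks:
        rows.append({"user": user, "block": block, "object": "0PROFIT_CTR", "ip": ""})
        if can_write:             # IP writers get a second line with the IP flag set
            rows.append({"user": user, "block": block, "object": "0PROFIT_CTR", "ip": "X"})
    return rows

print(len(build_auth_rows("JDOE")))                                         # 1
print(len(build_auth_rows("JDOE", has_contribution=True, can_write=True)))  # 4
```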

If we fill hierarchy node ‘ALL’ it should take all masterdata entries, including those which are not in the
hierarchy.

Note that it is agreed (in the meeting of 17 Oct. 2017) that we are not going to use time dependency in the
profit center hierarchy, as this would overcomplicate the maintenance of the aDSO. In case there are
major changes to the hierarchy, this should run as a separate project.

Profit Center (including the Profit Center hierarchy) and Cost Center (including the Cost Center hierarchy) will be
set up as authorization-relevant objects. An overview could be as follows:

5.3 Menu / Role authorization (=composite roles)

This authorization is needed to arrange the access to folders in SAP BusinessObjects. Roles, however,
have to be created and assigned in BW, and mapped into BO.

In BO there shall be two types for folder/menu authorization:

1. Role-based reporting. As the name indicates, you can only see the role-based folder(s) according to
your role in the organization. In this folder, the super user shall have write access at folder level
4, so the super user can store and maintain the reports for the reporting users. Example of level 4: 01.
Role Based – 03. Crop Nutrition – BU Europe – 02. Commercial Director. A super user
authorized for this can further create subfolders and organize the reports accordingly. Reporting
users only have read access.
2. Analytical block folder(s), used to store/save template reports created by the BI team, with display
access for all users.

For the role-based reporting (menu) role we need composite roles, as this groups together all necessary
authorizations: access to the role-based folder, but also analytical blocks, user type, etc.

Those roles can be created as per example matrix overview below:


Composite roles need to be created in BW to group analytical block authorization, menu role (level
one) authorization, and technical (user type) authorization into one single role.

Naming convention (see appendix B), for role based reporting roles:

JRBP_<Area_Role>

So for example JRBP_BU_EUR_CUST_SERV_SPEC and JRBP_BU_EUR_CUST_SERV_SPEC_SUPER.

5.4 User Type (technical) Authorization

This authorization is needed to give permission on what the user can do:

- Basic user authorization (what all users can do)


- BI Team user

Naming convention:

YRBU_&lt;user type&gt;, in which user type can be: Basic (all users) or BITeam; example: YRBU_BASIC.

All users can change/create reports; the only difference is that super users can store reports in the 'Role
Based' folder, while reporting users can only store reports in their own folder. The BI team can only store
reports in the 'Information Area' folder. All users can schedule reports.

Only the BI Team can create (BEx) queries.

At a later stage the BI Team authorization will be split as:

YRBU_BI_Team_Analyst

YRBU_BI_Team_Global_Developer

YRBU_BI_Team_Regional_Developer

YRBU_BITEAM has been created for the BI Analysts.

5.5 Special Authorization

Maintain masterdata (InfoObject) through WebDynpro:

Single roles, assigned directly to dedicated users who should maintain certain InfoObject
masterdata. One role per InfoArea, in which the InfoObjects for which a user can maintain
masterdata shall be grouped.

Example: YRBU_MDMAINT_SD

Write access to an analytical block for IP solutions:

Single roles, assigned directly to dedicated users who should have write access to (IP)
InfoProviders in a certain analytical block.

These shall follow the same naming convention as the analytical block authorization roles (chapter 4.1).

Example: YRBA_YSOP1_Y_GENERAL_W (note that _W indicates write access).

6 Data acquisition (and SLT usage in Native HANA)

Yara is restricted in its way of moving data into BW on HANA. With the (8%) runtime license we
may only use standard BW methods of bringing data into the data warehouse.

In 2017 Yara increased to a (15%) runtime license, which also brings SLT into the picture for use in BW.

In 2019 Yara bought the HANA Enterprise edition, which allows for bringing SLT into native HANA
(ongoing); of course, with this, sizing needs to be kept under control.

General

The advantage of SLT is that it generates its own delta mechanism, and it can be used in real time.


Usage

Usage of SLT can serve three purposes.

- Direct usage of the tables in Alteryx (for specific functions)

- Temporary usage (Audits / POC’s)

- Replication of tables for usage in BW

Direct usage of the SLT replicated tables in Alteryx is only allowed for a restricted group of people, mainly
expert functions like internal control and group accounting.

This is because no authorization can be set.

SLT is organized in 'source system schemas', so in the current setup there are limited possibilities to arrange
authorizations. All SLT users can see all replicated tables.

SLT can be used temporarily, in case of prototyping or audits.

We can also use SLT to replicate tables for use in BW.

In most of the cases the data should go through BW because SLT replicated tables are only available for
a limited group of users.

If we use SLT tables in calc.views, we can add the tables directly; however, this should only be done if
there is no attribute or text for the field (and it is not likely one will be required in the future). In any other
case we need to bring the replicated table into BW and map it against an InfoObject. This is further
described in chapter 10.

Rules

If we get requests for replication, it shall first be evaluated whether the same data is not already available in BW.

In case it is in BW already, we need to communicate the calc.view.


In case it is not in BW, we shall investigate if it is to be used by one of the specific functions (see above). In
that case we can replicate the table.

Else we need to bring it into BW:

- If a business content datasource exists, we use that.

- If there is no business content:

o For small (masterdata/Z-) tables, we can create generic datasources (full loads) to avoid
constant replication.

o For large (transactional) tables we flag the table for replication and bring it into the BW
model via an OpenODS view.

For each replication case we need to evaluate whether all data is requested or column/row filtering is needed in
SLT (transaction LTRS).

In case replicated tables are needed in BW, we have to replicate from the source into DB1, QB1 and PB1.

In case we don't need them in BW, we can replicate towards QB1/PB1 only.
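The rules above form a simple decision chain, which can be sketched as follows. This is a hypothetical helper for illustration; the outcome strings are just labels for the options described in the rules, not SAP terminology:

```python
# Hypothetical decision helper mirroring the replication rules above.
def replication_decision(already_in_bw, specific_function,
                         has_business_content, large_table):
    if already_in_bw:
        return "communicate the existing calc.view"
    if specific_function:
        return "replicate the table via SLT"
    if has_business_content:
        return "use the business content datasource"
    if large_table:
        return "flag for SLT replication and model via OpenODS view"
    return "create a generic datasource (full load)"

print(replication_decision(False, False, False, True))
# flag for SLT replication and model via OpenODS view
```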

Procedure

A request comes in, and evaluation is performed via the rules above.

In case of table replication, we first replicate into DB1 (in case of BW) and next into QB1 (always).

With the initial load statistics we need to approach the source system basis team coordinator to get approval
and a timeslot to perform the replication to PB1.

Find contact person:

YSAP: Yop Kuraesin Yop.Kuraesin@yara.com

PSAP: Gesman Machado Mendoza gesman.machado@yara.com

BSAP: Emilsom Pinho emilsom.pinho@yara.com

ISAP: Naveen Pareek naveen.pareek@yara.com


Side note

In BW, SLT can't be used for masterdata that has attributes:

Virtual attributes/texts on InfoObjects, based on SLT tables (via calc.view), are not really possible, since
you can't flag them as navigational – thus you can't have them in the data block aDSO.

So basically we shall use SLT for:

- transactional data where no business content datasources (with logic) are available;

- transactional data (for fields not included in the standard extractor);

- masterdata texts (without attributes?) (but this will still put pressure on the source system);

- ad-hoc reporting?

For SLT on client-dependent tables, include a filter on MANDT – the systems are unfortunately not set up as
single-client.


7 Data maintenance

In some business requirements there is a need to generate or adjust data in BW. This direct
data maintenance can be done by means of 'planning' InfoProviders.

Mainly because of traceability, this is preferred over creating content in Excel or csv files and
uploading these into BW.

Decided in the team meeting of 26/27 Sept.: to see if we can open a HANA schema per region for some
dedicated people to maintain data?

It now seems we will go for SharePoint lists and combine this with Alteryx?



8 Reporting / Mapping tables

The following points shall be considered to decide where and how to maintain mapping/master
data related to BI:

- Who maintains the data (a central or a local user)?

- Does the data already exist somewhere? Or do we need to create it from scratch?

- Is the data purely needed for BW, or is it also used in other interfaces?

- We shouldn't have all masterdata maintenance in BW, as that is not the intention of the system.

There are objects which are only used for reporting purposes, for example data from table
ZMR01. Currently this is maintained in Z-tables in the YSAP system. There are several options for
where to put the (reporting) masterdata and/or mapping tables. See the overview below.

Masterdata maintenance via Webdynpro

(ABAP/JAVA stack?)

Plus:

- Quick implementation

- Non existing ‘objects’ values

- Link in BI Launchpad

- Navigational attributes directly available in composite provider

Minus:

- Not possible to see change history

- Non existing ‘objects’ values

- Authorization possible on field basis? Or all attributes?

Use case: masterdata only, for a selective group of authorized persons; e.g. per region.


Integrated planning

Plus:

- Same user interface

- Copy paste from Excel

- Based on existing object values

- Last changed / created by possible to track

- Possible to block

Minus:

- Based on existing object values

- Must include a keyfigure

- Need to be included via calculated view (not directly as MD attribute)

Use case: all kinds of data (probably better for transactional data); can be done by super users; e.g. crop
reporting parameters.

Z-tables in ECC?

- Data dedicated to only one source system

- Users who are mainly ECC users.

General rule: in case it concerns masterdata, we maintain it via WebDynpro, and accept the fact that invalid
values can be entered.

In case of transactional data, it shall be IP.

There can always be exceptions for good reasons.

WHAT ABOUT SHAREPOINT LISTS?


9 SAP Finance Project – Profit Center (YSAP relevant only)

In 2014/2015 the SAP Finance project was run in Yara. The goal was to switch from company-code-
oriented segmentation to SAP profit-center-oriented segmentation. The project introduced the usage
of profit centers, which weren't really used in the past.

The target was to have a correct opening balance per profit center on 1/1/2015, which could only
be achieved if 2014 balance sheet items had the correct profit center assigned.

Additionally, all logistical transactions open on 31/12/2014 needed to get the correct profit center.

So data as of 01/01/2015 will have a profit center in YSAP.

For data older than 2015, a profit center determination has to be done in BW? The determination logic
has to be copied from the current BW environment.

Note that for most of the areas, the SAP SLO tool was used in ECC to fill the profit center for open or
even historical transactional data. See the PCA conversion manual.

Very simplified, the logic for profit center should be to copy it from ECC if the transaction was created in
2015 or later, and to apply the determination in case the transaction was created before 2015.

The following information areas were corrected by the finance project:

- Contribution reporting

- Inventory

- Sales

- Costing

- FI-AR (just reload?)

- FI-GL (+ installation of new business content)

10 Modelling & naming convention

10.1 InfoAreas

For each block there needs to be an InfoArea.


Yara

- Corporate memory: &lt;block&gt;, e.g. CSDOR (Corporate memory block Sales Orders)

- PSA Block: &lt;block&gt;, e.g. PSDOR (PSA block Sales Orders)

- Data Block: &lt;block&gt;, e.g. DSDOR (Data block Sales Orders)

- Information Block: &lt;block&gt;, e.g. ISOP1 (Information block Sales Order Processing)

- Analytical Block: &lt;block&gt;, e.g. ASOP1 (Analytical block Sales Order Processing)

Format: XYYYY

- X: block identification – C, P, D, I or A

- YYYY (length 4): information source (module) and Aris process, see Appendix A

The InfoArea name is just the block name.

10.1.1 InfoObjects

For non-SAP objects it is not always necessary to have InfoObjects. We can create field-based
aDSOs for quick developments. The naming convention would be field_&lt;source&gt;.

(new rule added March 2018)

InfoObjects can be stored in the MD InfoArea; here we don't split per 'block'.

10.1.2 Characteristics
For characteristics, there is a split between global and local objects.

10.1.2.1 Local InfoObject

In case there are no requirements for global reporting, start all developments as local; as such we stay
flexible. When the need for global reporting in the specific area comes up, it can always be built on
top.

Local objects for SAP source systems are the normal 0-objects, and they need to be compounded
with 'Source System ID' (0SOURSYSTEM).

For non-SAP source systems we create L-objects as L<xx>_yyyyy in which xx is 2-letter source
system abbreviation (appendix c), and yyyyy should represent an abbreviation of the meaning; for
example LYP_GOLDT (= YPPM ‘Go-Live date’).

Most InfoObjects need to be compounded (see chapter 2.4), even those such as 0COMP_CODE and
0PROFIT_CTR; although there is reporting on a global level on these objects today, the masterdata
source tables (such as T001) are not the same.

Example YSAP vs PSAP:

Some 'local' InfoObjects do not need to be compounded, for example:

- Date/time objects (it is allowed to use the 0-object, but only if it has a useful name, such as
0Createdon or 0Postingdate; otherwise these should automatically be created as G-objects!)

- Document numbers (if they don't have attributes, like 0DOC_NUM)

If there is any doubt, the object shall be compounded, because once data has been loaded it becomes
very hard to compound the object afterwards.

User-related InfoObjects like 0Changedby and 0Createdby shall be avoided (because these objects do
not reference 0USER). We will create LUSER instead, and flag it for texts (and attributes). Then
the 0-user objects can be replaced by LCHANGEDB and LCREATEDB, which have to be created
as reference objects to LUSER.


Business content is leading: if it exists, please use and extend it. Details on business
content:

Format: 0xxx…x

Length: ∞

Example: 0EMPLOYEE

After activating a business content masterdata object, the attributes shall be checked, and all non-
relevant attributes may be removed as attributes; if not used anywhere else, they can also be
removed from the system. Examples are the 0ACN objects, related to AC Nielsen; but also if material
has objects such as 'Length', 'Color', etc., they can be removed as they are not relevant for Yara.

The same logic applies for 0CRM, 0RT, 0RF, 0APO, 0GN_, 0CM_ objects/attributes.

In case there is a need to add 'many' source-system-related objects as (navigational) attributes to a 0-
object, then it is advised to create a separate L-object for those 'source system' related attributes.

To illustrate: 0MATERIAL has many standard 0-object attributes. In case we expect, for the
YSAP source system, many (navigational) attributes which only exist in YSAP (Z-objects), then we
will create LYS_MATER, which will also be loaded with the material and carry the YSAP Z-
objects. We expect this to be the case for objects such as: Material, Customer, Vendor, Cost Center,
GL Account, etc.

The LYS_MATER object will be put in the composite provider. The disadvantage is that we would have
two objects holding the material number, but we should know the purpose and how to handle it (users
do not create queries).

In case business content doesn’t exist, local objects are created as:

For SAP source system:

Format: Lxxxxxxxx

Length: 9

Example: LEMPLOYEE

They should start with 'L', and characters 2-9 may be anything that describes the object.

For SAP source systems, source-system-dependent attributes (see above):

Format: Lss_xxxxx


Length: 9

Example: LYS_MATER (material field to carry material attributes specific for YSAP)

First Character: L

Character 2-3: Source system abbreviation as per ‘source system IDs maintenance’.

Fourth Character: _

Character 5-9: Free text to describe the object

For non-SAP source system:

Format: Lss_xxxxx

Length: 9

Example: LEV_TERML (terminal field for Evocon source system)

First Character: L

Character 2-3: Source system abbreviation as per ‘source system IDs maintenance’.

Fourth Character: _

Character 5-9: Free text to describe the object

Note that L-objects must be compounded; the few exceptions are described above.
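The two 9-character L-object formats can be sketched as a small classifier. This is a hypothetical helper for illustration only; it only checks the shape of the name, not whether the 2-character source system abbreviation is actually defined in appendix C:

```python
import re

# Hypothetical check of the two L-object formats described above.
SOURCE_L = re.compile(r"^L[A-Z0-9]{2}_[A-Z0-9]{5}$")  # Lss_xxxxx style, 9 chars
PLAIN_L = re.compile(r"^L[A-Z0-9]{8}$")               # LEMPLOYEE style, 9 chars

def classify_l_object(name):
    if SOURCE_L.match(name):
        return "source-system dependent"
    if PLAIN_L.match(name):
        return "plain local"
    return "invalid"

print(classify_l_object("LYS_MATER"))   # source-system dependent
print(classify_l_object("LEMPLOYEE"))   # plain local
```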

10.1.2.2 Global InfoObjects

In global reporting there should only be global InfoObjects. But it is allowed to use global InfoObjects also
in local reporting (if the mapping exists and is available).

Global InfoObjects cannot be business content, and are not compounded with source system ID.

In the transformation, there shall be data harmonization activities.

Format: Gxxxxxxxx

Length: 9


Example: GEMPLOYEE

They should start with 'G', and characters 2-9 may be anything that describes the object.

The description of a global InfoObject shall always end with 'G', for example 'Profit Center G', so it is clear
that it concerns a global object.

For some source systems which are supposed to be global (e.g. YPOD, Synergi, Evocon), it is allowed to
map the L-objects directly to G-objects in the calculated view. In this way we create a 1:1 mapping, and
thus make the local object global.

10.1.2.3 Navigational Attributes

In case InfoObjects are holding masterdata, the navigational attributes should have clear names, so it
becomes clear that these display the 'current' view. In this case the description shall end with 'MD'.
For example, if 0COMP_CODE has the attribute 0COUNTRY, it should have the descriptions:

- Long (60) ‘Country of Company Code MD’


- Short (20) ‘Country Comp Code MD’

Main words should start with capitals.

Abbreviations should not have dots in between.

This way of using descriptions should be applied to all InfoObjects where you need to adjust the standard.

It is allowed to use transitive (= virtual) InfoObjects to handle attributes of navigational attributes. For
example 0MATERIAL__GDHCODE_V (will have product group, product group 1, product group 2, etc.).

10.1.2.4 Texts of the InfoObject

Make sure the longest text available is used by default in the ‘BI Clients’ settings tab.


10.1.3 Key Figures

For key figures there is no split between global and local. It is important to take care of the
0UNIT and 0CURRENCY fields. They shall be used in global reporting, and if data comes from non-
SAP systems, a way should be found to get them filled.

Also for key figures, business content is leading: if it exists, please use and extend it. Details
on business content:

Format: 0xxx…x

Length: ∞

Example: 0AMOUNT

In case a business content object doesn't exist, we just use free text (in uppercase):

Format: xxxxxxxxx

Length: 9

Example: OPEN_DQSU

Use text describing the purpose of the key figure; the example above is 'Open Delivery Quantity in Sales
Unit'.

Underscores can be used for readability, but are not mandatory.

Note that this currently mainly concerns SAP key figures; we still have to see how this behaves once we
start working with other systems (such as SUN), since they perhaps don't always use the same units,
which are linked to the key figures.

10.2 InfoObject Catalogs


All InfoObjects used in a block should be added to the InfoObject Catalog of that Block.

Naming convention can be equal to the name of the block.

Format: xxxxx

Length: 5

Example: DSOPO

InfoObject and Key Figure glossary – how and where?

10.3 InfoProviders

We only allow 'HANA-optimized objects', which are the following:

- aDSO

- Composite Provider (was used for BO, and now still for SLT)

- InfoSource

- Open ODS view (for usage with SLT)

The description of an InfoProvider shall always start with the block.

For example 'DSDOR Sales Orders': this would be an InfoProvider in the data block SDOR, which
is used for sales orders.

In general we should avoid having InfoObjects in an InfoProvider which do not have a clear
meaning; for example, instead of 0CALDAY it is better to have 0CREATED_ON. Time objects shall be
aligned/named as in the source system, so it is clear to the end user which date is being represented.

10.3.1 InfoProvider in a PSA block


In the PSA blocks we typically would find aDSO’s (and OpenODS views (field based) – for SLT).

Format: xxxxx_xxx

Length: 9

Example: PSDOR_YI1 (in authorization we can use the first 7 characters)


- Characters 1-5 represent the block name.

- The sixth character is always an underscore.

- The seventh character represents the source system, for example 'Y' for YSAP, 'P' for PSAP, 'S' for
SUN, 'B' for BSAP (see appendix C).

- The eighth character represents a subdomain, for example 'I' for items, 'H' for headers, 'C' for
combined, or 'X' if nothing needs to be specified.

- The ninth character is a sequential number.

Description should start with <block><space><2 char source system>:<description>, for example:

“PSDOR YS: Sales order Items”
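The 9-character pattern above can be sketched as a small name parser. This is a hypothetical helper for illustration only; it validates the character positions, not whether the block or source system actually exists:

```python
import re

# Hypothetical parser for the 9-character aDSO naming pattern above:
# <5-char block>_<source><subdomain><sequence number>.
ADSO_NAME = re.compile(r"^(?P<block>[A-Z0-9]{5})_(?P<source>[A-Z])(?P<sub>[A-Z])(?P<seq>\d)$")

def parse_adso_name(name):
    match = ADSO_NAME.match(name)
    if match is None:
        raise ValueError(f"not a valid <block>_<source><sub><seq> name: {name}")
    return match.groupdict()

print(parse_adso_name("PSDOR_YI1"))
# {'block': 'PSDOR', 'source': 'Y', 'sub': 'I', 'seq': '1'}
```

The same pattern covers the corporate memory and data block aDSO names described in the following sections, which use the identical character layout.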

PSA block aDSOs are often created based on a datasource (template). Therefore, before creating
the PSA block aDSO, you should first maintain the transfer fields in the field overview of the
respective datasource (see 9.6). In this way, when creating the PSA aDSO, the non-relevant fields
will not be mapped. Note that all fields from the datasource which are flagged as transferred
should be in the PSA block.

In PSA blocks, we should always have one aDSO for each datasource, meaning we do not combine
multiple datasources into one aDSO.

We should only have fields on the correct level; in other words, no header fields in an item aDSO,
etc.

The fields are grouped automatically under ‘Data Fields’ (name = FIELDS).

The InfoObject 0UPD_DATE should be added to each PSA aDSO and will be part of the group
'Characteristics (BW InfoObjects)' (name = IOBJ). Note that 0UPD_DATE is always filled with a
formula!


In a PSA aDSO, no modelling properties (located in the general tab) should be set.

You can also see in the same properties:


10.3.2 InfoProvider in a corporate memory block


In the corporate memory blocks we typically would find aDSO’s. Corporate memory is not
applicable for SLT loaded data.

Format: xxxxx_xxx

Length: 9

Example: CSDOR_YI1 (in authorization we can use the first 7 characters)

- Character 1 – 5 represents the block name

- Sixth character is always underscore

- Seventh character represents the source system, for example ‘Y’ for YSAP, ‘P’ for PSAP, ‘S’ for
SUN, ‘B’ for BSAP (See appendix C).

- Eighth character represents a subdomain, for example ‘I’ for items, ‘H’ for headers, ‘C’ for
combined, or ‘X’ if nothing needs to be specified.

- Ninth character is a sequential number

Description should start with <block><space><2 char source system>:<description>, for example:

“DSDOR YS: Sales order Items”
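As an illustration, the 9-character naming pattern above can be expressed as a small validator. This is a hypothetical sketch, not an official tool; the helper name and regular expression are assumptions based on the convention described in this chapter:

```python
import re

# Illustrative pattern for the 9-character InfoProvider name
# <block(5)>_<source system(1)><subdomain(1)><sequence(1)>, e.g. CSDOR_YI1.
NAME_PATTERN = re.compile(r"^[A-Z0-9]{5}_[A-Z][A-Z][0-9]$")

def split_infoprovider_name(name: str) -> dict:
    """Split a block InfoProvider name into the parts described above."""
    if not NAME_PATTERN.match(name):
        raise ValueError(f"not a valid block InfoProvider name: {name}")
    return {
        "block": name[0:5],        # characters 1-5: block name
        "source_system": name[6],  # character 7: e.g. 'Y' for YSAP
        "subdomain": name[7],      # character 8: 'I', 'H', 'C' or 'X'
        "sequence": name[8],       # character 9: sequential number
        "auth_prefix": name[:7],   # first 7 characters usable in authorizations
    }
```

For example, `split_infoprovider_name("CSDOR_YI1")["auth_prefix"]` returns `"CSDOR_Y"`, the prefix that can be used in authorizations.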

A corporate memory aDSO will only contain fields which are part of the underlying PSA aDSO,
linked 1:1.

In corporate memory blocks, we should always have one aDSO for each PSA aDSO; meaning we do
not combine multiple PSA aDSOs into one aDSO.

The sequence of the keys in the aDSOs should be correct. For example, the source system should
always be the first field.

The other fields are grouped under ‘Data Fields’ (name = FIELDS).

InfoObjects are put in the group ‘Characteristics (BW InfoObjects)’ (name = IOBJ).

In a corporate memory aDSO, the following modelling properties (located in the general tab)
should be set: ‘Activate Data’.

10.3.3 InfoProvider in a data block


In the data blocks we typically find aDSOs (and CompositeProviders for usage with SLT; note that
the source system needs to be added and mapped here. In case InfoObjects exist, map to
InfoObjects so you can use the text directory in the calc.views on top).

Format: xxxxx_xxx

Length: 9

Example: DSDOR_YI1 (in authorization we can use the first 7 characters)

- Character 1 – 5 represents the block name

- Sixth character is always underscore

- Seventh character represents the source system, for example ‘Y’ for YSAP, ‘P’ for PSAP, ‘S’ for
SUN, ‘B’ for BRASAP or ‘G’ for Global (See appendix C).

- Eighth character represents a subdomain, for example ‘I’ for items, ‘H’ for headers, ‘C’ for
combined, or anything else; ‘X’ if nothing needs to be specified.

- Ninth character is a sequential number

InfoProviders in the datablock shall be created based on the respective PSA aDSO.

Also in datablocks, we should always have one aDSO for each datasource; meaning we do not
combine multiple datasources into one aDSO.

The sequence of the keys in the aDSOs should be correct. For example, the source system should
always be the first field.

In a datablock aDSO, the InfoObjects have to be grouped in certain categories, which provides a
better overview.

In a datablock aDSO, the following modelling properties (located in the general tab) should be set:
‘Activate Data’ and ‘Write change log’.

10.3.4 InfoProvider in an Information/Analytical block


In the Information block you would typically see calculated views to combine the data; in exceptional
cases it could be aDSOs. In Analytical blocks we would only see calculated views and Composite Providers.

Format: xxxxx_xxxx

Length: 10

Example: ASOP1_YCS1 (in authorization we can use the first 6 characters)

- Character 1 – 5 represents the block name

- Sixth character is always underscore

- Seventh character represents the source system, for example ‘Y’ for YSAP, ‘P’ for PSAP, ‘S’ for
SUN, ‘B’ for BRASAP and G for ‘Global’ (See appendix C).

- Eighth and ninth characters specify the data in the view, for example ‘DL’ for Deliveries.

- Tenth character is a sequential number

(See chapter 11 for details on calculated views)

10.4 Non-nameable objects (Transformations, DTPs and IPs)

Some objects cannot be given a technical name; the GUID is assigned by the system and you can
only change the description.

Transformations: These are given a very descriptive name by the system. Retain as is, do not make changes.

DTPs: The system-generated name already contains the source and target of the DTP. Extend the
name by using the following parameters as a prefix to the standard name:

PC_ prefix, shall be added in case the DTP is used in a process chain.

LT, can be FL (for full load) or DL (for delta load).

Format: <PC>_<LT>_<SystemGeneratedName>

Examples:

PC_FL_0DISTR_CHAN_TEXT / HAE -> 0DISTR_CHAN

PC_DL_2LIS_13_VDITM / HAE -> PSDBL_YI1

PC_FL_0COMP_CODE_ATTR / HAE -> 0COMP_CODE

PC_DL_PSDBL_YI1 -> DSDBL_YI1
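The prefix rule above can be sketched as follows. This is an illustrative snippet (the function name is hypothetical, not an SAP API); the system-generated part of the DTP name stays untouched:

```python
def dtp_description(system_name: str, in_process_chain: bool, load_type: str) -> str:
    """Prefix the system-generated DTP name with PC_ (when the DTP is used
    in a process chain) and the load type: FL = full load, DL = delta load.
    Sketch of the convention described above, not an SAP API."""
    if load_type not in ("FL", "DL"):
        raise ValueError("load type must be FL or DL")
    prefix = ("PC_" if in_process_chain else "") + load_type + "_"
    return prefix + system_name

# e.g. dtp_description("2LIS_13_VDITM / HAE -> PSDBL_YI1", True, "DL")
# -> "PC_DL_2LIS_13_VDITM / HAE -> PSDBL_YI1"
```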

InfoPackages seem to be obsolete with ODP-ODQ; however, this topic needs further investigation.
They are still needed for master data.

InfoPackages: The name is to be given fully, as per the details below.

Format: <PC>_<LT>_<DataSource>_<LoadOption>_<SourceSystem>

PC_ prefix, shall be added in case the InfoPackage is used in a process chain.

LT, can be FL (for full load), DL (for delta load) or IL (for initial load).

DataSource, the technical name of the datasource.

LoadOption, REPAIR (for repair full loads), NO_DATA (for initial loads without data).

SourceSystem, for example PSAP/YSAP

10.5 Datasources (&Source systems)

Business content is leading: if it exists, we activate and extend it as required, keeping its original
name. For loading from SAP systems, we only use ODP-ODQ datasources, as with BW/4HANA
the classic SAP source system will be removed.

Alternatively we can use SLT.

Other possible datasources are based on DBConnect or (Flat)File. Note that we always need to
create a dedicated source system for each system whose masterdata is not harmonized.
This is also valid for (Flat)File loads.

In all Datasources, in the field overview, we should maintain the transfer column correctly so we
will only have relevant fields in the data warehouse. In this way, when creating the PSA aDSO, the
non-relevant fields will not be picked.

The definition of ‘non-relevant’ is of course not black and white, but to give some examples:

- Item datasource should not have header related fields.

- Non-relevant Z-fields which are to be cleaned out should be unflagged (as we are not planning to
copy logic from the user exit – see chapter 5).

Of course, standard SAP fields (on the right level, header/item) should stay flagged, as they may
be needed later and it would be great to have them in corporate memory in such cases.

It is preferred not to have logic in the user exit (see chapter 6)!

For the most important datasources, like 0Material_Attr, 0Customer_Attr, etc., we should manually
check and compare the existing datasource with the new business content. New business content
fields shall be added.

For generic datasources:

Transactional:

Format: Z_<Block><_FreeText>

Z_, prefix

<Block>, name of the block in which the data is loaded

<_FreeText>, free text to indicate the type of data which is being loaded.

Masterdata, attributes:

Format: Z_<InfoObject>_ATTR<n>

Z_, prefix

<InfoObject>, technical name of the target InfoObject

_ATTR, to specify it concerns attributes

<n>, only to be used in case there are more master data attribute datasources for the
same InfoObject.

Masterdata text:

Format: Z_<InfoObject>_TEXT

Z_, prefix

<InfoObject>, technical name of the target InfoObject

_TEXT, to specify it concerns text.

Exception: Datasources having both attributes and text should be called:


Z_<InfoObject>_TEXT_ATTR.
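The generic datasource formats above can be summarized in one helper. This is an illustrative sketch only; the function and its parameter names are hypothetical:

```python
def generic_datasource_name(kind: str, target: str, free_text: str = "",
                            seq: str = "") -> str:
    """Build a generic datasource name per the convention above.
    kind 'TRANS' takes the block name as target; 'ATTR', 'TEXT' and
    'TEXT_ATTR' take the target InfoObject. Illustrative sketch only."""
    if kind == "TRANS":
        return f"Z_{target}" + (f"_{free_text}" if free_text else "")
    if kind == "ATTR":
        return f"Z_{target}_ATTR{seq}"   # seq only if several ATTR datasources exist
    if kind == "TEXT":
        return f"Z_{target}_TEXT"
    if kind == "TEXT_ATTR":
        return f"Z_{target}_TEXT_ATTR"   # datasource with both text and attributes
    raise ValueError(f"unknown datasource kind: {kind}")

# e.g. generic_datasource_name("ATTR", "0MATERIAL") -> "Z_0MATERIAL_ATTR"
```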

10.6 Transformations

Overall, the aim is to reduce logic in transformations to a minimum, avoiding currency/unit conversions.

These should be done virtually (in a BEx query or calculated view).

10.7 InfoSources

Business content is leading: if it exists, we activate and extend it as required, keeping its
original name.

InfoSources are optional, and only to be used in exceptional cases, for example if you want to
assign additional fields in a start routine.

In case you decide to use an InfoSource, the reason shall be documented!

Transactional data:

Format: <Technical name of datasource>

Master data:

Select the InfoObject, the technical name and description will be assigned from the
InfoObject.

10.8 Process chains and process variants

This chapter describes how to organize the loading of data into BW from any source system. This
includes transactional, master data and system maintenance (i.e. delete PSA, for those data
sources where still applicable).

For transactional data (and ‘critical masterdata’, for example 0PM_ORDER), there shall be one
datablock (BK) process chain for each datablock. BK chains have to be placed in a DataFlow
chain (DF). A DF can consist of one or many BK chains.

BK and DF chains are never scheduled directly, but they are placed in frequency (FREQ) chains.

FREQ chains are scheduled, based upon a frequency.


MAIL chains are scheduled to check if a certain chain is executed correctly and on time.

BK Chain:

 There has to be one BK Chain for each block (per source system).

 BK chains are used to handle dependency between loads in the same block.

 BK chains have to be put in a DF chain (per source system).

 BK chains are never scheduled.

DF Chain:

 Can consist of as many block chains as necessary for control, but there can be only one main
process chain that triggers the load of the particular data flow, typically grouping one or more
modules. This is to handle dependencies.

 DF chains are grouped together in a Frequency (per source system) chain.

 DF chains are never scheduled.

FREQ Chain:

 There is one FREQ chain per frequency (HOUR, DAY, WEEK, MONTH).

 There can be other frequencies, for example, first 3 days of the month.

 If a frequency does not exist yet, it can be created.

 FREQ chains must include the ‘prevent more than one instance of the chain to run’ in order to
avoid overlapping processing of the same chain.

 FREQ chains can be transported, BUT also maintained in production if a certain area does not
need to be loaded for maintenance reasons.

MAIL Chain :

 There is one MAIL chain per frequency (HOUR, DAY, WEEK, MONTH).

 There can be other frequencies, for example, first 3 days of the month.

 If a frequency does not exist yet, it can be created.

 MAIL chains must include the ‘prevent more than one instance of the chain to run’ in order to avoid
overlapping processing of the same chain.

Format for process chains:

BK chains: BK_<Block>_<SourceSystem> for example BK_DFIAR_YS

DF chains: DF_<Module(s)/Group>_<SourceSystem>_<FreeText> for example DF_SD_YS

FREQ chains: FREQ_<Frequency>_<StartTime>_<SourceSystem> for example FREQ_DAY_0100_YS

MAIL chains: MAIL_<Frequency>_<StartTime>_<SourceSystem> for example MAIL_DAY_1130_CP,
or MAIL_DAY_XXXX_CP to indicate irregular times

- The start time is in 24-hour notation and in system time (not local time).

- The source system is included to stay flexible in case some of the source systems are out for
maintenance.

Frequency can be: HOUR, 4HOUR, 6HOUR, DAY, WEEK, MONTH.

But be aware that a frequency chain with ‘DAY’ can be scheduled one or more times per day; that is
why the start time needs to be added. So the same dataflow chain can be put in a ‘DAY’ chain
with start time 0900 and in one with 1300. That particular dataflow chain will then run daily in two time slots.

The start process in the process chain may have the same name as the process chain, followed by
‘_START’, for example BK_DFIAR_YS_START.
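The chain-name formats above can be checked mechanically. The regular expressions below are an illustrative sketch under the assumption that block/module codes are alphanumeric and source systems are two letters, as in the examples; they are not an official validation tool:

```python
import re

# Illustrative regular expressions for the chain-name formats above.
# The block/module and source-system shapes are assumptions from the examples.
CHAIN_PATTERNS = {
    "BK":   re.compile(r"^BK_[A-Z0-9]{5}_[A-Z]{2}$"),               # BK_DFIAR_YS
    "DF":   re.compile(r"^DF_[A-Z0-9]+_[A-Z]{2}(_[A-Z0-9_]+)?$"),   # DF_SD_YS
    "FREQ": re.compile(r"^FREQ_[A-Z0-9]+_\d{4}_[A-Z]{2}$"),         # FREQ_DAY_0100_YS
    "MAIL": re.compile(r"^MAIL_[A-Z0-9]+_(\d{4}|XXXX)_[A-Z]{2}$"),  # MAIL_DAY_1130_CP
}

def chain_kind(name: str) -> str:
    """Return which chain convention a name follows, or raise if none."""
    for kind, pattern in CHAIN_PATTERNS.items():
        if pattern.match(name):
            return kind
    raise ValueError(f"name matches no chain convention: {name}")
```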

Other Process variants shall be defined by naming as below.

Format: <Block>_<SourceSystem>_<Activity>

Or <InfoProvider>_<Activity>

Block, is the name of the block to which the corresponding chain belongs to.

<Activity>, can be:

ACTIVATE_<aDSO> for activation of an aDSO; here, <aDSO> should be the technical
name of the aDSO.

DEL_PSA, for PSA deletion.

ABAP_<program>, to execute an ABAP. <program> should be the program name.

for example DFIAR_YI1_ACTIVATE

For masterdata, there is no link to a block, but to a module. The naming convention is as follows:

MC_<module>_<type>_<likelihood of change>_<SourceSystem>

Module: See appendix 1

Type: can be ATTR, TEXT or HIER

Likelihood of change: can be High, Medium, Low, (Full).

Depending on how often masterdata changes are expected; High means the masterdata is
expected to change often. For development and quality, we are creating process chains that load
all data with ‘Full’ loads to be sure all master data is available after a system refresh of the source
system. For these chains, the likelihood of change ‘F’ is used.

Example: MC_SD_ATTR_H_YS, MC_CA_TEXT_L_YS.

The idea is that masterdata chains with likelihood H are placed in a DAY frequency chain.
Masterdata chains with likelihood L are placed in a MONTH frequency chain, for example
domain texts such as Sales Doc Cat, Account type, …

‘Daily’ masterdata loads can directly be included in the ‘transactional loads’?

‘Weekly’ masterdata loads are to be put in a local MD chain, and grouped in the weekly frequency
chain.

For some masterdata, another frequency (for example monthly) might be needed per area; these
are, for example, the ones based on domains. Naming convention for these: ‘Master
Data Domain Based Text’ (MC_DOMAIN_TEXT).

Process chains are to include a notification in case of failure.

Maintenance in process chains: there are some process chain types which perform maintenance
of the system, for example PSA maintenance.

PSA maintenance: most of the datasources no longer use the standard PSA since we use
ODP-ODQ now, but for those we have included an aDSO PSA layer in the design. This also
requires maintenance. In general, we keep PSA data as follows:

- Delta loads (50 days)

- Full loads (25 days)

Standard process chain variant ‘Delete PSA request’, used for classical PSA - naming convention:

<(sub)module>_<SourceSystem>_<What>_CLEAN_PSA

What: ATTR, TEXT, HIER, TRANS

Example: CA_YS_ATTR_CLEAN_PSA

As mentioned, the data in the ‘PSA block’ aDSOs also needs to be cleaned accordingly. The process
chain variant ‘Cleanup old request in Datastore Object (advanced)’ is used for this – naming
convention:

<aDSO tech name>_CLEAN_REQ

Example: PSDBL_YH1_CLEAN_REQ

10.9 Planning objects

As such, we are not using Integrated Planning objects; however, in some scenarios, mainly for
allowing data changes in BW directly (for example instead of maintaining and loading CSV files), we
are using those objects.

The planning aDSO for data storage shall reside in a datablock. On top, there always needs to be
(at minimum) one Composite Provider toward which the data entry is performed. This Composite
Provider exists in the respective analytical block to which the users get authorizations.

Aggregation Level

Format: A<block>_xN

Length: 9

First character, A is fixed for Aggregation level

Character 2 – 6, name of the block

Seventh character, is underscore

Eighth character is free text

Ninth character is a free number

Filter on Aggregation Level

Format: F<aggregationlevel>_<freetext>

First character, F is fixed for filters on aggregation level

<aggregationlevel>, the technical name of the aggregation level

<freetext>, Freetext to explain the filter can be added

Planning Functions

Format: PF_<aggregationlevel>_<type><NNNNN>

Character 1-3, always PF_

Character 4-12, the technical name of the aggregation level

Thirteenth character is underscore

<type>, can be:

FM Formula

CP Copy

CI Copy Ignore Empty Records

RP Repost

RC Repost on basis of characteristic relationships

DR Distribution by reference

DK Distribution by key

RV Revaluation

DL Delete

DI Delete Invalid Combinations

FC Forecast

GN Generate Combinations

CT Currency Translation

SK Set Keyfigure Values

UC Unit Conversion

<NNNNN>, sequential number.
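The type codes and the PF name format above can be captured in a small lookup. This is an illustrative sketch; the dictionary and function names are hypothetical:

```python
# Illustrative lookup of the two-character planning function type codes
# listed above, plus a helper to build PF_<aggregation level>_<type><NNNNN>.
PF_TYPE_CODES = {
    "FM": "Formula", "CP": "Copy", "CI": "Copy Ignore Empty Records",
    "RP": "Repost", "RC": "Repost on basis of characteristic relationships",
    "DR": "Distribution by reference", "DK": "Distribution by key",
    "RV": "Revaluation", "DL": "Delete", "DI": "Delete Invalid Combinations",
    "FC": "Forecast", "GN": "Generate Combinations", "CT": "Currency Translation",
    "SK": "Set Keyfigure Values", "UC": "Unit Conversion",
}

def planning_function_name(agg_level: str, type_code: str, number: int) -> str:
    """Build a planning function name; sketch of the convention above."""
    if type_code not in PF_TYPE_CODES:
        raise ValueError(f"unknown planning function type: {type_code}")
    return f"PF_{agg_level}_{type_code}{number:05d}"

# e.g. planning_function_name("ADFIAR_X1", "CP", 1) -> "PF_ADFIAR_X1_CP00001"
```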

Planning Sequence

Format: PS_<block>_<freetext>

Character 1-3, always PS_

Character 4-8, the name of the block

Ninth character, underscore

<freetext>, add some free text.

10.10 Queries (Whole subchapter not needed any more = Alteryx)

So far we don’t use business content queries; this shall be avoided due to authorizations based on
naming conventions – however, we can take copies if required.

BEx queries should always be based on composite providers.

The query description should be meaningful, should be agreed with the business, and should
always include the ARIS second-level name (i.e. SOP).

 Master query

Technical name format: M_ <Composite Provider>_<Free text>

o Character 1-2: always M_

o Character 3-12: the technical name of the composite provider on which the query is based

o Character 13: always _ (underscore)

o Character from 14: free text

Example:

Master Query: M_ASOP1_YDL1_DELIVERIES

 Standard query

Technical name format: S_ <Composite Provider>_<Free text>

o Character 1-2: always S_

o Character 3-12: the technical name of the composite provider on which the query is based

o Character 13: always _ (underscore)

o Character from 14: free text

 Temporary query (not to be transported)

Technical name format: T_ <Composite Provider>_<Free text>

o Character 1-2: always T_

o Character 3-12: the technical name of the composite provider on which the query is based

o Character 13: always _ (underscore)

o Character from 14: free text

 Planning query

Technical name format: P_<Aggregation level>_<Free text>
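The four query name formats above share one shape and can be sketched as follows (illustrative only; the function name is hypothetical, and for a planning query the aggregation level is passed in place of the composite provider):

```python
def query_name(kind: str, provider: str, free_text: str) -> str:
    """Compose a query technical name per the conventions above.
    kind: 'M' master, 'S' standard, 'T' temporary, 'P' planning
    (for 'P', pass the aggregation level instead of the composite
    provider). Illustrative sketch only."""
    if kind not in ("M", "S", "T", "P"):
        raise ValueError("query kind must be M, S, T or P")
    return f"{kind}_{provider}_{free_text}"

# e.g. query_name("M", "ASOP1_YDL1", "DELIVERIES") -> "M_ASOP1_YDL1_DELIVERIES"
```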

10.10.1 Queries-Key figures

The naming convention is for global key figures used in standard queries.

Use of temporary global key figures shall be avoided, in case it is really needed, they should start
with T.

10.10.1.1 Restricted key figure


The restricted key figure description should be meaningful. It should be agreed with business.

Name format: RKF_ <Composite Provider>_<Free text>

o Character 1-4: always RKF_

o Character 5-14: the technical name of the composite provider on which the key figure is
based

o Character 15: always _ (underscore)

o Character from 16: free text

Example:

Restricted Key Figure: RKF_ASOP1_YDL1_XXXXXX

10.10.1.2 Calculated keyfigure

The calculated key figure description should be meaningful. It should be agreed with the business.

Name format: CKF_ <Composite Provider>_<Free text>

o Character 1-4: always CKF_

o Character 5-14: the technical name of the composite provider on which the key figure is
based

o Character 15: always _ (underscore)

o Character from 16: free text

Example:

Calculated Key Figure: CKF_ASOP1_YDL1_XXXXXX

10.10.2 Queries-Structure

Name format: S_<Composite Provider>_<Free text>

o Character 1-2: always S_

o Character 3-12: the technical name of the composite provider on which the structure is
based

o Character 13: always _ (underscore)

o Character from 14: free text

10.10.3 Check list

Area | Check
General Settings | Check the “By OLE DB for OLAP” flag
Filters | Set relevant filter on 0SOURSYSTEM
Filters | Set filter on 0FISCVARNT
Variables | Check the sequence of the variables, mandatory on top
Rows/Columns/Free Characteristics | All in free characteristics
Characteristics | Remove result rows for all characteristics by default
Unit Keyfigures | Create corresponding KYF with unit conversion
Currency Keyfigures | Create corresponding KYF with currency conversion

10.10.4 General Setting


Under the General tab of the Query Designer, check the “By OLE DB for OLAP” flag so that the query can
be used in Webi.

10.10.5 Filters

 Source system Compounding

The field 0SOURSYSTEM is compounded to nearly all InfoObjects. For queries that have been created
on local composite providers, set a fixed filter on the relevant source system in the fixed filter values.

 Fiscal Variant Compounding

The field 0FISCVARNT is compounded to the fiscal period InfoObjects.

Set a fixed filter on fiscal variant = ‘K4’.

10.10.6 Variables

Variable sequence:

o Mandatory variables on top


o Use the following order
o Time, covering all time selection fields
o Organization, covering all organizational fields like company code, plant, etc
o Business Partner, covering business partners, as customer, vendor etc
o Material, covering selection fields related to material, like material group etc
o Transactional, like document numbers, document type etc.

10.10.7 Rows/Columns/Free Characteristics


For Master Queries, put all characteristics in the free characteristics to improve initial template
performance.

10.10.8 Queries-Characteristics
Remove the result rows by default for each characteristic used in the query (this increases visibility and
performance).

10.11 Variables

If business content variables exist, they can be used; however, do not change them when used
(except for the description). Note that only a few people should have the access right to create variables.

The name should be <infoObject text> or <meaningful text>

Name format: <Input Type><Variable Type><Process type>_<InfoObject>_<Free text>

o Character 1: <Input Type>

 A Authorization (only used for the Authorization variable in the Analysis Authorization,
not to be confused with A in character 3)

 O Optional

 M Mandatory

 K Key (only for replacement path)

 L Label (replacement path)

 C Counter/Characteristic reference (constant 1)(replacement path)

o Character 2: <Variable Type>

 P Parameter / single values

 M Multiple single values

 I Interval

 S Selection Option

 V Precalculated value set (not to be used)

 F Formula variable

 T Text variable

 H Hierarchy variable

o Character 3: <Process type>

 I input

 R replacement path

 C for customer exit

 A authorization

o Character 4: always _ (underscore)

o Character 5-Pos: <InfoObject>

o Character Pos+1: always _ (underscore)

o Character from Pos+2: free text
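The three leading one-character codes above can be composed into a variable name as in the following sketch. It is illustrative only; the function name is hypothetical, and the allowed codes are taken from the lists in this chapter:

```python
def variable_name(input_type: str, variable_type: str, process_type: str,
                  infoobject: str, free_text: str) -> str:
    """Compose a BEx variable name per the convention above.
    Illustrative sketch; the allowed one-character codes come from the
    lists in this chapter."""
    for value, allowed, label in ((input_type, "AOMKLC", "input type"),
                                  (variable_type, "PMISVFTH", "variable type"),
                                  (process_type, "IRCA", "process type")):
        if len(value) != 1 or value not in allowed:
            raise ValueError(f"invalid {label}: {value}")
    return f"{input_type}{variable_type}{process_type}_{infoobject}_{free_text}"

# e.g. a mandatory (M), single-value (P), input-ready (I) variable on 0CALMONTH:
# variable_name("M", "P", "I", "0CALMONTH", "CURRENT") -> "MPI_0CALMONTH_CURRENT"
```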

When creating variables on referenced InfoObjects, it might make sense to create
the variable on the original InfoObject so that the variable is reusable for all
referenced InfoObjects.

Only create it on the InfoObject itself if a specific naming is required, such that
creating it on the original object is not meaningful.

E.g. when a variable is required on 0PAYER: if it is a general customer selection
screen, you might be able to use the variables created on 0CUSTOMER; however,
when you want to make a distinction between 0SOLD_TO and 0PAYER selection
variables, it makes sense to create the variable on 0SOLD_TO or 0PAYER
itself rather than on the referenced object 0CUSTOMER.

10.12 Conversions (on Key Figures)

10.12.1 Quantity Conversions (RSUOM)


Best Practice

- Quantity conversion type by material, with the target UOM set to a target variable
- Not with a fixed target UOM (e.g. TNE), because the US uses STN. Using the variable with a default
value, this can easily be changed in the prompts without the need for an additional unit of measure
conversion type.

Naming Conventions

o 3 char - Conversion Factor: MAT (material UOM) – can be any other iobj
o 3 char - Source: REC(from record), MAT(determined from Iobj), VAR(from variable), etc.
o 3 char - Target unit: VAR (variable – on 0UNIT), TNE(fixed to TNE), etc.
o 1 char: sequential number if not unique

Example: MATRECVAR
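The three 3-character parts above can be concatenated as in the sketch below (illustrative only; the function name is hypothetical):

```python
def quantity_conversion_type(factor: str, source: str, target: str,
                             seq: str = "") -> str:
    """Compose a quantity conversion type name from the three 3-character
    parts above, plus an optional sequence digit if the name is not
    unique. Illustrative sketch only."""
    for part in (factor, source, target):
        if len(part) != 3:
            raise ValueError("each naming part must be 3 characters")
    return factor + source + target + seq

# e.g. quantity_conversion_type("MAT", "REC", "VAR") -> "MATRECVAR"
```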

Example of how to get the value from the UOM input variable in the query:

Replacement text in the keyfigure header: LTR_0UNIT (text variable with replacement path to the input variable on
0UNIT)

10.12.2 Currency Conversion types (RSCUR)

Best Practice

- Local reporting: Conversion to local currency


o Different variants by date/period
- Global reporting: Conversion with target currency in variable with default to NOK
o Different variants by date/period
- Possible differences in exchange rates, as SAP takes the rate from the day before upon saving of
the sales order. Only when the sales order is changed is the actual rate taken. Since we do the
conversion in BI on the fly, based on a certain date, this can cause (minor) differences.

Naming Conventions

o Source currency (from record/keyfigure) – not in the naming convention; always from record
o 3 Char - Target currency (local or variable): CMP (local/company code currency),
VAR (variable – created on 0currency), EUR (euro), NOK (Norwegian kroner), …
o 4 Char - Exchange rate type: EURX, … (in case the exchange rate type is less than 4 characters,
pad it to 4 with underscores, e.g. ‘BW__’).
o 3 Char - Reference data: PST (Posting Date), BIL (Billing Date), etc.

Example: CMPEURXPST
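The padding rule for short exchange rate types can be made explicit in a small helper. This is an illustrative sketch (the function name is hypothetical), not an SAP API:

```python
def currency_conversion_type(target: str, rate_type: str, ref_date: str) -> str:
    """Build a currency conversion type name <target(3)><rate type(4)><ref(3)>,
    padding the exchange rate type to 4 characters with underscores as
    described above (e.g. 'BW' -> 'BW__'). Illustrative sketch only."""
    if len(target) != 3 or len(ref_date) != 3:
        raise ValueError("target and reference date codes must be 3 characters")
    if len(rate_type) > 4:
        raise ValueError("exchange rate type longer than 4 characters")
    return target + rate_type.ljust(4, "_") + ref_date

# e.g. currency_conversion_type("CMP", "EURX", "PST") -> "CMPEURXPST"
#      currency_conversion_type("VAR", "BW", "BIL")   -> "VARBW__BIL"
```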

10.13 Transports
Valid for BW and YSAP (only?)

Naming convention for transports is aligned with the ECC guidelines (YSAP), and
checked by the CCB procedure on IBM side.

Format: <ChangeType>:<Domain>:<TransportType>:<Identifier>:<FreeText>

<ChangeType>, length 3

AMS, Application Maintenance Service, for changes outside projects

RFS, Request for Service, indicates that it concerns a project

<Domain>,

Can be a SAP module (2 or 3 letter abbreviation), mandatory for ECC transports,
but may also be a source system, like Evoc (for Evocon), or an ARIS process
(appendix A), depending on what you would like to group together in the
transports.

<TransportType>, length 1

C, Configuration

W, Workbench

<Identifier>,

ClearQuest ticket, length 7, CQnnnnn (only with change type AMS)

RFS, length 7, PRnnnnn (only with change type RFS)

OSS note, length 8, Onnnnnnn

<FreeText> For adding comments
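The transport description format above can be validated as in the following sketch. The regular expression and the cross-checks between change type and identifier are assumptions derived from the rules in this section, not a CCB tool:

```python
import re

# Illustrative sketch: parse a transport description of the form
# <ChangeType>:<Domain>:<TransportType>:<Identifier>:<FreeText>.
TRANSPORT_RE = re.compile(
    r"^(?P<change>AMS|RFS)"
    r":(?P<domain>[A-Za-z0-9]+)"
    r":(?P<ttype>[CW])"
    r":(?P<identifier>CQ\d{5}|PR\d{5}|O\d{7})"
    r":(?P<freetext>.*)$"
)

def parse_transport_description(text: str) -> dict:
    """Split a transport description into named parts, or raise."""
    match = TRANSPORT_RE.match(text)
    if not match:
        raise ValueError(f"does not follow the transport naming: {text}")
    parts = match.groupdict()
    # Per the rules above: CQnnnnn only with AMS, PRnnnnn only with RFS.
    if parts["change"] == "AMS" and parts["identifier"].startswith("PR"):
        raise ValueError("AMS transports use a CQnnnnn identifier")
    if parts["change"] == "RFS" and parts["identifier"].startswith("CQ"):
        raise ValueError("RFS transports use a PRnnnnn identifier")
    return parts
```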

11 HANA modelling standards

11.1 Basics

As stated earlier, native HANA shall only be used if the same modelling in BW is impossible or
would require extra data staging.

11.2 Packages

Packages are used to store/group the calculated views.

There is a main package ‘Yara’, under which there is a structure for the Information Blocks (as per
the current guidelines, we will only use calculated views in the information blocks). Under the
Information Block package, we create a package for each block.

The package on Information Block level shall follow the name of the actual block; in the
example above it is I (from Information Block) and SDOR (for area/subarea as per appendix 1).

However, if the package has to store views which combine data from different areas, it shall
follow the ARIS naming, for example ISOP1.

11.3 Calculation view

Information views are used in Native HANA to create a virtual data model, based on data that
resides in the HANA database (schemas).

The purpose of information views is to organize the data from several tables, and to perform a
variety of data calculations, in order to get a relevant and meaningful set of measures and dimensions
or attributes to answer a specific reporting need.

There are different kind of information views:

- Calculated views:

o Dimension

o Cube

o Cube with Star join

o SQL script

- Attribute views

- Analytic views

As attribute and analytic views are flagged as ‘deprecated’ (obsolete) by SAP, we shall only use
calculated views.

11.3.1 Naming convention


The naming convention for a calculated view is as follows:

Format: xxxxx_xxxx

Length: 10

Examples:

ISOP1_YON1 (view specifically for SOP, combining order and notification)

ISDBL_YHI1 (generic view combining billing header and item, can be reused for different
processes)

- Characters 1 – 5 represent the block name

- Sixth character is always underscore

- Seventh character represents the source system, for example ‘Y’ for YSAP, ‘P’ for PSAP, ‘S’ for
SUN, ‘B’ for BRASAP and ‘G’ for Global. In some exceptional cases, ‘X’ can also be used for
cross-application reporting (see also chapter 2, and appendix C).

- Eighth and ninth characters (free text) specify/describe the data in the view, for example ‘HI’ in
case the view combines header and item, or ‘PR’ in case the view combines purchase orders and
requisitions.

- Tenth character is a sequential number
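The 10-character convention above can be split mechanically, as in the following sketch (illustrative only; the function name is hypothetical):

```python
def split_calc_view_name(name: str) -> dict:
    """Split a 10-character calculation view name (e.g. ISDBL_YHI1) into
    the parts described above. Illustrative sketch only."""
    if len(name) != 10 or name[5] != "_" or not name[9].isdigit():
        raise ValueError(f"not a valid calculation view name: {name}")
    return {
        "block": name[:5],         # characters 1-5, e.g. ISDBL
        "source_system": name[6],  # character 7, e.g. 'Y' for YSAP, 'G' for Global
        "content": name[7:9],      # characters 8-9, e.g. 'HI' = header + item
        "sequence": name[9],       # character 10: sequential number
    }

# e.g. split_calc_view_name("ISDBL_YHI1")["content"] -> "HI"
```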

As it was decided to start using Alteryx/PowerBI in 2018, we will also start using calculated views in the
Analytical block. Reasons why we prefer not to have ‘reporting’ directly in the Information block (calc.views):

- (Key) users should only see the Analytical block (not everything)
- We prefer to keep technical names (of the objects) in the Information block (and other names in the
Analytical block)
- Prepare data for reporting; currency conversion, …
- Complex ‘restricted keyfigures’, e.g. costing, where 0VTYPE/0VERSION are used for distinguishing
budget/planning/actuals etc. These we need to prepare for the Alteryx key users.

This means Alteryx users will use calculation views as their datasource. Therefore modelling rules have
changed.

The calculation view needs to have a clear description, as this is used by business to check which
calculation view is to be used in Alteryx (Y_GENERAL.BW_CALCULATION_VIEWS_LIST).

Example: ‘PSAP Sales, contains information from Contract, Order, Delivery and Billing’, ‘YSAP Inventory,
contains the stock on a certain day – IP_CURR for currency conversion, Default USD’ or ‘Primavera
Project costs overview’ etc.

The description should first start with the source system name, then area, short description and last
parameter usage.

ASOP1_PBL1: Contains information from Billing header/item from the PSAP system

ASOP1_PDL1: Contains information from Delivery header/item from the PSAP system

ASOP1_POR1: Contains information from Sales header/item/schedule line/status header/status item
from the PSAP system

ASOP1_POB1: Contains Order & Contract Balance information from the PSAP system

PSAP Inventory (available on DB1/QB1)

AMAM1_PST1: Contains the stock on a specific date. IP_CURR, to be used for currency conversion;
by default this is set to NOK. IP_CALDAY, to be used to specify on which day the stock needs to be
calculated. IP_UOM, to be used for the unit of measure conversion. IP_PERIOD_TYPE, to be used
for the period selection in combination with IP_CALDAY.

AMAM1_PMV1: IP_CURR, to be used for currency conversion; by default this is set to NOK. IP_UOM,
to be used for the unit of measure conversion.

11.3.2 Development principles


When it comes to developing the calc.views in a common way, the following high-level principles shall
be applied:

11.3.2.1 Information Block Calculated Views


Information Block Calculated Views should only include the following:

- Transaction joins (for complex joins, for example > 2 aDSOs, it is allowed to have stacked
information blocks / calc.views).

- Unit of Measurement conversions (cfr 11.3.2.9)

- Time dimensions (add year/month/week etc for a certain date) (cfr 11.3.2.10)


11.3.2.2 Analytical Block Calculated Views


Analytical block Calculated Views should only include the following:

- Join requested (additional) masterdata

- Apply currency conversion (cfr 11.3.2.11)

- Rename fields in semantics (visible in Alteryx) (cfr 11.3.2.5)

IN JOINS YOU NEED TO SPECIFY THE CARDINALITY, FOR PERFORMANCE REASONS.

11.3.2.3 Naming convention of nodes in calculated views


For naming the various nodes (projection, join, …) in a calculated view, apply the following
guidelines:

- Use clear abbreviations


- Use upper case
- Try to keep it short (< 15 characters)

A projection cannot have the same name as the ‘source’ of the projection; otherwise it will return an
error during activation. For example, it is not possible for both the source calc.view and the
projection to be called DCOCC_II1.

For naming of the Semantics block: it is possible to enter a description when clicking on the Semantics
block. It is advised to use that field to describe which data is combined in the information view.


11.3.2.4 Naming convention of fields in calculated views (except semantics in the analytical block)

(For the naming convention of semantics in the analytical block, see 11.3.2.5)

For the fields you move to the output in the projection node (= source fields), you shall rename the
fields to make clear from which source they come. This is required so that during analysis we can
easily see where a field comes from; for example, whether the payer comes from the order header or the item.

For InfoObjects coming from an aDSO, the InfoObject name shall be concatenated with the technical
name of the source aDSO (so object_source);

example:

0DOC_NUMBER_DSDOS_YI1, this is the field 0DOC_NUMBER from aDSO (DSDOS_YI1)

0MATERIAL_T_DSDOS_YI1, this is the text of 0MATERIAL from aDSO (DSDOS_YI1)

So always put the name of the source aDSO at the end.

For navigational attributes coming from an aDSO, we keep the name of the Nav.Attribute, like
0MATERIAL_0MATL_GROUP; with the source behind it.

(Also valid for Union, to know where it comes from).

example:

0MATERIAL_0MATL_GROUP_DSDOS_YI1

0MATERIAL_0MATL_GROUP_T_DSDOS_YI1 (for the text)

So always put the name of the source aDSO at the end.
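The renaming rules above can be sketched as a small helper function (an illustration only; `source_field_name` is a hypothetical name, the field and aDSO names are the examples from this section):

```python
def source_field_name(field: str, source_adso: str, text: bool = False) -> str:
    """Name for a field moved to the output of a projection node:
    the InfoObject (or nav. attribute) name, optionally _T for the text,
    and always the technical name of the source aDSO at the end."""
    suffix = "_T" if text else ""
    return f"{field}{suffix}_{source_adso}"

# InfoObject and its text from aDSO DSDOS_YI1
assert source_field_name("0DOC_NUMBER", "DSDOS_YI1") == "0DOC_NUMBER_DSDOS_YI1"
assert source_field_name("0MATERIAL", "DSDOS_YI1", text=True) == "0MATERIAL_T_DSDOS_YI1"
# A navigational attribute keeps its own name; the source still goes at the end
assert source_field_name("0MATERIAL_0MATL_GROUP", "DSDOS_YI1") == "0MATERIAL_0MATL_GROUP_DSDOS_YI1"
```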

For InfoObjects coming directly from a Masterdata object, it should be the name of the source masterdata
object concatenated with the InfoObject itself (attributes). Exactly how navigational attributes usually are
named (so source_object);

example:

LYS_MATER_LPI_INNER indicates the field LPI_INNER from masterdata object LYS_MATER (exactly as
a nav_attr)

LYS_MATER_LPI_INNER_T indicates the text of field LPI_INNER from masterdata object LYS_MATER


If objects come from another calc.view, the name shall always include the source aDSO (or nav.
InfoObject).

11.3.2.5 Naming convention of semantics in the analytical block


The Semantics in the Analytical block are visible to the Alteryx users.

As a general rule, we take the short text of the InfoObject, without spaces and with capitals to keep it
readable.

Examples:

Customer

CompanyCode

Material

ProfitCenter

For Text, we add ‘_T’

For Attributes, we add ‘_the short text of the attribute’

Examples for text and attributes:

Material_T

Material_MaterialGroup (Max 5 nav attributes per masterdata object)

Material_MaterialGroup_T

For Global, we add ‘_G’, like ProfitCenter_G.

If the same data is sourced from different system types, the naming can be extended with the source
system ID to clarify the source of the data.

Example: ‘YSAP’ (SAP) data mixed with ‘YPPM’ data, e.g. ProjectName_YP


If a characteristic exists on both Header and Item, indicate this with _H or _I.

Example:

ShipTo_H / ShipTo_I

If we have keyfigures with different units/currencies, we can indicate this with a two-character
abbreviation of the unit/currency.

Example:

SalesQty_TO, NetValue_LC
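As a sketch, deriving the semantics name from an InfoObject short text (capitals kept, spaces dropped) plus the documented suffixes could look like this (hypothetical helper, not an official tool):

```python
def semantic_name(short_text: str) -> str:
    """Build the semantics name: short text of the InfoObject,
    each word capitalised, spaces removed."""
    return "".join(word[:1].upper() + word[1:] for word in short_text.split())

assert semantic_name("Company Code") == "CompanyCode"
assert semantic_name("Profit Center") == "ProfitCenter"
# Suffixes per the convention above:
assert semantic_name("Material") + "_T" == "Material_T"                  # text
assert f"{semantic_name('Material')}_{semantic_name('Material Group')}" \
    == "Material_MaterialGroup"                                          # attribute
assert semantic_name("Profit Center") + "_G" == "ProfitCenter_G"         # global
```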

11.3.2.6 Text in calculated views


Texts should be available in calculated views. In the lowest level of your calculated view (information
block), select the texts and organize them in the output (0Material next to the 0Material text). All the
texts should be added to the output and moved to the upper level (analytical block).

11.3.2.7 Currency fields in calculated views


When it comes to currency fields like 0NET_VALUE.CURRENCY, we need to check case by case if they
are needed. In this example, when 0CURRENCY is available, we don’t need 0NET_VALUE.CURRENCY.

11.3.2.8 Attributes in calculated views


Masterdata can be added as navigational attributes from InfoObjects in aDSOs.

It has been defined that there can be at most 5 navigational attributes per masterdata object. Note that
they first have to be marked as navigational, and secondly the flag ‘Use Navigation attributes in
Extract’ needs to be set.


Masterdata can also be added in the analytical block.

In case we would like to ‘join’ data from a masterdata object into a view, we should select it from the
‘SYS’ table, and not compose the masterdata object to HANA, as the latter will generate an attribute
view (which will be phased out).

Masterdata can also be added in Alteryx.


11.3.2.9 Unit conversion


Unit conversion (to an Input Parameter) will be done in the information block (as this is more generic).

The view ITECH_GUC1 can be used. It contains:

- Material unit conversion
- T006 mass conversion

Both work in both directions: if PCE-TNE is maintained in the material unit data, it can also convert
TNE-PCE.

The view can be used in a join (on source system/unit).

Dynamic conversion is also possible with an input parameter.


11.3.2.10 Time conversion


For time conversions, we use the technical calc.view ITECH_GDA1 to convert dates to other formats. The
names of these objects should be suffixed with the new format (_YEAR, _MONTH, _WEEK, …).

Only the date that is used as the basis for the time conversion keeps the source in its name.

Example:

0PSTNG_DATE_DSDOR_YH1

0PSTNG_DATE_YEAR

0PSTNG_DATE_MONTH

0PSTNG_DATE_FERT_SEASON

11.3.2.11 Currency conversion

We need to perform currency conversion (especially for global reporting) in three steps.

(Note that by default the decimal shift is also already done in the system-generated calculation view on
the D-block layer aDSO; however, be aware that this shift happens on the YSAP currency tables, as in the
system-generated calc.view we can’t set the schema. Take this into account, as it can potentially cause
issues.)

Step 1: Set the ‘Default Client’ to ‘100’ and the ‘Default Schema’ to the schema of the respective source
system, ‘Y_TECH_CUR_XXXX’.


Step 2: Flag the decimal shift, based on the local schema (YSAP, PSAP etc.), in the information block,
so that we have the ‘external format’ as output of the information block.

See the PSAP example:


Here you can see that the decimal shift happens on the currency table (TCURX) from PSAP, via SLT.

Only for YSAP should it use the ABAP (SAPPB1) schema.

Step 3: Then the currency conversion itself should happen in the analytical block.


Also here, as this is the calculation view for PSAP, it uses the PSAP currency tables for conversion.

In this case exchange rate type M, which is a local rate (maintained in each source system).

For Global reporting calc.views you could also use exchange rate type BW from YSAP, as this contains
all global rates; exchange rate type BW is only maintained in YSAP!

See for details and another example on global reporting:

https://team.pulse.yara.com/corporate/sapbwonhana/Technical/_layouts/15/WopiFrame.aspx?sourcedoc=/corporate/sapbwonhana/Technical/TeamDocuments/Currency%20Conversion.docx&action=default

Each currency keyfigure will exist twice: once as we get it from the source (typically local or document
currency), and once with a parameter (_IP), with a default value of USD (Yara’s document currency).

Currency conversions are performed in the semantics layer of the Analytical block by using Input
Parameters. Currency conversion is performed as follows:


1. For each measure that requires currency conversion, the measure is duplicated in the semantic
layer of the analytical block, using the same name with the suffix ‘IP’. For example, the
measure NetValue will get an additional duplicated field ‘NetValueIP’.
2. For currency conversion, the following parameters can be set:

Semantic Setting: Comment

Decimal Shift Flag: Flag the decimal shift, based on the local schema (YSAP, PSAP etc.), in the
information block.

Conversion Flag: Set when currency conversion is required, in the analytical block.

Schema for currency conversion: Set to SAPDB1 (for YSAP currency tables). Separate schemas for other
source systems (e.g. PSAP) will be made available.

Client for currency conversion: Set fixed to 100.

Source Currency: Select the column containing the source currency field.

Target Currency: Set to the target currency field (Input Parameter).

Exchange Type: Set to the exchange rate type (can be an Input Parameter).

Conversion Date: Select the column containing the date for the conversion (can be an Input Parameter).
NOTE!! Make sure to select the column through ‘…’; don’t just copy-paste, as this might cause issues
at runtime. If you copy-pasted, you will see a little anchor icon in front instead of the column icon.

Exchange Rate: Leave empty.

Generate result currency columns: Leave un-flagged.

Upon Conversion Failure: Set to NULL.

3. Input parameters need to be used for the following settings (see Appendix D for Input Parameter
naming):
a. Target Currency: mandatory Input Parameter
b. Exchange rate type: preferably an Input Parameter, or set fixed
c. Conversion Date: optional Input Parameter, or a specific column

4. Important:
a. Always set ‘Upon Conversion Failure’ to “Set to NULL”.
b. If you would like to show a column containing the target currency field, create a
calculated column with the Input Parameter in the formula.

Use of currency conversion in Alteryx: add the following statement at the end

('PLACEHOLDER' = ('$$IP_CURR$$', 'NOK'))
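A minimal sketch of how such a PLACEHOLDER clause could be assembled programmatically (the helper name is hypothetical; the `'PLACEHOLDER' = ('$$…$$', '…')` shape is the one shown in the statement above):

```python
def placeholder_clause(params: dict[str, str]) -> str:
    """Build the PLACEHOLDER clause for calc.view input parameters,
    to be appended to the SQL statement used in Alteryx."""
    pairs = ", ".join(f"'PLACEHOLDER' = ('$${name}$$', '{value}')"
                      for name, value in params.items())
    return f"({pairs})"

# Single parameter, as in the Alteryx example above:
assert placeholder_clause({"IP_CURR": "NOK"}) == "('PLACEHOLDER' = ('$$IP_CURR$$', 'NOK'))"
# Several parameters at once, e.g. currency plus unit of measure:
print(placeholder_clause({"IP_CURR": "USD", "IP_UOM": "TO"}))
```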


11.3.2.12 Analytical privileges


Since we are not using analytical privileges on our calculation views (analytical privileges are used for
row-based authorizations, e.g. on profit center), it’s important to change the setting on ALL calculation
views (both in the analytical block package and the information block package) so that they do not use
analytical privileges (see screenshot below). If this setting is not changed, Alteryx power users will get
an authorization error because the system looks for an analytical privilege that has not been created.


12 Processing Event triggers (Transaction SM64)

Naming convention for process chain relevant event triggers

PC_<Technical_Name_process_Chain>

Example:

PC_DF_COPA_EX_REG

PC_DF_COPA_EX_COR

13 APPENDIX A – Information source (module) and Aris Process

See both overviews below. For the PSA, corporate memory, data and some information blocks, the
Source/submodule table has to be used to find the abbreviation for the respective block. For other
information and analytical blocks, it shall be based on the ARIS process (Appendix B).

The table is to be used/filled as follows: in case the source system is SAP, we take the SAP
module/submodule in columns 1/2. In case the source system is non-SAP, it is filled with the name
of the actual source system. This way all objects from a certain source system are easily grouped
together, and the block name gives a clearer naming convention: in the technical name of the
InfoProviders (aDSO) there is only one character available to distinguish the source system name
(which could be too little).

Source System (or SAP module)    (sub)module    Abbreviation (to be used for block name)

SAP SOURCE SYSTEMS:

Cross Application Master Data CAMD

Sales and Distribution Sales Orders SDOR

Sales and Distribution Deliveries SDDL

Sales and Distribution Billing Document SDBL

Sales and Distribution Masterdata SDMD

Sales and Distribution GTM GTMX

Materials Management Purchase Requisitions MMPR

Materials Management Purchase Orders MMPO


Materials Management Inventory MMIM

Materials Management Masterdata MMMD

Logistics Shipments (incl. costs) LESH

Finance Asset Accounting FIAA

Finance Accounts Receivable FIAR

Finance Accounts Payable FIAP

Finance General Ledger FIGL

Finance Masterdata FIMD

Corporate Finance Management    All (no split IHC etc)    CFMX

Controlling    Cost Center Accounting    COCC

Controlling Product Costing COPC

Controlling Internal Orders COIO

Controlling Profitability Analysis COPA

Controlling Masterdata COMD

Controlling    Enterprise Controlling Prof.Center Acc.    COEC

Production Control    PPCO (currently in use for ISAP z-development only)

Project Systems Project Structure PSPS

Project System Finance&Controlling PSFC

Plant Maintenance Orders PMOR

Plant Maintenance Notifications PMNO

Plant Maintenance Controlling PMCO

Plant Maintenance Equipments PMEQ

Authorization Analysis Finance    AUAA

Masterdata Governance Customer MGCU

Masterdata Governance Vendor MGVE


Masterdata Governance Material MGMA

GRC (Governance Risk & Compliance)    Process Control    GCPC

GRC (Governance Risk & Compliance)    Access Control    GCAC

GRC (Governance Risk & Compliance)    Foundation (=overall)    GCFD

EHS (Environment, Health and Safety)    Product Safety    EHSP

NON-SAP SOURCE SYSTEMS (SUN) – separate naming convention – see Appendix A.1 below.

NON-SAP SOURCE SYSTEMS (OTHER)

CashPooler CashPooler CAPO

YPOD YPOD YPOD

Evocon Evocon EVOC

Synergy Synergy SYNY

OPPM YPPM OPPM YPPM OPPM YPPM

P6RP P6RP P6RP

External contribution (temp solution?)    EXCO    EXCO

SUN6 SUN SUN

MOC MOC MOCX

Appendix A.1 SUN naming convention – under construction.


As the setup for SUN will be more table-based, it does not fit our naming convention for other
modules.

Naming convention (proposal):

aDSO’s:

In the PSA block, one aDSO with 19 datasources (1 per unit) – source table names are
AUL_CUST, CHC_CUST, DSH_CUST, etc.

As aDSO names are restricted to only 9 characters, we cannot use the table name in the
technical name of the aDSO.

Therefore the agreement is to have a technical name without meaning for SUN-based aDSOs. For
example:

PSUN00001 (The description of the aDSO should be equal to the table name; like XXX_CUST).

No corporate memory for SUN, as data amount are not huge.

The Data block would be a 1:1 mapping with the PSA block.

Naming: DSUN00001 (key of the aDSO?)

Calc.Views:

It has been agreed that for these objects the target is to follow the existing naming convention:
link to the ARIS model, and user descriptions for fields instead of technical names.


If something is missing, please contact the owner of this document.

For Information & Analytical blocks, we shall align to the ARIS level 2 process. It has been noted that
this can sometimes change; however, the decision is to still use it.

http://sr00972.ad.yara.com/businesspublisher/link.do?login=yara&password=12345&localeid=1033&ph=0000s0l&modelguid=f9d87320-27fd-11e6-4eb5-005056925fde


ARIS Processes (for Analytical (and information) block):

Block abbreviation    ARIS 2nd level process    Description/usage

MAM1    V.20.50 Material Management    General inventory data

TRM1    V.30.10 Transport Management    Supply Chain Operations – Transport Management (i.e. LES)

SOP1    V.30.20 Sales order processing    General Supply Chain (Sales flow information), for all SOP users

SOP2    V.30.20 Sales order processing    KPI information for SOP, only for dedicated users
(authorization purposes)

TRA1    V.30.30 Trading    Currently in use for GTM

PMA1    V.40.10 Production Management    Combined data related to production, i.e. volumes and costs

PRM1    V.40.30 Production Monitoring    Data related to production monitoring, i.e. Evocon, Upbase etc.

CSI1    E.30.50 Continual Service Improvement    Used for pure technical (IT) related information.
Only for the BI team.

COM1    E.10.10 Controlling & Management…    Used for general CO information, for all users working
in accounting?

COM2    E.10.10 Controlling & Management…    Used for (Global) Contribution reporting (dedicated
authorization)

FIA1    E.10.20 Financial Accounting    Used for general FI (GL/AR/AP/AA) information, for all users
working in accounting?

FIA2    E.10.20 Financial Accounting    Used for Internal Control (i.e. GRC) reporting

FTI1    E.10.30 Finance, Treasury & Insurance    Used for Treasury/Cashpooler

FTI2    E.10.30 Finance, Treasury & Insurance    Used to extract Cashpooler data (manual payments),
dedicated for use in Exiger (thus separate authorization)

SAF1    E.20.20 Process Safety    Data related to Safety (Synergy, EHS Product Safety, …)

QSC1    E.20.60 Quality Systems, Certification    General data related to QM, HESQ, etc.

CSM1    E.50.20 Contract & Supplier Management    General data related to procurement, contracts,
requisitions, purchase orders, etc. For all users who will get access to procurement data.

MAE1    E.70.20 Maintenance Execution    General data related to Plant Maintenance, notifications,
PM orders etc. For all users in maintenance.

YOP1    E.900.20 YCN Operations Performance Monitoring    Data specifically for the YCN (Yara Crop
Nutrition) project

PPM1    M.20.30 Project Portfolio Management    Data related to Portfolio, projects, their resources,
costs etc.

SNP1    V.20.30 Supply Network Planning    Data related to YPOD

If something is missing, please contact the owner of this document.


Aris overview:

Management Processes:

Value-Chain Processes:

Example: SOP = Sales Order Processing

Enabling Processes:


CSI = Continual Service Improvement


14 APPENDIX B - Role Naming convention

For single roles:

We use the first 3 characters as follows:

Y = Single Role

R = Reporting system

B = BW

The fourth character will then be A, M or U (Analytical role, Menu role or UserType).


For composite roles:

We use the first 4 characters:

J = Composite role

R = Reporting system

B = Business (or I = IT)

P = targeted role for production system

JRBP-<free text>
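The two prefixes above can be illustrated with a small sketch (hypothetical helpers; the free-text part is an invented example):

```python
def single_role_prefix(fourth: str) -> str:
    """Single role: Y (single role) + R (reporting system) + B (BW),
    then A, M or U (Analytical role, Menu role or UserType)."""
    assert fourth in ("A", "M", "U")
    return "YRB" + fourth

def composite_role_name(free_text: str, business: bool = True) -> str:
    """Composite role: J (composite) + R (reporting system)
    + B (Business) or I (IT) + P (production), then '-<free text>'."""
    return "JR" + ("B" if business else "I") + "P-" + free_text

assert single_role_prefix("A") == "YRBA"
assert composite_role_name("SALES") == "JRBP-SALES"  # 'SALES' is an invented free-text example
```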


15 Appendix C – Source System abbreviations

Find below the source system abbreviations to be used:

Source system    1-letter abbreviation    2-letter abbreviation

GLOBAL Reporting    G    GL

Cross application Reporting    X (meant for more systems but not yet Global reporting)    XX

SAP Systems

YSAP (Yara SAP ECC) Y YS

S/4 HANA Treasury (DH3/QH3/PH3)    3    H3

PSAP (Colombia SAP ECC) P PS

BSAP (New Brazil SAP ECC) B BS

GRC SAP GC GS

ISAP (India SAP ECC) I IS

MOC SAP M MS

BRASAP (Old Brazil SAP ECC) R BR

Non-SAP Systems

SUN4 4 S4

SUN6 S SU

Evocon E EV

OPPM YPPM OM YP

Primavera (P6 database) P PR

Synergi S SY

CashPooler (+for Exiger) A CP

YPOD L PL

External contribution files X (= Cross application) EX


Note that 1-letter abbreviations of non-SAP source systems can be the same. This is because non-SAP
source systems also have the source system name as part of the block name (which is part of the
technical name of the aDSO).

16 Appendix D – Common Input Parameters

These are input parameter names which should be used across the system:

Input Parameter    Infoobject/Field    Default Value    Notes

IP_CURR    0CURRENCY    USD    Create a calculated column IP_CURR_COL to report the selected currency

IP_UOM    0UNIT    TO    Create a calculated column IP_UOM_COL to report the selected UoM

IP_EXCHTYPE    KURST (Exchange Rate Type)    Dependent on dataflow    Dependent on dataflow

IP_CURTYPE    0CURTYPE    10 (Company Code Currency)
