SAP BODS Beginners Guide
By Alok Ranjan
AIM: The purpose of this tutorial is to give a novice good hands-on experience with the BODS product: creating a local repository, configuring a job server, starting basic job development, and executing a job that extracts data from source systems and loads it into target systems after performing transformations, look-ups and validations.
What is BODS: It is an ETL tool, now owned by SAP, used to integrate all types of disparate systems: extracting data from them, transforming it into meaningful information, and loading it into all types of target systems.
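The extract-transform-load cycle that BODS implements graphically can be sketched in a few lines of plain Python. This is purely illustrative; the table, field names and sample data below are invented, and a real BODS job would connect to actual source and target systems rather than in-memory stand-ins:

```python
import csv
import io
import sqlite3

# Hypothetical source data standing in for an extract from a legacy system.
source = io.StringIO("EMP_ID,EMP_NAME,EMP_DEPT\n1,alok,IT\n2,ravi,HR\n")

# Extract: read the delimited source.
rows = list(csv.DictReader(source))

# Transform: clean the data up (here, upper-case the names).
for row in rows:
    row["EMP_NAME"] = row["EMP_NAME"].upper()

# Load: write the transformed rows into a target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (EMP_ID INT, EMP_NAME TEXT, EMP_DEPT TEXT)")
conn.executemany(
    "INSERT INTO EMPLOYEE VALUES (?, ?, ?)",
    [(r["EMP_ID"], r["EMP_NAME"], r["EMP_DEPT"]) for r in rows],
)
print(conn.execute("SELECT EMP_NAME FROM EMPLOYEE ORDER BY EMP_ID").fetchall())
```

In BODS the same three stages appear as a source object, a Query transform and a target object wired together in a dataflow.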
It is tightly integrated with SAP systems, which makes it a very good tool for migrating data from legacy systems to SAP systems with ease, less development effort, and effective debugging and monitoring capabilities.
After BODS is installed on the machine, you can see the following under Start > All Programs.
There are several components installed as part of SAP BODS 3.2 installation.
In more recent releases such as SAP BODS 4.0, there have been improvements to the architecture and components, and enhancements to several transforms.
However, the BODS Designer looks much the same in versions 3.2 and 4.0.
The major enhancement in 4.0 is in the Validation transform, which lets developers switch validation flags on and off easily for one or several columns.
What to launch next: Normally, developers would like to start the BODS Designer and begin developing jobs straightaway, but it is always better to create your own local repository first, so that you have full control over your development environment without any dependency on others.
In order to create your own local repository, you need to know the database in which you have been allocated space quota to create it.
This depends on the database vendor (Oracle, SQL Server, Informix, DB2, etc.) to which you currently have access.
However, the BODS installation ships with a default MySQL database, which you should have installed on your system.
A user/schema and a database need to be created on MySQL, which a DBA can easily do.
Once the user/schema and database name have been assigned to you, you can create the local repository.
The next screen shows the BODS Repository Manager, which lets you create the local repository.
As you need your own local repository for your own development activities, select the Repository type 'Local'.
Now choose the database type/vendor on which you want to create the repository.
Since, as mentioned earlier, the MySQL database ships with the BODS product, it is convenient to create the repository there.
All you need to know is your user name, password and database/schema name.
If you want a dedicated DSN for yourself, you should be able to add your own.
Select the driver 'MySQL ODBC'. (Note that the version can differ, so ask the admin team if there is any discrepancy; all you need is the MySQL driver.)
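For orientation, a DSN simply bundles the driver name with the connection details. A DSN-less ODBC connection string for the same MySQL driver would look roughly like the following; the driver version, server, database, user and password here are placeholders, not values from this tutorial:

```
Driver={MySQL ODBC 3.51 Driver};Server=localhost;Database=bods_repo;Uid=bods_user;Pwd=secret;
```

Creating a DSN in the ODBC Data Source Administrator stores the same pieces of information under a single name.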
Now supply the information as below:
After providing all the information, test the connection.
Now launch the Repository Manager again and check the ODBC drop-down for the newly created DSN name.
If the name is not visible, go back to ODBC and re-configure.
Select the flag 'Show details'. It shows the version and all the other details while the local repository is being created.
You need to wait a short while for the repository to be created.
Go to All Programs > SAP BusinessObjects XI 3.2 > SAP BusinessObjects Data Services > Data Services Designer
Provide the same information you used when creating the local repository a while ago.
1) Local Object Library: It contains the reusable objects such as jobs, dataflows, datastores and file formats.
2) Project Area: It contains all the jobs that are ready for execution or that you are currently working on.
3) Workspace area: The bigger space, where you actually design your job.
Start development
Right-click on the empty space in the local object area, or click 'Create Project' under Getting Started on the main page.
On the right-hand side, you can see the list of icons.
Click once on the dataflow icon, then click once on the workspace area (the bigger area).
Name the dataflow as you wish; it should start with DF_.
In this example we are using a flat file as the source, so we will create a flat file format.
Click under 'Field name' and add the field names.
Under 'Data type', choose the relevant data type: 'int' for EMP_ID, 'varchar' for EMP_NAME, etc.
On the left-hand side, click the 'Root directory' folder and browse to the path where the input file is placed.
Click 'File name(s)' and you should be able to browse to the input file.
Since we have already defined the flat file format structure with field names, we can ignore the first-row header (shown in the next screen).
When you try to save and close the file format editor, it gives you the above warning.
If you have defined the structure with field names in the file format editor, select 'No'; otherwise your field names will be overwritten with the header row from the input file.
Since we have already defined the file format with our own field names, we do not need the row header provided in the input file.
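What the file format editor sets up can be pictured in plain Python: the format supplies the field names and data types, so the header row inside the file itself is redundant and gets skipped. The file contents and the third field (EMP_DEPT) below are invented for illustration:

```python
import csv
import io

# Hypothetical contents of the input file; the first row is a header
# that duplicates the field names already defined in the file format.
employee_txt = "EMP_ID,EMP_NAME,EMP_DEPT\n101,John,Sales\n102,Mary,Finance\n"

# The file format editor's grid: field names plus their data types.
fields = [("EMP_ID", int), ("EMP_NAME", str), ("EMP_DEPT", str)]

reader = csv.reader(io.StringIO(employee_txt))
next(reader)  # skip the row header, since the format already defines names

# Apply the declared names and types to each data row.
records = [
    {name: cast(value) for (name, cast), value in zip(fields, row)}
    for row in reader
]
print(records)
```

This mirrors choosing 'No' in the warning dialog: the format's own field names win, and the file's header row is simply discarded.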
Drag and drop the file format into the workspace area and make it a source.
Click once on the 'Query' icon on the right-hand side, then click once on the workspace.
Name the query and join it to the source file format.
You will get the above screen, where you need to provide the inputs.
Datastore type: Database
Database type: MySQL (in this case, as we are using the MySQL database for the staging tables)
Once again, provide the same login credentials and database name that you provided earlier when creating the local repository.
If you are using another database for the staging tables, use the information provided by the DBA.
You see the datastore created.
Now click once on the template table icon on the right-hand side and click once on the workspace.
Define the name of the target table as you wish.
Click OK.
But we still have not mapped the fields from source to target.
You want to map all three fields from source to target.
Drag and drop each field from the left pane to the right pane, one by one.
Now validate the design to check for errors or warnings.
[Source File:"EMPLOYEE.txt"(EMPLOYEE)]
Job Server error (the Job Server may not be responding). The directory for the file <C:/EMPLOYEE.txt>
cannot be validated to ensure that it is valid at design time. Ensure that the directory is valid by run-time.
(BODI-1110017)
Remember that you have the BODS server components installed on your machine, so you are also responsible for configuring the Job Server yourself.
Click on ‘Add’.
In the ‘Repository Information’ section, provide the local repository details.
Click on ‘Apply’.
The repository name will be reflected in the associated repositories list.
Click on ‘OK’.
You can now see the name of the Job server configured.
Click on OK.
Click on ‘Restart’ in the window
Click on OK to restart Data Services service.
Click on OK.
You find the job executed successfully!
Source first:
Target now:
So we find that the data has been loaded from the flat file into the database table.
SAP-BODS Integration using BAPI
1. Introduction:
We can call SAP ERP or R/3 Remote Function Call (RFC) enabled functions, including Business Application Programming Interface (BAPI) functions, from queries inside data flows. A Remote Function Call is exactly what the name suggests: you call a function, but it is remote rather than part of your own code. This is the standard way third-party tools access SAP, for both reading and writing. RFCs meant for business use have function names starting with BAPI_, like BAPI_ABSENCE_CREATE or BAPI_ABSENCE_GETDETAIL, which respectively create and display employee absence records in HR.
If you design data flows with BAPI calls against one version of SAP ERP or R/3 and later change the datastores to a newer version, Data Services allows this without the need to re-import the BAPI. Any new parameters added to the function call, including additional columns in table parameters, are added automatically to the call and filled with NULL values. Thus Data Services lets you design jobs that are portable between SAP ERP or R/3 systems. After you import the metadata for an SAP ERP or R/3 function, the function is listed in the Functions category of the ERP or R/3 datastore. You will also see the function in the function wizard, listed under the datastore name.
Data Services supports tables as input and output parameters for SAP ERP or R/3 RFC and BAPI
functions. The function import process automatically includes the metadata for tables included as function
parameters.
2. BAPI Call:
Which BAPI to call depends on the requirement; in this document we consider the example of creating a contract using 'BAPI_CONTRACT_CREATEFROMDATA'. For this BAPI, we need to send the mandatory import and table parameters to create the contract.
To specify a table as an input parameter to a function, the table must be an input to a query, either as a top-level input or nested under the top level. The table must also be available in the FROM clause of the context where you call the function. Data Services maps columns in the input schema by name to the columns in the table used as the function input parameter; you need only supply the columns the function requires. At validation, if Data Services encounters type mismatches between the supplied columns and the function signature, it attempts to convert the given type to the expected type. For type mismatches it cannot resolve, Data Services produces validation errors.
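The name-based mapping and type conversion described above can be sketched as follows. This is a toy model of the behaviour, not Data Services internals, and the parameter names (DOC_TYPE, QUANTITY) are hypothetical:

```python
def map_columns(input_row, signature):
    """Map input columns by name onto the expected parameter types,
    converting values where possible and failing validation otherwise."""
    mapped = {}
    for name, expected_type in signature.items():
        if name not in input_row:
            continue  # only the columns the function requires are supplied
        value = input_row[name]
        if isinstance(value, expected_type):
            mapped[name] = value
        else:
            try:
                mapped[name] = expected_type(value)  # attempt the conversion
            except (TypeError, ValueError):
                # mirrors a design-time validation error on a bad mismatch
                raise ValueError(f"validation error: cannot convert {name!r}")
    return mapped

# Example: the function expects DOC_TYPE as str and QUANTITY as float;
# the string "10.5" is converted, and the unused column is ignored.
signature = {"DOC_TYPE": str, "QUANTITY": float}
row = {"DOC_TYPE": "WK1", "QUANTITY": "10.5", "UNUSED_COL": 1}
print(map_columns(row, signature))
```

Columns missing from the signature are simply ignored, just as you need only supply the columns the function requires.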
Note: This entire document refers to an IDES SAP system; refer to the screenshot below for details.
3. Importing the BAPI:
The following process shows how to import the BAPI.
4. Processing the BAPI:
In this BAPI, the mandatory import and table parameters required to create a contract are:
We can call this BAPI in a Query transform and give the inputs as shown below. Before calling the BAPI, create an output schema for passing the table parameters, as shown below.
Mapping inside the Query from the above screenshot.
• Padding values
To determine the data requirements of various SAP ERP or R/3 functions, you can read the function requirements in the SAP GUI transaction screens:
• BAPI and RFC source, input and output parameters: SE37
You can also determine appropriate values, such as language-specific code values, by looking at the table where the data is ultimately stored.
One solution would be to RFC-enable the function. Since READ_TEXT is an SAP-provided function, the better idea is to write a new function that does nothing but call it, and enable RFC on that wrapper. For example, to call the function module 'READ_TEXT', which is a normal (non-RFC) function module, copy the function module and make the copy RFC-enabled. The following steps are required to enable RFC.
Create a new function module 'ZREAD_TEXT' in transaction SE37 and assign it to the function group 'ZTEST_GRP' as shown.
As a table parameter is available in this function module, we need to normalize the schema by calling one more query, as shown below.
Once the job is executed, we can see the result extracted from the function module.
SAP-BODS integration using IDOCS
1. Introduction:
Imagine you want to build a reporting solution, not a data warehouse in pure terms. Somebody opens the balance sheet report and does not like the way it looks, so a correcting booking is created in SAP, and then... he has to wait the entire night until the data warehouse is refreshed. Another option would be to configure SAP so it sends all changes to the reporting database immediately. That is what IDOCs are for.
On the downside, configuring SAP to actually send changes is quite a challenge. If an IDOC is already provided by SAP it is not that bad, but if you want to distribute changes for data SAP never thought about, you have to write your IDOC from scratch and hook it into every single application dealing with that data.
The basic problem is the IDOC design. It is not like a database trigger, which is independent of all applications and fires no matter who made a change or how. IDOC creation happens at the application level, so a common ABAP call has to be made in every single application.
Inside SAP, a couple of settings have to be made; this configuration is mandatory for both sending and receiving IDOCs. The following configuration steps are involved in sending IDOCs from BODS to SAP ECC.
Defining logical systems is done in SAP ECC with the transaction 'SALE'.
Note: This entire document refers to a SAP IDES system; refer to the screenshot below for details.
Enter the transaction 'SALE' to define the logical system and its assignment. As we do not have authorization for this configuration, we will request the Basis team to create it. Hence we used the following logical system, as shown below.
Click where shown above to define the logical system. When the popup below appears, just continue.
Enter the transaction ‘SM59’ to configure the RFC destination in SAP ECC as shown below.
Now check the connection from the SAP side: click the connection test, as shown below.
If the connection is fine then the following screen will appear.
Enter the transaction ‘WE21’ to create the Port as shown. The RFC destination will be defined here.
2.2 Define Partner Profile
Finally, we can configure SAP to route all IDOCs of a given type to DI by running the transaction 'WE20'. Click 'New', type in the partner number created in '/nsale' (we called it ID3CLNT801), and set its type to 'LS' (Logical System). Then click Save.
Configure the inbound parameters, as SAP needs to receive the IDOCs.
Note: To send IDOCs from SAP to BODS, configure outbound parameters; vice versa, configure inbound parameters to receive IDOCs.
In this case the customer is created from data coming from BODS, so the configuration should be inbound, as shown below. Click Add.
Enter the message type 'DEBMAS' and the process code 'DEBM', then click Save.
Note: The following are the IDOC types and message types, by category.
General message types / IDoc types / BAPIs
3. Sending IDOCs:
Sending an IDOC to SAP is an asynchronous task. One or many IDOCs are posted to SAP, just like inserting into a database table, and get stored in the SAP database. Later, SAP starts to process the IDOC and, for example, creates the new material record. Once processing is done, successful or not, the status of the IDOC is updated in the SAP table EDIDC.
As a consequence, unlike with BAPIs, you get no return code from the processing. Your only choice is to periodically query the above status table and copy the status back into the sender's table. If an immediate return code or the actual error message is required, you have to use BAPIs. On the other hand, IDOCs are supposed to be faster while still running full data-consistency checks.
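The asynchronous round trip can be modelled in a few lines of Python. EDIDC is reduced to a plain dict here, and the status codes used (64 for "ready for processing", 53 for "posted", 51 for "error") are the commonly seen values, stated as assumptions rather than a complete list:

```python
# Toy stand-in for the SAP status table EDIDC: IDOC number -> status code.
edidc = {"0000000001": "64", "0000000002": "64"}  # 64: ready for processing

def sap_processes_idocs():
    """Simulate SAP asynchronously processing the queued IDOCs:
    one posts successfully (53), one fails (51)."""
    edidc["0000000001"] = "53"
    edidc["0000000002"] = "51"

def poll_status(sender_table):
    """Periodically copy each IDOC's status back into the sender's table,
    since the sender gets no immediate return code."""
    for idoc_no in sender_table:
        sender_table[idoc_no] = edidc[idoc_no]

# The sender posts two IDOCs and initially knows nothing about the outcome.
sender_table = {"0000000001": None, "0000000002": None}
sap_processes_idocs()   # happens later, inside SAP
poll_status(sender_table)
print(sender_table)
```

The key point the sketch captures is the decoupling: the post and the status check are separate steps, with the status living only in SAP until the sender polls for it.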
Before sending the IDOCs, we need to create a SAP datastore and then import the required IDOC; in our case, 'DEBMAS06' to create the customer.
The datastore created is referred to here as 'SAP_ECC', shown above; then import the IDOC 'DEBMAS06' as shown below.
Click 'Import By Name' and give the IDOC name as shown below.
The following flat file information is required to create the customer through IDOCs. A Row Generation transform is required to supply the configuration input.