Tips: Transports, Hierarchies, Transactions, Etc. V1
Thanks all for your help. We had to take SAP's help as we are going live this week. SAP
gave the following solution.
We were able to load the data after we ran these programs.
Re-import transports
Hierarchies in Authorizations
RSECADMIN is the transaction for maintaining analysis authorizations. You use the
standard InfoObjects 0TCAIPROV, 0TCAVALID and 0TCAACTVT to set up your
analysis authorizations. When you work with an authorization-relevant characteristic
that has a hierarchy, you also have to transport the hierarchies for each InfoObject
from the development system to the quality system.
If you want to build a left outer join with a DSO on the left side and an InfoCube on the
right, do the following.
1. Build an InfoSet that inner-joins the DSO and the InfoCube.
2. Build a MultiProvider that includes both the above InfoSet and the original DSO. In
the characteristic assignment of the MultiProvider, assign the same characteristics to
both the InfoSet and the DSO.
As a MultiProvider is a union, you will end up with a union between the original DSO and
the inner join of the original DSO and the InfoCube. Effectively this constitutes a left
outer join between the DSO and the InfoCube, with the DSO on the left side.
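The union logic can be illustrated with a minimal ABAP sketch (all table and field names are invented for illustration only; in BW the union and the join are of course done by the MultiProvider and the InfoSet, not by your own code):

```abap
* Illustration only: why "DSO union (DSO inner join Cube)" behaves
* like a left outer join. All names below are invented.
TYPES: BEGIN OF ty_dso,
         doc(10)   TYPE c,
         status(1) TYPE c,
       END OF ty_dso,
       BEGIN OF ty_cube,
         doc(10)   TYPE c,
         amount    TYPE p DECIMALS 2,
       END OF ty_cube,
       BEGIN OF ty_row,
         doc(10)   TYPE c,
         status(1) TYPE c,
         amount    TYPE p DECIMALS 2,
       END OF ty_row.

DATA: lt_dso   TYPE STANDARD TABLE OF ty_dso,
      lt_cube  TYPE STANDARD TABLE OF ty_cube,
      lt_union TYPE STANDARD TABLE OF ty_row,
      ls_dso   TYPE ty_dso,
      ls_cube  TYPE ty_cube,
      ls_row   TYPE ty_row.

* Union part 1: every DSO row, key figures initial.
LOOP AT lt_dso INTO ls_dso.
  CLEAR ls_row.
  ls_row-doc    = ls_dso-doc.
  ls_row-status = ls_dso-status.
  APPEND ls_row TO lt_union.
ENDLOOP.

* Union part 2: the InfoSet's inner join of DSO and InfoCube.
LOOP AT lt_dso INTO ls_dso.
  LOOP AT lt_cube INTO ls_cube WHERE doc = ls_dso-doc.
    ls_row-doc    = ls_dso-doc.
    ls_row-status = ls_dso-status.
    ls_row-amount = ls_cube-amount.
    APPEND ls_row TO lt_union.
  ENDLOOP.
ENDLOOP.

* Aggregated by doc/status in the query, lt_union behaves like
* DSO LEFT OUTER JOIN Cube: DSO rows without a cube match still
* appear, with an initial amount.
```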
Go to SPRO in BI and enter values for SAP NetWeaver -> Business Intelligence ->
General BI Settings -> Set Material Number Display.
Tip: The value here should be the same as the value displayed in transaction OMSL in the
source system.
Analysis Authorizations
Recommendation
Features
You can then assign this authorization to one or more users. Since the authorizations are
TLOGO objects (analytics security objects), they can also be transported to other
systems.
Solution?
The test query is on a MultiProvider (FM data) with Funds Center in the characteristic
restrictions area and in the rows. In the characteristic restriction area, Funds Center
carries an authorization variable (with the Selection Options setting on the Details tab);
note this carefully. We have added a simple key figure to the rows.
We've created WEBIs on Bex Queries, we're using Web Services (created on the webi
in the WEB Intelligence Rich Client) to prepare the data for transmission to the
dashboard. Within the dashboard, we're using Web Service Query (query as a web
service) (data --> new connection) to connect the web services. We set our components
to refresh before opening so we assumed it would refresh the data for the user logging
into the dashboard and only bring values permitted by the underlying BW Analysis
Authorizations on the query.
We were wrong. We determined we need to set the Refresh on Open checkbox that
appears under the advanced button in the webi save as dialog. As a result, our simple
test query now refreshes from within the WEBI Rich Client and displays only the values
permitted by the Analysis Authorization.
The WebI report using this connection prompts correctly and only shows the values the
users have access to in the filter.
I put in an SAP ticket, but the only thing they could tell me was that only Analysis for
OLAP uses BW authorizations correctly; they couldn't give me any direction on the other
tools, or say whether they even consider it a problem that will be corrected in WebI.
It was kind of strange, considering the InfoObject filter on the query did work correctly
in WebI by only showing what the users have access to, but failed during report
execution. It seems like it would be a simple fix on their end.
(The restricted values in prompt LOV were correct, but if I refreshed it without the
restricted value (*) I got the error. If I filtered one of the values, everything was ok.)
Now I have two BICS variables, one for the prompt as before and one authorization
variable.
As per SAP Note 1812327 - Error:" BW System <system ID> returned state:
USER_NOT_AUTHORIZED (WIS 00000)" is displayed in Webi when restricted SAP
user doesn't provide an input value for an optional BW variable with analysis
authorization
1) Turn on logging within RSECADMIN for the ID you're testing. That log will be
infinitely more helpful than SU53, which will always show a failure on 0BI_ALL. The
log will usually show you exactly where the problem is.
2) If the log doesn't help, try changing the restriction in the ZEMPLOYEE analysis auth
to a star. If the query still fails, then the problem is likely some other field you didn't
realize was set to auth-relevant. If the query runs, then the variable in the query isn't set
up to work correctly with the restricted variable in the analysis auth. That happens a lot.
2) You have to create an authorization-relevant variable in BEx for 0COSTELEMENT. Make
it not ready for input if the user does not have to select values in a popup; if the user
has to select values, make it ready for input. In either case the variable itself is
mandatory.
3) Also add the InfoObject 0TCAKYFNM in the authorization in RSECADMIN and set its
value to *.
Here is the solution we got for the analysis authorization check from Webi report.
SAP published Note 1812327 for this issue. However, I didn't use any authorization
object; I solved the issue with a user exit. Check the following steps:
- Create a BEx variable of type customer exit, not ready for input, in the characteristic
restrictions area in BEx.
- Write a customer exit in CMOD to read the mapping data from the ODS.
- Try with a user to see how the data is displayed in BEx.
- Create a WebI report using this OLAP connection and this BEx query.
- Run the WebI report; you will see that the data in BEx and WebI is the same.
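A minimal sketch of such a customer exit (enhancement RSR00001, include ZXRSRU01). The variable name ZV_CE_MAP, the DSO active table /BIC/AZMAPDSO00 and its fields are invented placeholders; the real names depend on your mapping DSO:

```abap
* Sketch only: fill a non-input-ready customer exit variable from a
* mapping DSO. ZV_CE_MAP, /BIC/AZMAPDSO00 and the field names are
* invented placeholders.
DATA: ls_range TYPE rrrangesid,
      lt_map   TYPE STANDARD TABLE OF /bic/azmapdso00,
      ls_map   LIKE LINE OF lt_map.

CASE i_vnam.
  WHEN 'ZV_CE_MAP'.
*   I_STEP = 1: the exit runs before the variable popup, which is
*   the call that matters for a variable not ready for input.
    IF i_step = 1.
*     Read the mapping entries for the executing user from the
*     active table of the ODS/DSO.
      SELECT * FROM /bic/azmapdso00 INTO TABLE lt_map
        WHERE /bic/zuser = sy-uname.
      LOOP AT lt_map INTO ls_map.
        CLEAR ls_range.
        ls_range-sign = 'I'.
        ls_range-opt  = 'EQ'.
        ls_range-low  = ls_map-/bic/zvalue.
        APPEND ls_range TO e_t_range.
      ENDLOOP.
    ENDIF.
ENDCASE.
```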
Workarounds:
https://wiki.scn.sap.com/wiki/display/BI/The+authorization+Log+in+RSECADMIN
http://scn.sap.com/people/kamaljeet.kharbanda/blog/2009/02/26/step-by-step-sap-bi-
security
http://www.sapsecuritypages.com/authorization-trace-in-bw/
0tcakyfnm
Cause
Resolution
In Bex query designer, define the authorization variable into "Characteristic Restriction"
Year
2117325 - Error: "The action cannot be performed. (WIS 30650)" while opening the
Web Intelligence report based on UNX universe in Web Intelligence Rich Client.
Symptom
Error "The action cannot be performed. (WIS 30650)" is thrown while opening the
report based on UNX universe.
While creating a new report based on .UNX universe, error: "An error occurred while
calling ‘getSessionInfosEX’ API (Error: ERR_WIS_30270) (WIS 30270)" is thrown.
These errors are only seen from Web Intelligence Rich Client.
Only Web Intelligence Rich Client is installed on the client machine (no other client tools).
Environment
Scenario 1:
Scenario 2:
Cause
During installation of the client tools, some of the "Database Access and Security"
components were not selected.
Resolution
Perform a "Modify" installation on the client machine and, on the 'Select Features' page,
select all the "Database Access and Security" components.
The DSO loads cannot be delta loads because it does not delete the PSAs.
Variants in APD queries
When transporting, the variants must be created again via RSRT in the target system.
http://scn.sap.com/docs/DOC-48579
VARID
RSRVARIANT
SE38: RSDS_DATASOURCE_ACTIVATE_ALL
RS_TRANSTRU_ACTIVATE_ALL
Schedule:
You use this function to schedule and execute the process chain in the background, in
accordance with the settings in the start process.
Execute synchronously:
You use this function to schedule and execute the process in dialog mode, instead of in
the background.
The processes in the chain are processed serially using a dialog process. The system
writes a log entry for each process. You can display the chain run in the log view.
They must be created via BEx Analyzer. Remove the checkmark from the user variant and it
will appear in table RSRPARAMETRIZA.
ZPI_BW_C4C_TRACKING_CONF
Mass modification of users
Transaction SU10
https://apps.support.sap.com/sap/support/knowledge/public/en/1922053
Firstly, make sure you have no old Java installs or add-ons lingering on your system.
Check in Programs and Features (assuming you're using Windows 7) and uninstall Java
completely. Check the add-ons in IE, show 'All add-ons', and remove them manually if
required. Do a clean-up of the computer's temp files and cache; I used a program called
CCleaner. Also check c:\users\<username>\app data\locallow\ and delete the 'Sun' folder.
Then reboot the machine.
- click continue
Now try again; you will need to enable the add-on when prompted on opening IE for
the first time. Also, when accessing BI Web Intelligence for the first time, I placed a
tick in the 'do not show this again' box.
/H (debugging mode)
GD-EDIT
GD-SAPEDIT
Set these two variables to X. Step to the end of the debugging session, delete the lines
that need to be deleted, and save.
Just put your query ID in the field COMPID and execute. Then select the duplicate record
for the specific query in the table and delete the record (Shift+F2).
https://blogs.sap.com/2012/06/27/dynamic-determination-of-file-name-in-ohdsapds/
https://scn.sap.com/thread/1720012
http://scn.sap.com/docs/DOC-29556
HANDLING DYNAMIC FILES IN AL11 in InfoPackages
program filename_routine.

* Global code
*$*$ begin of global - insert your declaration only below this line *-*
* Enter here global variables and type declarations
* as well as additional form routines, which you may call from the
* main routine COMPUTE_FLAT_FILE_FILENAME below
* TABLES: ...
* DATA:   ...
*$*$ end of global - insert your declaration only before this line *-*

* ---------------------------------------------------------------------
form compute_flat_file_filename
  using    p_request     type rsreqid
           p_infopackage type rslogdpid
           p_datasource  type rsoltpsourcer
           p_logsys      type rsslogsys
  changing p_filename    type rsfilenm
           p_subrc       like sy-subrc.
*$*$ begin of routine - insert your code only below this line *-*
* This routine is called by the adapter when the InfoPackage is executed.
  DATA: lit_dir_list TYPE STANDARD TABLE OF epsfili INITIAL SIZE 0, "files table
        wa_dir_list  LIKE LINE OF lit_dir_list.
  DATA: lv_dir_name  TYPE epsf-epsdirnam.                           "directory name

  lv_dir_name = '\\BEHNWBI011\sapmnt\BWFILES\'.

* Fetch all the files stored in the directory that match the mask.
  CALL FUNCTION 'EPS_GET_DIRECTORY_LISTING'
    EXPORTING
      dir_name               = lv_dir_name
      file_mask              = 'PresupuestoVenta_*'
    TABLES
      dir_list               = lit_dir_list[]
    EXCEPTIONS
      invalid_eps_subdir     = 1
      sapgparam_failed       = 2
      build_directory_failed = 3
      no_authorization       = 4
      read_directory_failed  = 5
      too_many_read_errors   = 6
      empty_directory_list   = 7
      OTHERS                 = 8.
  IF sy-subrc <> 0.
*   Optionally handle the error here (e.g. set p_subrc and leave).
  ENDIF.

* Take the newest file: the names sort descending by timestamp suffix.
  SORT lit_dir_list DESCENDING BY name.
  READ TABLE lit_dir_list INTO wa_dir_list INDEX 1.
  IF sy-subrc = 0.
    CONCATENATE lv_dir_name wa_dir_list-name INTO p_filename.
  ENDIF.
  p_subrc = 0.
*$*$ end of routine - insert your code only before this line *-*
endform.
DATA: BEGIN OF itab OCCURS 0,
        data(100000) TYPE c,
      END OF itab,
      wa LIKE itab.

DATA: file_input  TYPE string,
      file_output TYPE string,
      timestamp1  TYPE string,
      timestamp   TYPE timestamp.

file_input = '\\172.16.5.201\sapmnt\BWFILES\TrackingConfirmado.csv'.

* Build the output file name with a timestamp suffix.
GET TIME STAMP FIELD timestamp.
MOVE timestamp TO timestamp1.
CONCATENATE '\\172.16.5.201\sapmnt\BWFILES\TrackingConfirmado_'
            timestamp1 '.csv' INTO file_output.

OPEN DATASET file_input FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc NE 0.
  MESSAGE e999(za) WITH 'Error opening file' file_input.
ENDIF.

DO.
* Read each line of the file individually.
  READ DATASET file_input INTO wa.
  IF sy-subrc NE 0.
    EXIT.
  ELSE.
    APPEND wa TO itab.
  ENDIF.
* Perform processing here
* .....
ENDDO.

* Write the collected lines to the timestamped output file.
OPEN DATASET file_output FOR OUTPUT IN TEXT MODE ENCODING NON-UNICODE.
LOOP AT itab INTO wa.
  TRANSFER wa TO file_output.
ENDLOOP.
CLOSE DATASET file_output.

* Close the input file before deleting it.
CLOSE DATASET file_input.
DELETE DATASET file_input.
https://blogs.sap.com/2012/06/27/dynamic-determination-of-file-name-in-ohdsapds/
Applies to:
SAP BI 7.0. Will also work on SAP BI 3.5. For more information, visit the Business
Intelligence homepage.
Summary
This article gives you a way of generating files with dynamic file names using Open
Hub Destination (OHD)/ Analysis Process Designer (APD) in the application server.
Table of contents
Introduction
Scenario
Steps to be followed
Introduction
Often, data from BW must be sent to various legacy systems, or to business users, in the
form of flat files. These flat files are generated as per the requirements and placed in
a specific directory on the BW application server with a specific naming convention.
Here we deal with dynamic determination of the file path or file name using a logical
file name in OHDs/APDs.
Scenario
When using OHDs or APDs, there might be a requirement to place the generated file in a
specific directory on the application server. The naming convention of the file may also
contain a dynamic parameter such as a timestamp or counter to distinguish the files
generated by the same OHD or APD every day. The timestamp used in the file name should
be a logical one instead of the system date and time.
When the loads to the cube are completed, the delta/full data is to be placed on the
application server. An OHD is used to generate this file. The file generated by the OHD
has the naming convention "ZXXXX_21062011163045.csv", where ZXXXX is the fixed
component and 21062011163045 is the dynamic component, i.e. the time when the last load
to the cube happened.
Steps to be followed
1. Create a logical file path, i.e. the path where the file needs to be stored on the
application server.
2. Create a file exit function module, which is used for filling the dynamic parameter in
the file name.
3. Create a logical file name that uses the logical file path and the file exit.
4. Use the logical file name in the OHD, to generate files in the specified path on the
application server with the specific naming convention.
To define the logical file path, go to transaction FILE, select "Logical File Path
Definition" and choose "New Entries".
To assign the logical file path to a physical path, select the new entry, click
"Assignment of Physical Paths to Logical Path" and choose "New Entries".
Enter the syntax group, which has to be the same as the OS of the application server
(for example, Windows NT, UNIX, etc.), and also enter the physical path where the file
has to be stored on the application server. The reserved word <FILENAME> is to be used
in the physical path as a placeholder for the file name. There is a list of other
reserved words, such as <OPSYS>, <SYSID> and <DATE>, which are replaced with their
current values at runtime. Save the entry.
Now, if this logical file path is used in the logical file name, the generated files
will be placed in the physical path, i.e. DIR_HOME.
When generating a file with an OHD/APD without a logical file name, the files are stored
in the DIR_HOME directory by default. A logical file name gives us the flexibility to
store the files in any directory we require. In this example, the files are saved in the
DIR_HOME directory itself.
The physical file name must contain the timestamp, which is a runtime parameter that
needs to be filled during execution. To do this, we use the reserved word <F=name> while
defining the logical file name, and a corresponding user-defined file name exit function
module named FILENAME_EXIT_name must be created. The return value of this function
module replaces the reserved word in the file name.
Following is a list of reserved words and the corresponding function modules that are to
be created.
The function module must satisfy the following requirements:
An export parameter with the name "OUTPUT" must exist. This parameter may not be
structured.
The FM must be created before the logical file name; otherwise we will get an
error message.
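A minimal sketch of such a file name exit, assuming the reserved word <F=TIMESTAMP> is used in the logical file name. The article fills the value from the last load timestamp in RSBKREQUEST; this sketch simply uses the current system timestamp:

```abap
FUNCTION filename_exit_timestamp.
*"---------------------------------------------------------------------
*"  EXPORTING
*"     VALUE(OUTPUT)    "unstructured, as required by the file exit
*"---------------------------------------------------------------------
* Sketch only: the returned value replaces <F=TIMESTAMP> in the
* logical file name at runtime. FILENAME_EXIT_TIMESTAMP is an
* invented example name; the suffix must match the reserved word.
  DATA lv_ts TYPE timestamp.

  GET TIME STAMP FIELD lv_ts.
  output = lv_ts.
ENDFUNCTION.
```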
OHDs are created with the required fields. Logical file name is used in the OHD that is
created.
Give “Type of the file name” as “Logical file name” and “Appl Server File Name” as
the logical file name that is created (Y_FILE_MULTIPLE)
Results
On executing the OHD, the files are generated in the application server. The application
server can be accessed using tcode AL11. Go to the directory DIR_HOME. The file is
generated with the file name as ZOPT_20120611141959.7140000_G1.csv
Where the timestamp is the last load timestamp of the cube, which is present in the
RSBKREQUEST table.
Note:
1. The logical file name created can be used in APDs when the data target of an APD is
"File".
2. The reserved words <F=name>, <Z=name>, <Y=name> can also be used in the logical file
path definition along with a corresponding file exit function module. By doing so, the
path in which the file has to be stored can be determined dynamically.
HOW TO RECOVER LOGISTICS DELTA DATA
https://blogs.sap.com/2013/10/10/full-repair-requests-for-lo-extractors/
Full Repair Requests: A Full Repair extraction request is a request for data that has previously
been extracted and loaded to a target InfoProvider. These requests can be updated in every target
InfoProvider, even if that target already has the data from an initialization or Init w/o Data
Transfer for this DataSource.
Need for Full Repair Requests: Sometimes delta extractions don't extract all the delta records, or
the process fails and can't be restarted; to fill this gap we need Full Repair requests. We might
even need to re-synchronize the data between BW and the source system. These requests are also
useful when we are initially extracting large volumes of data: we execute an Init w/o Data
Transfer and then execute multiple parallel InfoPackages that are Full Repair requests with
specific selection criteria.
LO Extractors
Logistics involves the movement/flow of material, people, money, energy and information from a
point of origin to a point of consumption, so as to meet customer requirements for goods and
services. SAP ERP has a number of Logistics (LO) applications, so SAP provides a number of
DataSources related to Logistics. The extraction for these DataSources follows a well-defined
process that uses a special framework: the Logistics Cockpit.
LO Cockpit:
DataSource: A DataSource is a structure in the source system that specifies the method for
transferring data out of the source system. It is created in the source system and replicated to
the BW system. With SAP ERP and other SAP systems as source, we have a large number of
DataSources offered by SAP which support a wide range of analysis requirements. But when our
requirements are not met by SAP Business Content, we can create a DataSource based on our own
logic. Such a DataSource is known as a Generic DataSource.
All the DataSources belonging to logistics can be found in LO Cockpit grouped by their respective
application areas. Also, if the datasource is provided in the LO Cockpit, changes can be made there too
depending on how the extraction of data is made for that datasource.
Datasource for LO extraction is delivered by SAP as a part of business content. It has the naming convention:
2LIS_<Application_Component>_<Event>_<Suffix>
Application Component: Application Components are sets of DataSources grouped logically. Here,
these are 2 digit numbers. These can be checked in LO cockpit. e.g.: application 11 refers to SD Sales.
Event: specifies the transaction that provides the data for the application specified, and is optional in the
naming convention.
E.g.: for application 11 (SD Sales), the event VA stands for the sales order and the suffix SCL
for the schedule line. So, 2LIS_11_VASCL will extract sales order schedule line data from the SAP
ECC system.
A DataSource can be activated in transaction RSA5, and the activated DataSources can be viewed in
transaction RSA6. Upon activation of a Business Content DataSource, all components such as the
extract structure and the extractor program also get activated in the system.
The generated extract structure has the naming convention MC<Application><Event>0<Suffix>, e.g.
MC11VA0SCL for 2LIS_11_VASCL.
Initialization/Full Upload: To differentiate the new data from the historical data, the process of
initialization is used which sets a point of reference for the historical data and also provides the option to
load the historical data to SAP NetWeaver BW. Also, to use the delta capability of the DataSource, the
process of initialization must be carried out. The new data generated after initialization is identified as delta
data.
Setup Tables: For loading data into the BW system for the first time, the setup tables have to be
filled. The restructuring/setup tables are cluster tables that hold the respective application
data. The BI system extracts the data as a one-time activity during the initialization process or
during full update mode, and the data can be deleted from the setup tables after successful
extraction into BW to avoid redundant storage. The setup tables are filled for the entire
application component, not for individual DataSources.
Thus, the DataSource 2LIS_11_VASCL having extract structure MC11VA0SCL has the set up table
MC11VA0SCLSETUP.
Once the initialization is completed successfully, we don’t need the data in the setup table. It can be
deleted using transaction LBWG.
Delta Loads: After the successful initialization, delta records get captured and are passed to an area
known as delta queue, which stores records that are ready to be extracted by the BW system.
There are three different methods for processing this data but before proceeding its necessary to
understand the various update modes.
Update Modes: While carrying out a transaction, for e.g. the creation of a sales order, the user enters
data and saves the transaction. The data entered by the user from a logistics application perspective is
directly used for creating the orders and also indirectly forms a part of the information for management
information reporting. The data entered by the user is used by the logistic application for achieving both
the above aspects, but the former, i.e. the creation of the order takes a higher priority than result
calculations triggered by the entry. The latter is often termed as statistical updates. The SAP system treats
both these events generated by the creation of order with different priorities by using different update
modes to achieve the same.
V1 Update: A V1 update is carried out for critical or primary changes; these affect objects that
have a controlling function in the SAP system, for example the creation of a sales order. These
updates are time-critical and synchronous.
V2 Update: A V2 update, in contrast to V1, is executed for less critical secondary changes; these
are pure statistical updates resulting from the transaction.
V3 Update: The V3 update consists of collective-run function modules. Compared to the V1 and V2
updates, the V3 update is an asynchronous batch update, carried out when a report (RSM13005)
starts the update in background mode. Unlike the V1 and V2 updates, the V3 update does not happen
automatically.
Delta Load Methods: SAP provides us with different mechanisms for pushing the data into the delta
queue and is called update mode. In transaction LBWE, we can see the various update modes available.
Direct Delta: This method stores the posted documents in the delta queue using V1 update. V1 update
accords the highest priority to this process over others and is generally used for most critical updates. It is
recommended only when a small number of documents are being posted.
Queued Delta: This method stores the posted document data in an extraction queue using the V1 update.
A collective run is required to read the data from the extraction queue and transfer it to the delta queue.
This method can be used when the number of documents posted is large, and serialization is also
required.
Unserialized V3: This method stores posted documents in an update table using the V3 collective
run before the data is written to the delta queue. Another V3 collective run then reads the data
from the update table without considering the sequence and transfers it to the delta queue. Since
this does not retain the sequence in which the records were generated, we should not load data
with this update type directly into a DSO. It is used when serialization of data isn't required.
If there is an issue in the data being extracted through delta load from LO DataSource and it is required to
reload the data into BW from R/3, then full repair has to be used. To use full repair option for LO
datasources, setup tables need to be filled from the source tables present in SAP R/3. It is required
because delta load data moves directly into delta queue only and not into the setup tables.
Identify the extract structure corresponding to the LO data source using transaction code
LBWE.
Identify the corresponding Set-up table (Set-up table is Extract structure name suffixed by
SETUP).
Identify the selection criteria for filling Set-up table and the selection options available in Full
repair Infopackage.
Delete the Set-up table using transaction code LBWG and fill it based on the decided selection
criteria.
Reload the required data from setup table by executing a Full Repair Infopackage.
To explain this process, let us consider an example to reload the data for 2LIS_11_VASCL datasource
which uses LO Extractor.
We can check all the active extract structures in transaction LBWE (LO Data Extraction: Customizing
Cockpit).
Now, the name of the setup table as explained before will be MC11VA0SCLSETUP for the corresponding
datasource.
To move ahead with the reload, first we need to identify the selection criteria to fill the setup table.
Transaction to fill the setup table is OLI*BW, where “*” depends upon the application area. Some
examples are given below:
OLI2BW: Stocks
OLI7BW: Order
OLI8BW: Deliveries
So the selection screen has Sales Organization, Company code and SD Document (Sales Document
Number) which means data based on the above selection can only be filled in the setup tables. In case we
have other fields for selection, then we need to find the corresponding values for these columns from the
application tables to proceed.
Before filling the setup tables, it’s a good practice to delete the setup table. We can use the transaction
LBWG to delete the contents of setup tables. It is not mandatory to delete the setup table but we follow
this practice to maintain consistency as setup table has an additive property.
In our case we delete the setup table for application component 11 i.e. SD Sales BW.
Before filling the setup table, we must ensure that our system is locked to avoid loss of the new records
posted.
Now we enter the desired selection criteria and the details in the other fields where date and time of
termination is the estimated time to fill the setup table with desired data and then execute. On clicking
continue, the selected records are transferred to the setup table.
Once we have filled the setup table, we can go to transaction RSA3 and check the number of records in
2LIS_11_VASCL. It should be equal to the number of records we have loaded.
Now, we need to execute a Full Repair Infopackage to load the data from setup table. Below steps define
the complete procedure to execute the Infopackage:
1) Create an InfoPackage for the DataSource.
2) From the Scheduler dropdown, select the Repair Full Request option.
3) Enter the decided selection criteria in the Data Selection tab.
4) Make sure to check the Full Update mode in the Update tab.
5) Execute the InfoPackage by clicking Start. We can execute the InfoPackage immediately or
schedule it for a future date as per our requirements.
Once the load completes successfully, check the data in the InfoProvider.
Thanks!!
https://archive.sap.com/discussions/thread/3196262
And now, you can delete the transport for example in transaction: SE10...
Hi Vam.
For the periods of the previous year, you restrict with an interval where you use
variable offsets: [1st period of current year - 12; current period - 12].
I don't know if there are standard variables for this, but I'd be surprised if the "current
period" variable is not delivered in Business Content... if not, you'll have to create your own exits.
good luck!
Jacob
* Enhancement of 2LIS_02_ITM: fill the company code (BUKRS)
* from EKPO for every extracted item.
DATA: l_s_mc02m_0itm LIKE mc02m_0itm.
DATA: ti_bukrs LIKE ekpo-bukrs,
      l_tabix  LIKE sy-tabix.

CASE i_datasource.
  WHEN '2LIS_02_ITM'.
    LOOP AT c_t_data INTO l_s_mc02m_0itm.
      l_tabix = sy-tabix.
      CLEAR ti_bukrs.
*     Look up the company code of the purchasing document item.
      SELECT SINGLE bukrs FROM ekpo INTO ti_bukrs
        WHERE ebeln EQ l_s_mc02m_0itm-ebeln.
      l_s_mc02m_0itm-bukrs = ti_bukrs.
      MODIFY c_t_data FROM l_s_mc02m_0itm INDEX l_tabix.
    ENDLOOP.
ENDCASE.
EXPORTING
I_PERIV = trans_structure-/bic/0fiscvarnt
I_FISCPER = trans_structure-/bic/0fiscper
I_CALMONTH_ICHANM = ''
IMPORTING
ES_CALMONTH = cal
EXCEPTIONS
date_invalid = 1
OTHERS = 2.
result = cal.
https://blogs.sap.com/2014/01/18/user-id-changes-in-sap-bw/
Information Broadcasting:
When a setting for information broadcasting is created, a background job with the user
name is created and the ABAP program RSRD_BROADCAST_STARTER is triggered with the same
user name. This program accesses the authorization user mentioned in the table
RSRD_SETT_NODE_A.
Because of this, we have to change the user in the background job as well as the
authorization user. The background user can be changed using TBTCP, and the
authorization user can be changed in the table RSRD_SETT_NODE_A.
Before changing the user IDs, we will have to implement the above-mentioned changes so
that BW users are not much affected after their IDs are changed. We will have to design
ABAP programs accordingly so that the required changes can be made in the standard
tables mentioned above. The logic that needs to be implemented is discussed below.