1) What is a server?
The PowerCenter Server moves data from sources to targets based on workflow and mapping metadata stored in a repository.
3) What is a session?
Use the Workflow Monitor to monitor workflows and to stop the PowerCenter Server.
Load Manager process: locks the workflow, reads the workflow tasks, and starts the DTM process, which runs the sessions.
Mapping thread.
Transformation thread.
Reader thread.
Writer thread.
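The reader/transformation/writer threads above can be pictured as a pipeline of worker threads connected by queues. The following is only a minimal Python analogy, not how PowerCenter is implemented; the row fields (`qty`, `price`) and the computed `total` column are hypothetical.

```python
import threading
import queue

def reader(rows, out_q):
    """Reader thread: pulls rows from the source and feeds the pipeline."""
    for row in rows:
        out_q.put(row)
    out_q.put(None)  # sentinel: no more rows

def transformer(in_q, out_q):
    """Transformation thread: applies the mapping logic to each row."""
    while (row := in_q.get()) is not None:
        out_q.put({**row, "total": row["qty"] * row["price"]})
    out_q.put(None)

def writer(in_q, target):
    """Writer thread: writes transformed rows to the target."""
    while (row := in_q.get()) is not None:
        target.append(row)

source = [{"qty": 2, "price": 5.0}, {"qty": 3, "price": 4.0}]
target = []
q1, q2 = queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=reader, args=(source, q1)),
    threading.Thread(target=transformer, args=(q1, q2)),
    threading.Thread(target=writer, args=(q2, target)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# rows arrive at the target in source order, with totals computed
```

Because each stage runs in its own thread, reading, transforming, and writing can overlap, which is the point of the threaded DTM design.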
1) Task Developer.
2) Workflow Designer.
3) Worklet Designer.
You can schedule a workflow to run continuously, to repeat at a given time or interval, or you can start a workflow manually. By default, the workflow runs on demand.
If the PowerCenter Server is executing a session task when you issue the stop command, it stops reading data but continues processing, writing, and committing data to the targets. If the PowerCenter Server cannot finish processing and committing data, you can issue the abort command.
You can also abort a session by using the ABORT() function in the mapping logic.
Target-based commit: The PowerCenter Server commits data based on the number of target rows and the key constraints on the target table. The commit point also depends on the buffer block size and the commit interval.
Source-based commit: The PowerCenter Server commits data based on the number of source rows; the commit point is the commit interval configured in the session properties.
User-defined commit: The PowerCenter Server commits data based on transaction boundaries defined in the mapping (for example, with a Transaction Control transformation).
When bulk loading, the PowerCenter Server bypasses the database log, which speeds performance. Without writing to the database log, however, the target database cannot perform rollback, so you may not be able to perform recovery.
What is constraint-based loading?
When you select this option, the PowerCenter Server orders the target load on a row-by-row basis.
If a session is configured for constraint-based loading and a target table receives rows from different sources, the PowerCenter Server reverts to normal loading for those tables, but loads all other targets in the session using constraint-based loading when possible: the primary key table is loaded first, then the foreign key table.
Use constraint-based loading only when the session option Treat Source Rows As is set to Insert.
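The "primary key table first, then foreign key table" ordering is a topological sort of the targets by their key dependencies. A minimal Python sketch using the standard library's `graphlib`; the table names and foreign key relationships are hypothetical.

```python
from graphlib import TopologicalSorter

# Hypothetical targets: ORDERS has a foreign key to CUSTOMERS, and
# ORDER_ITEMS has a foreign key to ORDERS. Constraint-based loading
# must load each primary key table before its foreign key tables.
fk_depends_on = {
    "ORDERS": {"CUSTOMERS"},
    "ORDER_ITEMS": {"ORDERS"},
    "CUSTOMERS": set(),
}

# static_order() yields every table after all of its predecessors.
load_order = list(TopologicalSorter(fk_depends_on).static_order())
# CUSTOMERS comes before ORDERS, which comes before ORDER_ITEMS
```

Loading in this order means each foreign key value already exists in its parent table when the child row arrives, so the constraints are never violated.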
When using incremental aggregation, you apply captured changes in the source to aggregate calculations in a session. If the source changes only incrementally and you can capture those changes, you can configure the session to process only those changes. This allows the PowerCenter Server to update your target incrementally rather than forcing it to process the entire source and recalculate the same data each time you run the session.
Use incremental aggregation when you can capture new source data each time you run the session. Use a Stored Procedure or Filter transformation to process only the new data.
The first time you run an incremental aggregation session, the PowerCenter Server processes the entire source. At the end of the session, the PowerCenter Server stores the aggregate data from the session run in two files, the index file and the data file. The PowerCenter Server creates these files in a local directory.
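The idea of incremental aggregation can be sketched in a few lines of Python: keep the saved aggregate totals and apply only the newly captured rows, instead of recalculating over the entire source. This is only an analogy for the index/data cache files; the region keys and amounts are hypothetical.

```python
def incremental_aggregate(cache, new_rows):
    """Fold only the newly captured (key, amount) rows into the saved
    aggregate totals, leaving untouched keys as they were."""
    for key, amount in new_rows:
        cache[key] = cache.get(key, 0) + amount
    return cache

# First run processes the entire source and saves the aggregates.
cache = incremental_aggregate({}, [("east", 100), ("west", 50), ("east", 25)])
# The next run applies only the new/changed source rows.
cache = incremental_aggregate(cache, [("west", 10), ("north", 5)])
# east keeps its old total; west is updated; north is added
```

The cost of each run is proportional to the captured changes, not to the full source, which is exactly the benefit the text above describes.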
Transformations
Q. What is a transformation?
Transformations are of two types: 1) active and 2) passive.
An active transformation can change the number of rows that pass through it. A passive transformation does not change the number of rows: the number of output rows equals the number of input rows.
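The active/passive distinction can be illustrated with two small Python generators, assuming made-up row fields (`gross`, `tax`): an expression-style transformation emits exactly one output row per input row, while a filter-style transformation may drop rows and change the count.

```python
def passive_expression(rows):
    """Passive: one output row per input row (row count unchanged)."""
    for r in rows:
        yield {**r, "net": r["gross"] - r["tax"]}

def active_filter(rows):
    """Active: may change the number of rows that pass through."""
    for r in rows:
        if r["gross"] > 100:
            yield r

rows = [{"gross": 150, "tax": 15}, {"gross": 80, "tax": 8}]
passive_out = list(passive_expression(rows))  # same count as input
active_out = list(active_filter(rows))        # fewer rows than input
```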
1) Expression Transformation
2) Sequence Generator Transformation
3) Stored Procedure Transformation
4) XML Source Qualifier Transformation
5) LookUp Transformation
Rows dropped by a Filter transformation are not stored anywhere, such as in the session log file.
A Router transformation filters data based on multiple conditions and gives you the option to route rows that do not match any condition to a default group.
A Router transformation has:
1. One input group.
2. Multiple user-defined output groups.
3. One default group.
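The routing behavior can be sketched in Python (PowerCenter itself is configured graphically, not in code); the group names and country conditions here are hypothetical.

```python
def route(rows, groups):
    """Send each row to every group whose condition it matches; rows
    matching no condition go to the DEFAULT group."""
    out = {name: [] for name in groups}
    out["DEFAULT"] = []
    for row in rows:
        matched = False
        for name, cond in groups.items():
            if cond(row):
                out[name].append(row)
                matched = True
        if not matched:
            out["DEFAULT"].append(row)
    return out

rows = [{"country": "US"}, {"country": "IN"}, {"country": "FR"}]
groups = {
    "US_GROUP": lambda r: r["country"] == "US",
    "IN_GROUP": lambda r: r["country"] == "IN",
}
routed = route(rows, groups)
# the FR row matches neither condition, so it lands in DEFAULT
```

Note that, unlike a chain of Filter transformations, a single Router evaluates all conditions in one pass and never silently drops a row.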
Both input pipelines originate from the same Source Qualifier transformation.
3) What settings do you use to configure a Joiner transformation?
Master and detail source.
Type of join
Condition of the join
Get a related value. For example, if your source table includes the employee ID, but you want to include the employee name in the target.
Perform a calculation. Many tables include values used in a calculation, such as gross sales per invoice or sales tax, but not the calculated value (such as net sales).
Update slowly changing dimension tables. You can use a Lookup transformation to determine whether records already exist in the target.
Connected Lookup vs. Unconnected Lookup:

Input values:
- Connected: receives input values directly from the pipeline.
- Unconnected: receives input values from the result of a :LKP expression in another transformation.

Cache:
- Connected: the cache includes all lookup columns used in the mapping (that is, lookup table columns included in the lookup condition and lookup table columns linked as output ports to other transformations).
- Unconnected: the cache includes all lookup/output ports in the lookup condition and the lookup/return port.

Return values:
- Connected: can return multiple columns from the same row, or insert into the dynamic lookup cache.
- Unconnected: designate one return port (R); returns one column from each row.

No match for the lookup condition:
- Connected: the Informatica Server returns the default value for all output ports. If you configure dynamic caching, the Informatica Server inserts rows into the cache.
- Unconnected: the Informatica Server returns NULL.
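The difference in return behavior can be sketched with a plain Python dictionary standing in for the lookup cache; the employee IDs, field names, and "UNKNOWN" defaults are all hypothetical.

```python
# Hypothetical lookup cache keyed on employee ID.
lookup_cache = {
    101: {"name": "Ann", "dept": "Sales"},
    102: {"name": "Bob", "dept": "IT"},
}

def connected_lookup(emp_id):
    """Connected-style: on no match, return default values for ALL
    output ports."""
    default = {"name": "UNKNOWN", "dept": "UNKNOWN"}
    return lookup_cache.get(emp_id, default)

def unconnected_lookup(emp_id, return_port="name"):
    """Unconnected-style: one designated return port; NULL (None here)
    when there is no match."""
    row = lookup_cache.get(emp_id)
    return row[return_port] if row else None
```

So a connected lookup always yields a full set of output values (defaults on a miss), while an unconnected lookup yields a single column or NULL, matching the comparison above.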