1. How do you convert different currency types in a DataStage job?
2. In Informatica, the sources, the transformations, and the target together are known as a mapping. What is this called in DataStage?
3. How will you use XML files and Excel files as sources in DataStage?
4. In which format is the information stored in the repository?
5. What are the two services available in the client-server architecture?
6. Can we use tables as lookups? What is the difference between a hashed file lookup and a table lookup?
7. How will you improve performance when processing a billion records?
8. Can we use the source table as a target table for the reject link?
9. What are rows per transaction and parameter array size?
10. Is there any way to execute the same job for both the warning and OK conditions in a single trigger?
11. What are the hashed file types?
12. What data is contained in a hashed file stage?
13. Can you view the data? Where can you update it?
14. How did you get the data into the hashed file stage?
15. What errors can you get while doing a project?
16. What is the most complex job you have built?
17. How many jobs did you create in the project?
18. What is the difference between the Sequential File stage and the hashed file stage?
19. If you take ODBC and OCI, which of the two is better?

20. Without ODBC, OCI, UniVerse, or UniData, how can you connect to Oracle from DataStage?
21. If one ETL tool (Informatica) is already in place, why are you using DataStage?

Ascential re-engineered their products some time ago around the "Torrent" parallel processing technology. They seem to have done a pretty good job of this and can boast very good performance and throughput figures as a result (although to get similar results you may have to make a significant investment in a capable ETL server or servers). It is my understanding that Informatica's implementation of transformation parallelism may be a little less mature. Possibly significantly, in their next release due Q2 this year, Informatica is introducing "push-down optimization" into their product set: this will effectively allow the database server to do the heavy-lifting transformation processing, so if you have a parallel database server you should also get very good transformation performance, possibly without needing a heavy-duty ETL server. The Sunopsis product already works in the same way, although it is functionally less rich than either Informatica or Ascential.
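To make the push-down idea concrete, here is a minimal sketch in plain Python (the table and column names are made up for illustration). The first function pulls every row across to the ETL engine and transforms it there; the second emits the same logic as a SQL statement so the (possibly parallel) database server does the heavy lifting and only finished rows ever leave the database.

    # Hypothetical sketch contrasting engine-side transformation
    # with push-down optimization. Names are illustrative only.

    def transform_in_engine(rows):
        """ETL-server approach: every row is pulled over the network
        and transformed in the ETL engine's memory."""
        for row in rows:
            yield {
                "customer_id": row["customer_id"],
                "full_name": f'{row["first_name"].strip()} {row["last_name"].strip()}',
                "revenue_usd": round(row["revenue"] * row["fx_rate"], 2),
            }

    def transform_pushed_down():
        """Push-down approach: the same logic is emitted as SQL, so the
        database server executes the transformation itself."""
        return """
            INSERT INTO dw.customer_revenue (customer_id, full_name, revenue_usd)
            SELECT customer_id,
                   TRIM(first_name) || ' ' || TRIM(last_name),
                   ROUND(revenue * fx_rate, 2)
            FROM   staging.customer_sales
        """

    if __name__ == "__main__":
        sample = [{"customer_id": 1, "first_name": " Ada ", "last_name": "Lovelace",
                   "revenue": 100.0, "fx_rate": 1.1}]
        print(list(transform_in_engine(sample)))
        print(transform_pushed_down())

With push-down, the ETL tool's job shrinks to generating and orchestrating SQL, which is why a heavy-duty ETL server may no longer be required.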
22. What are stage variables and link variables?
23. How many facts and dimensions are in the project?
24. What is the purpose of ODBC?
25. What is the alternative to the Remove Duplicates stage in DataStage?

26. What is the default data type for rows coming out of the Aggregator stage?
27. What type of protocol does Oracle use with the Oracle Enterprise stage?

28. What is the difference between a join and a lookup?


29. Describe the DataStage architecture.
30. Which type of model does DataStage follow: 2-tier or 3-tier?
31. What is the use of the Copy stage?

Debugging: In parallel jobs I use the Copy stage a lot to debug errors that do not indicate which stage caused them. I create a copy of the job and start removing output stages, replacing them with a Copy stage, progressively removing stages until I locate the one with the error.
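The same isolation idea can be sketched outside the DataStage Designer. Below is a small hypothetical Python example: a pipeline of stage functions is applied one stage at a time, so the first stage that raises an error is reported, much like progressively swapping downstream stages for a Copy stage until the faulty one is found.

    # Hypothetical stand-ins for job stages; stage_parse fails
    # on non-numeric input, and the driver pinpoints it.

    def stage_extract(data):
        return [row.strip() for row in data]

    def stage_parse(data):
        return [int(row) for row in data]   # raises on non-numeric rows

    def stage_load(data):
        return sum(data)

    def find_failing_stage(stages, data):
        """Apply stages cumulatively; report the first one that raises."""
        for i, stage in enumerate(stages):
            try:
                data = stage(data)
            except Exception as exc:
                return f"stage {i} ({stage.__name__}) failed: {exc}"
        return "all stages succeeded"

    if __name__ == "__main__":
        pipeline = [stage_extract, stage_parse, stage_load]
        print(find_failing_stage(pipeline, [" 1 ", " oops ", " 3 "]))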
32. Which type of model is the project following: E-R modeling, or a star / snowflake schema?

33. Explain partitioning.
34. How many source systems are in your project?
35. What is the node concept in DataStage?
