Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Processing Hierarchical Data on the Spark Engine
  7. Stateful Computing on the Spark Engine
  8. Monitoring Mappings in the Hadoop Environment
  9. Mappings in the Native Environment
  10. Profiles
  11. Native Environment Optimization
  12. Data Type Reference
  13. Complex File Data Object Properties
  14. Function Reference
  15. Parameter Reference

Relational Sources

Relational sources are valid in mappings that run in a Hadoop environment if you use the Hive engine or the Blaze engine. The Spark engine cannot run mappings with relational sources.
The Data Integration Service does not run pre-mapping SQL commands or post-mapping SQL commands against relational sources. You cannot validate and run a mapping with PreSQL or PostSQL properties for a relational source in a Hadoop environment.
The Data Integration Service can use multiple partitions to read from the following relational sources:
  • IBM DB2
  • Oracle
You do not have to set maximum parallelism for the Data Integration Service to use multiple partitions in the Hadoop environment.
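The partitioning that the Data Integration Service performs is internal to the service and needs no code on your part. For illustration only, the following sketch shows the general idea behind range-partitioned parallel reads from a relational source over JDBC. It is not Informatica's implementation or API; the connection URL, credentials, table name (source_table), key column (id), and key range are hypothetical assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedReadSketch {

    // Hypothetical connection details; substitute your own Oracle or DB2 source.
    private static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
    private static final String USER = "user";
    private static final String PASSWORD = "password";

    public static void main(String[] args) throws Exception {
        int partitions = 4;                          // number of parallel readers
        long minId = 1, maxId = 1_000_000;           // assumed key range of source_table
        long chunk = (maxId - minId + 1) / partitions;

        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        List<Future<Long>> results = new ArrayList<>();

        // Split the key range into disjoint slices, one per partition.
        for (int p = 0; p < partitions; p++) {
            long lo = minId + p * chunk;
            long hi = (p == partitions - 1) ? maxId : lo + chunk - 1;
            results.add(pool.submit(readPartition(lo, hi)));
        }

        long total = 0;
        for (Future<Long> f : results) {
            total += f.get();                        // wait for each partition to finish
        }
        pool.shutdown();
        System.out.println("Rows read across all partitions: " + total);
    }

    // Each partition reads its own key range of the same table on a separate connection.
    private static Callable<Long> readPartition(long lo, long hi) {
        return () -> {
            long rows = 0;
            try (Connection conn = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT id, payload FROM source_table WHERE id BETWEEN ? AND ?")) {
                stmt.setLong(1, lo);
                stmt.setLong(2, hi);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        rows++;                      // process each row here
                    }
                }
            }
            return rows;
        };
    }
}
```

In the sketch, parallelism comes from reading disjoint key ranges concurrently; the Data Integration Service applies a comparable partitioned-read strategy automatically for IBM DB2 and Oracle sources in the Hadoop environment.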

