Table of Contents

  1. Preface
  2. Mappings
  3. Mapplets
  4. Mapping Parameters
  5. Where to Assign Parameters
  6. Mapping Outputs
  7. Generate a Mapping from an SQL Query
  8. Dynamic Mappings
  9. How to Develop and Run a Dynamic Mapping
  10. Dynamic Mapping Use Cases
  11. Mapping Administration
  12. Export to PowerCenter
  13. Import from PowerCenter
  14. Performance Tuning
  15. Pushdown Optimization
  16. Partitioned Mappings
  17. Developer Tool Naming Conventions

Developer Mapping Guide

Validation Environment

The validation environment indicates whether the Developer tool validates the mapping definition for the native or non-native execution environment. When you run a mapping in the native environment, the Data Integration Service processes the mapping.
If your license permits it, you can run a mapping in the non-native environment. When you run a mapping in the non-native environment, the Data Integration Service pushes the mapping execution to the compute cluster through a cluster connection, and the compute cluster processes the mapping.
When you choose the Hadoop execution environment, you can select the Blaze or Spark engine to process the mapping.
The following image shows the validation environment:
Choose the native environment together with the Hadoop or Databricks environment in the following situations:
  • You want to test the mapping in the native environment before you run the mapping in a non-native environment.
  • You want to define the execution environment value in a parameter when you run the mapping.
If you choose all environments, you must choose the execution environment for the mapping in the run-time properties.
If you run the mapping in the Hadoop environment, Informatica recommends choosing the Spark engine for new mapping development.
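When you parameterize the execution environment, you supply the parameter value at run time, typically through a parameter file. The following is a minimal sketch of such a parameter file; the project name, mapping name, and the parameter name Exec_Env are hypothetical, and the element names and namespace should be verified against the parameter file schema for your Informatica version:

```xml
<!-- Hypothetical parameter file sketch. MyProject, m_CustomerLoad, and
     Exec_Env are placeholder names; check the schema for your version. -->
<root xmlns="http://www.informatica.com/Parameterization/1.0" version="2.0">
  <project name="MyProject">
    <mapping name="m_CustomerLoad">
      <!-- Exec_Env is assumed to be a string parameter assigned to the
           execution environment in the mapping run-time properties. -->
      <parameter name="Exec_Env">Hadoop</parameter>
    </mapping>
  </project>
</root>
```

With this approach, the same mapping can run natively during testing and on the compute cluster in production by switching only the parameter file, without editing the mapping itself.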
