Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Snowflake
  3. Snowflake Connections
  4. PowerExchange for Snowflake Data Objects
  5. PowerExchange for Snowflake Mappings
  6. PowerExchange for Snowflake Dynamic Mappings
  7. Snowflake Run-Time Processing
  8. Pushdown Optimization
  9. Appendix A: Snowflake Data Type Reference

PowerExchange for Snowflake User Guide

Mapping Validation and Run-time Environments

You can validate and run mappings in the native environment or in a non-native environment, such as Hadoop or Databricks.
The Data Integration Service validates whether the mapping can run in the selected environment. You must validate the mapping for an environment before you run the mapping in that environment.

Native environment

When you run mappings in the native environment, the Data Integration Service processes and runs the mapping from the Developer tool.

Spark Engine

When you select the Hadoop environment, the Data Integration Service pushes the mapping to a compute cluster and processes the mapping on a Spark engine. The Data Integration Service generates an execution plan to run mappings on the Spark engine.

Databricks

When you run mappings in the Databricks environment, the Data Integration Service pushes the mapping logic to the Databricks Spark engine, the Apache Spark engine packaged for Databricks.
You can view the execution plan in the Developer tool before you run the mapping and in the Administrator tool after you run it.
For more information about the non-native environment, see the Data Engineering Administrator Guide.
