Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Snowflake
  3. Snowflake Connections
  4. PowerExchange for Snowflake Data Objects
  5. PowerExchange for Snowflake Mappings
  6. PowerExchange for Snowflake Dynamic Mappings
  7. Snowflake Run-Time Processing
  8. Pushdown Optimization
  9. Appendix A: Snowflake Data Type Reference

PowerExchange for Snowflake User Guide

Rules and Guidelines for Snowflake Mappings

Use the following rules and guidelines when you create a mapping:
  • You cannot use the OR operator in a filter condition.
  • Temporary tables are not created on Windows when the table name contains the special characters / \ : * ? " < > |.
  • When you create a target object, you cannot use special characters other than @ and # in the column names and the table name.
  • When you configure a native expression to filter Snowflake records and the table name or column name contains special characters, enclose the names that contain special characters in double quotes in the native expression, as shown in the first example after this list.
  • You must define a primary key in the target table. If you do not define a primary key in the target table, the mapping fails to delete records from or update records in the target table. The second example after this list shows a table definition with a primary key.
  • When you run mappings in the non-native environment, the Data Integration Service does not consider the JDBC parameters that you specify in the Snowflake connection, and the mapping fails.
  • When you select the Retain Target option with the target schema strategy property in a mapping that runs on the Spark engine, and the target table does not exist, the Integration Service creates the target table and writes the data to it. A similar mapping that runs in the native environment fails with an error stating that the table does not exist.
  • When you enable the truncate table option for a Snowflake target whose name contains special characters and run the mapping on the Databricks Spark engine, the mapping fails because Databricks uses an earlier version of the spark-snowflake driver and the error originates from the third-party driver. To overcome this issue, add the usestagingtable=off property in the Snowflake target additional runtime properties, and then run the mapping.
  • When you run a mapping to create a Snowflake target to write data from Kafka, where the target schema strategy is CREATE - if target does not exist, and the data contains the Binary data type, the mapping fails. Use an Expression transformation to exclude the binary column when you configure the mapping with the create target scenario.
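
For example, suppose the table name is EMP#DATA and the column name is DEPT@ID. These names are hypothetical and serve only to illustrate the quoting. The filter condition in the native expression might look like the following:

    "EMP#DATA"."DEPT@ID" = 10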
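
The following Snowflake DDL is a minimal sketch of a target table that defines a primary key. The table and column names are hypothetical:

    CREATE TABLE EMPLOYEE (
        EMP_ID   NUMBER PRIMARY KEY,   -- primary key that the mapping uses to identify rows to update or delete
        EMP_NAME VARCHAR(100)
    );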
