Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon Redshift Configuration
  4. Amazon Redshift Connections
  5. PowerExchange for Amazon Redshift Data Objects
  6. Amazon Redshift Mappings
  7. Pushdown Optimization
  8. Amazon Redshift Lookup
  9. Appendix A: Amazon Redshift Datatype Reference
  10. Appendix B: Troubleshooting

PowerExchange for Amazon Redshift User Guide

Amazon Redshift Dynamic Mapping Overview

You can use Amazon Redshift data objects as dynamic sources and targets in a mapping.
Use an Amazon Redshift dynamic mapping to accommodate changes to source, target, and transformation logic at run time. You can use an Amazon Redshift dynamic mapping to manage frequent schema or metadata changes, or to reuse the mapping logic for data sources with different schemas. Configure rules, parameters, and general transformation properties to create the dynamic mapping.
If the data source for a source or target changes, you can configure the mapping to get metadata changes dynamically at run time. If a source changes, configure the Read transformation to accommodate the changes. If a target changes, configure the Write transformation to accommodate the changes.
You do not need to manually synchronize the data object and update each transformation before you run the mapping again. The Data Integration Service dynamically determines the transformation ports, the transformation logic in the ports, and the port links within the mapping.
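As an illustration only (not Informatica's implementation or API), the run-time behavior described above can be pictured with the following Python sketch. The function and field names are hypothetical: `fetch_source_columns` stands in for the metadata query issued against the data source at run time, and ports are re-derived and linked by name on each run:

```python
# Illustrative sketch: how a dynamic mapping re-derives transformation
# ports from the data source at run time. All names here are hypothetical,
# not Informatica APIs.

def fetch_source_columns(table_metadata):
    """Stand-in for the run-time metadata query against the source."""
    return list(table_metadata)

def build_ports(columns):
    """Derive transformation ports from the columns discovered at run time."""
    return {name: {"name": name, "type": dtype} for name, dtype in columns}

def link_ports(source_ports, target_ports):
    """Resolve port links by name, as a dynamic mapping does."""
    return [name for name in source_ports if name in target_ports]

# First run: the source table has two columns.
run1 = build_ports(fetch_source_columns(
    [("id", "integer"), ("name", "varchar")]))

# The source gains a column before the next run. No manual
# synchronization of the data object is needed: the ports and
# port links are re-derived when the mapping runs again.
run2 = build_ports(fetch_source_columns(
    [("id", "integer"), ("name", "varchar"), ("created_at", "timestamp")]))

target = build_ports(
    [("id", "integer"), ("name", "varchar"), ("created_at", "timestamp")])
print(link_ports(run2, target))  # the new column links automatically
```

The sketch compresses what the Data Integration Service does across the Read transformation, downstream transformations, and the Write transformation into a single name-based link step; the real service also propagates transformation logic through the refreshed ports.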
Two options are available to enable a mapping to run dynamically. Select one of the following options:
  • In the Data Object tab of the data object read or write operation, select the At runtime, get data object columns from data source option when you create the mapping. When you enable dynamic mapping with this option, the source and target schemas are refreshed at run time.
  • In the Ports tab of the data object write operation, set the Columns defined by property to Mapping Flow when you configure the write operation properties. When you select the Mapping Flow option, also specify a value for the Target Schema Strategy data object write operation property. When you enable dynamic mapping with this option, you can add Source transformation or other transformation ports to the target dynamically, retain an existing target table, or create the target table if it does not exist in the target.
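The choice the Target Schema Strategy property controls can be pictured with the following hypothetical Python sketch. The strategy names and the helper function are illustrative assumptions, not Informatica property values or APIs; the point is only the retain-versus-create decision described above:

```python
# Hypothetical sketch of the retain-or-create decision made before the
# mapping writes the target. Strategy names and DDL are illustrative only.

def apply_target_schema_strategy(strategy, existing_tables, table, columns):
    """Return the DDL to run before writing the target, or None."""
    if strategy == "RETAIN" and table in existing_tables:
        return None  # keep the existing target table as-is
    # Create strategy, or the table does not exist yet: create it at run time
    # from the columns that the mapping flow defines.
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns)
    return f"CREATE TABLE IF NOT EXISTS {table} ({cols})"

# An existing target table is retained:
print(apply_target_schema_strategy("RETAIN", {"sales"}, "sales", []))
# A missing target table is created from the mapping-flow columns:
print(apply_target_schema_strategy(
    "RETAIN", set(), "sales", [("id", "integer"), ("name", "varchar")]))
```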
Dynamic mapping is applicable when you run the mapping in the native environment, on the Spark engine, or on the Databricks Spark engine.
For information about dynamic mappings, see the Informatica Developer Mapping Guide.
