Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Amazon S3
  3. PowerExchange for Amazon S3 Configuration Overview
  4. Amazon S3 Connections
  5. PowerExchange for Amazon S3 Data Objects
  6. PowerExchange for Amazon S3 Mappings
  7. Appendix A: Amazon S3 Data Type Reference
  8. Appendix B: Troubleshooting

PowerExchange for Amazon S3 User Guide

Mapping Flow

You can dynamically add all the Source transformation or transformation ports to the target when you enable a mapping to run dynamically with the Mapping Flow option. You can then use the dynamic ports in the Write transformation.

When you select the Mapping Flow option, the Data Integration Service allows the Target transformation to override the ports of the Write transformation with all the updated incoming ports from the pipeline mapping, and loads the target file with those ports at run time.
To enable a dynamic mapping with the Mapping Flow option, set the Columns defined by property to Mapping Flow on the Ports tab of the Write transformation.
When you use the Mapping Flow option to read data from a flat file that contains a port of the Integer or Double data type, the mapping runs successfully. However, the Data Integration Service does not write the data of the Integer or Double port or of any subsequent ports, regardless of their data type.
When you run a dynamic mapping on the Spark or Databricks Spark engine with the Mapping Flow option to fetch metadata changes from a source that contains a FileName port, the mapping fails. You must add a transformation and configure the Input Rules on the Ports tab of that transformation to exclude the FileName port from the Write transformation, and then map the rest of the ports, as the sketch below illustrates.
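Conceptually, the input rule acts as a filter that drops the FileName port from the incoming port set before the remaining ports propagate to the Write transformation. The following Python sketch illustrates the idea only; exclude_ports is a hypothetical helper, and the actual rule is configured in the Developer tool, not in code.

    def exclude_ports(incoming_port_names, excluded=("FileName",)):
        """Models an input rule that excludes named ports before the
        remaining ports propagate to the Write transformation."""
        # Only ports whose names are not in the excluded list are mapped.
        return [name for name in incoming_port_names if name not in excluded]

    # The FileName port is dropped; the rest of the ports are mapped as usual.
    print(exclude_ports(["FileName", "id", "name", "created"]))
    # ['id', 'name', 'created']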
