Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Microsoft Azure Blob Storage
  3. PowerExchange for Microsoft Azure Blob Storage Configuration
  4. Microsoft Azure Blob Storage Connections
  5. Microsoft Azure Blob Storage Data Objects
  6. Microsoft Azure Blob Storage Mappings
  7. Data Type Reference

PowerExchange for Microsoft Azure Blob Storage User Guide

Microsoft Azure Blob Storage Dynamic Mapping Overview

You can use Microsoft Azure Blob Storage data objects as dynamic sources and targets in a mapping.
Use a Microsoft Azure Blob Storage dynamic mapping to accommodate changes to source, target, and transformation logic at run time. You can use a Microsoft Azure Blob Storage dynamic mapping to manage frequent schema or metadata changes or to reuse the mapping logic for data sources with different schemas. Configure rules, parameters, and general transformation properties to create the dynamic mapping.
If the data source for a source or target changes, you can configure the mapping to dynamically get metadata changes at run time. If a source changes, you can configure the Read transformation to accommodate the changes. If a target changes, you can configure the Write transformation to accommodate the changes.
You do not need to manually synchronize the data object and update each transformation before you run the mapping again. The Data Integration Service dynamically determines the transformation ports, the transformation logic in the ports, and the port links within the mapping.
When you run a mapping that dynamically gets metadata changes at run time, ensure that the metadata does not include the FileName port. You cannot write the FileName port to a Microsoft Azure Blob Storage target.
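The Developer tool performs this refresh for you, but the behavior is easy to picture outside the tool. The following minimal PySpark sketch is an analogy, not the product's implementation: it infers the source columns at run time and drops a FileName column before writing, mirroring the restriction above. The storage paths are placeholders, and reading wasbs:// URLs assumes the Hadoop Azure connector is configured.

```python
# Illustrative analogy only: this is not the Data Integration Service's
# internal mechanism. It sketches run-time schema discovery plus the
# "no FileName port on the target" restriction.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dynamic-schema-sketch").getOrCreate()

# Discover the source columns at run time instead of relying on a
# design-time schema, analogous to refreshing data object columns from
# the data source at run time (placeholder path).
df = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("wasbs://container@account.blob.core.windows.net/source/")
)

# A FileName port cannot be written to a Microsoft Azure Blob Storage
# target, so remove the column if the refreshed schema includes one.
# DataFrame.drop is a no-op when the column is absent.
df = df.drop("FileName")

df.write.mode("overwrite").csv(
    "wasbs://container@account.blob.core.windows.net/target/"
)
```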
You can use one of the following options to enable a dynamic mapping:
  • On the Data Object tab of the data object read or write operation, select the At runtime, get data object columns from data source option when you create the mapping. When you enable dynamic mapping with this option, you can refresh the source and target schemas at run time.
  • On the Ports tab of the data object write operation, set the Columns defined by property to Mapping Flow when you configure the write operation properties.
You can run a dynamic mapping in the native environment, on the Spark engine, or on the Databricks Spark engine. When you create a dynamic mapping to read multiple files from a directory and you override the directory, verify that the override directory contains a source file with the same name as the imported object. Otherwise, the mapping fails.
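If you want to catch this directory-override pitfall before run time, a quick pre-flight check can confirm that the override location actually contains a blob with the imported object's name. The following sketch uses the azure-storage-blob Python SDK; the connection string, container, directory, and file name are hypothetical placeholders.

```python
# Pre-flight check: verify that the override directory contains a source
# file with the same name as the imported object, so the dynamic mapping
# does not fail at run time. All names below are placeholders.
from azure.storage.blob import BlobServiceClient

CONN_STR = "<your-storage-connection-string>"
CONTAINER = "mycontainer"             # hypothetical container
OVERRIDE_DIR = "override/dir"         # hypothetical override directory
IMPORTED_OBJECT = "customers.csv"     # name of the imported object

service = BlobServiceClient.from_connection_string(CONN_STR)
container = service.get_container_client(CONTAINER)

expected = f"{OVERRIDE_DIR}/{IMPORTED_OBJECT}"
names = {blob.name for blob in
         container.list_blobs(name_starts_with=OVERRIDE_DIR)}

if expected not in names:
    raise SystemExit(
        f"Override directory is missing {IMPORTED_OBJECT!r}; "
        "the dynamic mapping would fail.")
print(f"Found {expected}; override directory looks valid.")
```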
For information about dynamic mappings, see the Informatica Developer Mapping Guide.
