Migrating a connector from previous versions using the Data Integration REST API

Guidelines to migrate to Amazon S3 V2
Some guidelines and restrictions apply to assets when you migrate the Amazon S3 V1 connection to Amazon S3 V2.

Source transformation

Consider the following migration guidelines when you configure a read operation:
  • When you create an Amazon S3 V1 mapping that reads a file with a header and set the Header Line Number and Read Data from Line advanced source properties to 0, the mapping fails after you migrate to Amazon S3 V2. This issue occurs because Amazon S3 V1 and V2 connections configure this functionality differently: in Amazon S3 V1, you set the Read Data from Line advanced source property to 0 to read data from a file with a header, whereas in Amazon S3 V2, you set Header Line Number and First Data Row to 1 in the source formatting options.
  • If you enable the Read Data from Line advanced source property in an Amazon S3 V1 Source transformation, the header data is lost after the migration and is replaced with FIELD0, FIELD1, FIELD2, FIELD3.
  • The Enable Downloading S3 Files in Multiple Parts advanced source attribute that is available in Amazon S3 V1 is unavailable in Amazon S3 V2. If you enable this attribute and specify a part size in Amazon S3 V1, and you then migrate to Amazon S3 V2, downloading in multiple parts is instead enabled through the Multi-part Download Threshold field, and the configured part size is retained in the Download Part Size field in the Amazon S3 V2 mapping. For a conceptual parallel, see the boto3 sketch after this list.

Target transformation

Consider the following migration guidelines when you configure a write operation:
  • The following restrictions apply when you run a mapping configured to create a new target at runtime:
    • Special characters from the source appear without double quotes in the target. For a conceptual parallel, see the quoting sketch after this list.
    • The target output file is not uploaded to the specified Amazon S3 bucket path. Instead, the file is uploaded to an invalid path.
    • Junk characters appear in the object name after the migration. The junk characters do not affect the object, and the mapping runs successfully.
    • Ensure that the folder path to the Amazon S3 bucket specified in the Amazon S3 V1 and V2 connections is the same. After the migration, if the folder path is different, or if the object in the S3 folder path is within a nested folder, the data preview fails but the mapping runs successfully. To fix the data preview, reselect the source object in the mapping.
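
The quoting change in the first restriction can be pictured with Python's csv module. This sketch is an analogy under assumed quoting modes, not either connector's actual write logic:

```python
# Analogy only: the same row written with and without double quotes around a
# field that contains special characters. The quoting modes are illustrative
# assumptions, not either connector's actual settings.
import csv
import io

row = ["id-1", 'value, with commas and "quotes"']

quoted = io.StringIO()
csv.writer(quoted, quoting=csv.QUOTE_ALL).writerow(row)
print(quoted.getvalue(), end="")    # "id-1","value, with commas and ""quotes"""

unquoted = io.StringIO()
csv.writer(unquoted, quoting=csv.QUOTE_NONE, escapechar="\\").writerow(row)
print(unquoted.getvalue(), end="")  # id-1,value\, with commas and \"quotes\"
```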

Partitioning

When you configure passthrough partitioning for an Amazon S3 V1 mapping, the partitioning type is not retained and defaults to None after the migration. You need to reconfigure partitioning manually after the migration.

Parameterization

Consider the following migration guidelines for parameterized mappings:
  • If you create a completely parameterized mapping, do not specify the connection type in the connection parameterization. If you specify the connection type in the connection parameterization and then migrate the mapping, the mapping fails at runtime. To confirm which connections your migrated assets reference, see the REST API sketch after this list.
  • When you parameterize both the connection and the objects in an Amazon S3 V1 mapping, the data preview in the mapping task might fail after the migration with the following error:
    Internal error while processing request. Please contact Informatica Global Customer Support.
    To avoid this error, reselect the object in the mapping to preview the data.
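
To check which connections your assets reference before and after the migration, you can list the org's connections over the REST API. This is a minimal sketch, assuming the Informatica Intelligent Cloud Services v2 login endpoint on the US pod; the pod URL and the response field names (for example, type) are assumptions to verify against your org's API reference.

```python
# A minimal sketch, assuming the Informatica Intelligent Cloud Services v2
# REST API. The pod URL and the response field names (for example, "type")
# are assumptions; verify them against your org's API reference.
import requests

LOGIN_URL = "https://dm-us.informaticacloud.com/ma/api/v2/user/login"  # assumed US pod

def list_connections(username: str, password: str) -> list:
    # Log in to obtain a session ID and the org-specific server URL.
    resp = requests.post(
        LOGIN_URL,
        json={"@type": "login", "username": username, "password": password},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    body = resp.json()

    # List the org's connections so you can confirm which assets still
    # reference an Amazon S3 V1 connection after the migration.
    conns = requests.get(
        body["serverUrl"] + "/api/v2/connection",
        headers={"icSessionId": body["icSessionId"], "Accept": "application/json"},
    )
    conns.raise_for_status()
    return conns.json()

# Usage (placeholder credentials):
# for c in list_connections("user@example.com", "secret"):
#     print(c.get("name"), c.get("type"))
```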
