Migrating a connector from previous versions using the Data Integration REST API

Guidelines to migrate to Amazon Redshift V2

Some guidelines and restrictions apply to assets when you migrate the Amazon Redshift V1 connection to Amazon Redshift V2.

Source transformation

Consider the following migration guidelines when you configure a read operation:
  • Migration is not supported for mappings configured with the multi-object source type. When you migrate to Amazon Redshift V2, the mapping fails to read from multiple Amazon Redshift source objects and displays the following error:
    Operation failed: error [String index out of range: -1]
  • When you run an Amazon Redshift V1 mapping with an SQL query to read from a date/time column, the mapping runs successfully. The same mapping fails after the migration to Amazon Redshift V2 with the following error:
    [FATAL] Timestamp format must be yyyy-mm-dd hh:mm:ss[.fffffffff]
  • If you specify a value for the Number of Sorted Ports field in the advanced properties of an Amazon Redshift V1 source object, the value does not apply to Amazon Redshift V2 after the migration. To configure a sort operation after the migration, use the Sort property on the Query tab of the Amazon Redshift V2 Source transformation.
  • When you enable the Turn on client side encryption property to define the source attributes for reading from an Amazon Redshift V1 source, the configured values are not retained after you migrate to Amazon Redshift V2.
  • When a mapping contains a parameterized Lookup transformation, the configured encryption in the connection used in the lookup is not retained after the migration.
  • When you migrate a mapping configured with partitioning for the source, the number of rows that each partition processes in Amazon Redshift V2 differs from Amazon Redshift V1.
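The timestamp error noted above can usually be avoided by having the query return date/time values in the shape the V2 connector reports as required. As a rough illustration (the function name is hypothetical, not part of the product), the `yyyy-mm-dd hh:mm:ss[.fffffffff]` pattern corresponds to the following Python formatting:

```python
from datetime import datetime

def to_v2_timestamp(dt: datetime, fractional: bool = False) -> str:
    """Render a datetime in the 'yyyy-mm-dd hh:mm:ss[.fffffffff]' shape
    that the Amazon Redshift V2 connector error message describes."""
    base = dt.strftime("%Y-%m-%d %H:%M:%S")
    if fractional:
        # datetime carries microseconds; pad to 9 digits (nanosecond field)
        base += f".{dt.microsecond * 1000:09d}"
    return base

print(to_v2_timestamp(datetime(2023, 5, 1, 13, 45, 30)))
# 2023-05-01 13:45:30
```

In the SQL query itself, a cast such as Redshift's `TO_CHAR(col, 'YYYY-MM-DD HH24:MI:SS')` produces the same shape for a date/time column.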

Target transformation

Consider the following migration guidelines when you configure a write operation:
  • When you configure any operation other than insert and then migrate to Amazon Redshift V2, you must select the applicable Treat Source Row As option in the Amazon Redshift V2 target properties before you run the mapping.
  • If a mapping is configured with an override to the Amazon Redshift table in which the overridden columns have a greater precision, the entire higher-precision column is skipped and an exception is logged. This difference in precision between the design-time and runtime columns does not affect the same mapping when it uses the Amazon Redshift V1 connection.
  • The Number of bytes needed to support multibytes for varchar attribute is available in the Amazon Redshift V1 connection but not in Amazon Redshift V2. When you migrate to Amazon Redshift V2, the data appears truncated in the target columns. As a workaround, you can edit the metadata of the target before you run the mapping.

Parameterization

Consider the following migration guidelines when you parameterize mappings:
  • If you create a completely parameterized mapping, do not specify the connection type in the connection parameter. If you specify the connection type and then migrate the connection, the mapping fails at run time.
