Amazon Redshift Connectors

Amazon Redshift V2 targets

You can use an Amazon Redshift V2 object as a target in a mapping or mapping task. You can also create an Amazon Redshift V2 target based on the input source.
When you configure the advanced target properties, you configure properties that are specific to Amazon Redshift V2. You can encrypt data, update statistical metadata of the database tables to improve query efficiency, load data into Amazon Redshift from flat files in an Amazon S3 bucket, and vacuum tables to recover disk space and sort rows.
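Several of these options correspond roughly to standard Amazon Redshift SQL commands: COPY loads staged flat files from Amazon S3, ANALYZE updates statistical metadata, and VACUUM recovers disk space and sorts rows. The following Python sketch shows what issuing the equivalent commands directly might look like, assuming the redshift_connector driver; the cluster endpoint, credentials, schema, table, bucket, and IAM role are placeholders, not values from this guide.

    # Rough sketch of the Redshift commands that the advanced target options
    # correspond to conceptually: COPY (load from S3), ANALYZE (refresh table
    # statistics), and VACUUM (reclaim space and re-sort rows).
    # All names below are placeholders.
    import redshift_connector

    conn = redshift_connector.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="awsuser",
        password="example-password",
    )
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    cur = conn.cursor()

    # Load staged flat files from an S3 bucket into the target table.
    cur.execute("""
        COPY sales.orders
        FROM 's3://example-bucket/staging/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftRole'
        FORMAT AS CSV;
    """)

    # Update statistical metadata so the query planner has current statistics.
    cur.execute("ANALYZE sales.orders;")

    # Recover disk space and sort rows in the target table.
    cur.execute("VACUUM FULL sales.orders;")

    conn.close()

In a mapping, you do not issue these commands yourself; you enable the corresponding advanced target properties and the connector performs the equivalent work during the task run.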
If the distribution key column in a target table contains null values and you configure a task with an upsert operation for the same target table, the task might create duplicate rows. To avoid creating duplicate rows, perform one of the following actions:
  • Replace the null value with a non-null value when you load data, as shown in the sketch after this list.
  • Do not configure the column as a distribution key if you expect null values in the distribution key column.
  • Remove the distribution key column from the target table temporarily when you load data. You can use the Pre-SQL and Post-SQL properties to remove and then add the distribution key column in the target table.
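For the first option, one way to replace null distribution key values outside the mapping is a pre-load update on the staged data. The following Python sketch is a minimal illustration, assuming the redshift_connector driver and a hypothetical staging table staging.stage_orders whose distribution key column is customer_id; the default value, table, column, and connection details are all placeholders.

    # Minimal sketch: backfill nulls in the distribution key column with a
    # non-null default before the upsert runs, so the upsert does not create
    # duplicate rows. Table, column, default value, and connection details
    # are placeholders.
    import redshift_connector

    conn = redshift_connector.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="awsuser",
        password="example-password",
    )
    cur = conn.cursor()

    # Replace nulls with a non-null default before the upsert runs.
    cur.execute("""
        UPDATE staging.stage_orders
        SET customer_id = 'UNKNOWN'
        WHERE customer_id IS NULL;
    """)
    conn.commit()
    conn.close()

Because the distribution key column no longer contains null values, the subsequent upsert does not create duplicate rows for those records.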
The following table lists the Amazon Redshift V2 target features that you can use in mappings and elastic mappings:
Feature                                 Mapping   Elastic Mapping
Staging directory                       Yes       No
Assume Role                             Yes       Yes
Update override                         Yes       No
Server-side encryption                  Yes       Yes
Client-side encryption                  Yes       No
Analyze target table                    Yes       Yes
Retain staging files on S3              Yes       Yes
Copy Command                            Yes       Yes
Vacuum Tables                           Yes       Yes
Recovery and restart processing         Yes       No
Preserve record order on write          Yes       No
Partitioning                            Yes       No
Working with Large Tables               Yes       Yes
Octal Values as DELIMITER and QUOTE     Yes       Yes
Success and Error Files                 Yes       No
Import objects from different schema    Yes       Yes