Table of Contents


  1. Preface
  2. Part 1: Introduction to Amazon Redshift connectors
  3. Part 2: Data Integration with Amazon Redshift V2 Connector
  4. Part 3: Data Integration with Amazon Redshift Connector

Amazon Redshift Connectors


Amazon Redshift V2 targets in mappings


To write data to Amazon Redshift, configure an Amazon Redshift V2 object as the target in a mapping. Specify the name and description of the Amazon Redshift V2 target, and configure the target and advanced properties for the target object.
When you enable source partitioning, the Secure Agent uses pass-through partitioning to write data to Amazon Redshift and optimize mapping performance at run time.
The following table describes the target properties that you can configure in a Target transformation:
Property
Description
Connection
Name of the target connection. Select a target connection, or click New Parameter to define a new parameter for the target connection.
Target Type
Type of the target object.
Select Single Object or Parameter.
Object
Name of the target object.
You can select a single target object.
Parameter
Select an existing parameter for the target object, or click New Parameter to define a new parameter for the target object. The Parameter property appears only if you select Parameter as the target type. If you want to overwrite the parameter at runtime, select the Overwrite Parameter option.
Operation
Type of the target operation.
Select one of the following operations:
  • Insert
  • Update
  • Upsert
  • Delete
  • Data Driven 1
Select Data Driven if you want to create a mapping to capture changed data from a CDC source.
Data Driven Condition
Enables you to define expressions that flag rows for an insert, update, delete, or reject operation.
You must specify the data driven condition for non-CDC sources. For CDC sources, you must leave the field empty as the rows in the CDC source tables are already marked with the operation types.
Appears only when you select Data Driven as the operation type.
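To make the data driven behavior concrete, the sketch below mimics how a condition flags each row for insert, update, delete, or reject. The connector evaluates its own expression language against row data; the Python predicate and the `status` field here are purely hypothetical stand-ins for that expression.

```python
# Illustrative only: a data driven condition assigns one of four
# operation flags to each row. The connector's real expression syntax
# differs; this predicate stands in for it.

DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    """Hypothetical condition keyed on an assumed 'status' field."""
    status = row.get("status")
    if status == "new":
        return DD_INSERT
    if status == "changed":
        return DD_UPDATE
    if status == "removed":
        return DD_DELETE
    return DD_REJECT

rows = [{"id": 1, "status": "new"}, {"id": 2, "status": "changed"},
        {"id": 3, "status": "removed"}, {"id": 4, "status": "unknown"}]
flags = [flag_row(r) for r in rows]
print(flags)  # [0, 1, 2, 3]
```

For a CDC source this condition stays empty, because the source rows already carry their operation type.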
Update Columns
Select columns you want to use as a logical primary key for performing update, upsert, and delete operations on the target.
This field is not required if the target table already has a primary key.
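The update columns act as a logical key: conceptually, they form the WHERE clause of the generated update while the remaining mapped columns are set. The sketch below illustrates that split; the SQL shape and identifier quoting are assumptions for illustration, not the connector's generated query.

```python
# Sketch: logical key columns (Update Columns) become the WHERE clause;
# all other columns become the SET list. Illustrative only.

def build_update(table, columns, update_columns):
    keys = [c for c in columns if c in update_columns]
    non_keys = [c for c in columns if c not in update_columns]
    set_clause = ", ".join(f'"{c}" = ?' for c in non_keys)
    where_clause = " AND ".join(f'"{c}" = ?' for c in keys)
    return f'UPDATE "{table}" SET {set_clause} WHERE {where_clause}'

sql = build_update("orders", ["order_id", "status", "amount"], {"order_id"})
print(sql)
# UPDATE "orders" SET "status" = ?, "amount" = ? WHERE "order_id" = ?
```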
Create Target
Creates a new target.
When you create a new target, enter a value for the following fields:
  • Name: Enter a name for the target object.
  • Path: Provide a schema name and create a target table within the schema. By default, the field is empty.
The Secure Agent converts the target table names that you specify in the Create Target field into lowercase.
When you create a target, you can view and edit the metadata of the target object in the Target Fields tab. You can edit the data type and precision, and define the primary key of the columns in the target object. To edit the metadata, click Options > Edit Metadata in the Target Fields tab.
1 Applies only to mappings. The remaining properties apply to both mappings and elastic mappings.
When you read or write data using an elastic mapping, the source table name must not contain Unicode or special characters.
The following table describes the Amazon Redshift V2 advanced target properties:
Property
Description
S3 Bucket Name
Amazon S3 bucket name for writing the files to the Amazon Redshift target.
You can also specify the bucket name with the folder path. If you provide an Amazon S3 bucket name that is in a different region than the Amazon Redshift cluster, you must configure the REGION attribute in the Copy command options.
Enable Compression
Compresses the staging files before writing the files to Amazon Redshift.
The task performance improves when the Secure Agent compresses the staged files. Default is selected.
Staging Directory Location 1
Location of the local staging directory.
When you run a task in Secure Agent runtime environment, specify a directory path that is available on the corresponding Secure Agent machine in the runtime environment.
Specify the directory path in the following manner: <staging directory>. For example, C:\Temp. Ensure that you have write permissions on the directory.
Batch Size 1
Minimum number of rows in a batch.
Enter a number greater than 0. Default is 2000000.
Max Errors per Upload Batch for INSERT
Number of error rows that causes an upload insert batch to fail.
Enter a positive integer. Default is 1.
If the number of errors is equal to or greater than the property value, the Secure Agent writes the entire batch to the error file.
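The interaction between Batch Size and Max Errors per Upload Batch for INSERT can be sketched as follows: rows are staged in batches of at most the batch size, and once the error count in a batch reaches the threshold, the whole batch is diverted to the error file. Function names and the error predicate are illustrative, not connector internals.

```python
# Sketch of batching plus the error threshold: if error rows in a batch
# reach max_errors, the entire batch is routed to the error file.

def stage_batches(rows, batch_size=2000000):
    """Split rows into consecutive batches of at most batch_size."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def route_batch(batch, is_error_row, max_errors=1):
    errors = sum(1 for row in batch if is_error_row(row))
    # Threshold met: the whole batch goes to the error file.
    return "error_file" if errors >= max_errors else "insert"

batches = list(stage_batches(list(range(10)), batch_size=4))
routes = [route_batch(b, lambda r: r == 5, max_errors=1) for b in batches]
print(routes)  # ['insert', 'error_file', 'insert']
```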
Truncate Target Table Before Data Load
Deletes all the existing data in the Amazon Redshift target table before loading new data.
Require Null Value For Char and Varchar
Replaces the specified string value with NULL when you write data to Amazon Redshift columns of the Char and Varchar data types.
Default is an empty string.
When you run a mapping to write null values to a table that contains a single column of the Int, Bigint, Numeric, Real, or Double data type, the mapping fails. You must provide a value other than the default value in the Require Null Value For Char And Varchar property.
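The substitution this property describes can be sketched as a simple marker check: any Char or Varchar value equal to the configured string is written as NULL. The marker value "@NULL@" below is an arbitrary example, not a connector default.

```python
# Sketch of Require Null Value For Char and Varchar: values equal to the
# configured marker string are written as NULL. "@NULL@" is an example.

NULL_MARKER = "@NULL@"

def to_redshift_value(value, marker=NULL_MARKER):
    return None if value == marker else value

converted = [to_redshift_value(v) for v in ["abc", "@NULL@", ""]]
print(converted)  # ['abc', None, '']
```

Note that with the default empty-string marker, empty strings themselves would be converted to NULL, which is why a non-default value is required in the numeric single-column case described above.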
WaitTime In Seconds For S3 File Consistency 1
Number of seconds to wait for the Secure Agent to make the staged files consistent with the list of files available on Amazon S3.
Default is 0.
Copy Options
Copy command options.
Add options to the Copy command to write data from Amazon S3 to the Amazon Redshift target when the default delimiter comma (,) or double-quote (") is used in the data. Provide the Amazon Redshift Role Amazon Resource Name (ARN).
You can add the following options:
  • DELIMITER
  • ACCEPTINVCHARS
  • QUOTE
  • COMPUPDATE
  • AWS_IAM_ROLE
  • REGION
For example:
DELIMITER = \036;ACCEPTINVCHARS = #;QUOTE = \037 COMPUPDATE = ON;AWS_IAM_ROLE=arn:aws:iam::<account ID>:role/<role-name>;REGION = ap-south-1
If you do not add the options to the Copy command manually, the Secure Agent uses the default values.
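One plausible way to read the semicolon-separated option string is shown below. The parsing logic is an assumption for illustration, not the connector's implementation, and the AWS_IAM_ROLE value keeps the documentation's placeholders rather than a real account ID or role name.

```python
# Sketch: parse a semicolon-separated Copy Options string into a dict.
# The option names are real COPY options; the parser is illustrative.

def parse_copy_options(option_string):
    options = {}
    for part in option_string.split(";"):
        if "=" in part:
            key, value = part.split("=", 1)  # split once; values may contain '='-free colons
            options[key.strip()] = value.strip()
    return options

opts = parse_copy_options(
    "DELIMITER = \\036;ACCEPTINVCHARS = #;"
    "AWS_IAM_ROLE=arn:aws:iam::<account ID>:role/<role-name>;"
    "REGION = ap-south-1"
)
print(opts["REGION"])  # ap-south-1
```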
S3 Server Side Encryption
Indicates that Amazon S3 encrypts data during upload.
Provide a customer master key ID in the connection property to enable this property. Default is not selected.
S3 Client Side Encryption 1
Indicates that the Secure Agent encrypts data using a private key.
Provide a master symmetric key ID in the connection property to enable this property. If you enable both server-side and client-side encryptions, the Secure Agent ignores the server-side encryption.
When you use a serverless runtime environment, you cannot configure client-side encryption for Amazon Redshift V2 targets.
Analyze Target Table
Runs an ANALYZE command on the target table.
The query planner on Amazon Redshift updates the statistical metadata to build and choose optimal plans to improve the efficiency of queries.
Vacuum Target Table
Recovers disk space and sorts rows in a specified table or all tables in the database.
You can select the following recovery options:
  • None
  • Full
  • Sort Only
  • Delete Only
  • Reindex
Default is None.
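The recovery options above correspond to variants of the Amazon Redshift VACUUM statement. The SQL keywords in the sketch below are real Redshift syntax; the mapping function itself is an illustrative assumption about how the option might translate into a statement.

```python
# Sketch mapping the Vacuum Target Table options to Redshift VACUUM
# statements. SQL keywords are real Redshift syntax; the mapping
# function is illustrative.

VACUUM_SQL = {
    "None": None,                 # no VACUUM is issued
    "Full": "VACUUM FULL {table}",
    "Sort Only": "VACUUM SORT ONLY {table}",
    "Delete Only": "VACUUM DELETE ONLY {table}",
    "Reindex": "VACUUM REINDEX {table}",
}

def vacuum_statement(option, table):
    template = VACUUM_SQL[option]
    return template.format(table=table) if template else None

print(vacuum_statement("Sort Only", "public.orders"))
# VACUUM SORT ONLY public.orders
```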
Prefix to retain staging files on S3
Retains staging files on Amazon S3.
Provide both a directory prefix and a file prefix separated by a slash (/), or only a file prefix, to retain staging files on Amazon S3. For example, backup_dir/backup_file or backup_file.
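As a rough illustration of the two prefix forms, the sketch below assembles an S3 key from the configured value: a prefix containing a slash contributes both a directory and a file prefix, a bare value only a file prefix. The exact key layout the connector produces is an assumption here.

```python
# Sketch: build a retained staging file key from the configured prefix.
# Key layout ("{prefix}_{file}") is an assumption for illustration.

def staging_key(prefix, file_name):
    if "/" in prefix:
        dir_prefix, file_prefix = prefix.rsplit("/", 1)
        return f"{dir_prefix}/{file_prefix}_{file_name}"
    return f"{prefix}_{file_name}"

print(staging_key("backup_dir/backup_file", "part0001.gz"))
# backup_dir/backup_file_part0001.gz
print(staging_key("backup_file", "part0001.gz"))
# backup_file_part0001.gz
```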
Success File Directory 1
Directory for the Amazon Redshift success file.
Specify a directory on the machine that hosts the Secure Agent.
Error File Directory 1
Directory for the Amazon Redshift error file.
Specify a directory on the machine that hosts the Secure Agent.
Treat Source Rows As
Overrides the default target operation. Default is INSERT.
Select one of the following override options:
NONE
By default, none is enabled. The Secure Agent considers the task operation that you select in the Operation target property.
INSERT
Performs insert operation. If enabled, the Secure Agent inserts all rows flagged for insert. If disabled, the Secure Agent rejects the rows flagged for insert.
DELETE
Performs delete operation. If enabled, the Secure Agent deletes all rows flagged for delete. If disabled, the Secure Agent rejects all rows flagged for delete.
UPDATE and UPSERT
Performs update and upsert operations. To perform an update operation, you must map the primary key column and at least one column other than primary key column. You can select the following data object operation attributes:
  • Update as Update: The Secure Agent updates all rows as updates.
  • Update else Insert: The Secure Agent updates existing rows and inserts other rows as if marked for insert.
Amazon Redshift V2 Connector does not support the Upsert operation in the Update Strategy transformation. To use an Update Strategy transformation to write data to an Amazon Redshift target, you must select Treat Source Rows As as None.
By default, the Secure Agent performs the task operation based on the value that you specify in the Operation target property. However, if you specify an option in the Treat Source Rows As property, the Secure Agent ignores the value that you specify in the Operation target property or in the Update Strategy transformation.
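The precedence rule above can be sketched in a few lines: any Treat Source Rows As value other than NONE wins over the Operation target property. This is purely illustrative of the documented behavior.

```python
# Sketch of the documented precedence: Treat Source Rows As, when set
# to anything other than NONE, overrides the Operation target property.

def resolve_operation(operation, treat_source_rows_as="NONE"):
    if treat_source_rows_as != "NONE":
        return treat_source_rows_as
    return operation

print(resolve_operation("Upsert"))            # Upsert
print(resolve_operation("Upsert", "DELETE"))  # DELETE
```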
Override Target Query 1
Overrides the default update query that the Secure Agent generates for the update operation with the update query that you specify.
TransferManager Thread Pool Size 1
Number of threads to write data in parallel.
Default is 10.
Pre-SQL
The pre-SQL commands to run a query before you write data to Amazon Redshift. You can also use the UNLOAD or COPY command. The command that you specify here is processed as plain text.
Post-SQL
The post-SQL commands to run a query after you write data to Amazon Redshift. You can also use the UNLOAD or COPY command. The command that you specify here is processed as plain text.
Preserve record order on write 1
Retains the order of the records when you read data from a CDC source and write data to an Amazon Redshift target.
Use this property when you create a mapping to capture the changed record from a CDC source. This property enables you to avoid inconsistencies between the CDC source and target.
Minimum Upload Part Size 1
Minimum size of each part when the Secure Agent uploads an object to Amazon S3 in multiple parts.
Default is 5 MB.
Number of files per batch 1
Number of staging files per batch.
If you do not provide the number of files, Amazon Redshift V2 Connector calculates the number of staging files.
Schema Name
Overrides the default schema name.
Target table name
Overrides the default target table name.
Recovery Schema Name 1
Schema that contains recovery information stored in the infa_recovery_table table on the target system to resume the extraction of changed data from the last checkpoint.
Temporary Credential Duration
The time duration during which an IAM user can use the dynamically generated temporary credentials to access the AWS resource. Enter the time duration in seconds.
Default is 900 seconds.
If you require more than 900 seconds, you can set the time duration maximum up to 12 hours in the AWS console and then enter the same time duration in this property.
Forward Rejected Rows
This property is not applicable for Amazon Redshift V2 Connector.
1 Applies only to mappings. The remaining properties apply to both mappings and elastic mappings.