Amazon Redshift Connectors

Amazon Redshift targets in synchronization tasks

You can use an Amazon Redshift object as a target in a synchronization task. When you use Amazon Redshift target objects, you can select a standard object as the primary source. You can configure Amazon Redshift target properties on the Target page of the Synchronization Task wizard.
The following table describes the Amazon Redshift target properties:

Connection
Name of the target connection.

Target Object
Name of the target object. Select the primary target object.

Create Target
Creates a target. Enter a name for the target object and select the source fields that you want to use. The default name is the source object name, and by default all source fields are used. Optionally, enter a file extension for the target object.
When you configure a synchronization task to use Amazon Redshift targets, you can configure advanced target properties. The following table describes the Amazon Redshift advanced target properties:

S3 Bucket Name
Amazon S3 bucket name for the Amazon Redshift target data. Use an S3 bucket in the same region as your Amazon Redshift cluster.

Enable Compression
Compresses staging files before writing the files to Amazon Redshift. The performance of the synchronization task improves when the Secure Agent compresses the staging files. Default is selected.

Staging Directory Location
Amazon Redshift staging directory. When you run a task in the Secure Agent runtime environment, specify a directory path that is available on each Secure Agent machine in the runtime environment. When you run a task in the Hosted Agent runtime environment, leave the staging directory blank. The Hosted Agent creates a directory at a temporary location.

Batch Size
Minimum number of rows in a batch. Enter a number greater than 0. Default is 2000000.

Max Redshift Errors per Upload Batch for INSERT
Number of errors within a batch that causes the batch to fail. Enter a positive integer. If the number of errors is equal to or greater than the property value, the Secure Agent writes the entire batch to the error rows file. Default is 1.

Truncate Target Table Before Data Load
Truncates the Amazon Redshift target table before writing data to the target.

Null value for CHAR and VARCHAR data types
String value used to represent null values in CHAR and VARCHAR fields in Amazon Redshift targets, such as NULL or a space character. Default is an empty string.

Wait time in seconds for file consistency on S3
Number of seconds to wait for the staging files to become available on Amazon S3. Default is 5.

CopyOptions Property File
Copy command options. Add options to the Copy command to write data from an Amazon S3 bucket to the Amazon Redshift target. You can add the following options:
  • DELIMITER
  • ACCEPTINVCHARS
  • QUOTE
  • COMPUPDATE
  • AWS_IAM_ROLE
When you run a task in the Secure Agent runtime environment, either specify the path of the property file that contains the copy options or specify the copy options directly in the CopyOptions Property File field. When you run a task in the Hosted Agent runtime environment, you must specify the options directly in the CopyOptions Property File field. For a sample property file, see the first example after this table.

Turn on S3 Server Side Encryption
Indicates that Amazon S3 encrypts data during upload and decrypts data at the time of access.

Turn on S3 Client Side Encryption
Indicates that the Secure Agent encrypts data by using a private encryption key. If you enable both server-side and client-side encryption, the runtime environment ignores the server-side encryption.

Vacuum Target Table
Recovers disk space and sorts rows in a specified table or all tables in the database. You can select the following recovery options:
  • None
  • Full
  • Sort Only
  • Delete Only
  • Reindex
Default is None. For the equivalent Amazon Redshift statements, see the second example after this table.

Analyze Target Table
Improves the efficiency of read and write operations. The analyze operation updates the statistical metadata that the query planner on Amazon Redshift uses to build and choose optimal plans, which improves the efficiency of queries.

Prefix for Retaining Staging files on S3
Retains staging files on Amazon S3. Provide both a directory prefix and a file prefix separated by a slash (/), or only a file prefix, to retain staging files on Amazon S3. For example, backup_dir/backup_file or backup_file.

Pre-SQL
The pre-SQL commands to run a query before you read data from Amazon Redshift. You can also use the UNLOAD or COPY command. The command you specify here is processed as plain text.

Post-SQL
The post-SQL commands to run a query after you write data to Amazon Redshift. You can also use the UNLOAD or COPY command. The command you specify here is processed as plain text. For sample pre-SQL and post-SQL commands, see the third example after this table.

Target Table Name
You can override the default target table name.

Part Size
Specifies the part size of an object. Default is 5 MB.

TransferManager Thread Pool Size
Specifies the number of threads that write data in parallel. Default is 10.

Number of Files per Batch
Provide the number of files to calculate the number of target staging files per batch. If you do not provide a value, the number of target staging files is calculated internally. Specify a minimum value based on the cluster type and the number of nodes in the Amazon Redshift cluster. To avoid errors, specify a value less than 1500.

Success File Directory
Directory for the Amazon Redshift success rows file. Specify a directory path that is available on each Secure Agent machine in the runtime environment. By default, Data Integration writes the success file to the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/data/success
The Hosted Agent does not create a success rows file. Leave the Success File Directory field blank when you run a task in the Hosted Agent runtime environment.

Error File Directory
Directory for the error rows file. Specify a directory path that is available on each Secure Agent machine in the runtime environment. By default, Data Integration writes the error rows file to the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/data/error
When you specify the default error file directory, you can download the error rows file from the Schedule tab. When the error file directory is not the default directory, you cannot download the error rows file from the Schedule tab; you must go to the specified directory to retrieve the error rows file.
The Hosted Agent does not create an error rows file. Leave the Error File Directory field blank when you run a task in the Hosted Agent runtime environment.
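
The following sketch shows what a copy options property file might contain, assuming the file lists one option per line in OPTION = value form. The option names are the ones listed in the CopyOptions Property File description above; the delimiter, replacement character, and IAM role ARN are hypothetical placeholder values, and the exact file syntax may differ in your environment.

  DELIMITER = |
  ACCEPTINVCHARS = ?
  QUOTE = "
  COMPUPDATE = ON
  AWS_IAM_ROLE = arn:aws:iam::<account ID>:role/<role name>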
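
For reference, the Vacuum Target Table and Analyze Target Table options correspond to the following Amazon Redshift statements. The connector runs the equivalent operation for you, so this sketch only illustrates what each option does; the table name my_target is a hypothetical placeholder.

  VACUUM FULL my_target;        -- Full: reclaims disk space and sorts rows
  VACUUM SORT ONLY my_target;   -- Sort Only: sorts rows without reclaiming disk space
  VACUUM DELETE ONLY my_target; -- Delete Only: reclaims disk space without sorting
  VACUUM REINDEX my_target;     -- Reindex: analyzes interleaved sort keys, then vacuums
  ANALYZE my_target;            -- Analyze: updates table statistics for the query planner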
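
The following statements illustrate the kind of commands that you might enter in the Pre-SQL and Post-SQL fields. The table name, column name, S3 bucket path, and IAM role ARN are hypothetical placeholders, and the UNLOAD example assumes that the cluster can assume the specified role.

  -- Pre-SQL: remove previously loaded rows before the task writes new data
  DELETE FROM my_target WHERE load_date < '2020-01-01';

  -- Post-SQL: unload the refreshed table to Amazon S3 after the task completes
  UNLOAD ('SELECT * FROM my_target')
  TO 's3://my-bucket/exports/my_target_'
  IAM_ROLE 'arn:aws:iam::<account ID>:role/<role name>';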
