By default, Data Integration creates a flat file in a local temporary folder to stage the data before it reads from or writes to Amazon Redshift. You can configure Data Integration to optimize the staging performance.
If you do not set the optimization property, Data Integration stages the data without the optimized settings, which might affect the performance of the task.
Consider the following rules when you enable the optimization property:
If you run a mapping enabled for SQL ELT optimization, the mapping runs without SQL ELT optimization.
For the read operation, staging applies only to the Source transformation, not to the Lookup transformation.
If the data contains timestamp data types with time zone, the job runs without staging the data in the local flat file.
If the mapping contains Oracle CDC as a source and Amazon Redshift as the target, the job runs without staging the data in the local flat file.
When you run a mapping that reads multiline data containing CR or LF characters from the source, the column data is split into multiple lines.
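This split is what you would expect when a value that contains line breaks is staged in a line-delimited flat file. As a conceptual illustration only (this is not the connector's staging logic), the following Python sketch shows how a column value with an embedded LF character ends up on two lines when the staged file is read back line by line.

```python
# Conceptual sketch only: not the connector's staging code.
row = ["1001", "first line\nsecond line"]   # second column holds multiline data

# Stage the row as a single comma-delimited line, the way a naive
# flat-file stager would.
staged = ",".join(row) + "\n"

# Reading the staged file back line by line yields two lines for one row,
# so the multiline column value is split across records.
records = staged.splitlines()
print(records)   # ['1001,first line', 'second line']
```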
Enabling Amazon Redshift Connector to optimize the staging performance
Perform the following steps to enable the Amazon Redshift V2 Connector to optimize the staging performance:
1. In Administrator, click Runtime Environments.
2. Edit the Secure Agent for which you want to set the property.
3. In the System Configuration Details section, select the Service as Data Integration Server and the type as Tomcat.
4. Set the value of the Tomcat property INFA_DTM_RDR_STAGING_ENABLED_CONNECTORS for the read operation, INFA_DTM_STAGING_ENABLED_CONNECTORS for the write operation, and INFA_DTM_LKP_STAGING_ENABLED_CONNECTORS for the cached lookup operation to the plugin ID of Amazon Redshift Connector. See the sketch after these steps for an example of the resulting property values.
You can find the plugin ID in the manifest file located in the following directory:
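The same plugin ID value applies to all three properties. As an illustration only, the following Python sketch reads a plugin ID out of a manifest file and prints the three name=value pairs to enter as Tomcat custom properties. The manifest path (path/to/connector/manifest) and the pluginId key pattern are assumptions for the sake of the example; check the manifest file in your Secure Agent installation for the actual location and format.

```python
import re
from pathlib import Path

# Assumed location: replace with the manifest directory given in the
# documentation for your Secure Agent installation.
MANIFEST_PATH = Path("path/to/connector/manifest")

# Property names from the documentation, mapped to the operation they control.
PROPERTY_NAMES = [
    "INFA_DTM_RDR_STAGING_ENABLED_CONNECTORS",  # read (source) staging
    "INFA_DTM_STAGING_ENABLED_CONNECTORS",      # write (target) staging
    "INFA_DTM_LKP_STAGING_ENABLED_CONNECTORS",  # cached lookup staging
]


def find_plugin_id(manifest: Path) -> str:
    """Return the first pluginId-like value found in the manifest text.

    The 'pluginId' key pattern is an assumption; inspect your manifest
    file to confirm how the plugin ID is recorded.
    """
    text = manifest.read_text(encoding="utf-8", errors="ignore")
    match = re.search(r"pluginId\s*[=:]\s*([\w.]+)", text, re.IGNORECASE)
    if not match:
        raise ValueError(f"No plugin ID found in {manifest}")
    return match.group(1)


if __name__ == "__main__":
    plugin_id = find_plugin_id(MANIFEST_PATH)
    # Each name=value pair corresponds to one Tomcat custom property for the
    # Data Integration Server service in Administrator.
    for name in PROPERTY_NAMES:
        print(f"{name}={plugin_id}")
```

In Administrator, each pair becomes a separate custom property with the Service set to Data Integration Server and the type set to Tomcat, as described in the steps above.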