Use the following rules and guidelines when configuring SQL ELT optimization for an Amazon Redshift database:
To copy data from Amazon S3 to Amazon Redshift, you must split large files into multiple data files. For more information, see the Amazon documentation.
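For example, instead of loading one large file, you might split it into several compressed parts that share a common prefix and point the task at that prefix. The file names below are illustrative only:
customer_data.part_000.csv.gz
customer_data.part_001.csv.gz
customer_data.part_002.csv.gz
The COPY command can then load the parts in parallel across the Redshift slices.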
When you use uppercase letters in the column names of a JSON file and configure SQL ELT optimization, the mapping fails with the following error:
Error Reason: Invalid JSONPath format: Member is not an object.
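As an illustration (the column and member names are hypothetical), a record such as {"EMPID": 101} can trigger this error, while the same data with a lowercase member name, {"empid": 101}, loads correctly, because member matching in JSONPath expressions is case sensitive and Redshift column names are lowercase by default.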
When you configure a mapping with SQL ELT optimization and define an ORC source with date/time values, the Secure Agent might not write a few values correctly to the Redshift target.
When you define a Parquet source with decimal values of precision greater than 19,0 and use the COPY command to write to a Redshift target, the mapping fails with SQL ELT optimization.
When the data in delimited, Avro, or JSON files contains values that exceed the defined precision, specify the TRUNCATECOLUMNS=ON attribute.
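A minimal sketch of the attribute, assuming you append it to the other COPY command options configured for the target (the delimiter value and the semicolon-separated format are assumptions; follow the option format that your connector version expects):
DELIMITER=,;TRUNCATECOLUMNS=ON
With this attribute, the COPY command truncates the oversized values to fit the column instead of rejecting the rows.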
For the ORC and Parquet file types, specify AWS_IAM_ROLE in the COPY command to enable full SQL ELT optimization.
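For example, you might add an attribute of the following form to the COPY command options, assuming an IAM role that can read the staging files in Amazon S3 (the ARN is a placeholder):
AWS_IAM_ROLE=arn:aws:iam::123456789012:role/RedshiftCopyRole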
You cannot use an assume role when the source is in Parquet or ORC format and full SQL ELT optimization is enabled for the mapping task.
You cannot enable full SQL ELT optimization for a mapping task when the task contains a mapping with a single transformation connected to multiple downstream transformations, or vice versa.
You cannot enable full SQL ELT optimization for an Avro source with Date, Decimal, and Timestamp data types.
When you read data from an Amazon S3 flat file with Shift-JIS encoding, write to an Amazon Redshift target, and enable full SQL ELT optimization, the mapping fails.
When you configure a mapping that reads an Amazon S3 Avro source with column names in uppercase letters and uses an Amazon Redshift V2 connection to write data to an Amazon Redshift target, the COPY command writes blank rows to the target.
When you configure an update operation for data that contains a Time column in a mapping enabled for full SQL ELT optimization, and you override the target query with the value specified in the Override Target Query field in the advanced target properties, the task runs but data is not processed.
If the Amazon S3 bucket region and the Amazon Redshift region are different, specify the REGION attribute in the COPY command to enable full SQL ELT optimization. This applies to delimited, Avro, and JSON files.
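For example, if the staging bucket is in a region other than the cluster's, you might add an attribute of the following form along with the other COPY command options (the region value is a placeholder):
REGION=us-west-2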
When you perform an update, upsert, or delete operation, ensure that you specify a primary key or an update column for the target table.
When you perform an upsert operation and set the JVM option -DuseDeleteForUpsert=true, the target statistics show one additional processed row compared to when you do not set the JVM option. The number of rows in the target table is the same in both cases.
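A minimal sketch of how the option can be set, assuming you add it as a DTM JVM option for the Secure Agent's Data Integration Server (the JVMOption1 slot is an assumption; use any free JVMOption property in your agent configuration):
Service: Data Integration Server
Type: DTM
Name: JVMOption1
Value: -DuseDeleteForUpsert=true
You might need to restart the Secure Agent for the option to take effect.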
When you perform an update, upsert, or delete operation on an Amazon Redshift target and specify the update column as a Date/Time data type in a different time zone, data is not written to the target. However, the task runs successfully. The issue occurs when you use an Amazon S3 Parquet source.
When you perform an update operation, you cannot map an id column of a target table in the field mapping.
Even if you configure a condition using a DD_INSERT, DD_UPDATE, or DD_DELETE data driven operation on a target object, the log contains queries for the remaining data driven operations as well. The mapping runs successfully.
When the data driven condition contains only the DD_REJECT operation, the mapping runs without generating a query.
When you parameterize a transformation in a mapping enabled for full SQL ELT optimization and configure a parameter file to override the input parameters, the Secure Agent ignores the overridden parameters.
If a Union transformation receives input from only some of the source columns and you do not map the remaining columns to the target, the unmapped columns show null data in the target.
If the input fields contain a decimal field and you do not map the decimal field to the target in a Union transformation, the mapping that runs with full SQL ELT optimization and uses an Amazon Redshift V2 connection fails with an error.
If a mapping enabled for SQL ELT optimization contains Union and Aggregator transformations, include the incoming field used in the aggregate function or group by field in the field mapping, or remove that field from the aggregate function or group by field altogether. Otherwise, the mapping fails.
A mapping fails if a column of the Date or Datetime data type is not in the YYYY-MM-DD format.
You must map all the fields from the SQL query to the target for mappings enabled for SQL ELT optimization to run successfully.
If the custom query contains duplicate columns, the mapping runs without SQL ELT optimization.
When you run a mapping enabled for SQL ELT optimization that uses an Amazon Redshift V2 connection to update data with the float or integer data types in an Amazon Redshift target, the mapping might fail.
When you parameterize an Expression transformation in a mapping task and configure a parameter file to override the parameters, the Secure Agent does not read the overridden parameters. The issue occurs when you configure full SQL ELT optimization for a mapping that uses an Amazon Redshift V2 connection.
A mapping fails at runtime when you specify an advanced native filter in the following format: schema_name.table_name.column_name. When you define the advanced filter, do not qualify the table name with a schema name in the filter condition.
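For example, an advanced filter condition of the following form fails at runtime (the object and column names are illustrative):
sales_schema.orders.order_date > '2023-01-01'
Removing the schema qualifier lets the mapping run:
orders.order_date > '2023-01-01'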
When you use the query source type to read from Amazon Redshift, you can choose to retain the field metadata and save the mapping. Even if you edit the query and run the mapping, the field metadata specified at design time is retained.