You need to follow certain rules and guidelines for some of the data types.
Decimal
Consider the following rules and guidelines for data that contains the decimal data type:
The behavior of the decimal data type differs between mappings and mappings in advanced mode. Mappings support a maximum decimal precision of 28, while mappings in advanced mode support a maximum decimal precision of 38.
In mappings, if the precision of the decimal data type in the source exceeds 28, the numeric value for the decimal is rounded off after the 16th digit and the remaining digits are replaced with zeroes in the target.
For example, the value 2.3121356348957890888898699898989999990 from the source is rounded off to 2.3121356348957890000000000000000000000 in the target.
However, in mappings in advanced mode, the decimal data from the source remains the same in the target.
To resolve the issue in mappings, specify a precision that is less than or equal to 28 for the Decimal data type in the source table.
When you read data that contains columns of the decimal data type, any scale changes that you configure for the decimal data type columns are considered by default. If you do not want to consider the changes in scale, set the JVM option -honorDecimalScaleRedshift to false in the Secure Agent properties. By default, this property is set to true.
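For example, the entry in the Secure Agent JVM options might look like the following. The name=value form shown here is an assumption; use the exact format that your Secure Agent accepts for JVM options.
-honorDecimalScaleRedshift=false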
When you write to an Amazon Redshift V2 target with the staging optimization property enabled for staging performance, and the source contains data of the float data type, the decimal values in the target are truncated and rounded off to the nearest integer. To avoid this issue, in the system configuration details section, select the Type as DTM for the Data Integration service, and set the value of DisableInfaDoubleHandlingForStaging to yes.
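For example, the DTM custom property for the Data Integration service might be entered as follows. The name=value form shown here is an assumption; enter the property in the format that the system configuration details section accepts.
DisableInfaDoubleHandlingForStaging=yes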
String
Consider the following rules and guidelines when you read from external tables with data of string data type:
When you use a single object or multiple objects as the source, the columns of the String data type are ignored.
When you use a custom query as the source or create a view on top of an external table, the String data type is mapped to the Varchar data type.
Super
Consider the following rules and guidelines for data that contains the super data type:
When you configure a stored procedure in an SQL transformation, do not use in, out, or in-out parameters for columns of the Super data type. Otherwise, the mapping fails with the following warning:
EP_13248 DLL Error Msg: ERROR in SQL tx
When you configure a mapping in advanced mode to use the Copy command to write data to columns of the Super data type in an Amazon Redshift target, do not insert a string value into a Super data type column. Otherwise, the mapping fails.
You cannot define a lookup condition on the columns of Super data type in a simple filter or an uncached lookup.
By default, the precision of the super data type is 256. If you do not set the precision correctly in the mapping, the mapping fails due to data truncation or target overflow.
When you enable the mapping for source SQL ELT optimization to write to a flat file target and you do not set the precision, the data gets truncated.
When you create a new target at run time to write data that contains the super data type from a mapping enabled for SQL ELT optimization, the mapping fails if the precision value exceeds 65536 characters. Ensure that the incoming super data type is mapped to the super data type in the target. Otherwise, the Amazon Redshift super data type is mapped to the string transformation data type, and the new target with the super type column is created as varchar. For varchar columns, the precision must be less than 65536.
The same limitation for the super data type precision is applicable for the dynamic schema refresh operation.
When you map the following data types to the super data type in the target, the COPY command fails:
char
varchar
date
time
timestamp
timestamptz
The issue occurs because the super column in an Amazon Redshift table always expects string data to be enclosed within double quotes when you load data using the COPY command. When you map the Char and Varchar data types to the Super data type in the target, ensure that the incoming data contains double quotes (").
The same issue is also observed for an Amazon S3 V2 source object in a mapping to write to Amazon Redshift target columns of the super data type.
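The following sketch uses the Amazon Redshift JSON_PARSE function with hypothetical literal values to show why the double quotes matter:
select json_parse('"order-42"');
select json_parse('order-42');
The first statement succeeds because "order-42" is a valid JSON string that the Super data type accepts. The second statement fails because order-42 is not valid JSON, which mirrors the COPY command failure for unquoted Char and Varchar data.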
When you read from a single object source that contains super data type column values and define a simple filter on that column, the mapping is successful, but no records are generated for the columns in a mapping enabled with full SQL ELT optimization.
When you run a mapping in advanced mode to read from an Amazon Redshift source and write to a target, and the columns of the Super data type contain null values, the null values are written as the string value "null" in the target.
Time, datetime, timestamp, and timestamptz
Consider the following rules and guidelines for data that contains the time, datetime, timestamp, or timestamptz data types:
When you run a mapping using the time data type, the task runs successfully, but the fractional seconds are truncated from the output. For the time data type, the JDBC method CallableStatement.setTime(java.sql.Time) supports precision only up to seconds.
When you create a mapping for data that contains a time column and you edit the data type from time to timestamp, the mapping fails at run time.
When you read from or write to Amazon Redshift, the Time data type appears with dummy date values appended in the data preview for all the transformations in the mapping. All the downstream transformations also receive the appended dummy date values. However, the time values appear correctly only in an existing Amazon Redshift target.
When you create a target at runtime and the incoming values contain time columns, the time column is converted to the timestamp data type in the target. If a time column is added to the source and you dynamically refresh the schema, the column is created with the timestamp data type in the target.
If you configure an SQL transformation to call a stored procedure from Amazon Redshift, and you specify the time data type with fractional seconds as a parameter in the SQL transformation, the precision is truncated.
When you specify the SESSSTARTTIME variable in a query in a mapping task to return datetime values, specify the query in the following format:
select to_timestamp('$$$SESSSTARTTIME','YYYY-MM-DD HH24:MI:SS.MS')::TIMESTAMP as xz;
You must set the DateTime Format String in the Advanced session properties to YYYY-MM-DD HH24:MI:SS.MS.
If you want to use the default format MM/DD/YYYY for SESSSTARTTIME in the mapping task, use the following format in the query:
select to_timestamp('$$$SESSSTARTTIME','MM/DD/YYYY HH24:MI:SS.MS')::TIMESTAMP as xz;
When you use a Filter transformation to apply a filter to a date, timestamp, or time column, use the TO_DATE() expression, as shown in the example after this guideline. The source string format and the format string must match. Change the fractional part of the format string based on the source values. For more information about the TO_DATE format strings, see the Transformation Language Reference Guide.
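For example, a filter condition might look like the following. This is an illustration only; the port name ORDER_DATE_STR and the literal value are hypothetical, and the format string must be adjusted to match your source data.
TO_DATE(ORDER_DATE_STR, 'MM/DD/YYYY HH24:MI:SS.US') >= TO_DATE('01/01/2024 00:00:00.000000', 'MM/DD/YYYY HH24:MI:SS.US')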
When you read from an Amazon Redshift source using a simple filter on the time data type column and run a mapping, the task fails at runtime with the following error:
Operation failed: Invalid expression string for filter condition []
As a workaround, use an advanced filter to read from the Amazon Redshift source.
When you refresh the dynamic schema on a timestamptz column, only the varchar data type is propagated for the column value, not timestamptz.
When you write data of the Date data type to an Amazon Redshift target and the Secure Agent and the Amazon Redshift cluster are in different time zones, the date logged in the target is one day behind the actual date.
For example, the date 2014-12-07 is written as 2014-12-06 in the target.