Amazon Redshift Connectors

Rules and guidelines in mappings in SQL ELT mode

Consider the following rules and guidelines when you run mappings in SQL ELT mode:

General guidelines

  • When you read from multiple sources and use a Joiner or Union transformation, ensure that you use the same connection in all the sources.
  • When you provide more than four arguments in the Greatest() or Least() function, the following error message is displayed:
    Function [LEAST] doesn't contain the required number of arguments. The number of required arguments is [1] The number of provided arguments is [5].
    The error message is misleading. You can specify a maximum of four arguments in the Greatest() and Least() functions.
  • Do not use keywords or function names as column names or output port names.
  • When you perform an update, upsert, or delete operation, ensure that you specify a primary key or a column name in the update column field for the target table.
  • You cannot use nested aggregate functions. For example, you cannot specify SUM(AVG(col1)) in an aggregate expression.
    To nest aggregations, add multiple Aggregator transformations to the mapping.
  • When you configure a connected lookup, you can use only Return All and Return Any multiple match policies. When you configure an unconnected lookup, you can use only Return Any multiple match policy.
  • When you use a connected or an unconnected Lookup transformation and the Multiple Match Policy option is set to Return Any, ensure that the lookup condition contains only the equal to (=) operator. Otherwise, the mapping fails.
  • When you use the TEXT_TO_NUMERIC_ALT() function, you can use one of the following syntaxes:
    • TEXT_TO_NUMERIC_ALT (expression)
    • TEXT_TO_NUMERIC_ALT (expression, format)
    • TEXT_TO_NUMERIC_ALT (expression, precision, scale)
    • TEXT_TO_NUMERIC_ALT (expression, format, precision, scale)
    Ensure that the format argument is a string and the precision argument is an integer. If you specify the precision, you must also specify the scale.
  • When you pass a function as an argument within another function, ensure that the data type of the value that the argument returns is supported by the parent function.
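To illustrate the argument limits above, the following Redshift SQL sketch uses hypothetical table and column names:

```sql
-- Valid: GREATEST and LEAST accept at most four arguments in SQL ELT mode.
SELECT GREATEST(q1_sales, q2_sales, q3_sales, q4_sales) AS best_quarter
FROM quarterly_sales;   -- hypothetical table

-- Invalid: a fifth argument triggers the misleading error message shown above.
-- SELECT LEAST(q1_sales, q2_sales, q3_sales, q4_sales, q5_sales)
-- FROM quarterly_sales;

-- TEXT_TO_NUMERIC_ALT: when you pass a precision, pass the scale as well.
SELECT TEXT_TO_NUMERIC_ALT('1,234.56', '9,999.99', 10, 2) AS amount;
```

The format string, precision, and scale values are examples only; adjust them to match your data.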

Amazon Redshift source and target

  • When you read from and write to Amazon Redshift, ensure that you use the same connection in the source and target transformations.
  • Ensure that the source data doesn't contain Boolean, Time, TIMESTAMPTZ, or Super data types.
  • If you select Query as the source type, ensure that there are no duplicate columns in the query.
  • When you configure a Lookup transformation with the Multiple Match Policy option set to Return Any, and use an SQL query to override the lookup object at runtime, ensure that the number of columns that the SQL query returns is the same as the number of columns in the lookup object.
  • When you use an unconnected Lookup transformation and set the Multiple Match Policy option to Return Any, the columns defined in the lookup condition are redundantly added to the SELECT clause of the query.
  • When you use a custom query as the source, do not enable the Select Distinct option in the advanced source properties. Otherwise, the Select Distinct query overrides the custom query and eliminates duplicate rows.
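For the lookup SQL override rule above, a minimal sketch, assuming a hypothetical lookup object PRODUCTS with three columns:

```sql
-- The override query must return the same number of columns as the
-- lookup object PRODUCTS (PRODUCT_ID, PRODUCT_NAME, PRICE):
SELECT PRODUCT_ID, PRODUCT_NAME, PRICE
FROM PRODUCTS
WHERE PRICE > 0;

-- An override that drops or adds a column, such as
-- SELECT PRODUCT_ID, PRICE FROM PRODUCTS, would not match the object.
```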

Amazon S3 source and Amazon Redshift target

  • When you read from an Amazon S3 source, ensure that the source doesn't contain the FileName port.
  • Ensure that the source data doesn't contain hierarchical data types.
  • You cannot read Date, Decimal, and Timestamp data types from an Avro file.
  • You can read only String data type from a delimited file.
  • You can't preview data in transformations configured midstream in the mapping flow.
  • When you read from an Avro file, ensure that the column names do not contain Unicode characters.
  • When you read data from an Avro or a JSON file, ensure that the column names are in lowercase.
  • When you read data from an ORC or a Parquet file, specify AWS_IAM_ROLE in the COPY command.
  • When you read data from a delimited, Avro, or JSON file and the precision of the values is greater than the default precision, specify the TRUNCATECOLUMNS=ON attribute in the COPY command.
  • When you read data from a delimited, Avro, or JSON file and the Amazon S3 bucket and the Amazon Redshift cluster are located in different regions, specify the Amazon S3 bucket region in the REGION attribute of the COPY command.
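As an example, the COPY command property described in the last three items might combine the attributes as follows. The IAM role ARN is a placeholder, and us-west-2 stands in for the region of your Amazon S3 bucket:

```
AWS_IAM_ROLE=arn:aws:iam::<account-id>:role/<role-name>;TRUNCATECOLUMNS=ON;REGION=us-west-2
```

Separate multiple attributes with semicolons, and include only the attributes that apply to your source file format and region setup.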
