Amazon Redshift Connectors

Rules and guidelines for aggregate functions

Use the following rules and guidelines when pushing aggregate functions to an Amazon Redshift database:
  • You cannot use conditional clauses in aggregate expression functions.
  • You can use non-aggregate functions in aggregate expressions.
  • You cannot use nested aggregate functions directly. For example, you cannot specify
    SUM(AVG(col1))
    in the aggregate expression function columns. To perform nested aggregation, add multiple Aggregator transformations to the mapping, as illustrated in the first SQL sketch after this list.
  • You can parameterize the
    GROUP BY
    fields and aggregate functions in a mapping task.
  • When you use the STDDEV or VARIANCE function on an expression that consists of a single value and push the mapping down, the function returns NULL. When you run the same mapping without pushdown, the function returns 0. See the second SQL sketch after this list.
  • During field mapping, map only the fields that you added to the
    GROUP BY
    port. Otherwise, the mapping runs in non-PDO mode with an error message.
  • When you do not specify a port from an Amazon S3 flat file source in the
    GROUP BY
    clause of the aggregate function but map the port to an Amazon Redshift target, the mapping task runs successfully in non-PDO mode with the following message:
    Pushdown optimization to the source stops before transformation [Aggregator] because [f_varchar] is a non-group by passthrough port, which is not allowed.
    The mapping fails when you push down a mapping with an Amazon S3 Avro or Parquet source.
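
As a minimal sketch, the following SQL shows the two-stage aggregation that chained Aggregator transformations effectively push down in place of a nested aggregate such as SUM(AVG(col1)). The table sales and the columns region, store_id, and revenue are hypothetical names used only for illustration.

    -- A nested aggregate such as SUM(AVG(revenue)) cannot be pushed down in one step.
    -- The inner query corresponds to the first Aggregator transformation
    -- (average revenue per store); the outer query corresponds to the second
    -- Aggregator transformation (sum of those averages per region).
    SELECT region,
           SUM(avg_revenue) AS sum_of_avg_revenue
    FROM (
        SELECT region,
               store_id,
               AVG(revenue) AS avg_revenue
        FROM sales
        GROUP BY region, store_id
    ) AS store_averages
    GROUP BY region;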
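
The next sketch illustrates the STDDEV and VARIANCE behavior for a single-value expression. Amazon Redshift evaluates STDDEV and VARIANCE as sample statistics, which are undefined for a single row, so the pushed-down query returns NULL, while the same expression evaluated without pushdown returns 0. The column alias amount is hypothetical.

    -- Pushed down to Amazon Redshift: the sample standard deviation of a single row is NULL.
    SELECT STDDEV(amount) AS stddev_amount
    FROM (SELECT 42 AS amount) AS single_row;
    -- The same expression evaluated without pushdown returns 0.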