ODBC Connector

Rules and guidelines for SQL ELT optimization

Certain rules and guidelines apply when you configure SQL ELT optimization to specific databases.

Amazon Redshift

Consider the following rules and guidelines for SQL ELT optimization to an Amazon Redshift database:
  • To push TRUNC(DATE) to Amazon Redshift, you must define the date and format arguments. Otherwise, the agent does not push the function to Amazon Redshift.
  • The aggregate functions for Amazon Redshift accept only one argument, which is a field set for the aggregate function. The filter condition argument is ignored.
    In addition, verify that all fields mapped to the target are listed in the GROUP BY clause.
  • The Update Override ODBC advanced target property is not applicable when you use an ODBC connection to connect to Amazon Redshift.
  • To push TO_DATE() to Amazon Redshift, you must define the string and format arguments.
  • To push TO_CHAR() to Amazon Redshift, you must define the date and format arguments.
  • Do not specify a format for SYSTIMESTAMP() to push the SYSTIMESTAMP to Amazon Redshift. The Amazon Redshift database returns the complete time stamp.
  • To push INSTR() to Amazon Redshift, you must define only the string, search_value, and start arguments. Amazon Redshift does not support the occurrence and comparison_type arguments.
  • The flag argument is ignored when you push TO_BIGINT and TO_INTEGER to Amazon Redshift.
  • The CaseFlag argument is ignored when you push IN() to Amazon Redshift.
  • If you use the NS format as part of the ADD_TO_DATE() function, the agent does not push the function to Amazon Redshift.
  • If you use any of the following formats as part of the TO_CHAR() and TO_DATE() functions, the agent does not push the function to Amazon Redshift:
    • NS
    • SSSS
    • SSSSS
    • RR
  • To push TRUNC(DATE), GET_DATE_PART(), and DATE_DIFF() to Amazon Redshift, you must use the following formats:
    • D
    • DDD
    • HH24
    • MI
    • MM
    • MS
    • SS
    • US
    • YYYY
  • When you push the DATE_DIFF() function to Amazon Redshift using a Redshift ODBC connection, the Secure Agent returns the difference with the sign inverted: positive results are returned as negative values, and negative results as positive values.
  • When you select an Amazon Redshift ODBC connection as source and select an external table as a source object, the data preview fails.
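The argument rules above can be illustrated with a few transformation expressions. This is a sketch only; the in_* field names are hypothetical placeholders:

```
-- Pushed to Amazon Redshift: the date and format arguments are defined
TRUNC(in_OrderDate, 'MM')
TO_DATE(in_DateString, 'YYYY-MM-DD')
TO_CHAR(in_OrderDate, 'YYYY-MM-DD')

-- Not pushed: TRUNC(DATE) without a format argument
TRUNC(in_OrderDate)

-- Pushed: INSTR() with only the string, search_value, and start arguments
INSTR(in_Comment, 'error', 1)

-- Not pushed: ADD_TO_DATE() with the NS format
ADD_TO_DATE(in_OrderDate, 'NS', 1000)
```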

DB2

Consider the following rules and guidelines for SQL ELT optimization to a DB2 database:
Mappings
When you configure an ODBC DB2 mapping, adhere to the following guidelines:
  • You cannot use the ODBC DB2 connection to read or write Unicode data.
  • You cannot calculate and store data temporarily using the Variable Field in an Expression transformation.
Functions
When you push functions to DB2, adhere to the following guidelines:
  • You cannot get a case-sensitive return value for the IN () function.
  • To push the MOD() function, the argument that you pass must be of the Integer data type.
  • When you push the SUBSTR() function, the value of the string argument must be greater than the value of the length argument.
  • When you push the SOUNDEX() function that contains an empty string or a character string without an English alphabet, the function returns Z000.
  • You cannot use microseconds or nanoseconds value when you push functions to the DB2 database.
  • You cannot push the GET_DATE_PART() function when the format argument is NULL or the format argument is not specified.
  • When you push the INSTR() function, you can only define the string, search_value, and start arguments.
  • When you push the DECODE() function that contains the Null value in the argument, the Secure Agent ignores the Null value and the function returns the value of the default argument.
  • You cannot push the LTRIM() or RTRIM() function that contains the trim_set argument.
  • To push the TO_BIGINT(), TO_DECIMAL(), TO_FLOAT(), or TO_INTEGER() function, the argument that you pass must be of the Numeric data type.
  • To use the NULL expression in the Expression transformation, you must specify the expression in one of the following formats:
    • TO_CHAR(NULL)
    • TO_INTEGER(NULL)
  • When you push the EXP() function that contains the Numeric or Double data type, the function might return a different decimal value for the data types as compared to a mapping that runs without SQL ELT optimization.
  • To push TO_CHAR(date) and TO_DATE() to DB2, you must use the following formats:
    • YYYYMMDD
    • YYYYMMDD HH24MISS
    • YYYY-MM-DD HH24MISS
    • YYYYMMDD HH24:MI:SS
    • YYYY/MM/DD HH24:MI:SS
    • YYYY/MM/DD HH24MISS
    • YYYY-MM-DD HH24:MI:SS
    • YYYY-MM-DD-HH24.MI.SS.US
    • YYYY-MM-DD-HH24.MI.SS.MS
    • YYYY-MM-DD-HH24.MI.SS
  • To push ADD_TO_DATE() and GET_DATE_PART() to DB2, you must use the following formats:
    • HH
    • HH24
    • HH12
    • MM
    • MON
    • MONTH
    • Y
    • YY
    • YYY
    • YYYY
    • D
    • DD
    • DDD
    • DY
    • DAY
    • US
    • SS
    • MI
Transformations
  • To join source tables, ensure that the source tables are from the same database and use the same connection.
  • You cannot configure more than one Sorter transformation in a mapping.
  • You cannot configure a Union transformation when one of the following conditions is true:
    • One of the sources to the Union transformation contains either a distinct union or sorter.
    • The input groups do not originate from the same relational database management system.
  • You cannot configure a Lookup transformation when one of the following conditions is true:
    • The lookup connection is not compatible for SQL ELT optimization with the source connection.
    • The unconnected Lookup transformation is downstream from an Aggregator transformation.
  • The mapping fails when the Union transformation is downstream from a transformation that requires a view or sequence generator to be created in a database and the connections are on different databases.
SQL override
  • The mapping fails when you use an ORDER BY clause in an SQL override query.
  • When you define an SQL override query to override the custom query, you cannot extract distinct rows from the source table.
  • When you configure an SQL override query in a Lookup transformation, ensure that you select all ports in the same order that appear in the Lookup transformation.
Target operations
  • You cannot configure an update override for the target.
  • You cannot perform an upsert operation on the target.
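A few transformation expressions illustrate the DB2 function guidelines above. This is a sketch only; the in_* field names are hypothetical placeholders:

```
-- NULL must be wrapped in a typed conversion in an Expression transformation
TO_CHAR(NULL)
TO_INTEGER(NULL)

-- Pushed: TO_DATE() with a supported DB2 format
TO_DATE(in_DateString, 'YYYY-MM-DD HH24:MI:SS')

-- Not pushed: LTRIM() with a trim_set argument
LTRIM(in_Name, 'x')

-- SOUNDEX() on an empty string returns Z000
SOUNDEX('')
```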

Google BigQuery

Consider the following rules and guidelines for SQL ELT optimization to a Google BigQuery database:
  • The Update Override ODBC advanced target property is not applicable when you use an ODBC connection to configure SQL ELT optimization to write data to a Google BigQuery target.
  • Update, upsert, and delete operations are not applicable when you use an ODBC connection to configure SQL ELT optimization to write data to a Google BigQuery target.
  • When you configure SQL ELT optimization to write data to a Google BigQuery target, the Truncate Target option is not supported. You can configure a pre-SQL command in the source to delete data from the target table.
  • When you configure SQL ELT optimization, ensure that the transformation does not contain a variable port.
  • To push the ADD_TO_DATE() function to the Google BigQuery database, you must define the arguments of the Date data type.
  • To push the DECODE() function to evaluate multiple columns and conditions for TRUE or FALSE, you must define a boolean expression instead of using TRUE or FALSE in the value argument.
  • To push the GET_DATE_PART() function to the Google BigQuery database, you must define the arguments of the Date, DateTime, or Timestamp data type.
  • To push the INSTR() function to the Google BigQuery database, you must use the following format: INSTR(string, search_value)
  • To push the LAST_DAY() function to the Google BigQuery database, you must define the arguments of the Date data type.
  • To push the MAX() function to the Google BigQuery database, you must define the arguments of the Number data type.
  • To push the MIN() function to the Google BigQuery database, you must define the arguments of the Date, Number, or String data type.
  • To push the ROUND(DATE) or TRUNC(DATE) function to the Google BigQuery database, you must define the arguments of the Timestamp data type.
  • To push the TO_CHAR(DATE) function to the Google BigQuery database, you must define the arguments of the Timestamp data type.
  • When you push the SYSTIMESTAMP() function to the Google BigQuery database, do not specify any format. The Google BigQuery database returns the complete timestamp.
  • When you push the SYSDATE() function to the Google BigQuery database, you must map the output of the expression transformation to a column of Date data type in the Google BigQuery target.
  • When you push the TO_DATE() function to the Google BigQuery database, you must configure the output field in the expression transformation to a column of Timestamp data type.
  • When you push TO_DATE(string, format) or IS_DATE(string, format) to Google BigQuery and specify the SS, SS.MS, or SS.US format, the function returns the same value for the formats in seconds and subseconds.
  • When you push TO_DATE(string, format) or IS_DATE(string, format) to Google BigQuery, you must use the following format arguments:
    • YYYY
    • YY
    • MONTH
    • MON
    • MM
    • DD
    • HH24
    • HH12
    • MI
    • SS
    • SS.MS
    • SS.US
    • PM
    • AM
    • pm
    • am
    If you specify HH12 in the format argument, you must specify AM, am, PM, or pm.
  • When you push the TO_DATE() function to Google BigQuery using an ODBC connection and provide a constant in the expression, ensure that you specify the format argument. Otherwise, the mapping fails.
  • When you push TO_CHAR() to Google BigQuery, you must use the following format arguments:
    • YYYY
    • YY
    • MONTH
    • MON
    • MM
    • Q
    • DD
    • DDD
    • D
    • DY
    • HH
    • HH24
    • HH12
    • MI
    • SS
    • SS.MS
    • SS.US
    • PM
    • AM
    • pm
    • am
    • T
    If you specify HH12 in the format argument, you must specify AM, am, PM, or pm.
  • When you push ROUND(string, format) or TRUNC(string, format) to Google BigQuery, you must use the following format arguments:
    • HH24
    • MI
    • SS
    • DD
    • MS
  • When you push a function that returns a Boolean value, you must configure the output field in the expression transformation to a column of Integer data type.
  • If you configure a Lookup condition, you must use only the equals to (=) operator. If you use any operator other than the equals to (=) operator, the mapping fails.
  • When you configure the Lookup Source Filter or Lookup SQL Override property in a Lookup transformation, you must add the Create Temporary View property under the Advanced Session Properties tab when you create a mapping task and select Yes in the Session Property Value field.
  • If the Lookup transformation name contains Unicode characters, the mapping fails.
  • When you configure an unconnected Lookup transformation, the fields specified in the Lookup SQL Override property are matched with the lookup fields based on the field names.
  • When you configure a Lookup transformation and select Report error in the Multiple Matches property, the mapping fails and the Secure Agent logs the following error in the session log file:
    FnName: Execute Direct - [Informatica] [BigQuery] (70) Invalid query: Scalar subquery produced more than one element
  • If you specify a function in the Lookup SQL Override property, you must specify the alias name for the function with the lookup field as an argument.
  • When you read data of the date, datetime, or timestamp data type and write the data as a string to the target, you must add the DateTime Format String property under the Advanced Session Properties tab when you create a mapping task and specify YYYY-MM-DD HH24:MI:SS in the Session Property Value field.
  • Ensure that the Data Source Name, User name, and Driver Manager for Linux in the source and target ODBC connections are the same. If these values differ between the source and target ODBC connections, the mapping fails with the following error:
    "SQL ELT optimization stops because the connections are not pushdown compatible."
  • Ensure that you do not specify an in-out parameter of Date or Time data type. Otherwise, the mapping task fails.
  • Ensure that you do not parameterize the mapping and use a parameter file to define values for fields, expressions, or data filters. Otherwise, the mapping task fails.
  • Ensure that you do not completely parameterize the expression in the Expression transformation and use a parameter file to define values. Otherwise, the mapping task fails.
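The Google BigQuery function and format rules above can be sketched with a few expressions. The field names and constant values are hypothetical:

```
-- Pushed: INSTR() in the supported two-argument form
INSTR(in_Comment, 'error')

-- Pushed: TO_DATE() with a supported format; HH12 requires AM, am, PM, or pm
TO_DATE(in_DateString, 'YYYY-MM-DD HH12:MI:SS AM')

-- A constant input to TO_DATE() over ODBC must include the format argument
TO_DATE('2023-01-15', 'YYYY-MM-DD')
```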

Microsoft Azure SQL Data Warehouse

Consider the following rules and guidelines for SQL ELT optimization to a Microsoft Azure SQL Data Warehouse database:
  • When you use the Microsoft ODBC Driver 17, you cannot run mappings on Red Hat Enterprise Linux 8.
  • When you read data that contains reserved keywords from Microsoft Azure Synapse SQL, ensure that you add the keywords to the
    reswords.txt
    file in the agent machine. The
    reswords.txt
    file is available in the following path:
    <Secure Agent installation directory>\downloads\package-ICSAgent_RXX.X\package\ICS\main\bin\rdtm
  • You cannot use the ORDER BY clause in a source custom query unless you also specify a TOP, OFFSET, or FOR XML clause in the query.
  • When you run a mapping configured with the Create New at Runtime option, then drop the target table that was created on Microsoft Azure SQL Data Warehouse by using the drop table tablename command, and rerun the same mapping, the Secure Agent does not trigger the create query to recreate the target before writing data, and the mapping results in an error.
  • The Update Override ODBC advanced target property is not applicable when you use an ODBC connection to connect to Microsoft Azure SQL Data Warehouse.
  • Upsert operations for SQL ELT optimization are not applicable when you use an ODBC connection.
  • The datetimeoffset data type is applicable only in passthrough mappings.
  • The Microsoft Azure SQL Data Warehouse aggregate functions accept only one argument, which is a field set for the aggregate function. The agent ignores any filter condition defined in the argument.
    In addition, ensure that all fields mapped to the target are listed in the GROUP BY clause.
  • To push the TO_CHAR() function to the Microsoft Azure SQL Data Warehouse database, you must define the date and format arguments.
  • When you push the SYSTIMESTAMP() and SYSDATE() functions to the Microsoft Azure SQL Data Warehouse database, do not specify any format. The Microsoft Azure SQL Data Warehouse database returns the complete time stamp. Use SYSDATE without parentheses; if you specify SYSDATE(), the expression is invalid.
  • You cannot push the TO_BIGINT() or TO_INTEGER() function with more than one argument to the Microsoft Azure SQL Data Warehouse database.
  • When you push the REPLACECHR() or REPLACESTR() function to the Microsoft Azure SQL Data Warehouse database, the agent ignores the caseFlag argument. For example, both REPLACECHR(false, in_F_CHAR, 'a', 'b') and REPLACECHR(true, in_F_CHAR, 'a', 'b') return the same value.
  • To push INSTR() to the Microsoft Azure SQL Data Warehouse database, you must define only the string, search_value, and start arguments. Microsoft Azure SQL Data Warehouse does not support the occurrence and comparison_type arguments.
  • The Microsoft Azure SQL Data Warehouse connector supports the following date formats with the TO_DATE() function:
    • YYYY-MM-DD HH24:MI:SS.NS
    • YYYY-MM-DD HH12:MI:SS.NSAM
    • MON DD YYYY HH12:MI:SS.NSAM
    • MON DD YYYY HH24:MI:SS.NS
    • DD MON YYYY HH12:MI:SS.NSAM
    • DD MON YYYY HH24:MI:SS.NS
    • MM/DD/YY HH12:MI:SS.NSAM
    • MM/DD/YY HH24:MI:SS.NS
    • MM/DD/YYYY HH12:MI:SS.NSAM
    • MM/DD/YYYY HH24:MI:SS.NS
    • HH24:MI:SS.NS
    • HH12:MI:SS.NSAM
  • To push the SET_DATE_PART() function to the Microsoft Azure SQL Data Warehouse database, you must use the following date data types as arguments:
    • datetime
    • datetimeoffset
    • datetime2
    • smalldatetime
    You can use the following formats for date data types:
    • YYYY, YYY, YY, Y
    • MM, MON, MONTH
    • D, DD, DDD, DY, DAY
    • HH, HH12, HH24
    • MI
    • MS
    • SS
    The NS and US formats are not applicable to SET_DATE_PART().
  • To push the ADD_TO_DATE() function to the Microsoft Azure SQL Data Warehouse database, you must use the following date data types as arguments:
    • date
    • datetime
    • datetimeoffset
    • datetime2
    • smalldatetime
    • time
    You can use the following formats for date data types:
    • YYYY, YYY, YY, Y
    • MM, MON, MONTH
    • D, DD, DDD, DY, DAY
    • HH, HH12, HH24
    • MI
    • MS
    • SS
    • NS: applicable to datetimeoffset, datetime2, and time
    • US
  • To push the MAKE_DATE_TIME() function to the Microsoft Azure SQL Data Warehouse database, you must use the following date data types as arguments:
    • date
    • datetime
    • datetimeoffset
    • datetime2
    • smalldatetime
    • time
    You can use year, month, day, hour, minute, second, and nanosecond with the appropriate return date types.
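A few expressions illustrate the Microsoft Azure SQL Data Warehouse rules above. This is a sketch only; the in_* field names are hypothetical placeholders (in_F_CHAR is taken from the example above):

```
-- SYSDATE must be used without parentheses; SYSDATE() is an invalid expression
SYSDATE

-- Do not specify a format; the database returns the complete time stamp
SYSTIMESTAMP()

-- Pushed: TO_DATE() with one of the supported formats
TO_DATE(in_DateString, 'MM/DD/YYYY HH24:MI:SS.NS')

-- The caseFlag argument is ignored: both expressions return the same value
REPLACECHR(false, in_F_CHAR, 'a', 'b')
REPLACECHR(true, in_F_CHAR, 'a', 'b')
```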

Netezza

Consider the following rules and guidelines for SQL ELT optimization to a Netezza database:
  • When you configure SQL ELT optimization in a mapping to write data that contains numeric fields to a Netezza target, the Secure Agent rounds large-range numeric data (38,10) to the first decimal place.
  • You cannot push down data of the Timestamp data type to a Netezza target database.

Snowflake

Consider the following rules and guidelines for SQL ELT optimization to a Snowflake database:
Update override property
The update override property is applicable for all ODBC subtypes in the ODBC connection, except Snowflake.
Common fields in multiple sources
When you use a Snowflake ODBC connection in a mapping enabled with SQL ELT optimization to read data from two Snowflake sources that have fields with the same name and you define a filter condition for one of the common fields, the mapping fails.
Create Temporary View session property
Enable the Create Temporary View property in the session properties of the mapping task before you configure the following properties:
  • An upsert, update, or delete operation.
  • A filter or joiner in the query options of the source.
  • A custom SQL query pushed down from the source.
  • An unconnected lookup.
Not Null constraint
When you run a mapping to write data to a Snowflake target, and you define the primary key for the target table but do not explicitly define the Not Null constraint, the upsert, delete, or update operation fails. You must add the Not Null constraint for the target table and then run the upsert, delete, or update operation.
Sequence Generator transformation
When you configure a Sequence Generator transformation in a mapping, adhere to the following guidelines:
  • Add the Create Temporary Sequence advanced session property and set the session property value to Yes.
  • In the Output fields of the Sequence Generator transformation, do not map the CURRVAL field to an input field in a Target transformation or other downstream transformation.
  • When you use the Sequence Generator transformation in a mapping and the target operation is upsert, update, or delete, the SQL ELT query is not generated. You must select Insert as the target operation to push down the Sequence Generator transformation to Snowflake.
  • When you configure cross-database SQL ELT optimization for a Snowflake ODBC mapping that includes a Sequence Generator transformation, SQL ELT optimization fails.
Lookup
When you configure a lookup, adhere to the following guidelines:
  • When you configure a connected lookup, you can select the Return All Rows multiple matches option in the lookup object properties. If you select any option other than Return All Rows, the SQL ELT query is not generated.
  • When you configure an unconnected lookup, you must select the Report error multiple matches option in the unconnected lookup object properties for SQL ELT optimization to work. Ensure that you enable the Create Temporary View property in the session properties of the mapping task.
  • When you configure an unconnected lookup in a mapping configured for SQL ELT optimization using a Snowflake ODBC connection, and if there are multiple matches in the data, the Secure Agent processes the records but does not log an error when it finds multiple matches.
Create New at Runtime option
You can configure the
Create New at Runtime
option for a Target transformation in a mapping configured for
SQL ELT optimization
. When you use the
Create New at Runtime
option, adhere to the following guidelines:
  • Because the SQL identifiers are always enclosed in double quotes, you must explicitly enable the AddQuotesAlwaysPDO flag and set the value to Yes in the custom properties in the advanced session properties of the mapping task.
  • If you enable the truncate target option in the target mapping, you must add the AddQuotesAlways DTM property and set the value to Yes. To configure this property, in the Secure Agent properties, navigate to the Custom Configuration Details section, select Data Integration Server as the service, select DTM as the type, add the AddQuotesAlways property, and set the value to Yes.
  • When you use the Create New at Runtime option, the TIMESTAMP_LTZ, TIMESTAMP_TZ, Boolean, and Time data types are not supported.
  • When you run a mapping configured with the Create New at Runtime option, then drop the target table that was created on Snowflake by using the drop table tablename command, and rerun the same mapping, the Secure Agent does not trigger the create query to recreate the target before writing data, and the mapping results in an error.
  • When you use the Create New at Runtime option, and if the name you specify for the target already exists in Snowflake, the Secure Agent inserts the data into the existing target table.
  • When you use the Create New at Runtime option, but later delete the created target table and rerun the mapping task, the Secure Agent fails to create the target table.
Functions
When you push functions to Snowflake, adhere to the following guidelines:
  • The Snowflake aggregate functions accept only one argument, which is a field set for the aggregate function. The agent ignores any filter condition defined in the argument.
    In addition, ensure that all fields mapped to the target are listed in the GROUP BY clause.
  • When you push the SYSTIMESTAMP() function to the Snowflake database, do not specify any format. The Snowflake database returns the complete time stamp.
  • You cannot push the TO_BIGINT() or TO_INTEGER() function with more than one argument to the Snowflake database.
  • When you push the REPLACESTR() function to the Snowflake database, the agent ignores the caseFlag argument. The REPLACESTR() function must include four parameters for SQL ELT optimization to work.
  • When you push the MD5 function, it returns NULL if any input is NULL.
  • You cannot use millisecond and microsecond values when you push functions to the Snowflake database.
  • You must use only the following supported date and time formats:
    • Y
    • YY
    • YYY
    • YYYY
    • MM
    • MON
    • MONTH
    • D
    • DD
    • DDD
    • DY
    • DAY
    • HH
    • MI
    • SS
    • NS
    For information on date and time-related functions, see the Snowflake documentation.
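A few expressions illustrate the Snowflake function guidelines above. This is a sketch only; the in_* field names are hypothetical placeholders:

```
-- Pushed: REPLACESTR() with the required four parameters (caseFlag is ignored)
REPLACESTR(0, in_Comment, 'color', 'colour')

-- Pushed: date format limited to the supported tokens (no millisecond
-- or microsecond values)
TO_CHAR(in_OrderDate, 'YYYY-MM-DD HH:MI:SS')

-- Do not specify a format with SYSTIMESTAMP()
SYSTIMESTAMP()
```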

Teradata

Consider the following rules and guidelines for full SQL ELT optimization to a Teradata database:
  • You cannot push the LTRIM(), RTRIM(), or ROUND(NUMBER) function that contains more than one argument to the Teradata database.
  • You cannot use the upsert operation in a Joiner transformation.
  • When you configure an SQL override query using full SQL ELT optimization, you must map all fields that you specify in the SQL override query to the Teradata target object.
  • You can push the STDDEV() and VARIANCE() functions to the Teradata database only in an Aggregator transformation.
  • You cannot use an ORDER BY clause in a custom query or SQL override query unless you also specify the TOP clause in the query.
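A few examples illustrate the Teradata guidelines above. This is a sketch only; the in_* field names and the orders table are hypothetical:

```
-- Not pushed: LTRIM(), RTRIM(), or ROUND(NUMBER) with more than one argument
LTRIM(in_Name, 'x')
ROUND(in_Amount, 2)

-- Pushed: the single-argument forms
LTRIM(in_Name)
ROUND(in_Amount)

-- A custom query with an ORDER BY clause must also specify a TOP clause:
-- SELECT TOP 10 * FROM orders ORDER BY order_date
```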
