Ingesting Db2 for i Change Data into Snowflake with Cloud Mass Ingestion Databases

Snowflake targets

Target preparation

Complete the following steps as the ACCOUNTADMIN user.
  1. Create a Mass Ingestion user. Use one of the following SQL statements:
    create user INFACMI_User password = 'Xxxx@xxx';
    or
    create or replace user INFACMI_User password = 'Xxxx@xxx';
  2. Create a new role and grant the role to the Mass Ingestion user. Use the following SQL statements:
    create role INFA_CMI_Role;
    grant role INFA_CMI_Role to user INFACMI_User;
  3. Grant usage on the Snowflake virtual warehouse to the new role. Use the following SQL statement:
    grant usage on warehouse CMIWH to role INFA_CMI_Role;
  4. Grant the USAGE and CREATE SCHEMA privileges on the Snowflake database to the new role. Use the following SQL statement:
    grant usage, create schema on database CMIDB to role INFA_CMI_Role;
  5. Set the default role for the newly created user. Use the following SQL statement:
    alter user INFACMI_User set default_role=INFA_CMI_Role;
Also, as the INFACMI_User, create a new schema:
create schema CMISchema;
If the user's default role is used for ingestion tasks and does not have the required privileges, the following error will be issued at runtime:
SQL compilation error: Object does not exist, or operation cannot be performed.
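For reference, the preparation steps above can be run as one sequence. The following consolidated sketch uses the warehouse CMIWH, database CMIDB, and the user, role, and schema names from this section; replace the placeholder password with your own value. The use database statement is included on the assumption that the CMISchema schema belongs in the CMIDB database:
    -- Run as the ACCOUNTADMIN user
    create user INFACMI_User password = 'Xxxx@xxx';
    create role INFA_CMI_Role;
    grant role INFA_CMI_Role to user INFACMI_User;
    grant usage on warehouse CMIWH to role INFA_CMI_Role;
    grant usage, create schema on database CMIDB to role INFA_CMI_Role;
    alter user INFACMI_User set default_role=INFA_CMI_Role;
    -- Run as INFACMI_User
    use database CMIDB;
    create schema CMISchema;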

Usage considerations

  • Before writing data to Snowflake target tables, database ingestion jobs first write the data to an internal stage with the name you specified in the associated database ingestion task.
  • When you define a connection for a Snowflake target, you must set the Additional JDBC URL Parameters field to database=target_database_name (see the example after this list). Otherwise, when you try to define the target in the database ingestion task wizard, an error message reports that the list of schemas cannot be retrieved.
  • When you define a connection for a Snowflake target that uses the KeyPair option as the authentication method and you generate the private key with an OpenSSL 3.x.x version, use the PBE-SHA1-2DES or PBE-SHA1-3DES cipher when generating the key. Run one of the following commands:
    openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -v1 PBE-SHA1-3DES -out rsa_key.p8
    or:
    openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -v1 PBE-SHA1-2DES -out rsa_key.p8
    If you use a generic command without the PBE-SHA1-2DES or PBE-SHA1-3DES cipher, an error message about an invalid or unsupported private key might occur while fetching the target schema in the target definition step of the database ingestion task wizard. The error does not occur if you use OpenSSL 1.1.1 to generate the private key. A note on registering the corresponding public key with the Snowflake user follows this list.
  • Database ingestion incremental load jobs and combined initial and incremental load jobs generate a recovery table named INFORMATICA_CDC_RECOVERY on the target to store internal service information that prevents jobs restarted after a failure from propagating previously processed data again. This recovery table is generated in the same schema as the target tables.
  • For Snowflake targets, you cannot alter the scale of NUMBER fields or change the data type of an existing field to a different data type because Snowflake does not support these actions.
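For example, if the target database is CMIDB, as in the target preparation steps above, set the Additional JDBC URL Parameters field to:
    database=CMIDB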
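The openssl commands above produce only the encrypted private key file, rsa_key.p8. If the matching public key has not yet been registered with the Snowflake user, a typical sequence, shown here as a sketch that uses the INFACMI_User user from the target preparation steps, is to extract the public key:
    openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
and then, in Snowflake, assign it to the user, pasting the key value from rsa_key.pub (without the PEM header and footer lines) in place of the truncated placeholder shown here:
    alter user INFACMI_User set rsa_public_key='MIIBIjANBgkq...';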
