Implementing Oracle to Snowflake Synchronization Using Cloud Mass Ingestion Databases

Step 3: Configuring the Snowflake target

Configure the target on the Target page of the database ingestion task wizard.
  1. In the Connection list, select the Snowflake connection that you previously defined and tested in Administrator for the runtime environment. The connection type appears in parentheses after the connection name.
    The list includes only the connection types that are valid for the load type selected on the Definition page. No connections are listed if you did not select a load type.
    If you change the load type and the selected connection is no longer valid, a warning message is issued and the Connection field is cleared. You must select another connection that is valid for the updated load type.
    After you deploy the ingestion task, you cannot change the connection without first undeploying the associated ingestion job. You must then deploy the task again.
  2. In the Target section, configure the properties that pertain to a Snowflake target.
    The following table describes these properties:
    Target Creation
    The only available option is Create Target Tables, which generates the target tables based on the source tables. After the target tables are created, Mass Ingestion Databases manages them on subsequent job runs and might truncate or re-create them depending on the specific circumstances.
    Schema
    Select the target schema in which Mass Ingestion Databases creates the target tables.
    Stage
    The name of the internal staging area that holds the data read from the source before the data is written to the target tables. The name must not include spaces. If the staging area does not exist, it is created automatically.
    Apply Mode
    Indicates how source DML changes, including inserts, updates, and deletes, are applied to the target. Options are:
    • Standard. Accumulate the changes in a single apply cycle and intelligently merge them into fewer SQL statements before applying them to the target. For example, if an update followed by a delete occurs on a source row, no row is applied to the target. If multiple updates occur on the same column or field, only the last update is applied to the target. If multiple updates occur on different columns or fields, the updates are merged into a single update record before being applied to the target. These merge rules are illustrated in the sketch after this step.
    • Soft Deletes. Apply source delete operations to the target as soft deletes. A soft delete marks the deleted row as deleted without actually removing it from the database. For example, a delete on the source results in a change record on the target with "D" in the INFA_OPERATION_TYPE column. If an update followed by a delete occurs on the source, two records are written to the target, both with "D" in the INFA_OPERATION_TYPE column.
      Consider using soft deletes if you have a long-running business process that needs the soft-deleted data to finish processing, if you need to restore data after an accidental delete operation, or if you want to track deleted values for audit purposes.
      If you use Soft Deletes mode, do not update the primary key in a source table. Otherwise, data corruption can occur on the target.
    • Audit. Apply an audit trail of every DML operation made on the source tables to the target. A row for each DML change on a source table is written to the generated target table, along with the audit columns that you select under the Advanced section. The audit columns contain metadata about the change, such as the DML operation type, time, owner, transaction ID, generated ascending sequence number, and before image. Consider using Audit apply mode when you want to use the audit history to perform downstream computations or processing on the data before writing it to the target database, or when you want to examine metadata about the captured changes.
    The default value is Standard.
    ***In this scenario, accept the default value of Standard.***
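    To make the Standard apply mode merge rules concrete, the following Python sketch shows one way such change merging could work. It is an illustration only, not Informatica's implementation; the (operation, key, columns) record layout is invented for this example.

      def merge_changes(changes):
          """Collapse one apply cycle's DML changes into a net change per row key.

          `changes` is an ordered list of (op, key, cols) tuples, where op is
          "INSERT", "UPDATE", or "DELETE" and cols maps column names to new
          values. The layout is an assumption made for illustration.
          """
          merged = {}
          for op, key, cols in changes:
              if op == "DELETE":
                  if key in merged:
                      del merged[key]               # update followed by delete:
                                                    # no row applied, as described above
                  else:
                      merged[key] = ("DELETE", {})  # a plain delete still applies
              else:
                  prior_op, prior_cols = merged.get(key, (op, {}))
                  net_cols = dict(prior_cols)
                  net_cols.update(cols)             # same column: last update wins;
                                                    # different columns: one merged record
                  merged[key] = (prior_op, net_cols)
          return merged

      # An update followed by a delete on row 1 nets out to nothing, while two
      # updates on different columns of row 2 merge into a single update record.
      print(merge_changes([
          ("UPDATE", 1, {"qty": 5}),
          ("DELETE", 1, {}),
          ("UPDATE", 2, {"qty": 7}),
          ("UPDATE", 2, {"price": 9.5}),
      ]))
      # {2: ('UPDATE', {'qty': 7, 'price': 9.5})}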
  3. If you want to rename the target objects that are associated with the selected source tables, define table renaming rules. For example, you might want to add a prefix or suffix. To create a rule for renaming tables:
    1. Under Table Renaming Rules, in the Create Rule fields, enter a source table name or a table-name mask that includes one or more wildcards. Then enter the corresponding target table name or table-name mask.
      • For the source, you can enter only the asterisk (*) wildcard to select all source tables that match the table selection criteria on the Source page. Or you can enter a specific source table name or a table-name mask that includes one or more of the following wildcards: an asterisk (*) to represent one or more characters or a question mark (?) to represent a single character.
      • To use a table-name mask with the wildcard character for the target, you must also use the wildcard character in the source. If you use a specific source table name with a target table mask that includes the wildcard character, the task deployment will fail.
      • If a table name includes special characters, such as a backslash (\), asterisk (*), dot (.), or question mark (?), escape each special character in the name with a backslash (\).
    2. Click Add Rule.
      The rule appears in the rules list.
      [Image: target table renaming rule example]
    You can define multiple table renaming rules. The order of the rules does not matter with regard to how they are processed unless a table matches multiple rules. In that case, the last matching rule determines the name of the target table. The wildcard behavior is sketched in the example after this step.
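    The following Python sketch shows one way the wildcard masks described above could be interpreted: * matches one or more characters, ? matches a single character, a backslash escapes a literal special character, and the last matching rule wins. It is a rough illustration, not the product's actual parser.

      import re

      def mask_to_regex(mask):
          """Translate a table-name mask into a regular expression."""
          out, i = [], 0
          while i < len(mask):
              c = mask[i]
              if c == "\\" and i + 1 < len(mask):
                  out.append(re.escape(mask[i + 1]))  # backslash-escaped literal
                  i += 2
                  continue
              if c == "*":
                  out.append("(.+)")   # one or more characters
              elif c == "?":
                  out.append("(.)")    # a single character
              else:
                  out.append(re.escape(c))
              i += 1
          return re.compile("^" + "".join(out) + "$", re.IGNORECASE)

      def rename(table, rules):
          """Apply renaming rules in order; the last matching rule wins."""
          new_name = table
          for src_mask, tgt_mask in rules:
              match = mask_to_regex(src_mask).match(table)
              if match:
                  groups = iter(match.groups())
                  new_name = "".join(
                      next(groups) if ch in "*?" else ch for ch in tgt_mask
                  )
          return new_name

      # A "*" source mask selects every source table; the second rule overrides
      # the first for the ORDERS table because the last matching rule wins.
      rules = [("*", "*_RAW"), ("ORDERS", "SALES_ORDERS")]
      print(rename("CUSTOMERS", rules))  # CUSTOMERS_RAW
      print(rename("ORDERS", rules))     # SALES_ORDERS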
  4. If you want to override the default mappings of source data types to target data types, define data type rules, as illustrated in the sketch after this step.
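    As a rough illustration of what a data type rule does, the Python sketch below overrides an assumed default mapping. The type names and default mappings are assumptions for this example, not the product's actual defaults.

      # Assumed default source-to-target type mappings -- illustrative only.
      DEFAULT_MAPPINGS = {
          "NUMBER": "NUMBER",
          "VARCHAR2": "VARCHAR",
          "DATE": "TIMESTAMP_NTZ",
      }

      def target_type(source_type, rules=None):
          """Resolve a target type, letting user-defined rules override defaults."""
          rules = rules or {}
          return rules.get(source_type, DEFAULT_MAPPINGS.get(source_type, source_type))

      # A rule that maps Oracle DATE to Snowflake DATE instead of the assumed default.
      print(target_type("DATE", rules={"DATE": "DATE"}))  # DATE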
  5. Under Advanced, you can configure the following advanced properties, which affect how the target tables are created and loaded:
    Add Last Replicated Time
    Select this check box to add a metadata column that records the timestamp at which a record was inserted or last updated in the target table. For initial loads, all loaded records have the same timestamp. For incremental and combined initial and incremental loads, the column records the timestamp of the last DML operation that was applied to the target. A usage sketch follows this step.
    By default, this check box is not selected.
    Prefix for Metadata Columns
    Add a prefix to the names of the added metadata columns to easily identify them and to prevent conflicts with the names of existing columns.
    The default value is INFA_.
    Superpipe
    Select this check box to use the Snowpipe Streaming API to stream rows of data directly to Snowflake Data Cloud target tables with low latency, instead of first writing the data to stage files.
    By default, this check box is selected.
    ***In this scenario, deselect the check box.***
    Merge Frequency
    When Superpipe is selected, you can optionally set the frequency, in seconds, at which change data rows are merged and applied to the Snowflake target tables.
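    To show how the metadata columns can be used downstream, here is a hedged sketch that queries a replicated table for recently applied rows using the Snowflake Connector for Python. The table name and the column name INFA_LAST_REPLICATED_TIME are assumptions (the column name combines the default INFA_ prefix with an assumed suffix); check the generated target table for the actual names.

      import snowflake.connector  # pip install snowflake-connector-python

      # Connection parameters are placeholders for this sketch.
      conn = snowflake.connector.connect(
          account="my_account",
          user="my_user",
          password="...",
          warehouse="MY_WH",
          database="MY_DB",
          schema="MY_SCHEMA",
      )
      try:
          cur = conn.cursor()
          # INFA_LAST_REPLICATED_TIME is an assumed column name built from the
          # default INFA_ prefix; verify it against the generated target table.
          cur.execute(
              "SELECT * FROM ORDERS "
              "WHERE INFA_LAST_REPLICATED_TIME >= "
              "DATEADD('hour', -1, CURRENT_TIMESTAMP())"
          )
          for row in cur.fetchmany(10):
              print(row)
      finally:
          conn.close()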
  6. Under Custom Properties, you can specify custom properties at the direction of Informatica Global Customer Support to meet your special requirements. In this scenario, skip this step.
  7. Click Next.
