Mass Ingestion Applications

Snowflake target properties

When you define an application ingestion task, you must specify the properties for your Snowflake target on the Target page of the task wizard.

The following table describes the Snowflake target properties that appear in the Target section:
Target Creation
The only available option is Create Target Tables, which generates the target tables based on the source objects.
Schema
The target schema in which Mass Ingestion Applications creates the target tables.
Stage
The name of the internal staging area that holds the data read from the source before the data is written to the target tables. The name must not include spaces. If the staging area does not exist, it is created automatically.
Apply Mode
For incremental load jobs and combined initial and incremental load jobs with Snowflake targets, indicates how source DML changes, including inserts, updates, and deletes, are applied to the target. Options are:
  • Standard. Accumulate the changes in a single apply cycle and intelligently merge them into fewer SQL statements before applying them to the target. For example, if an update followed by a delete occurs on a source row, no row is applied to the target. If multiple updates occur on the same column or field, only the last update is applied to the target. If multiple updates occur on different columns or fields, the updates are merged into a single update record before being applied to the target.
  • Soft Deletes. Apply source delete operations to the target as soft deletes. A soft delete marks a deleted row as deleted without actually removing it from the database. For example, a delete on the source results in a change record on the target with "D" in the INFA_OPERATION_TYPE column.
    After you enable Soft Deletes, an update in the source table during normal or backlog mode causes the matching target record to be deleted, the updated record to be inserted, and the INFA_OPERATION_TYPE column to be set to NULL in the target table. Similarly, inserting a record in the source table during backlog mode sets the INFA_OPERATION_TYPE column to E in the target table record.
    Consider using soft deletes if you have a long-running business process that needs the soft-deleted data to finish processing, if you need to restore data after an accidental delete operation, or if you want to track deleted values for audit purposes.
  • Audit. Apply an audit trail of every DML operation made on the source tables to the target. A row for each DML change on a source table is written to the generated target table along with the audit columns that you select under the Advanced section. The audit columns contain metadata about the change, such as the DML operation type, transaction ID, and before image. Consider using Audit apply mode when you want to use the audit history to perform downstream computations or processing on the data before writing it to the target database, or when you want to examine metadata about the captured changes.
    After you enable the Audit apply mode, an update in the source table during backlog or normal mode sets the INFA_OPERATION_TYPE column to E in the target table record, as does inserting a record in the source table during backlog mode.
    The Audit apply mode applies to SAP sources that use the SAP Mass Ingestion connector.
Default is Standard.
Under Advanced, you can enter the following advanced target properties:
Add Last Replicated Time
Select this check box to add a metadata column that records the timestamp at which a record was inserted or last updated in the target table. For initial loads, all loaded records have the same timestamp, except for Snowflake targets that use the Superpipe option where minutes and seconds might vary slightly. For incremental and combined initial and incremental loads, the column records the timestamp of the last DML operation that was applied to the target.
By default, this check box is not selected.
Add Operation Type
Select this check box to add a metadata column that includes the source SQL operation type in the output that the job propagates to the target tables. The column is named INFA_OPERATION_TYPE by default.
This field is displayed only when the Apply Mode option is set to Audit or Soft Deletes.
In Audit mode, the job writes "I" for inserts, "E" for updates, or "D" for deletes to this metadata column.
In Soft Deletes mode, the job writes "D" for deletes or NULL for inserts and updates. When the operation type is NULL, the other "Add Operation..." metadata columns are also NULL. Only when the operation type is "D" do the other metadata columns contain non-null values.
By default, this check box is selected. You cannot deselect it.
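The operation-type values described above can be summarized as a small lookup. This is an illustrative sketch of the documented behavior, not product code; the function name is hypothetical.

```python
def operation_type(apply_mode, dml_op):
    """Return the INFA_OPERATION_TYPE value written for a source DML change,
    per the Audit and Soft Deletes behavior described above (sketch only)."""
    if apply_mode == "Audit":
        # Audit mode records every operation: I, E, or D.
        return {"insert": "I", "update": "E", "delete": "D"}[dml_op]
    if apply_mode == "Soft Deletes":
        # Soft Deletes marks only deletes; inserts and updates leave NULL.
        return "D" if dml_op == "delete" else None
    raise ValueError("operation type column applies only to Audit or Soft Deletes")
```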
Add Operation Time
Select this check box to add a metadata column that records the source SQL operation timestamp in the output that the job propagates to the target tables.
This field is available only when Apply Mode is set to Audit or Soft Deletes.
By default, this check box is not selected.
Add Operation Sequence
Select this check box to add a metadata column that records a generated, ascending sequence number for each change operation that the job inserts into the target tables. The sequence number reflects the change stream position of the operation.
This field is available only when Apply Mode is set to Audit.
By default, this check box is not selected.
Add Before Images
Select this check box to add _OLD columns with UNDO "before image" data in the output that the job inserts into the target tables. You can then compare the old and current values for each data column. For a delete operation, the current value will be null.
This field is available only when Apply Mode is set to Audit.
By default, this check box is not selected.
Prefix for Metadata Columns
Add a prefix to the names of the added metadata columns to easily identify them and to prevent conflicts with the names of existing columns.
The default value is INFA_.
Superpipe
Select this check box to use the Snowpipe Streaming API to quickly stream rows of data directly to Snowflake Data Cloud target tables with low latency instead of first writing the data to stage files. This option is available for all load types.
When you configure the target connection, select KeyPair authentication.
By default, this check box is selected. Deselect it if you want to write data to intermediate stage files.
Merge Frequency
When Superpipe is selected, you can optionally set the frequency, in seconds, at which change data rows are merged and applied to the Snowflake target tables. This field applies to incremental load and combined initial and incremental load tasks. Valid values are 300 through 604800. Default is 3600 seconds.
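The documented range works out to 5 minutes through 7 days. A trivial check of a configured value against that range might look like this (hypothetical helper, for illustration only):

```python
def validate_merge_frequency(seconds=3600):
    """Validate a Superpipe merge frequency against the documented range:
    300 seconds (5 minutes) through 604800 seconds (7 days); default 3600."""
    if not 300 <= seconds <= 604800:
        raise ValueError("Merge Frequency must be between 300 and 604800 seconds")
    return seconds
```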
Enable Case Transformation
By default, target table names and column names are generated in the same case as the corresponding source names, unless cluster-level or session-level properties on the target override this case-sensitive behavior. If you want to control the case of letters in the target names, select this check box and then select a Case Transformation Strategy option.
This check box is not available if you selected the Superpipe option.
Case Transformation Strategy
If you selected Enable Case Transformation, select one of the following options to specify how to handle the case of letters in generated target table (or object) names and column (or field) names:
  • Same as source. Use the same case as the source table (or object) names and column (or field) names.
  • UPPERCASE. Use all uppercase.
  • lowercase. Use all lowercase.
The default value is Same as source.
The selected strategy overrides any cluster-level or session-level properties on the target that control case.
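The three strategies above amount to a simple name transformation, sketched below. The function name is hypothetical and for illustration only.

```python
def transform_name(name, strategy="Same as source"):
    """Apply a Case Transformation Strategy to a generated target table or
    column name, mirroring the three options described above (sketch only)."""
    if strategy == "Same as source":
        return name          # keep the source casing
    if strategy == "UPPERCASE":
        return name.upper()  # all uppercase
    if strategy == "lowercase":
        return name.lower()  # all lowercase
    raise ValueError(f"unknown strategy: {strategy}")
```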
