| Property | Description |
|---|---|
| Connection | Name of the target connection. You can select an existing connection, create a new connection, or define parameter values for the target connection property. You can switch between a non-parameterized and a parameterized Snowflake Data Cloud connection; when you switch between the connections, the advanced property values are retained. To override the target connection properties at runtime, select the Allow parameter to be overridden at run time option. |
| Target Type | Type of target object. Select Single Object or Parameter. |
| Parameter | A parameter file where you define values that you want to update without having to edit the task. Select New Parameter to define a new parameter for the target object. The Parameter property appears only if you select Parameter as the target type. To override the target object at runtime, select the Allow parameter to be overridden at run time option. When the task runs, it uses the parameters from the file that you specify in the task advanced session properties. |
| Object | The target object for the task. You can select an existing table, or create a new table in the target by using the Create New at Runtime option. |
| Create New at Runtime | Creates a target table at run time based on the object name, table type, and path you specify. For more information about the properties you can configure when you create a target at run time, see Specify a target. You can edit the metadata of the source fields before you create the target. |
| Operation | The target operation. Select Insert, Update, Upsert, Delete, or Data Driven. To perform an upsert operation, set the target advanced property Update Mode to Update Else Insert. |
| Update columns | The primary key column to use when you update, upsert, or delete data in a Snowflake target. If you do not specify a primary key column, the mapping uses the target column that you configured as the primary key column to update, upsert, or delete data. |
| Data Driven Condition | Enables you to define expressions that flag rows for an insert, update, upsert, or delete operation. |
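Conceptually, an upsert with Update Mode set to Update Else Insert updates a row when its key already exists in the target and inserts it otherwise. A minimal sketch of that behavior in plain Python (the "table" here is just a dict keyed on the primary key; all names are illustrative, not part of the connector):

```python
# Illustrative sketch of Update Else Insert semantics: the "target table"
# is a dict keyed on the primary key column.
def update_else_insert(target, rows, key="id"):
    for row in rows:
        k = row[key]
        if k in target:
            target[k].update(row)   # key exists: update the existing row
        else:
            target[k] = dict(row)   # key missing: insert a new row
    return target

target = {1: {"id": 1, "name": "old"}}
incoming = [{"id": 1, "name": "new"}, {"id": 2, "name": "added"}]
update_else_insert(target, incoming)
# target now holds the updated row for id 1 and an inserted row for id 2
```

Note that, as the comments below confirm, this mode does not deduplicate the incoming rows themselves; it only decides per row between update and insert.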
| Advanced Property | Description |
|---|---|
| UpdateMode | Loads data to the target based on the mode you specify. This property applies when you select the Update, Upsert, or Data Driven operation with the update or upsert condition. Select one of the following modes: Update As Update or Update Else Insert. |
| Database | Overrides the database that you used to import the object. |
| Schema | Overrides the schema that you used to import the object. |
| Warehouse | Overrides the Snowflake warehouse name specified in the connection. The warehouse name in the mapping overrides the warehouse name you specify in the connection. Even if you provide an incorrect warehouse name in the connection properties, the connection succeeds. However, before you run the mapping, ensure that you specify the correct warehouse name in the mapping properties. |
| Role | Overrides the Snowflake role assigned to the user specified in the connection. |
| Pre SQL | The pre-SQL command to run before the agent writes to Snowflake. You can specify multiple pre-SQL commands, each separated by a semicolon. |
| Post SQL | The post-SQL command to run after the agent completes the write operation. For example, if you want to alter a table created by using the Create New at Runtime option and assign constraints to the table before you write data to it, specify a post-SQL statement. You can specify multiple post-SQL commands, each separated by a semicolon. |
| Batch Row Size ¹ | The number of rows written to a single file in the agent location. When the number of rows written to the file reaches the specified value, the agent flushes the data queue and starts processing the write commands. For more information on configuring the batch size value, see Configuring the batch size and the number of local staging files. |
| Number of local staging files ¹ | The number of files that represents a single batch of data. Default is 64. After the agent uploads the specified number of local staging files to the Snowflake user stage, Snowflake loads the data to the target table. For more information on configuring the number of local staging files, see Configuring the batch size and the number of local staging files. |
| Truncate Target Table | Truncates the database target table before inserting new rows. Default is false. |
| Additional Write Runtime Parameters | The additional runtime parameters that you can use when you write to Snowflake. You can enter multiple write runtime parameters, separated by an ampersand (&). For example, you can specify a user-defined stage in the Snowflake database to upload the local staging files, or disable file compression before you write to Snowflake tables. For the list of additional write runtime parameters that you can configure, see Write runtime parameters. |
| Table Name | Overrides the table name of the Snowflake target table. |
| Rejected File Path ¹ | The file name and path of the file on the agent machine where you want to write the rejected records. For example, \rejectedfiles\reject7 |
| Update Override | Overrides the default update query that the agent generates for the update operation with the update query that you specify. |
| Success File Directory | Not applicable. |
| Error File Directory | Not applicable. |
| Forward Rejected Rows | Determines whether the transformation passes rejected rows to the next transformation or drops them. By default, the agent forwards rejected rows to the next transformation. |

¹ Doesn't apply to mappings in advanced mode.
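Batch Row Size and Number of local staging files interact: the first sets how many rows go into each local file, and the second sets how many files make up one batch that Snowflake loads from the user stage. A rough sizing sketch (the batch row size value below is hypothetical; the table above only states that the default number of staging files is 64):

```python
# Approximate rows Snowflake loads from the user stage in one batch:
# rows per local staging file times files per batch.
batch_row_size = 10000       # rows written to one local file (hypothetical value)
local_staging_files = 64     # files per batch (default per the table above)

rows_per_load = batch_row_size * local_staging_files
print(rows_per_load)  # 640000
```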
So to avoid entering duplicate records on Upsert, do we set the UpdateMode to "Update Else Insert"?
Hi Kevin Golde,
We're working to address your comments and will get back to you.
Regards,
Informatica Documentation Team
Hi Kevin Golde,
The Update Else Insert option does not handle duplicates. If there are duplicate records in the data, they will be inserted or updated if the primary key column value matches with the incoming column in the Snowflake table.
Regards,
Informatica Documentation Team
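Since Update Else Insert does not handle duplicates, one workaround is to collapse duplicate keys before the write, for example by keeping only the last occurrence of each primary key value. A hedged sketch in plain Python (column names are hypothetical; this is pre-processing outside the connector, not a connector feature):

```python
# Keep only the last occurrence of each primary key value so the upsert
# sees at most one row per key.
def dedupe_last(rows, key="id"):
    seen = {}
    for row in rows:          # later rows overwrite earlier ones
        seen[row[key]] = row
    return list(seen.values())

rows = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "c"}]
deduped = dedupe_last(rows)
# deduped keeps the last row for id 1 and the only row for id 2
```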
Any guidance from Informatica on handling duplicate records? Even after defining the primary key in Snowflake, IICS doesn't seem to handle the merge statement correctly.
Hi Pavan,
You're requesting information that requires technical support. Please contact Global Customer Support for assistance.
Regards,
Informatica Documentation Team
Hi, if autocommit is disabled for the account, checking the truncate checkbox will not function as expected, but the mapping will still succeed and not throw any error (because IDMC is not aware of the account-level setting for autocommit). It may be a good idea to link back to the following KBs from this document, specifically from the truncate section.
Alternatively, perhaps we could just include the verbiage here in this doc.
Thanks!
Hi Coomar,
We're glad to let you know that we addressed your comments on this page. We have updated the Truncate Target Table property description.
Regards,
Informatica Documentation Team