Snowflake Data Cloud Connector

Configure additional runtime parameters to run mappings in advanced mode

In advanced mode, you can configure additional properties to write data to Snowflake. Specify the parameters in the Additional Write Runtime Parameters field in the Target transformation.
You can configure the following additional runtime parameters to write data to Snowflake:
autopushdown
Optional. Determines whether automatic query SQL ELT is enabled. If you enable SQL ELT and the query runs on an advanced cluster, the cluster application pushes part of the query to Snowflake for processing, which optimizes the performance of these queries.
Default is on when the connector uses a compatible Spark version. When the connector does not use a compatible Spark version, the default is off.
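To illustrate the idea behind pushdown (a hypothetical sketch, not the connector's actual logic): with pushdown enabled, a filter can be folded into the query that Snowflake executes, so fewer rows cross the network; without it, all rows are fetched and filtered on the advanced cluster.

```python
# Hypothetical sketch of query pushdown; the function and parameter names
# are illustrative, not part of the connector.
def build_query(table, filter_expr, autopushdown):
    """Return the SQL sent to Snowflake and the filter left for the cluster."""
    if autopushdown:
        # Snowflake evaluates the filter, so fewer rows cross the network.
        return f"SELECT * FROM {table} WHERE {filter_expr}", None
    # Without pushdown, fetch everything and filter on the advanced cluster.
    return f"SELECT * FROM {table}", filter_expr

sql, remote_filter = build_query("orders", "amount > 100", autopushdown=True)
print(sql)            # SELECT * FROM orders WHERE amount > 100
print(remote_filter)  # None -- nothing left to filter on the cluster
```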
continueOnError
Optional. Determines whether the COPY command aborts the operation when it encounters data that is not valid. For example, you specify a JSON format for a variant data type column that is not valid.
The values are on and off. If you specify on, the COPY command continues even if an error occurs. If you specify off, the COPY command aborts when an error occurs. Default is off.
It is recommended that you keep this option set to off. Otherwise, when an error is encountered while copying data into Snowflake, some of the data might be missing.
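The effect of the two settings can be sketched in Python (hypothetical names; the real behavior is implemented by Snowflake's COPY command):

```python
# Hypothetical sketch of continueOnError semantics; Snowflake's COPY
# command implements the real behavior.
def copy_rows(rows, continue_on_error=False):
    """Load rows, treating None as a stand-in for invalid data."""
    loaded, errors = [], []
    for row in rows:
        if row is None:  # row fails validation
            if not continue_on_error:
                raise ValueError("COPY aborted: invalid data")
            errors.append(row)  # skip the bad row and keep going
            continue
        loaded.append(row)
    return loaded, errors

loaded, errors = copy_rows([1, None, 3], continue_on_error=True)
print(loaded)  # [1, 3] -- the invalid row was skipped, so data is missing
```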
parallelism
The size of the thread pool that the Secure Agent uses when it uploads or downloads data between Snowflake and the advanced cluster. Default is 4.
Do not change the default value unless you need to increase or decrease the throughput. Even when you want high throughput, do not set parallelism to an arbitrarily large number. A very high parallelism value can produce undesired output and slow down the operation.
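Conceptually, this setting bounds a thread pool, as in the following minimal sketch using Python's concurrent.futures (illustrative only; the Secure Agent's internals are not public):

```python
# Illustrative sketch: a thread pool of size `parallelism` bounds how many
# chunks are transferred concurrently. Function names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def upload_chunk(chunk):
    # Placeholder for a real upload to a Snowflake stage.
    return len(chunk)

def upload_all(chunks, parallelism=4):
    # At most `parallelism` uploads run at the same time.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        return list(pool.map(upload_chunk, chunks))

sizes = upload_all([b"abc", b"de", b"fghi"], parallelism=4)
print(sizes)  # [3, 2, 4]
```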
purge
Determines whether the Secure Agent explicitly deletes the temporary files created when transferring data from an advanced cluster to Snowflake through the external data transfer. The possible values are on and off. Default is off.
If you set this parameter to on, the Secure Agent explicitly deletes the temporary files after the transfer. If you set this parameter to off, the Secure Agent does not explicitly delete the files; they are deleted automatically later. Purging works only for data transfers from an advanced cluster to Snowflake, not for transfers from Snowflake to the advanced cluster.
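The difference between the two settings amounts to explicit versus deferred cleanup of the temporary transfer file, as in this hypothetical sketch (the Secure Agent's actual file handling is internal):

```python
# Hypothetical sketch of purge semantics: with purge on, the temporary
# transfer file is deleted explicitly; with purge off, it is left in
# place for automatic cleanup later.
import os
import tempfile

def transfer_to_snowflake(data: bytes, purge: bool) -> str:
    fd, path = tempfile.mkstemp()   # temporary file for the transfer
    with os.fdopen(fd, "wb") as f:
        f.write(data)               # stand-in for the actual upload
    if purge:
        os.remove(path)             # explicit, immediate deletion
    return path

path = transfer_to_snowflake(b"rows", purge=True)
print(os.path.exists(path))  # False -- the file was purged
```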
usestagingtable
Optional. Determines whether the data loading operation uses a staging table.
Snowflake creates a staging table with a temporary name. If the data loading operation succeeds, Snowflake drops the original target table and renames the staging table to the original target table name. If the data loading operation fails, Snowflake drops the staging table, and the target table retains the data that it contained before the operation.
Snowflake strongly recommends that you use a staging table. To create a staging table, you must have sufficient privileges to run the COPY command and to create a table. If you do not have permission to create a table, you can load the data directly without a staging table.
The values are on and off. If you set usestagingtable to on, Snowflake uses a staging table. If you set it to off, Snowflake loads the data directly into the target table. Default is on.
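The staging-table behavior described above is a common load pattern: load into a temporary table, then swap it in only on success. A simplified sketch, with a plain Python dictionary standing in for the Snowflake database:

```python
# Simplified sketch of the staging-table load pattern described above.
# A dict of table-name -> rows stands in for the Snowflake database.
def load_with_staging(db, target, new_rows, fail=False):
    staging = target + "_staging_tmp"  # temporary name for the staging table
    db[staging] = list(new_rows)       # load into the staging table
    if fail:                           # the load failed:
        del db[staging]                # drop the staging table...
        return db                      # ...the target keeps its old data
    db.pop(target, None)               # drop the original target table
    db[target] = db.pop(staging)       # rename staging to the target name
    return db

print(load_with_staging({"t": [1, 2]}, "t", [9, 9]))          # {'t': [9, 9]}
print(load_with_staging({"t": [1, 2]}, "t", [9], fail=True))  # {'t': [1, 2]}
```

On failure, the original table is never touched, which is why the doc notes that the target "retains the data that it contained before the operation."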
