Table of Contents

  1. Preface
  2. Part 1: Getting Started with Snowflake Data Cloud Connector
  3. Part 2: Data Integration with Snowflake Data Cloud Connector
  4. Part 3: SQL ELT with Snowflake Data Cloud Connector
  5. Appendix A: Data type reference
  6. Appendix B: Additional runtime configurations
  7. Appendix C: Upgrading to Snowflake Data Cloud Connector

Snowflake Data Cloud Connector

Introduction to Snowflake Data Cloud Connector

You can use Snowflake Data Cloud Connector to access Snowflake from Data Integration. You can read from or write to Snowflake and other third-party applications, databases, and flat files.
You can configure Snowflake Data Cloud Connector to read data from the following Snowflake objects:
  • Snowflake tables and views, including Snowflake external tables, hybrid tables, and materialized views.
  • Apache Iceberg tables that are managed by Snowflake or any external catalog.
  • Snowflake accounts that are enabled for staging data in Azure, Amazon Web Services, Google Cloud Platform, or Snowflake GovCloud.
You can write data to the following Snowflake objects:
  • Snowflake tables and views.
  • Apache Iceberg tables that are managed by Snowflake.
  • Snowflake accounts that are enabled for staging data in Azure, Amazon Web Services, Google Cloud Platform, or Snowflake GovCloud.
When you use Snowflake Data Cloud Connector, you create a Snowflake Data Cloud connection and use the connection in Data Integration mappings and tasks. When you run a Snowflake Data Cloud mapping or task, the Secure Agent reads data from or writes data to Snowflake based on the mapping logic and the Snowflake Data Cloud connection configuration. You can create a mapping to read from and write to a wide variety of heterogeneous data sources. An illustrative connectivity sketch follows.
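For example, the following sketch uses the open-source snowflake-connector-python package to verify the account, warehouse, database, schema, and role details that you would supply in a Snowflake Data Cloud connection. The object names and values shown are hypothetical placeholders for illustration only; they are not the connector's configuration interface.

    # Illustrative sketch: verify Snowflake connectivity with the same details
    # (account, warehouse, database, schema, role) that a Snowflake Data Cloud
    # connection would use. All values are hypothetical placeholders.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",   # Snowflake account identifier
        user="integration_user",
        password="********",
        warehouse="COMPUTE_WH",
        database="SALES_DB",
        schema="PUBLIC",
        role="SYSADMIN",
    )
    try:
        cur = conn.cursor()
        # Read a few rows from a source table to confirm access.
        cur.execute("SELECT * FROM ORDERS LIMIT 5")
        for row in cur.fetchall():
            print(row)
    finally:
        conn.close()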
You can switch mappings to advanced mode to include transformations and functions that enable advanced functionality.
A mapping in advanced mode can run on an advanced cluster hosted on Amazon Web Services, Google Cloud Platform, or Microsoft Azure, or on a self-service cluster.
You can also create a mapping in SQL ELT mode to perform the data transformation entirely within your cloud ecosystem.
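To illustrate the idea behind SQL ELT, the following sketch runs a set-based transformation as a single SQL statement inside Snowflake rather than moving rows through an external engine. The table and column names are hypothetical, and the SQL that an actual SQL ELT mapping generates can differ.

    # Illustrative sketch of ELT-style processing: the transformation executes
    # as SQL inside Snowflake instead of row by row outside it. Table and
    # column names are hypothetical examples.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="myorg-myaccount",
        user="integration_user",
        password="********",
        warehouse="COMPUTE_WH",
        database="SALES_DB",
        schema="PUBLIC",
    )
    try:
        conn.cursor().execute(
            """
            INSERT INTO DAILY_REVENUE (ORDER_DATE, TOTAL_REVENUE)
            SELECT ORDER_DATE, SUM(AMOUNT)
            FROM ORDERS
            WHERE STATUS = 'SHIPPED'
            GROUP BY ORDER_DATE
            """
        )
    finally:
        conn.close()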
The functionality to read from or write to Apache Iceberg tables is available for preview.
Preview functionality is supported for evaluation purposes but is provided without warranty and is not supported in production environments or any environment that you plan to push to production. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. Note that if you are working on a preview POD, all data is excluded from SOC 2 compliance coverage. For more information, contact Informatica Global Customer Support.
