Snowflake Data Cloud Connector

Amazon S3 V2 source

The mapping supports the following properties for an Amazon S3 V2 connection:
  • Access Key
  • Secret Key
The mapping supports the following properties for an Amazon S3 V2 source:
  • Source connection parameter
  • Source Type - Single, parameter
  • Format - Delimited, Avro, ORC, Parquet, and JSON
  • Source Type - File and directory
  • Folder Path
  • File Name
  • Compression Format - Gzip
A mapping enabled for SQL ELT optimization that reads from an Amazon S3 V2 source and writes to a Snowflake target has the restrictions described in the following sections.

Authentication

When you read multiple Avro files using an Amazon S3 connection enabled for IAM authentication, specify a valid access key and secret key in the Amazon S3 connection. For more information, see the help for Amazon S3 V2 Connector.
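
The following standalone Python sketch, which uses the boto3 library outside of the connector, shows one way to confirm that an access key and secret key pair can list the Avro files under the source folder path before you enter the keys in the Amazon S3 V2 connection. The bucket name, prefix, and credential values are placeholders.

    # Hypothetical sanity check, independent of the connector: confirm that the
    # access key and secret key can list the Avro files under the folder path.
    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",       # Access Key from the S3 connection
        aws_secret_access_key="...",       # Secret Key from the S3 connection
    )

    # Bucket and prefix are placeholders for the source folder path.
    response = s3.list_objects_v2(Bucket="my-source-bucket", Prefix="avro/input/")
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])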

Create a new target at runtime

A mapping that creates a new target at runtime has the following restrictions:
  • To write data from file data types such as Avro, ORC, or Parquet from Amazon S3 to Snowflake, you must delete the Filename field.
  • The mapping fails with a casting error when the table name contains Unicode characters.

Data types

A mapping has the following restrictions for certain data types:
  • You cannot write Avro files that contain special characters.
  • You cannot write data that contains the Binary data type.
  • You cannot read data in JSON format that contains special characters. For more information about using identifiers, see Identifiers Syntax in the Snowflake documentation.
  • If you specify any escape character for the S3 file format, the escape character defaults to a backslash.
  • ORC files that contain dates with the year 1523 are loaded incorrectly as 1524.
  • When you write Time data from a Parquet file in Amazon S3 to Snowflake, the time value differs in the target.
  • The precision of JSON data must not exceed the precision of the Snowflake target table. A pre-flight check is sketched after this list.
  • If the Amazon S3 source type is a directory and you enable wildcard characters for the directory, the mapping fails with a warning that reading with wildcard characters is not supported with SQL ELT optimization.
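
To catch the JSON precision restriction before the mapping runs, you could run a quick check such as the following Python sketch. The file name and the NUMBER(10,2) target definition are assumptions for illustration; adjust them to match your Snowflake target table.

    # Hypothetical pre-flight check: flag numeric values in a JSON source file
    # whose precision or scale exceeds the Snowflake target column, assumed here
    # to be NUMBER(10, 2). The file name and limits are placeholders.
    import json
    from decimal import Decimal

    TARGET_PRECISION = 10   # total digits allowed by the target column
    TARGET_SCALE = 2        # digits allowed after the decimal point

    def exceeds_target(value: Decimal) -> bool:
        """Return True if the value needs more digits than the target allows."""
        sign, digits, exponent = value.as_tuple()
        scale = -exponent if exponent < 0 else 0
        integer_digits = len(digits) - scale
        return scale > TARGET_SCALE or integer_digits > TARGET_PRECISION - TARGET_SCALE

    # Parse floats as Decimal so precision is preserved while checking.
    with open("source.json") as f:
        records = json.load(f, parse_float=Decimal)

    for i, record in enumerate(records):
        for key, value in record.items():
            if isinstance(value, Decimal) and exceeds_target(value):
                print(f"record {i}, field {key}: {value} exceeds "
                      f"NUMBER({TARGET_PRECISION},{TARGET_SCALE})")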
For information on how to configure the supported properties, see the Amazon S3 V2 Connector documentation.
