Amazon S3 V2 Connector

Rules and guidelines for setting formatting options

Set the appropriate formatting options when you select an Amazon S3 file format type.
Use the following guidelines when you select a format type and set the formatting options:
  • You can use JSON format only for mappings in advanced mode.
  • When you create a mapping, if you do not click the Formatting Options tab, the Secure Agent uses the default Format Type of None.
  • If you select the Avro, JSON, ORC, or Parquet format type and select Read from data file as the value of the Schema Source formatting option, you cannot configure the delimiter, escapeChar, and qualifier options.
  • If you select the Avro, JSON, ORC, or Parquet format type and select Import from schema file as the value of the Schema Source formatting option, you can only upload a schema file in the Schema File property field. You cannot configure the delimiter, escapeChar, and qualifier options.
  • If you select the flat format type and select Import from schema file as the value of the Schema Source formatting option, you can only upload a schema file in the JSON format.
    The following sample shows a schema file for a flat file:
    {
      "Columns": [
        {"Name": "f_varchar", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_char", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_smallint", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_integer", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_bigint", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_decimal_default", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_real", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_double_precision", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_boolean", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_date", "Type": "string", "Precision": "256", "Scale": "0"},
        {"Name": "f_timestamp", "Type": "string", "Precision": "256", "Scale": "0"}
      ]
    }
  • Set the appropriate Formatting Options for the Avro, JSON, ORC, or Parquet format type that you select to avoid the following exception: invalid character encapsulated
  • You cannot select the Read multiple-line JSON files checkbox in the formatting options, because Amazon S3 V2 does not support the feature.
  • When you run a mapping with an Amazon S3 V2 source whose columns contain values of Parquet datetime data types, the timestamp values mapped to the target do not appear in UTC format unless you enable full SQL ELT optimization.
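
A schema file like the flat-file sample above can be generated and checked programmatically before you upload it in the Schema File property. The following is a minimal Python sketch; the column list, the `validate_schema` helper, and the output file name are illustrative, not part of the connector:

```python
import json

# Build a flat-file schema in the JSON layout shown in the sample:
# a top-level "Columns" array whose entries carry Name, Type,
# Precision, and Scale, all as strings.
columns = [
    {"Name": "f_varchar", "Type": "string", "Precision": "256", "Scale": "0"},
    {"Name": "f_integer", "Type": "string", "Precision": "256", "Scale": "0"},
    {"Name": "f_timestamp", "Type": "string", "Precision": "256", "Scale": "0"},
]
schema = {"Columns": columns}


def validate_schema(schema):
    """Return True if every column entry has the four expected keys.

    This is an illustrative local check, not a connector API.
    """
    required = {"Name", "Type", "Precision", "Scale"}
    return all(required <= entry.keys() for entry in schema.get("Columns", []))


# Write the schema as a single-line JSON document, matching the sample.
with open("flat_schema.json", "w") as f:
    json.dump(schema, f)

print(validate_schema(schema))  # True
```

Running a check like this locally catches a missing key or malformed JSON before the mapping fails at run time.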
