Google BigQuery Connectors

Rules and guidelines for mappings that read from Amazon S3 source

Use the following rules and guidelines when you configure SQL ELT optimization in a mapping that reads from an Amazon S3 source and writes to a Google BigQuery target:
  • Do not map the boolean data type in Amazon S3 to the boolean data type in Google BigQuery. Otherwise, the mapping fails.
  • When you edit the metadata in the mapping, you cannot add or remove source fields or change the scale and precision of data types. However, you can edit the field data types.
  • When you read data in Avro, JSON, or CSV format, ensure that the date is in the YYYY-MM-DD format and the time is in the hh:mm:ss format in the DATE, TIME, DATETIME, and TIMESTAMP columns. A normalization sketch for these formats follows this list.
  • The source field names must start with a letter or an underscore and can contain letters, numbers, and underscores, up to a maximum of 300 characters. You cannot read source fields with special characters. A field-name validation sketch follows this list.
  • When you write data with the Numeric data types to a Google BigQuery target created at runtime, where the source column has a precision greater than 28, the mapping runs without SQL ELT optimization.
  • When you write the DATE, TIME, or DATETIME data types to a Google BigQuery target, you must match the agent time zone with the time zone of the Google BigQuery application.
  • In a mapping enabled with SQL ELT optimization, you cannot read a single directory from multiple subdirectories. When you select the source type as Directory in the advanced source properties to read objects stored in subdirectories from an Amazon S3 source, you must select the Enable Recursive Read option. Otherwise, the mapping runs without SQL ELT optimization.
  • When you write data from Avro or Parquet file formats in an Amazon S3 source to a Google BigQuery target created at run time, you must delete the Filename field in the mapping.
  • When you configure a lookup from an Amazon S3 or a Google Cloud Storage V2 object in a mapping, the mapping runs without SQL ELT optimization.
  • When you read data from a smaller dataset, such as the Transaction Processing Performance Council ad-hoc decision support benchmark (TPC-H) at scale factor 1 or below, and run a mapping enabled with SQL ELT optimization, the mapping takes 30% more time to process the data than the same mapping run without SQL ELT optimization.
  • When you read from an Amazon S3 source and write to a Google BigQuery target, the time taken to load data to the Google BigQuery staging area in the first and subsequent mapping runs for the same dataset and resources is inconsistent.
  • When you configure a mapping enabled with SQL ELT optimization to read a boolean integer column and write to a boolean string column, the mapping fails.
  • When you read data from an Amazon S3 source and write to a Google BigQuery target, it takes a few minutes to initialize the transfer to the Google BigQuery target.
  • When you upload a file to an Amazon S3 bucket and then immediately run a data transfer task, the source file is not detected. Wait for at least five minutes and then run the mapping again.
  • When you run a mapping enabled with SQL ELT optimization to read data with wildcard characters from an Amazon S3 source and write to Google BigQuery, the mapping runs without SQL ELT optimization and fails with the following error:
    Wild card character option is not valid in the native mode of execution
  • When you set the JVM option system property for the DTM type to -DHonorInfaDateFormat=true for the Secure Agent and configure a mapping with SQL ELT optimization, the mapping fails with the following error if it reads date values that are not in the YYYY-MM-DD format:
    The Secure Agent failed to run the full SQL ELT query due to the following error: [Failed to parse input string "1972-12-31"]
  • When you map a string data type in the source to a time data type in the target, and the data is in the format HH24:MI:SS.US, the mapping fails with the following error:
    [Invalid timestamp: '00:00:00.000001']
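
Several of the rules above depend on dates arriving as YYYY-MM-DD and times as hh:mm:ss, and the last rule shows a failure on fractional seconds. The following Python sketch illustrates one way to normalize such columns before the files are staged in Amazon S3. It is only an illustration: the column names, input formats, and sample values are assumptions and are not part of the connector.

from datetime import datetime

# Hypothetical rows; real data would come from your extract step.
rows = [
    {"order_date": "12/31/1972", "order_time": "00:00:00.000001"},
    {"order_date": "2024-07-04", "order_time": "13:05:09"},
]

def normalize_date(value):
    """Coerce a date string to YYYY-MM-DD, trying a few common input formats."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d-%b-%Y"):
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value!r}")

def normalize_time(value):
    """Coerce a time string to hh:mm:ss by dropping fractional seconds."""
    return value.split(".")[0]

for row in rows:
    row["order_date"] = normalize_date(row["order_date"])
    row["order_time"] = normalize_time(row["order_time"])

print(rows)
# [{'order_date': '1972-12-31', 'order_time': '00:00:00'},
#  {'order_date': '2024-07-04', 'order_time': '13:05:09'}]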
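
The field-name rule above can also be checked before a mapping runs. This is a minimal sketch of that check in Python; the regular expression simply mirrors the stated rule (a leading letter or underscore, then letters, digits, and underscores, at most 300 characters in total) and is not an API of the connector.

import re

# Mirrors the rule above: starts with a letter or underscore, contains only
# letters, digits, and underscores, and is at most 300 characters long.
VALID_FIELD_NAME = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,299}$")

def invalid_field_names(names):
    """Return the column names that violate the naming rule."""
    return [name for name in names if not VALID_FIELD_NAME.match(name)]

print(invalid_field_names(["order_id", "_loaded_at", "1st_value", "total-amount"]))
# ['1st_value', 'total-amount']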
