Google Cloud Storage V2 Connector

Troubleshooting a mapping task

Time zone for the Date and Timestamp data type fields in Parquet or Avro file formats defaults to the Secure Agent host machine time zone.  
When you run a mapping in advanced mode to read from or write to fields of the Date and Timestamp data types in the Parquet or Avro file formats, the time zone defaults to the Secure Agent host machine time zone.
To change the time zone for the Date and Timestamp data types to UTC, you can either set the Spark properties globally for all tasks in the organization that use this Secure Agent, or set the Spark session properties for a specific task in the task properties.
To set the properties globally, perform the following steps:
  1. Add the following properties to the <Secure Agent installation directory>/apps/At_Scale_Server/41.0.2.1/spark/custom.properties file:
    • infacco.job.spark.driver.extraJavaOptions=-Duser.timezone=UTC
    • infacco.job.spark.executor.extraJavaOptions=-Duser.timezone=UTC
  2. Restart the Secure Agent.
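The effect of the default is easy to see outside the product: the same instant renders differently under different JVM default time zones. The following is a minimal Python illustration, not part of the product; the zone name America/New_York stands in for an arbitrary agent host zone:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # One fixed instant, rendered in UTC and in a sample host zone. This is
    # the kind of shift you see when the Spark JVM defaults to the Secure
    # Agent host time zone instead of UTC.
    instant = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
    print(instant)                                           # 2024-01-01 12:00:00+00:00
    print(instant.astimezone(ZoneInfo("America/New_York")))  # 2024-01-01 07:00:00-05:00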
To set the properties for a specific task, navigate to the Spark session properties in the task properties, and perform the following steps:
  • Select the session property name spark.driver.extraJavaOptions and set the value to -Duser.timezone=UTC.
  • Select the session property name spark.executor.extraJavaOptions and set the value to -Duser.timezone=UTC.
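For reference, these session properties correspond to standard Spark configuration keys. The following is a minimal standalone sketch, assuming PySpark is available; it is not how the mapping task itself starts Spark:

    from pyspark.sql import SparkSession

    # Pass -Duser.timezone=UTC to the driver and executor JVMs, matching the
    # two session properties above. Note that in standalone use the driver
    # option must be set before the driver JVM starts (for example, through
    # spark-defaults.conf or spark-submit) to take effect.
    spark = (
        SparkSession.builder
        .appName("utc-timezone-example")
        .config("spark.driver.extraJavaOptions", "-Duser.timezone=UTC")
        .config("spark.executor.extraJavaOptions", "-Duser.timezone=UTC")
        .getOrCreate()
    )
    print(spark.sparkContext.getConf().get("spark.executor.extraJavaOptions"))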
Data corruption occurs in the target for data of the double data type.
When you read data of the double data type from a Google Cloud Storage JSON file and write it to a Google Cloud Storage flat file target, the corresponding double data is corrupted in the target.
Workaround: Change the data type of the target column from flat_string to flat_number, and increase the precision to 38 and the scale to 15.
When you run the mapping, the Secure Agent writes the double data to the target column of the decimal data type with trailing zeros and without data loss.
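The following Python sketch mirrors what a flat_number(38, 15) target column does to a double value; the sample value 1.5 is illustrative:

    from decimal import Decimal, getcontext

    getcontext().prec = 38  # mirror the suggested target precision of 38
    value = 1.5             # a double read from the JSON source (illustrative)
    # Quantize to 15 decimal places, mirroring a scale of 15.
    as_decimal = Decimal(value).quantize(Decimal("1.000000000000000"))
    print(as_decimal)  # 1.500000000000000 -- trailing zeros, no data loss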
