Table of Contents

  1. Preface
  2. Introduction to Google Cloud Storage V2 Connector
  3. Google Cloud Storage V2 connections
  4. Mappings for Google Cloud Storage
  5. Migrating a mapping
  6. Upgrading to Google Cloud Storage V2 Connector
  7. Appendix A: Data type reference

Google Cloud Storage V2 Connector

Rules and Guidelines for mappings in advanced mode

Consider the following guidelines when you create a mapping in advanced mode:
  • When you read hierarchical data from or write hierarchical data to a Google Cloud Storage file in Avro, JSON, or Parquet format, you cannot preview data.
  • Gzip compression does not apply when you write to a Google Cloud Storage file in Avro or ORC format.
  • When you read data from Google Cloud Storage, you cannot preview the data in the mapping.
  • When you write data to a Google Cloud Storage file in ORC format, you cannot preview data.
  • You cannot read files available in a sub-folder.
  • You cannot use a multi-character delimiter in a mapping.
  • You cannot write to the same object when you use multiple Google Cloud Storage targets in a mapping.
  • You cannot override input parameters from the parameter file in a mapping. Instead, you can use in-out parameters to override them. When you configure in-out parameters, you need to completely parameterize the values. You cannot use partial parameterization.
  • When you write data to a Google Cloud Storage flat file and select Informatica Encryption as the Encryption Type, ensure that you do not set the Compression Format to Gzip. Otherwise, the mapping fails with the following error:
    java.lang.RuntimeException
  • When you use Informatica encryption to encrypt or decrypt large files, ensure that the advanced cluster has at least twice the file size in available disk space.
    For example, to successfully encrypt or decrypt a 75 GB file, the advanced cluster must have at least 150 GB of disk space available.
  • When you set the qualifier mode to Minimal and use an escape character, the escape characters are not escaped and quoted in the target. To resolve this issue, set the qualifier mode to All.
  • When you set the qualifier mode to All and do not specify a value for the qualifier, \00 (Null) is used as the qualifier.
  • When a column name in the Google Cloud Storage source file starts with a number and you create a Google Cloud Storage target at runtime, the corresponding target column is prefixed by an underscore character (_).
  • When a column name in the source file contains special characters and you create a target at runtime, the Secure Agent replaces the special characters with the underscore (_) character in the target file.
  • When you perform update, upsert, or delete operations, the Secure Agent does not display the number of Rows Processed on the My Jobs page.
  • When you read data from a Google Cloud Storage flat file source and create a Google Cloud Storage target at runtime in Avro or ORC format, ensure that the column names in the source do not contain Unicode characters in Hindi or Kannada.
  • When you import a Google Cloud Storage source file in Parquet format to read Float values in an array and write the data to a Google Cloud Storage target file in Parquet format, precision loss is encountered in the target.
  • When you import a Google Cloud Storage source file in Avro format to read Float values in an array and write the data to a Google Cloud Storage target file in Avro format, precision loss is encountered in the target.
  • When you import a Google Cloud Storage JSON file and the file contains a column of Double data type, the data preview displays the Double values in scientific notation.
  • When you import a Google Cloud Storage source file and the file name contains special characters, the Secure Agent replaces the special characters with the underscore (_) character.
  • When you import a Google Cloud Storage source file and a file with the same name exists in another bucket, the Secure Agent imports the file with the 1_ prefix.
    For example, if the source file name in Google Cloud Storage is accounts-1.csv, the Secure Agent imports the file as 1_accounts_1_csv.
  • When you write data to an existing target in Google Cloud Storage or create a new target at runtime, the Secure Agent creates a folder with the following format and creates the part files and the success file in that folder:
    <target_file_name>_<unique object ID>
    For example, if you run a mapping to write data to the CustomerTgt.csv file, the Secure Agent creates a folder named CustomerTgt.csv_233434 that contains the SUCCESS.csv file and the part-00000.csv file. The Secure Agent writes the data to the part-00000.csv file.
    In subsequent runs of the mapping, the Secure Agent creates a new part file in the same folder.
  • When you write data to an existing target, the Secure Agent creates a success file in the created folder. If there are multiple success files in a bucket, the Secure Agent creates the file with the 1_ prefix.
  • A mapping configured with the following settings fails:
    • You do not select Optimize Object Metadata Import in the connection properties.
    • You select Partition Directory as the source object and enable the Is Directory source advanced property in the mapping.
    • You override the directory partition path in the source advanced properties with the directory partition folder created using the connection.
    To run the mapping successfully, select the Optimize Object Metadata Import connection property and refresh the metadata, or select the directory partition folder created with the connection as the source object, and then run the mapping again.
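The column-name rules described above (a leading digit gets an underscore prefix, and special characters are replaced with underscores) can be sketched as follows. This is an illustrative approximation of the observed behavior, not the Secure Agent's actual implementation:

```python
import re

def sanitize_column_name(name: str) -> str:
    """Approximate the target column-name handling: replace special
    characters with underscores and prefix names that start with a
    number. Illustrative sketch only, not the Secure Agent's logic."""
    # Replace any character that is not a letter, digit, or underscore.
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    # Prefix column names that start with a number.
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned

print(sanitize_column_name("2024_sales"))   # _2024_sales
print(sanitize_column_name("amount ($)"))   # amount____
```

If downstream systems depend on exact column names, rename the columns in the mapping rather than relying on the automatic replacement.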
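The disk-space rule for Informatica encryption (at least twice the file size must be free on the advanced cluster) can be checked with a simple pre-flight script. The staging directory below is a hypothetical placeholder; the actual path depends on your cluster configuration:

```python
import shutil

def has_space_for_encryption(file_size_bytes: int, work_dir: str = ".") -> bool:
    """Return True if work_dir has at least twice the file size free,
    per the rule of thumb for Informatica encryption and decryption.
    work_dir is a placeholder for the cluster's staging directory."""
    free = shutil.disk_usage(work_dir).free
    return free >= 2 * file_size_bytes

# For a 75 GB file, at least 150 GB must be available.
required_bytes = 2 * 75 * 1024**3
print(required_bytes)  # 161061273600
```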
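The scientific-notation behavior for Double columns in JSON files can be reproduced with standard float formatting. This snippet only illustrates the two renderings of the same value; the exact notation shown in the data preview may differ:

```python
# The same Double value rendered two ways: the data preview may show
# the scientific form, while the file holds the fixed-point value.
value = 1234567890.12
preview_style = f"{value:e}"    # scientific notation
fixed_style = f"{value:.2f}"    # fixed-point rendering
print(preview_style)  # 1.234568e+09
print(fixed_style)    # 1234567890.12
```

The display is cosmetic; the underlying value written to the target is unchanged.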
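The target folder layout described above can be sketched as follows. The object ID here is hypothetical; the Secure Agent assigns the actual unique ID at runtime:

```python
def target_folder_layout(target_file_name: str, object_id: int) -> dict:
    """Sketch of the output layout: the Secure Agent creates a folder
    named <target_file_name>_<unique object ID> and writes part files
    plus a success file inside it. Illustrative only."""
    folder = f"{target_file_name}_{object_id}"
    return {
        "folder": folder,
        "files": [f"{folder}/SUCCESS.csv", f"{folder}/part-00000.csv"],
    }

print(target_folder_layout("CustomerTgt.csv", 233434))
```

Downstream jobs that consume the output should read the part files from this folder rather than expect a single file named after the target.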
