Google BigQuery Connectors

Configure unique staging object names for concurrent mappings

When you run concurrent mappings to read data from or write data to Google BigQuery using staging mode, you can configure the mapping to create staging objects with unique names.
Google BigQuery V2 Connector creates the following staging objects based on how you configure the mapping:
  • Staging tables
  • Staging files
  • Staging views
To create staging objects with unique names for concurrent mappings, configure the following property in the Google BigQuery V2 connection:
Connection Property: Provide Optional Properties
Value: RandomizeStagingObjectNames:true
Google BigQuery V2 Connector uses the following naming conventions to create unique names for the staging objects based on how you configure the mapping:
  • Custom Query and SQL Override, temporary staging view for metadata import:
    <Informatica_Prefix>_temp_view_for_metadata_<TimestampUptoMilliseconds>_<UUID>
  • Custom Query and SQL Override, staging view for runtime:
    <Informatica_Prefix>_View_<TimestampUptoMilliseconds>_<UUID>
  • Staging Read Mode, staging table:
    <Informatica_Prefix>_<SourceTableName>_<PartitionID>¹_E_<TimestampUptoMilliseconds>_<UUID>
  • Staging Read Mode with Custom Query and SQL Override, staging view:
    <Informatica_Prefix>_View_Table_Name_<PartitionID>¹_E_<TimestampUptoMilliseconds>_<UUID>
  • Staging Read Mode with the Staging File Name property for CSV, JSON, Avro, or Parquet format, staging file:
    <StagingFileName>_<PartitionID>¹
  • Staging Read Mode with CSV, JSON, Avro, or Parquet format without the Staging File Name property, staging file:
    <Informatica_Prefix>_StgF_<TimestampUptoMilliseconds>_<UUID><PartitionID>¹.<FileExtension>
  • Update, Upsert, Delete, or Data Driven operations on a Google BigQuery target, staging table:
    <Informatica_Prefix>_<TargetTableName>_<PartitionID>¹_T_<TimestampUptoMilliseconds>_<UUID>
  • Lookup, staging table:
    <Informatica_Prefix>_LKP_<SourceTableName>__E_<TimestampUptoMilliseconds>_<UUID>_<TableIDIncrement>
  • Lookup with CSV, JSON, Avro, or Parquet staging file format without the Staging File Name property, staging file:
    <Informatica_Prefix>_StgF_<TimestampUptoMilliseconds>_<UUID><StagingFileIncrement>.<FileExtension>
  • SQL ELT optimization with Google Cloud Storage source and Google BigQuery target, staging table:
    <Informatica_Prefix>_<TargetTableName>_PDO_<TimestampUptoMilliseconds>_<UUID>
  • SQL ELT optimization with Google BigQuery source and target, staging table:
    <Informatica_Prefix>_<TargetTableName>_PDO_<TimestampUptoMilliseconds>_<UUID>
  • SQL ELT optimization with Google BigQuery source and target, temporary views for custom query and SQL override:
    <Informatica_Prefix>_View_PDO_<TimestampUptoMilliseconds>_<UUID>
¹Applies only when you configure partitions.
UUID represents a Universally Unique Identifier string.
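
To make the name format concrete, the following minimal Python sketch composes a staging table name in the documented <Informatica_Prefix>_<SourceTableName>_<PartitionID>¹_E_<TimestampUptoMilliseconds>_<UUID> pattern. The connector generates these names internally; the function name, prefix value, and exact timestamp formatting shown here are illustrative assumptions, not product behavior.

```python
import uuid
from datetime import datetime, timezone

def staging_table_name(prefix, source_table, partition_id=None):
    """Illustrative only: compose a name in the documented
    <Informatica_Prefix>_<SourceTableName>_<PartitionID>_E_<TimestampUptoMilliseconds>_<UUID>
    pattern for staging read mode."""
    # Timestamp up to milliseconds (the exact format the connector uses is an assumption).
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S%f")[:-3]
    parts = [prefix, source_table]
    if partition_id is not None:  # partition ID applies only when you configure partitions
        parts.append(str(partition_id))
    parts += ["E", timestamp, str(uuid.uuid4())]
    return "_".join(parts)

# Hypothetical prefix, source table, and partition:
print(staging_table_name("INFA", "ORDERS", partition_id=2))
# e.g. INFA_ORDERS_2_E_20240131120515123_1f4d...
```

Because the timestamp and UUID differ on every run, concurrent mappings that stage the same source table no longer collide on the staging object name.
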
If you select the Persist Extract Staging File After Download source advanced property or the Persist Staging File After Loading target advanced property, Google BigQuery V2 Connector appends "P" to the end of the <Informatica_Prefix>.
For example, when you use Staging Read Mode and select the Persist Extract Staging File After Download source advanced property, Google BigQuery V2 Connector uses the following format for the staging table name:
<Informatica_Prefix>P_<SourceTableName>_<PartitionID>¹_E_<TimestampUptoMilliseconds>_<UUID>
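
Continuing the hypothetical staging_table_name sketch shown earlier, the persist behavior changes only the prefix:

```python
# Assumption for illustration: "P" is appended to the Informatica prefix when the
# Persist Extract Staging File After Download source advanced property is selected.
prefix = "INFA"                      # hypothetical <Informatica_Prefix>
persist_extract_staging_file = True
if persist_extract_staging_file:
    prefix += "P"                    # name becomes <Informatica_Prefix>P_...
print(staging_table_name(prefix, "ORDERS", partition_id=2))
```
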
If you cancel a job, Google BigQuery V2 Connector deletes the staging object for the following mapping configurations:
  • Staging or direct read mode
  • Bulk write mode
  • Staging read mode and bulk write mode with a staging file name for CSV, JSON, Avro, or Parquet format
  • Custom query
  • SQL override
  • Update, upsert, delete, or data driven operations on a Google BigQuery target
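
If you need to verify which staging objects exist in a dataset, the naming conventions above make them easy to identify. The following hypothetical sketch uses the google-cloud-bigquery Python client to list objects that carry the Informatica prefix; it is not part of the connector, and the dataset ID and prefix values are assumptions.

```python
from google.cloud import bigquery

# Hypothetical values: replace with your project, staging dataset, and prefix.
DATASET_ID = "my-project.my_staging_dataset"
INFA_PREFIX = "INFA"

client = bigquery.Client()

# Staging tables, views, and persisted variants (prefix followed by "P") share the
# same leading prefix under the naming conventions described above.
for table in client.list_tables(DATASET_ID):
    if table.table_id.startswith(INFA_PREFIX):
        print(table.table_id, table.table_type)
```
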
