Data Integration Connectors
| Property | Description |
| --- | --- |
| Target Dataset ID | Optional. Overrides the Google BigQuery dataset name that you specified in the connection. |
| Target Table Name | Optional. Overrides the Google BigQuery target table name that you specified in the Target page of the synchronization task. |
| Create Disposition | Specifies whether Google BigQuery Connector must create the target table if it does not exist. You can select one of the following values:<br>- Create If Needed. Creates the target table if it does not exist.<br>- Create Never. Does not create the target table. The task fails if the target table does not exist. |
| Write Disposition | Specifies how Google BigQuery Connector must write data in bulk mode if the target table already exists. You can select one of the following values:<br>- Write Append. Appends the data to the existing data in the target table.<br>- Write Truncate. Overwrites the existing data in the target table.<br>- Write Empty. Writes the data only if the target table contains no data.<br>Write disposition is applicable only in bulk mode when you perform an insert operation on a Google BigQuery target. |
| Write Mode | Specifies the mode in which Google BigQuery Connector writes data to the Google BigQuery target. You can select one of the following modes:<br>- Bulk. Stages the data in a file in Google Cloud Storage and then loads it to the target.<br>- Streaming. Streams the data directly to the target.<br>Default is Bulk mode. |
| Streaming Template Table Suffix | Specifies the suffix that Google BigQuery Connector adds to the individual target tables that it creates based on the template target table. This property applies to streaming mode. |
| Rows per Streaming Request | Specifies the number of rows that Google BigQuery Connector streams to the Google BigQuery target in each request. Default is 500 rows. The maximum amount of data that Google BigQuery Connector can stream to the Google BigQuery target in each request is 10 MB. This property applies to streaming mode. |
| Staging file name | Name of the staging file that Google BigQuery Connector creates in Google Cloud Storage before it loads the data to the Google BigQuery target. This property applies to bulk mode. |
| Data Format of the staging file | Specifies the data format of the staging file. You can select one of the following data formats:<br>- CSV<br>- JSON (Newline Delimited) |
| Persist Staging File After Loading | Indicates whether Google BigQuery Connector must persist the staging file in Google Cloud Storage after it writes the data to the Google BigQuery target. You can persist the staging file if you want to archive the data for future reference. By default, Google BigQuery Connector deletes the staging file from Google Cloud Storage. This property applies to bulk mode. |
| Enable Staging File Compression | Select this option to compress the staging file before it is written to Google Cloud Storage and to decompress it before the data is loaded to the Google BigQuery target. You can enable staging file compression to reduce cost and transfer time. |
| Job Poll Interval in Seconds | The number of seconds after which Google BigQuery Connector polls the status of the write job operation. Default is 10. |
| Number of Threads for Uploading Staging file | The number of files that Google BigQuery Connector must create to upload the staging file in bulk mode. |
| Local Stage File Directory | Specifies the directory on your local machine where Google BigQuery Connector temporarily stores the files before writing the data to the staging file in Google Cloud Storage. This property applies to bulk mode. |
| Allow Quoted Newlines | Indicates whether Google BigQuery Connector must allow quoted data sections that contain newline characters in a .csv file. |
| Field Delimiter | Delimiter character for the fields in a .csv file. |
| Allow Jagged Rows | Indicates whether Google BigQuery Connector must accept rows that are missing trailing columns in a .csv file. |
| Pre SQL | SQL statement that you want to run before writing data to the target. For example, if you want to select records from the database before you write the records into the table, specify the following pre SQL statement:<br>``SELECT * FROM `api-project-80697026669.EMPLOYEE.RegionNation` LIMIT 1000`` |
| Pre SQL Configuration | Specify a pre SQL configuration. For example:<br>`DestinationTable:PRESQL_TGT2,DestinationDataset:EMPLOYEE,FlattenResults:False,WriteDisposition:WRITE_TRUNCATE,UseLegacySql:False` |
| Post SQL | SQL statement that you want to run after writing the data into the target. For example, if you want to update records in a table after you write the records into the target table, specify the following post SQL statement:<br>`UPDATE [api-project-80697026669.EMPLOYEE.PERSONS_TGT_DEL] SET phoneNumber.number = 1000011, phoneNumber.areaCode = 100 WHERE fullname = 'John Doe'` |
| Post SQL Configuration | Specify a post SQL configuration. For example:<br>`DestinationTable:POSTSQL_SRC,DestinationDataset:EMPLOYEE,FlattenResults:True,UseLegacySQL:False` |
| Success File Directory | Not applicable for Google BigQuery Connector. |
| Error File Directory | Not applicable for Google BigQuery Connector. |
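The write disposition values follow Google BigQuery's standard load-job semantics. The following Python sketch models those semantics on plain lists so you can see the difference between the options; it is an illustrative model, not connector code:

```python
def apply_write_disposition(existing_rows, new_rows, disposition):
    """Model BigQuery's standard write dispositions against an existing table.
    Illustrative only: real behavior is enforced by the BigQuery load job."""
    if disposition == "WRITE_TRUNCATE":
        return list(new_rows)                  # overwrite the existing data
    if disposition == "WRITE_APPEND":
        return existing_rows + list(new_rows)  # append to the existing data
    if disposition == "WRITE_EMPTY":
        if existing_rows:
            raise ValueError("WRITE_EMPTY: target table already contains data")
        return list(new_rows)
    raise ValueError("unknown write disposition: %s" % disposition)

print(apply_write_disposition([1, 2], [3], "WRITE_APPEND"))    # [1, 2, 3]
print(apply_write_disposition([1, 2], [3], "WRITE_TRUNCATE"))  # [3]
```

`WRITE_EMPTY` fails fast when the target already holds data, which makes it useful as a guard against accidental double loads.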
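To see how Rows per Streaming Request interacts with the 10 MB per-request limit, consider this Python sketch of a batching loop. It is a hedged illustration of the general idea; the connector's internal batching logic is not published:

```python
import json

def batch_rows(rows, rows_per_request=500, max_request_bytes=10 * 1024 * 1024):
    """Group rows into streaming requests of at most rows_per_request rows,
    keeping each serialized request under max_request_bytes."""
    batch, batch_bytes = [], 0
    for row in rows:
        row_bytes = len(json.dumps(row).encode("utf-8"))
        if batch and (len(batch) >= rows_per_request
                      or batch_bytes + row_bytes > max_request_bytes):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(row)
        batch_bytes += row_bytes
    if batch:
        yield batch

# 1,200 small rows split at the default 500-row boundary:
sizes = [len(b) for b in batch_rows({"id": i} for i in range(1200))]
print(sizes)  # [500, 500, 200]
```

With large rows, the 10 MB byte cap closes a batch before the row count does, so a request is never oversized.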
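Enable Staging File Compression trades CPU time for upload size. A minimal Python sketch of the round trip, assuming gzip as the codec for illustration (the connector's actual codec is not specified here):

```python
import gzip

def compress_staging_file(data: bytes) -> bytes:
    """Compress staging data before uploading it to Google Cloud Storage."""
    return gzip.compress(data)

def decompress_staging_file(data: bytes) -> bytes:
    """Restore the original bytes before the data is loaded to the target."""
    return gzip.decompress(data)

# Repetitive CSV data compresses well, which is where the cost savings come from.
csv_rows = b"id,name\n" + b"".join(b"%d,row_%d\n" % (i, i) for i in range(1000))
packed = compress_staging_file(csv_rows)
assert decompress_staging_file(packed) == csv_rows
print(len(csv_rows), "->", len(packed))  # compressed payload is much smaller
```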
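Job Poll Interval in Seconds controls how often the connector checks the status of a write job. A generic polling loop looks like this sketch; `get_job_state` is a hypothetical callable standing in for the status check, not a connector API:

```python
import time

def wait_for_job(get_job_state, poll_interval=10, sleep=time.sleep):
    """Poll a job's state every poll_interval seconds until it reports DONE.
    The sleep parameter is injectable so the loop can be tested without waiting."""
    while True:
        state = get_job_state()
        if state == "DONE":
            return state
        sleep(poll_interval)

# Usage with a fake job that finishes on the third status check:
states = iter(["PENDING", "RUNNING", "DONE"])
print(wait_for_job(lambda: next(states), poll_interval=10, sleep=lambda s: None))  # DONE
```

A longer interval reduces API calls at the cost of noticing completion later; the default of 10 seconds is the middle ground the connector ships with.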
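The Pre SQL Configuration and Post SQL Configuration values are comma-separated lists of `Key:Value` pairs, as in the examples above. A minimal Python sketch of how such a string breaks down (illustrative parsing only, not connector code):

```python
def parse_sql_config(config: str) -> dict:
    """Split a comma-separated list of Key:Value pairs into a dict,
    e.g. 'DestinationTable:PRESQL_TGT2,DestinationDataset:EMPLOYEE'."""
    pairs = {}
    for item in config.split(","):
        key, _, value = item.strip().partition(":")
        pairs[key] = value
    return pairs

config = ("DestinationTable:PRESQL_TGT2,DestinationDataset:EMPLOYEE,"
          "FlattenResults:False,WriteDisposition:WRITE_TRUNCATE,UseLegacySql:False")
print(parse_sql_config(config)["WriteDisposition"])  # WRITE_TRUNCATE
```

Because keys and values are separated by the first `:` in each pair, values themselves must not contain commas.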