Certain rules and guidelines apply when you enable a mapping for SQL ELT optimization to a Google BigQuery database.
When you configure a Google BigQuery source, Google Cloud Storage source, or Google BigQuery target, adhere to the following guidelines:
The Service Account ID associated with the Google BigQuery V2 connection must have permissions to access Google Cloud Storage buckets and files.
You cannot enable full SQL ELT optimization for a mapping task when the target table contains columns of record data type or repeated columns.
You cannot enable full SQL ELT optimization for a mapping task when the task contains a mapping with a single transformation connected to multiple transformations downstream or multiple transformations connected to a single transformation.
You must ensure that the column header names in the Google Cloud Storage source file do not contain Unicode characters. Otherwise, the mapping fails.
When you enable SQL ELT optimization for a mapping with multiple pipelines that write to the same Google BigQuery target and the Truncate target table option is enabled in each pipeline, the target table is truncated for each pipeline when data is inserted into the target.
For example, if there are two pipelines, the target table is truncated in pipeline 1 and then the data is inserted. Similarly, in pipeline 2 the target table is truncated and the data is inserted into the target table. Hence, the target table contains only the data from pipeline 2.
When you run a mapping without SQL ELT optimization and the mapping contains multiple pipelines, the target tables are truncated at once for all pipelines and then the data is inserted.
When you configure a Filter transformation or specify a filter condition, ensure that you do not specify special characters. Use the ASCII value for the special character in the filter condition.
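For instance, a literal dollar sign (ASCII value 36) can be matched with the CHR() expression function instead of typing the character itself. The field name ITEM_NAME below is a hypothetical example:

```
-- Hypothetical filter condition: rows where ITEM_NAME contains '$' (ASCII 36)
INSTR(ITEM_NAME, CHR(36)) > 0
```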
When you parameterize the Google BigQuery V2 connection, source, or target and provide values in the mapping task using a parameter file, the default values for the parameter are not overridden with the values in the parameter file.
If the Google Cloud Storage source file contains a column of Boolean data type, the SQL ELT optimization query fails.
You must ensure that the Google BigQuery source object does not contain any partitions.
When you read from a Google BigQuery source and edit the metadata for the source fields, the Secure Agent ignores the changes to the metadata.
If the Google BigQuery source and target objects reside in the same region other than US, do not specify the Region ID explicitly in the connection.
You must not specify a legacy SQL query in the pre-SQL and post-SQL advanced source or target properties.
A mapping run without SQL ELT optimization fails if any of the pre-SQL and post-SQL commands fail in a multi-statement query. Previously, such mappings were successful.
When you specify a custom query as the source object and specify a dataset name in the Source Dataset ID source advanced property, the mapping runs without full SQL ELT optimization.
When you specify a custom query as the source object and specify an SQL override query, you must specify a dataset name in the Source Dataset ID source advanced property.
When you specify a custom query as the source object and specify an SQL override query with different column names, ensure that the data types and the order of the columns in the SQL override query match the data types and order in the custom query.
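As a sketch with hypothetical table and column names, the override may rename columns, but the data types and column order must line up with the custom query:

```sql
-- Custom query: returns an INT64 column followed by a STRING column
SELECT order_id, customer_name FROM mydataset.orders;

-- Valid SQL override: different column names, same data types in the same order
SELECT o_id, c_name FROM mydataset.orders_v2;
```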
When you select as a source object a view that contains columns of the Record data type or repeated columns and create a new target at runtime, a validation error appears in the session logs and the mapping runs without full SQL ELT optimization.
To load data into columns of date, time, datetime, or timestamp in a Google BigQuery target, you must pass the data through the TO_DATE() function as an expression and map the results to the target column.
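For example, an Expression transformation output port might convert a string port to a date/time value before it is mapped to the target column. ORDER_DATE_STR is a hypothetical input port:

```
-- Convert the string to a date/time value, then map the output port to the target
TO_DATE(ORDER_DATE_STR, 'MM/DD/YYYY HH24:MI:SS')
```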
When you set the SCD Type 2 merge optimization context for a mapping, you cannot use a filter, a joiner, or a custom SQL query.
If the mapping contains a Router transformation output connected to a Sequence Generator transformation, the mapping does not push down the mapping logic to the point where the transformation is supported and runs without SQL ELT optimization.
When you push down a Router transformation with IIF and IS_SPACE() functions in a mapping that reads from and writes to Google BigQuery, and the Boolean values are 0 and 1, the mapping fails. When the Boolean values are true and false, the mapping runs successfully.
When you use multiple functions within a transformation and one of the functions cannot be pushed to Google BigQuery, the mapping runs without SQL ELT optimization.
When the mapping contains multiple pipelines and a function within one of the transformations cannot be pushed to Google BigQuery, the mapping does not push down the mapping logic to the point where the transformation is supported and the mapping runs without SQL ELT optimization.
When you read from or write data to Google BigQuery objects that are associated with different projects in different Google service accounts and that reside in different regions, the mapping runs without SQL ELT optimization.
When you use the data driven operation to write data to a Google BigQuery target and enable the Disable Duplicate Update Rows target advanced property, the Secure Agent ignores the Disable Duplicate Update Rows property.
When you read data from a Google BigQuery source that contains duplicate update keys and enable the Disable Duplicate Update Rows target advanced property, the Secure Agent ignores the Disable Duplicate Update Rows property.
When you configure a mapping that includes any of the following datetime scenarios, the mapping runs without SQL ELT optimization:
Map data from the TIME data type to any other date/time data type
Map data from the DATE data type to the TIME data type
Compare data of the TIME and TIMESTAMP data types with the DATE or DATETIME data types
When you use the $$$SESSSTARTTIME variable in a custom query, the variable returns the session start time as a string value. Use the following syntax to convert the string value to a timestamp or datetime:
SELECT PARSE_DATETIME('%m/%d/%Y %H:%M:%E6S', '$$$SESSSTARTTIME') as t1;
SELECT cast(substr(cast('$$$SESSSTARTTIME' as string),0,19) as datetime FORMAT 'MM/DD/YYYY HH24:MI:SS') as datetime;
Ensure that the time zones of the Google BigQuery project and the agent machine are the same.
When you set the OptimizeCastsInPDO:true custom property in the advanced settings for a Google BigQuery V2 connection, the SESSSTARTTIME, SYSDATE, SYSTIMESTAMP, and TO_DATE functions return data of DATETIME data type.
When you configure a native filter in the Source transformation, ensure that you do not prefix the field name with the table name and dataset name in the filter expression. Otherwise, the mapping fails.
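For instance, assuming a hypothetical field ORDER_AMT in table ORDERS of dataset SALES:

```sql
-- Correct: reference the field name only
ORDER_AMT > 100

-- Incorrect: prefixing with dataset and table names causes the mapping to fail
SALES.ORDERS.ORDER_AMT > 100
```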
When you configure an insert operation and set the Write Disposition property as Write Truncate in the target transformation properties, the mapping appends the records to the target table instead of truncating the target table before loading data. To truncate the target table when you insert records, select the Truncate target table option in the target advanced properties.
When you create a target table at runtime and perform DML operations, the mapping might fail if the expression port returns a null value of a non-integer data type.
You cannot configure a mapping enabled with SQL ELT optimization to read from an Amazon S3 source and write to Google BigQuery when using Workload Identity Federation authentication to connect to Google BigQuery.
You cannot read from and write to BigNumeric and JSON data types in a mapping enabled with SQL ELT optimization.
When you configure the EnableSingleCommit custom property in the Advanced Session Properties of a mapping task, the defined flow run order to load the targets from the pipelines in the mapping is not honored, and the operations from the pipelines run concurrently.