Consider the following rules and guidelines when you configure a JDBC V2 mapping:
General guidelines
Consider the following general guidelines for JDBC V2 mappings:
You cannot read from or write to tables whose names exceed 128 characters. The Integration Service fails the validation and an error occurs.
When you specify a native filter, enclose both the table name and the column name in the filter condition in quotes. For example:
"Supplier"."s_suppkey"<=5
A longer example that filters on multiple columns appears after this list.
When you run a JDBC V2 mapping to read or write data that contains the time(4), time(5), or time(6) data types, the data is truncated beyond precision 3. For example, a value of 09:15:30.123456 is written as 09:15:30.123.
When you import multiple source objects, ensure that the table and column names do not contain Unicode characters.
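The following extended native filter expression illustrates the quoting guideline above. It assumes that the Supplier table from the earlier example also contains an s_name column; the column name and the values are illustrative only:
"Supplier"."s_suppkey" <= 5 AND "Supplier"."s_name" = 'Supplier#000000001'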
SAP HANA Database mappings
In a mapping that uses the JDBC V2 connection with the SAP HANA Database subtype, consider the following guidelines:
You cannot run mappings to read from or write to SAP HANA Database in the native environment.
SAP HANA Database field names that use an aggregate function do not display correctly in the Designer.
Do not map the parameter field to the target in the mapping.
Specify multiple values for a parameter separated by commas. For example:
"Package_Joe/CALC_HANDSON"."EMPNAME" = 'Alex', "Package_Joe/CALC_HANDSON"."PARAM_M_INPUT_PARAM_HANDSON" = '5000';
When you configure a filter, use only the native expression.
You cannot create dynamic mappings and use the Create Target option for SAP HANA Database.
Create Target option
When you configure a JDBC V2 target in a mapping using the Create Target option, consider the following guidelines:
When you run a mapping that is enabled to create a new Azure SQL Database target at runtime and the source data contains the Time data type, the Integration Service writes the date-time value only up to microseconds.
By default, the Target Schema Strategy is set to RETAIN - Load into existing table schema. For the Create Target option to work, you must manually set the Target Schema Strategy to CREATE - Create target if it does not exist in the Advanced properties of the Write operation.
The CREATE TABLE DDL query that is generated in the ANSI SQL-92 generic data type format might not run on all databases, because the target database might not support the data type or data length. In this case, you must create the table manually and use it as the target in the mapping. An illustrative DDL example appears after this list.
The option to edit the target metadata depends on what the target database supports. If the edit is not supported, you see an error or warning message in the session logs.
Ensure that you set the correct field precision and scale for the type of JDBC V2 database that you want to access.
When the database subtype used in the JDBC V2 connection is Others, you can write data that contains the BigInt data type only if that database supports the Numeric data type.
When you write to a PostgreSQL target and the source data contains the Time data type, the mapping runs successfully but the time value in microseconds is truncated to milliseconds.
When you write data that contains the Float, Double, Number, or Real data types to an Azure SQL Database target, the mapping fails. You must manually change the data type in the physical data object.
The Date and Time data type is converted to the Timestamp data type.
A JDBC V2 mapping that runs on the Spark engine to write Unicode characters to the target does not display the statistics. To make the summary statistics available on the Monitoring tab, set the spark.driver.extraJavaOptions and spark.executor.extraJavaOptions Spark advanced properties to -Dfile.encoding=UTF-8 in the Spark configuration section in the Hadoop connection before you run the mapping.
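For example, the two Spark properties might be entered in the Spark configuration section as the following name-value pairs. The property names and the value are taken from the guideline above; the exact entry format depends on how your Hadoop connection lists advanced properties:
spark.driver.extraJavaOptions=-Dfile.encoding=UTF-8
spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8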
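For the CREATE TABLE DDL guideline earlier in this list, the following sketch shows how a generated generic DDL might differ from a table that you create manually. The CUSTOMER table, its columns, and the adjusted types are illustrative assumptions; the types you need depend on the target database:

-- Generated ANSI SQL-92 generic DDL that a target might reject,
-- for example because the VARCHAR length exceeds the database limit
CREATE TABLE "CUSTOMER" (
    "C_ID"      NUMERIC(38,0),
    "C_COMMENT" VARCHAR(65535)
);

-- Manually created equivalent with types the target supports
-- (illustrative choices for a PostgreSQL target)
CREATE TABLE "CUSTOMER" (
    "C_ID"      BIGINT,
    "C_COMMENT" TEXT
);

Use the manually created table as the target in the mapping, and match the field precision and scale to the target columns.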