Metadata Manager 10.1
When you create a mapping using Timestamp with Time Zone data type on SUSE 11, the mapping fails with the following error:
Database driver event...Error occurred loading library [/usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.11' not found (required by /export/home/svcoca/ocaagent/perforce/10/depends/bin/libicutu.so.55)]
Database driver event...Error occurred loading library [libpmtimestampTzexpr.so]
The Web Services Consumer transformation and the REST Web Services Consumer transformation do not support the Timestamp with Time Zone data type.
You cannot preview or run a mapping that contains a Java transformation with an unconnected output port of the Timestamp with Time Zone data type.
When the data type for a column is Timestamp with Time Zone, the embedded rule and the value frequency rule do not work on the column.
When you do not specify the date format in the Run Configurations dialog box, or when you do not specify the Timestamp with Time Zone formats in the target file, the Data Integration Service rejects rows at random during implicit conversion of a large data set.
Workaround: Verify that the data contains the date format specified in the Run Configurations dialog box and the Timestamp with Time Zone formats in the target file. You can use a data set with fewer than 100,000 rows.
When the number of input rows is greater than 100,000 and the mapping contains a Java transformation with a Timestamp with Time Zone port, the mapping sometimes fails unexpectedly.
Default value does not always appear for the Timestamp with Time Zone input port in the testing panel of the Expression Editor.
Workaround: Verify that the source data contains the following format for Timestamp with Time Zone:
MM/DD/YYYY HH24:MI:SS TZR
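In this format, TZR is an Oracle-style time zone region name such as America/New_York. As a hedged illustration outside Informatica, a minimal Python sketch of parsing a value in this shape (the sample value is invented):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# Invented sample value in MM/DD/YYYY HH24:MI:SS TZR form; TZR is a region name
raw = "03/25/2016 14:30:45 America/New_York"

# Split the region name off the end, then parse the date-time portion
date_part, tz_name = raw.rsplit(" ", 1)
ts = datetime.strptime(date_part, "%m/%d/%Y %H:%M:%S").replace(tzinfo=ZoneInfo(tz_name))
print(ts.isoformat())  # 2016-03-25T14:30:45-04:00 (daylight saving offset on that date)
```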
On AIX 6.1, a mapping fails with an unexpected condition when the mapping contains a Timestamp with Time Zone data type.
The Data Integration Service does not apply the cost-based optimization method to the mapping that contains a Timestamp with Time Zone data type even if the mapping is configured with the full optimizer level.
When you use Timestamp with Time Zone data type in the mapping, the data gets truncated if the precision exceeds seconds. The issue occurs when you enable data object caching on the logical data object mappings and the data object caching database is on IBM DB2 or Microsoft SQL Server.
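The effect of this truncation is that any fractional-second component is dropped when the value passes through the data object cache. A minimal Python sketch of the symptom (values invented for illustration, not Informatica internals):

```python
from datetime import datetime, timezone

# A timestamp with sub-second (microsecond) precision
ts = datetime(2016, 3, 25, 14, 30, 45, 123456, tzinfo=timezone.utc)

# Truncation to whole seconds discards the fractional part
cached = ts.replace(microsecond=0)

print(ts.isoformat())      # 2016-03-25T14:30:45.123456+00:00
print(cached.isoformat())  # 2016-03-25T14:30:45+00:00
```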
Nanoseconds are ignored for Timestamp with Time Zone data in the expression result at the bottom of the testing panel in the Expression Editor.
When you configure a mapping that contains a TO_BIGINT function and the function converts decimal values to bigint values for pushdown optimization, the mapping writes incorrect data to the target.
Workaround: Do not configure pushdown optimization for the mapping and run the mapping again.
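One way a pushed-down conversion can diverge from the engine's own behavior is truncation versus rounding of the fractional part of a decimal. A hedged Python illustration of that general difference (not a statement of Informatica's TO_BIGINT semantics):

```python
from decimal import Decimal, ROUND_HALF_UP

val = Decimal("12345.67")

# Truncation toward zero simply drops the fraction
truncated = int(val)

# Half-up rounding moves to the nearest integer instead
rounded = int(val.to_integral_value(rounding=ROUND_HALF_UP))

print(truncated, rounded)  # 12345 12346
```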
Expression format validation fails for the Timestamp with Time Zone functions: CREATE_TIMESTAMP_TZ, GET_TIMEZONE, GET_TIMESTAMP, and TO_TIMESTAMP_TZ.
You cannot specify a Timestamp with Time Zone data type with a time zone region in daylight saving time (TZD) format.
You cannot use a delimiter other than a colon when specifying the time zone offset with the Timestamp with Time Zone data type.
Workaround: Change the delimiter to a colon for the time zone offset for the Timestamp with Time Zone data type.
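As a hedged illustration of the workaround, a small Python sketch that rewrites a nonstandard offset delimiter to a colon before the data is loaded (the sample value and the period delimiter are invented):

```python
import re

# Invented sample where the time zone offset uses '.' instead of ':'
raw = "03/25/2016 14:30:45 +05.30"

# Replace the delimiter inside a trailing +HH.MM / -HH.MM offset with a colon
fixed = re.sub(r"([+-]\d{2})\.(\d{2})$", r"\1:\2", raw)

print(fixed)  # 03/25/2016 14:30:45 +05:30
```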
You cannot read SAP HANA data for columns of the Decimal data type with a precision of 35 to 38 digits.