Data Engineering Integration
| Transformation | Rules and Guidelines |
| --- | --- |
| Aggregator | Mapping validation fails in the following situations: |
| Expression | Mapping validation fails in the following situations:<br>If an expression results in numerical errors, such as division by zero or SQRT of a negative number, it returns an infinite or a NaN value on the Spark engine. In the native environment, the expression returns null values and the rows do not appear in the output. |
| Filter | Supported without restrictions. |
| Java | You must copy external .jar files that a Java transformation requires to the Informatica installation directory on the Hadoop cluster, at the following location: `[$HADOOP_NODE_INFA_HOME]/services/shared/jars`.<br>To run user code directly on the Spark engine, the JDK version that the Data Integration Service uses must be compatible with the JRE version on the cluster. For best performance, create the environment variable DIS_JDK_HOME on the Data Integration Service in the Administrator tool. The environment variable contains the path to the JDK installation folder on the machine that runs the Data Integration Service. For example, you might enter a value such as `/usr/java/default`.<br>The Partitionable property must be enabled in the Java transformation. The transformation cannot run in one partition.<br>For date/time values, the Spark engine supports precision up to microseconds. If a date/time value contains nanoseconds, the trailing digits are truncated.<br>When you enable high precision and the Java transformation contains a field of the decimal data type, a validation error occurs.<br>The following restrictions apply to the Transformation Scope property:<br>The Java code in the transformation cannot write output to standard output when you push transformation logic to Hadoop. The Java code can write output to standard error, which appears in the log files. |
| Joiner | Mapping validation fails in the following situations: |
| Lookup | Mapping validation fails in the following situations:<br>The mapping fails in the following situations:<br>When you use Sqoop and look up data in a Hive table based on a column of the float data type, the Lookup transformation might return incorrect results. |
| Router | Supported without restrictions. |
| Sorter | Mapping validation fails in the following situations:<br>The Data Integration Service logs a warning and ignores the Sorter transformation in the following situations:<br>The Data Integration Service treats null values as high even if you configure the transformation to treat null values as low. |
| Union | Supported without restrictions. |
Transformations not listed in this table are not supported.
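The Expression entry above notes that numeric errors such as division by zero or SQRT of a negative number return infinite or NaN values on the Spark engine instead of nulls. Plain Java follows the same IEEE 754 floating-point rules, so the behavior can be sketched as follows. This is an illustration only, not Informatica expression-language code:

```java
public class NumericErrors {
    public static void main(String[] args) {
        // Floating-point division by zero does not throw an exception;
        // it produces an infinite value, as on the Spark engine.
        double div = 1.0 / 0.0;
        System.out.println(div);                    // Infinity
        System.out.println(Double.isInfinite(div)); // true

        // SQRT of a negative number produces NaN, not an error.
        double root = Math.sqrt(-4.0);
        System.out.println(Double.isNaN(root));     // true

        // NaN compares unequal to everything, including itself, which is
        // why NaN rows can behave unexpectedly in downstream filters.
        System.out.println(root == root);           // false
    }
}
```

In the native environment the same expressions would instead yield nulls, and those rows would not appear in the output.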
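The Java entry above states that the Spark engine truncates date/time values to microsecond precision. A minimal sketch of what that truncation means, using the standard `java.time` API rather than Informatica's internal date/time representation:

```java
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class MicrosecondTruncation {
    public static void main(String[] args) {
        // A timestamp carrying nanosecond precision (.123456789).
        LocalDateTime ts = LocalDateTime.of(2018, 7, 3, 12, 30, 15, 123_456_789);

        // Truncating to microseconds drops the trailing three digits,
        // mirroring what the Spark engine does with date/time values.
        LocalDateTime truncated = ts.truncatedTo(ChronoUnit.MICROS);

        System.out.println(ts);        // 2018-07-03T12:30:15.123456789
        System.out.println(truncated); // 2018-07-03T12:30:15.123456
    }
}
```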
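The Sorter entry above notes that the Data Integration Service treats null values as high even when the transformation is configured to treat them as low. The difference between the two orderings can be sketched with standard Java comparators; this is illustrative only and does not show the Sorter transformation's internals:

```java
import java.util.Arrays;
import java.util.Comparator;

public class NullOrdering {
    public static void main(String[] args) {
        Integer[] values = {3, null, 1, null, 2};

        // Nulls treated as HIGH: nulls sort after every non-null value.
        // Per the table above, this is the ordering that is always applied,
        // even if the transformation is configured for nulls-low.
        Integer[] high = values.clone();
        Arrays.sort(high, Comparator.nullsLast(Comparator.<Integer>naturalOrder()));
        System.out.println(Arrays.toString(high)); // [1, 2, 3, null, null]

        // Nulls treated as LOW: the configuration that is ignored.
        Integer[] low = values.clone();
        Arrays.sort(low, Comparator.nullsFirst(Comparator.<Integer>naturalOrder()));
        System.out.println(Arrays.toString(low));  // [null, null, 1, 2, 3]
    }
}
```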
Updated July 03, 2018