You can enable the Stateless advanced property when you run mappings in a Hadoop environment.
The Java code in the transformation cannot write output to standard output when you push transformation logic to Hadoop. The Java code can write output to standard error, which appears in the log files.
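For example, the following minimal sketch of code for the On Input Row tab writes a diagnostic message to standard error so that the message appears in the log files. The port name in_value and the message text are assumptions for illustration; replace them with the ports and messages that your transformation uses.

// On Input Row sketch. The input port in_value is hypothetical and must be
// defined on the Ports tab of the Java transformation.
try {
    // ... transformation logic that processes in_value ...
} catch (Exception e) {
    // System.out is not written to the logs on Hadoop, so write diagnostics
    // to standard error, which appears in the log files.
    System.err.println("Error processing in_value: " + e.getMessage());
}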
Some processing rules for the Hive engine differ from the processing rules for the Data Integration Service.
Partitioning
You can optimize the transformation for faster processing when you enable an input port as a partition key and a sort key. The data is partitioned across the reducer tasks, and the output is partially sorted.
The following restrictions apply to the Transformation Scope property:
The value Transaction is not valid for the transformation scope.
If the transformation scope is set to Row, the Java transformation is run by the mapper script.
If you enable an input port as a partition key, the transformation scope is set to All Input. When the transformation scope is set to All Input, the Java transformation is run by the reducer script, and you must set at least one input field as a group-by field for the reducer key.
Using External .jar Files
To use external .jar files in a Java transformation, perform the following steps:
Copy the external .jar files to the Informatica installation directory on the Data Integration Service machine at the following location: