Developer Transformation Guide

Java Transformation Support on the Spark Engine

You can use complex data types to process hierarchical data.
Some processing rules for the Spark engine differ from the processing rules for the Data Integration Service.

General Restrictions

The Java transformation is supported with the following restrictions on the Spark engine:
  • The Java code in the transformation cannot write output to standard output when you push transformation logic to Hadoop. The Java code can write output to standard error, which appears in the log files (see the sketch after this list).
  • For date/time values, the Spark engine supports precision up to microseconds. If a date/time value contains nanoseconds, the trailing digits are truncated.
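
To make the two restrictions concrete, the following is a minimal, self-contained sketch in plain Java; the class name and sample values are illustrative assumptions, not product code:

    import java.time.LocalDateTime;
    import java.time.temporal.ChronoUnit;

    public class SparkRestrictionsDemo {
        public static void main(String[] args) {
            // Restriction 1: on Hadoop pushdown, standard output is not
            // available. Write diagnostics to standard error, which the
            // engine captures in the log files.
            System.err.println("diagnostic: row processed");    // appears in the logs
            // System.out.println("diagnostic: row processed"); // not valid on Hadoop

            // Restriction 2: the Spark engine keeps at most microsecond
            // precision for date/time values. truncatedTo() models how
            // the trailing nanosecond digits are dropped.
            LocalDateTime withNanos = LocalDateTime.parse("2023-06-01T10:30:00.123456789");
            LocalDateTime asOnSpark = withNanos.truncatedTo(ChronoUnit.MICROS);
            System.err.println(asOnSpark);   // 2023-06-01T10:30:00.123456
        }
    }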

Partitioning

The Java transformation has the following restrictions when used with partitioning:
  • The Partitionable property must be enabled in the Java transformation. The transformation cannot run in one partition.
  • The following restrictions apply to the Transformation Scope property:
    • The value Transaction for transformation scope is not valid.
    • If you enable an input port for partition key, the transformation scope must be set to All Input.
    • Stateless must be enabled if the transformation scope is Row.

Mapping Validation

Mapping validation fails in the following situations:
  • You reference an unconnected Lookup transformation from an expression within a Java transformation.
  • You select a port of a complex data type as the partition or sort key.
  • You enable nanosecond processing in date/time and the Java transformation contains a port of a complex data type with an element of a date/time type. For example, a port of type array<date/time> is not valid if you enable nanosecond processing in date/time.
  • When you enable high precision, a validation error occurs in the following situations:
    • The Java transformation contains a port of a decimal data type.
    • The Java transformation contains a complex data type with an element of a decimal data type.

Using External .jar Files

To use external .jar files in a Java transformation, perform the following steps:
  1. Copy external .jar files to the Informatica installation directory on the Data Integration Service machine at the following location:
    <Informatica installation directory>/services/shared/jars
  2. Recycle the Data Integration Service.
  3. On the machine that hosts the Developer tool where you develop and run the mapping that contains the Java transformation:
    1. Copy external .jar files to a directory on the local machine.
    2. Edit the Java transformation to include an import statement pointing to the local .jar files (see the sketch after these steps).
    3. Update the classpath in the Java transformation.
    4. Compile the transformation.
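
As an illustration of steps 3.2 and 3.3, a hedged sketch: assume the external .jar provides a class com.example.geo.Geocoder. The package, class, method, and port names here are hypothetical and stand in for whatever the library actually exposes:

    // Imports tab of the Java transformation:
    import com.example.geo.Geocoder;   // hypothetical class from the external .jar

    // Classpath of the Java transformation: add the local copy of the .jar,
    // for example /home/dev/libs/geocoder.jar, so that the code compiles.

    // On Input Row tab: the imported class is now available to user code.
    Geocoder coder = new Geocoder();                 // hypothetical API
    String region = coder.lookupRegion(postalCode);  // postalCode: assumed input port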

Setting the JDK Path

To use complex ports in the Java transformation and to run Java user code directly on the Spark engine, you must set the JDK path.
In the Administrator tool, configure the following execution option for the Data Integration Service:
Property: JDK Home Directory
Description: The JDK installation directory on the machine that runs the Data Integration Service. Changes take effect after you recycle the Data Integration Service. The JDK version that the Data Integration Service uses must be compatible with the JRE version on the cluster. For example, enter a value such as /usr/java/default. Default is blank.