Update Streaming Objects

Big Data Streaming now uses Spark Structured Streaming instead of Spark Streaming to process data. To support Spark Structured Streaming, header ports are added to the data objects, and support for some data objects and transformations is deferred to a future release. The behavior of some data objects also changes.
After you upgrade, existing streaming mappings become invalid because of the unavailable header ports, the unsupported transformations or data objects, and the changed behavior of some data objects.
Perform this task in the following situations:
  • You upgraded from version 10.1.1, 10.2.0, or 10.2.1.
To use an existing streaming mapping, perform the following tasks:
  • Re-create the physical data objects. When you re-create them, the data objects get the required header ports, such as timestamp, partitionID, or key, depending on the data object.
  • In a Normalizer transformation, if the column is set to Auto, re-create the Normalizer transformation. You must re-create it because the type configuration property of the complex port refers to the physical data object that you plan to replace.
  • Update the streaming mapping. If the mapping contains a Kafka target or an Aggregator, Joiner, or Normalizer transformation, replace the data object or transformation and then update the mapping to account for the changed behavior of these transformations and data objects.
  • Verify the deferred data object types. If the streaming mapping contains unsupported transformations or data objects, contact Informatica Global Customer Support.
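The header ports described above correspond to the record metadata that Spark Structured Streaming exposes on streaming sources; for a Kafka source, that metadata includes the message key, partition, and timestamp. The sketch below is a plain-Python illustration of the idea, not Informatica or Spark API: a raw message payload is wrapped with the three header fields named above (the function and field names are assumptions for illustration only).

```python
from datetime import datetime, timezone

# Header ports that a re-created physical data object exposes.
# These names follow the ports listed above; this is an
# illustrative model, not an Informatica API.
HEADER_PORTS = ("timestamp", "partitionID", "key")

def add_header_ports(payload, key, partition, event_time):
    """Wrap a raw message payload with the header ports that a
    re-created streaming data object would expose (hypothetical
    helper for illustration)."""
    return {
        "timestamp": event_time,
        "partitionID": partition,
        "key": key,
        "data": payload,
    }

record = add_header_ports(
    payload='{"sensor": 7, "reading": 21.5}',
    key="sensor-7",
    partition=3,
    event_time=datetime(2019, 1, 1, tzinfo=timezone.utc),
)

# Every header port is present alongside the original payload.
assert all(port in record for port in HEADER_PORTS)
```

Because the ports come from the source metadata, re-creating the physical data object is what makes them appear; they cannot be added to a pre-upgrade data object in place.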

