Kafka Connector

Rules and guidelines for Kafka mappings

Use the following rules and guidelines when you configure a Kafka mapping:
  • You cannot parameterize the source object in a mapping.
  • You cannot configure partitioning in a mapping that reads data from a Kafka source in batch mode.
  • You cannot configure a Lookup transformation or a Sorter transformation in a Kafka mapping.
  • When you use a Secure Agent group to run a mapping, you cannot configure message recovery.
  • When you configure a mapping that reads hierarchical data types from the source, the mapping might fail at runtime.
  • You can use the Avro formatting option with a Kafka connection that uses the Confluent schema registry.
  • When you select JSON as the format type, you cannot read hierarchical data from a Kafka source or write hierarchical data to a Kafka target.
  • When you abort a mapping task or terminate the Data Transformation Manager (DTM) process, the number of success rows in the session log and the target results page of the mapping task do not match.
  • When you read Avro data from or write Avro data to a Kafka topic with Confluent Schema Registry, the mapping runs with the latest registered schema for the selected topic. In subsequent runs, if the schema is updated, ensure that you refresh the metadata for the mapping or create a new mapping. To check which schema version the registry currently reports, see the first sketch after this list.
  • When you read from or write to a column of Boolean data type in a Kafka topic with Confluent Schema Registry, ensure that you specify 0 for False and 1 for True, as shown in the second sketch after this list.
  • When you read Avro data from or write Avro data to a Kafka topic with Confluent schema registry, you cannot view the schema in the formatting options.
  • For a Kafka connection used in an existing mapping that reads Avro data from or writes Avro data to a Kafka topic, do not configure the Schema Registry URL connection property. Otherwise, when you rerun the same mapping, the mapping fails with the following error:
    [ERROR] java.lang.RuntimeException: Failed : HTTP error code : 422
  • When you run a mapping that reads from a Kafka source and writes to a target of the JSON format type, and the target includes the key field, the mapping fails.
  • The data preview in the Target transformation displays additional fields, such as offset and header, that are received from the source data. These fields are not available in the Kafka target, and you cannot map them to the target.
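
A mapping with Confluent Schema Registry binds to the schema version that is registered when you import the metadata, so before a rerun it can help to check what the registry currently reports. The following sketch is not part of the connector. It assumes a registry at http://localhost:8081 and a hypothetical topic named orders, and uses the Confluent Schema Registry REST API with the Java 11+ HttpClient to fetch the latest registered value schema for the topic:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LatestSchemaCheck {
        public static void main(String[] args) throws Exception {
            String registryUrl = "http://localhost:8081";  // assumption: local registry
            String topic = "orders";                       // hypothetical topic name

            // The registry serves the latest registered schema for a subject at
            // /subjects/{subject}/versions/latest. With the default subject naming
            // strategy, a topic's value schema uses the subject "<topic>-value".
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(registryUrl + "/subjects/" + topic + "-value/versions/latest"))
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // The response includes the schema ID, the version, and the Avro schema
            // JSON. If the version differs from the one the mapping metadata was
            // imported with, refresh the metadata or create a new mapping.
            System.out.println(response.body());
        }
    }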
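
The Boolean guideline above means that Boolean columns are exchanged as the integers 0 and 1 rather than as true/false literals. As a minimal sketch of a conforming producer, assuming a local broker and registry, a hypothetical topic named orders, and an Avro schema that models the Boolean column active as an int, the standard Java client with the Confluent Avro serializer could write such a record as follows:

    import java.util.Properties;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AvroRegistryProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");          // assumption: local broker
            props.put("schema.registry.url", "http://localhost:8081"); // assumption: local registry
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "io.confluent.kafka.serializers.KafkaAvroSerializer");

            // Hypothetical schema: the Boolean column is modeled as an int so the
            // record carries 0 (False) or 1 (True), per the guideline above.
            Schema schema = new Schema.Parser().parse(
                    "{\"type\":\"record\",\"name\":\"Order\",\"fields\":["
                  + "{\"name\":\"id\",\"type\":\"long\"},"
                  + "{\"name\":\"active\",\"type\":\"int\"}]}");

            GenericRecord record = new GenericData.Record(schema);
            record.put("id", 42L);
            record.put("active", 1);  // 1 = True, 0 = False

            try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("orders", record));
            }
        }
    }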

Comments

Florent GERBOLINO - December 02, 2022

What exactly is a "Kafka mapping"? Is it a mapping in which all sources and targets use a Kafka connection, or a mapping with at least one Kafka connection (in the source or target), or something else?
Why can't we configure Lookup transformations and Sorter transformations in a Kafka mapping?

Regards,

Florent

Informatica Documentation Team - December 07, 2022

Hi Florent Gerbolino,

Thank you for the query. Here’s a brief explanation about Kafka mappings and answers to the questions you asked.

A Kafka mapping contains a Source transformation, a Target transformation, or both, that use a Kafka connection to read from or write to Kafka. Each guideline applies to the read transformation, the write transformation, or both.

Kafka Connector does not support the Lookup and Sorter transformations. If you need support for these transformations, contact Informatica Global Customer Support.

Thanks, 

Informatica Documentation Team


Alessio Giordani - November 22, 2024

Hello documentation team,

I would add here that real-time processing is not supported for mappings that use Confluent schema registry mode (CCON-49134).

thanks,

Alessio

Informatica Documentation Team - November 23, 2024

Hi Alessio,

We're working to address your comments and will get back to you.