Effective in version 10.4.0, Informatica includes the following functionalities for technical preview:
Connect to a blockchain
For Data Engineering Integration, you can connect to a blockchain to use blockchain sources and targets in mappings that run on the Spark engine.
Databricks delta table as streaming mapping target
For Data Engineering Streaming, you can use a Databricks delta table as a target in a streaming mapping to ingest streaming data.
Dynamic streaming mapping
You can configure dynamic streaming mappings to change Kafka sources and targets at run time based on the parameters and rules that you define in a Confluent Schema Registry.
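Dynamic mappings of this kind resolve the current schema at run time rather than at design time. As a hedged illustration of the idea, the sketch below parses the JSON shape returned by the Confluent Schema Registry REST endpoint `GET /subjects/{subject}/versions/latest` and extracts the Avro field names; the subject name and response values are hypothetical samples, not output from a live registry.

```python
import json

# Hypothetical sample of a Schema Registry response for
# GET /subjects/{subject}/versions/latest -- the shape follows the
# public Confluent REST API; the values are illustrative only.
sample_response = json.dumps({
    "subject": "orders-value",
    "version": 3,
    "id": 101,
    "schema": '{"type":"record","name":"Order",'
              '"fields":[{"name":"id","type":"long"}]}'
})

def latest_schema_fields(response_text):
    """Extract field names from the Avro schema in a registry response."""
    body = json.loads(response_text)
    schema = json.loads(body["schema"])       # schema is embedded as a JSON string
    return [f["name"] for f in schema.get("fields", [])]

print(latest_schema_fields(sample_response))  # ['id']
```

A run-time lookup like this is what lets the mapping adjust its Kafka sources and targets as schemas evolve in the registry.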
HL7 input in intelligent structure models
Intelligent Structure Discovery can process HL7 inputs.
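For orientation, HL7 v2 messages are segment-oriented text: each segment starts with a three-character identifier (MSH, PID, and so on) and fields are separated by pipes. The sketch below, assuming the default delimiters and a hypothetical sample message, shows the structure Intelligent Structure Discovery has to recover; it is not the product's parser.

```python
# Minimal HL7 v2 structure sketch: segments separated by carriage
# returns or newlines, fields split on '|'. The message is a made-up
# example for illustration.
SAMPLE_HL7 = (
    "MSH|^~\\&|SENDING_APP|SENDING_FAC|RECEIVING_APP|RECEIVING_FAC|"
    "20200101120000||ADT^A01|MSG00001|P|2.5\r"
    "PID|1||12345^^^HOSP^MR||DOE^JANE"
)

def parse_segments(message):
    """Split an HL7 v2 message into {segment_id: field_list}."""
    segments = {}
    for line in message.replace("\r", "\n").splitlines():
        if not line:
            continue
        fields = line.split("|")
        segments[fields[0]] = fields[1:]   # segment id maps to its fields
    return segments

parsed = parse_segments(SAMPLE_HL7)
print(parsed["MSH"][7])  # 'ADT^A01', the message type
```

Note that real HL7 numbering treats the field separator itself as MSH-1, so the zero-based indexing here is purely illustrative.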
Python transformation on Databricks
For Data Engineering Integration, you can include the Python transformation in mappings configured to run on the Databricks Spark engine.
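To give a flavor of the kind of per-row logic a Python transformation typically carries, here is a hedged, standalone sketch; the port names (in_price, in_qty) are hypothetical, and the actual wiring of input and output ports is configured in the Developer tool, not in code like this.

```python
# Illustrative per-row logic of the sort you might express in a
# Python transformation. Port names are invented for this example.
def transform_row(in_price, in_qty):
    """Compute a line total, guarding against null (None) inputs."""
    if in_price is None or in_qty is None:
        return 0.0
    return round(in_price * in_qty, 2)

print(transform_row(19.99, 3))  # 59.97
print(transform_row(None, 3))   # 0.0
```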
Snowflake as a streaming mapping target
For Data Engineering Streaming, you can configure Snowflake as a target in a streaming mapping to write data to Snowflake.
Technical preview functionality is supported for evaluation purposes but is not warranted and is not production-ready. Informatica recommends that you use it in non-production environments only. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to based on changing market or technical circumstances. For more information, contact Informatica Global Customer Support.
Technical Preview Lifted
Effective in version 10.4.0, the following functionalities are lifted from technical preview:
Data preview on the Spark engine
For Data Engineering Integration, you can use the Developer tool to preview data within a mapping that runs on the Spark engine with Amazon EMR, Cloudera CDH, and Hortonworks HDP. Data preview for mappings configured to run with Azure HDInsight and MapR remains in technical preview.
PowerExchange for Amazon S3
For Data Engineering Integration, you can use intelligent structure models when importing a data object.
PowerExchange for Microsoft Azure Cosmos DB SQL API
For Data Engineering Integration, you can develop and run mappings in the Azure Databricks environment.
PowerExchange for Microsoft Azure SQL Data Warehouse
For Data Engineering Integration, you can use the following functionalities:
Create and run dynamic mappings.
Use full pushdown optimization when an ODBC connection is used to connect to the Microsoft Azure SQL Data Warehouse database.
SSL-enabled Kafka connections
For Data Engineering Streaming, you can use SSL-enabled Kafka connections for streaming mappings.
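An SSL-enabled Kafka connection needs the same handful of client-side settings regardless of the tool configuring it: a security protocol, a CA certificate, and (for mutual TLS) a client certificate and key. The sketch below assembles these as a plain dictionary; the key names follow the kafka-python client's conventions and the file paths are placeholders, so treat this as an assumption-laden sketch rather than the product's connection format.

```python
# Hedged sketch of client-side SSL settings for a Kafka connection.
# Key names follow the kafka-python client; paths are placeholders.
def ssl_kafka_config(bootstrap_servers, cafile, certfile, keyfile):
    """Assemble connection settings for an SSL-enabled Kafka client."""
    return {
        "bootstrap_servers": bootstrap_servers,
        "security_protocol": "SSL",   # encrypt and authenticate the broker link
        "ssl_cafile": cafile,         # CA that signed the broker certificate
        "ssl_certfile": certfile,     # client certificate (mutual TLS)
        "ssl_keyfile": keyfile,       # client private key
    }

config = ssl_kafka_config("broker1:9093", "ca.pem", "client.pem", "client.key")
print(config["security_protocol"])  # SSL
```

In the product itself, these values are captured on the Kafka connection object rather than in code.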