Effective in version 10.4.1, Informatica includes the following
functionalities for technical preview:
Cloudera Data Platform Integration
For Data Engineering Integration, you can use Cloudera Data
Platform (CDP) as a compute cluster to execute data engineering jobs in the
Hadoop environment. You can use Cloudera CDP when you run data engineering jobs
on the Spark engine. Cloudera CDP is not supported on the Blaze engine.
Cloudera CDP uses a base cluster and workload clusters to
execute data engineering jobs. This architecture allows you to deploy workloads
and share data among components by utilizing a shared catalog, unified
security, consistent governance, and data life cycle management.
You can use Cloudera CDP clusters when you run a mapping in the
Hadoop environment with the following connections:
PowerExchange for Amazon Redshift
PowerExchange for Amazon S3
PowerExchange for Microsoft Azure Blob Storage
PowerExchange for Microsoft Azure CosmosDB SQL API
PowerExchange for Microsoft Azure Data Lake Storage Gen1
PowerExchange for Microsoft Azure Data Lake Storage Gen2
PowerExchange for Microsoft Azure SQL Data Warehouse
PowerExchange for Google Cloud Storage
PowerExchange for Google BigQuery
PowerExchange for Google Analytics
PowerExchange for Google Cloud Spanner
Sqoop
You can also use Cloudera CDP when you run a mapping that uses a
PowerExchange for HDFS connection to read from a source in the native
environment or the Hadoop environment.
For more information, see the
Informatica® Data Engineering 10.4.1 Integration Guide.
Rollover parameters for Amazon S3 and ADLS Gen2 targets
For Data Engineering Streaming, you can configure separate rollover
parameters for Amazon S3 and ADLS Gen2 targets to set the rollover time or
rollover size for each target.
SAP S/4HANA resource
For Enterprise Data Catalog, you can extract metadata from the
SAP S/4HANA data source. For more information, see the
Informatica® 10.4.1 Enterprise Data Catalog Scanner
Configuration Guide.
Service and Resource Log Collection
Effective in version 10.4.1, you can collect logs for resources
based on the resource name. You can also collect logs for services such as
HBase, Solr, ingestion, and the Catalog Service. You can collect the logs using
the infacmd CLI, REST APIs, or the Catalog Administrator. For more information,
see the Enterprise Data Catalog Utilities section in the
Informatica® 10.4.1 Catalog Administrator Guide.
Uninterrupted Catalog Backup
Effective in version 10.4.1, you can back up the catalog without
disabling the Catalog Service. You can back up the catalog using the infacmd
CLI, REST APIs, or the Catalog Administrator. For more information, see the
Enterprise Data Catalog Utilities section in the
Informatica® 10.4.1 Catalog Administrator Guide.
Technical preview functionality is supported for evaluation purposes but is unwarranted and is not supported in production environments or any environment that you plan to push to production. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.
Technical Preview Lifted
Effective in version 10.4.1, the following functionalities are lifted
from technical preview:
Python transformation on Databricks
For Data Engineering Integration, you can include the Python
transformation in mappings configured to run on the Databricks Spark engine.
SAP BW, SAP BW/4HANA, and IBM InfoSphere DataStage resources
For Enterprise Data Catalog, you can extract metadata from the
SAP BW, SAP BW/4HANA, and IBM InfoSphere DataStage sources. For more
information, see the
Informatica® 10.4.1 Enterprise Data Catalog Scanner
Configuration Guide.