Table of Contents

  1. Preface
  2. Part 1: Version 10.5.5 - 10.5.5.x
  3. Part 2: Version 10.5.4 - 10.5.4.x
  4. Part 3: Version 10.5.3 - 10.5.3.x
  5. Part 4: Version 10.5.2 - 10.5.2.1.x
  6. Part 5: Version 10.5.1 - 10.5.1.1
  7. Part 6: Versions 10.5 - 10.5.0.1
  8. Part 7: Versions 10.4.1 - 10.4.1.3
  9. Part 8: Versions 10.4 - 10.4.0.2
  10. Part 9: Versions 10.2.2 - 10.2.2 HotFix 1
  11. Part 10: Version 10.2.1
  12. Part 11: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.5.1)

Technical Preview Support

Technical Preview Initiated

Effective in version 10.4.1, Informatica includes the following functionalities for technical preview:
Cloudera Data Platform Integration
For Data Engineering Integration, you can use Cloudera Data Platform (CDP) as a compute cluster to execute data engineering jobs in the Hadoop environment. You can use Cloudera CDP when you run data engineering jobs on the Spark engine. Cloudera CDP is not supported on the Blaze engine.
Cloudera CDP uses a base cluster and workload clusters to execute data engineering jobs. This architecture allows you to deploy workloads and share data among components by utilizing a shared catalog, unified security, consistent governance, and data life cycle management.
You can use Cloudera CDP clusters when you run a mapping in the Hadoop environment with the following connections:
  • PowerExchange for Amazon Redshift
  • PowerExchange for Amazon S3
  • PowerExchange for Microsoft Azure Blob Storage
  • PowerExchange for Microsoft Azure CosmosDB SQL API
  • PowerExchange for Microsoft Azure Data Lake Storage Gen1
  • PowerExchange for Microsoft Azure Data Lake Storage Gen2
  • PowerExchange for Microsoft Azure SQL Data Warehouse
  • PowerExchange for Google Cloud Storage
  • PowerExchange for Google BigQuery
  • PowerExchange for Google Analytics
  • PowerExchange for Google Cloud Spanner
  • Sqoop
You can also use Cloudera CDP when you run a mapping that uses a PowerExchange for HDFS connection to read from a source in the native environment or the Hadoop environment.
For more information, see the Informatica® Data Engineering 10.4.1 Integration Guide.
Rollover parameters for Amazon S3 and ADLS Gen2 targets
For Data Engineering Streaming, you can use different rollover parameters for Amazon S3 and ADLS Gen2 targets to decide the rollover time or size for each target.
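The idea behind per-target rollover parameters — closing the current target file and starting a new one once a time or size threshold is reached — can be sketched as follows. This is a conceptual illustration only; the class and parameter names are hypothetical and do not reflect Informatica's actual configuration properties.

```python
import time


class RolloverPolicy:
    """Conceptual sketch: roll a target file over when either a size
    threshold or a time threshold is reached, whichever comes first."""

    def __init__(self, max_bytes, max_seconds):
        self.max_bytes = max_bytes      # size-based rollover threshold
        self.max_seconds = max_seconds  # time-based rollover threshold
        self.bytes_written = 0
        self.opened_at = time.monotonic()

    def record_write(self, n_bytes):
        # Track how much has been written to the current target file.
        self.bytes_written += n_bytes

    def should_roll(self):
        # Roll over if either threshold has been crossed.
        elapsed = time.monotonic() - self.opened_at
        return (self.bytes_written >= self.max_bytes
                or elapsed >= self.max_seconds)

    def roll(self):
        # Reset counters for the next target file.
        self.bytes_written = 0
        self.opened_at = time.monotonic()


# Each target can carry its own thresholds, e.g. an Amazon S3 target
# rolling mainly on size and an ADLS Gen2 target rolling mainly on time
# (values here are illustrative):
s3_policy = RolloverPolicy(max_bytes=128 * 1024 * 1024, max_seconds=600)
adls_policy = RolloverPolicy(max_bytes=1024 ** 3, max_seconds=60)
```

Keeping one policy object per target is what allows the rollover time or size to differ between the Amazon S3 and ADLS Gen2 targets of the same streaming mapping.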
SAP S/4HANA resource
For Enterprise Data Catalog, you can extract metadata from the SAP S/4HANA data source. For more information, see the Informatica® 10.4.1 Enterprise Data Catalog Scanner Configuration Guide.
Service and Resource Log Collection
Effective in version 10.4.1, you can collect logs for resources based on the resource name. You can also collect logs for services such as HBase, Solr, ingestion, and the Catalog Service. You can collect the logs using the infacmd CLI, REST APIs, or the Catalog Administrator. For more information, see the Enterprise Data Catalog Utilities section in the Informatica® 10.4.1 Catalog Administrator Guide.
Uninterrupted Catalog Backup
Effective in version 10.4.1, you can back up the catalog without disabling the Catalog Service. You can back up the catalog using the infacmd CLI, REST APIs, or the Catalog Administrator. For more information, see the Enterprise Data Catalog Utilities section in the Informatica® 10.4.1 Catalog Administrator Guide.
Technical preview functionality is supported for evaluation purposes but is not warranted, and it is not supported in production environments or any environment that you plan to push to production. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to, depending on changing market or technical circumstances. For more information, contact Informatica Global Customer Support.

Technical Preview Lifted

Effective in version 10.4.1, the following functionalities are lifted from technical preview:
Python transformation on Databricks
For Data Engineering Integration, you can include the Python transformation in mappings configured to run on the Databricks Spark engine.
SAP BW, SAP BW/4HANA, and IBM InfoSphere DataStage resources
For Enterprise Data Catalog, you can extract metadata from the SAP BW, SAP BW/4HANA, and IBM InfoSphere DataStage sources. For more information, see the Informatica® 10.4.1 Enterprise Data Catalog Scanner Configuration Guide.
