Table of Contents

  1. Preface
  2. Part 1: Versions 10.5 - 10.5.0.1
  3. Part 2: Versions 10.4.1 - 10.4.1.3
  4. Part 3: Versions 10.4 - 10.4.0.2
  5. Part 4: Versions 10.2.2 - 10.2.2 HotFix 1
  6. Part 5: Version 10.2.1
  7. Part 6: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.0.1)

Technical Preview

Read this section to learn about technical preview functionality that is initiated or lifted in version 10.5.

Technical Preview Initiated

Effective in version 10.5, the following functionality is available for technical preview:
Data Flow Analytics
You can use Data Flow Analytics to view analytical data about PowerCenter mappings in the catalog. You can use the data to improve mapping quality and efficiency in your enterprise. You can also view mappings as diagrams for easier understanding.
The analytical data includes details about the following mapping metrics:
  • Similar mappings
  • Duplicate mappings
  • Reusable mapplets, transformations, and user functions for complex expressions
The data also includes mapping group details and mapping discovery details that provide additional insight into the mapping data. You can download the analytical data as reports for further analysis or enhance the reports with third-party reporting solutions.
For more information, see the Enterprise Data Catalog 10.5 User Guide.
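Duplicate detection of this kind generally reduces to comparing normalized representations of mapping logic. The following sketch is a minimal illustration of that idea and is not the Data Flow Analytics algorithm; the inline mapping structures are hypothetical stand-ins for exported PowerCenter mappings.

```python
import hashlib
import json
from collections import defaultdict

# Hypothetical exported mapping structures; real PowerCenter mappings
# would come from an XML export or the catalog, not inline dictionaries.
mappings = {
    "m_load_orders":    {"sources": ["ORDERS"], "transforms": ["EXP_trim", "FIL_active"]},
    "m_load_orders_v2": {"sources": ["ORDERS"], "transforms": ["EXP_trim", "FIL_active"]},
    "m_load_customers": {"sources": ["CUSTOMERS"], "transforms": ["EXP_upper"]},
}

def fingerprint(mapping: dict) -> str:
    """Hash a canonical form of the mapping so that mappings with
    identical logic produce identical fingerprints."""
    canonical = json.dumps(mapping, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

groups = defaultdict(list)
for name, mapping in mappings.items():
    groups[fingerprint(mapping)].append(name)

for names in groups.values():
    if len(names) > 1:
        print("Duplicate mappings:", ", ".join(names))
```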
Graphical Informatica Installer
You can use a graphical user interface-based installer to install PowerCenter and Data Engineering Integration on Linux platforms. The installer provides an easy, interactive way to run the Pre-installation (i10Pi) System Check Tool, install the Informatica domain, and configure the domain and application services.
Informatica Domain on Kubernetes Cluster
You can run the Informatica domain on a Microsoft Azure Kubernetes cluster. Auto-scaling of active-active services, such as the Data Integration Service and the PowerCenter Integration Service, is supported.
For more information, see the Install PowerCenter on Kubernetes with Informatica Deployment Manager or the Install Data Engineering Integration on Kubernetes with Informatica Deployment Manager.
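On Kubernetes, auto-scaling of this kind is conventionally expressed as a HorizontalPodAutoscaler. The sketch below uses the official kubernetes Python client to show the general shape of such a configuration; the deployment name, namespace, and thresholds are illustrative assumptions, not values from the Informatica deployment guides.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., for an AKS cluster).
config.load_kube_config()

# Hypothetical deployment name for an active-active integration service;
# the actual resource names come from Informatica Deployment Manager.
target = client.V1CrossVersionObjectReference(
    api_version="apps/v1", kind="Deployment", name="pcis-deployment"
)

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="pcis-autoscaler"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=target,
        min_replicas=2,                        # keep the service active-active
        max_replicas=6,
        target_cpu_utilization_percentage=70,  # scale out above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="informatica", body=hpa
)
```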
Intelligent structure models based on an XSD file
When you create an intelligent structure model, you can select an XSD file on which to base the model.
Mapping audits on the Blaze engine
The Blaze engine can run mappings with audit rules and conditions.
For more information, see the Data Engineering Integration 10.5 User Guide.
PowerExchange for Microsoft Azure Data Lake Storage Gen2
The following functionality is available for technical preview:
  • You can read objects stored in subdirectories in Microsoft Azure Data Lake Storage Gen2 in the native environment or on the Spark engine.
  • You can use wildcard characters to specify the source directory name or the source file name when you read data from a flat file or complex file in the native environment or on the Spark engine.
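The wildcard syntax follows the familiar glob style, where * matches any run of characters and ? matches a single character. As a minimal illustration of those semantics, using Python's standard fnmatch module rather than the PowerExchange connector itself:

```python
from fnmatch import fnmatch

# Hypothetical file names as they might appear in an ADLS Gen2 directory.
files = ["sales_2021_01.csv", "sales_2021_02.csv", "inventory_2021.csv"]

# '*' matches any sequence of characters; '?' matches exactly one.
pattern = "sales_2021_*.csv"
matches = [f for f in files if fnmatch(f, pattern)]
print(matches)  # ['sales_2021_01.csv', 'sales_2021_02.csv']
```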
Technical preview functionality is supported for evaluation purposes but is unwarranted and is not supported in production environments or any environment that you plan to push to production. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.

Technical Preview Lifted

Effective in version 10.5, the following functionality is lifted from technical preview:
Catalog Backup
For Enterprise Data Catalog, you can back up the catalog without disabling the Catalog Service. You can back up the catalog using the REST APIs, the Enterprise Data Catalog interface, or the Catalog Administrator. For more information, see the Catalog Backup and Log Collection Utilities section in the Informatica 10.5 Catalog Administrator Guide or the Informatica 10.5 Enterprise Data Catalog User Guide.
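Because the backup is exposed through REST APIs, it can be scripted. The sketch below uses Python's requests library against a hypothetical endpoint path; the host, URL path, credentials, and payload are all assumptions, and the real values are documented in the Catalog Backup and Log Collection Utilities section.

```python
import requests

# Illustrative values only; consult the Catalog Backup and Log Collection
# Utilities documentation for the actual endpoint and options.
BASE_URL = "https://catalog.example.com:9085"
ENDPOINT = f"{BASE_URL}/access/2/catalog/utils/backup"  # hypothetical path

response = requests.post(
    ENDPOINT,
    auth=("Administrator", "password"),  # placeholder credentials
    json={"includeSearchIndex": True},   # hypothetical option
    verify=False,                        # only for self-signed test setups
)
response.raise_for_status()
print("Backup started:", response.json())
```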
Cloudera CDP integration
For Data Engineering Integration, you can use Cloudera Data Platform (CDP) as a compute cluster to run data engineering jobs on the Spark engine in the Hadoop environment. Cloudera CDP uses a base cluster and workload clusters to run the jobs. This architecture lets you deploy workloads and share data among components through a shared catalog, unified security, consistent governance, and data life cycle management.
You can use Cloudera CDP when you run a mapping in the Hadoop environment with the following connections:
  • PowerExchange for Google Cloud Storage
  • PowerExchange for Google BigQuery
  • PowerExchange for Google Analytics
  • PowerExchange for Google Cloud Spanner
For more information, see the Data Engineering 10.5 Integration Guide.
Compact view
You can choose to display the compact view of the Lineage and Impact view. The compact view summarizes the lineage and impact diagram at the resource level.
For more information, see the "View Lineage and Impact" chapter in the Informatica 10.5 Enterprise Data Catalog User Guide.
Rollover parameters for Amazon S3 and Azure Data Lake Storage Gen2 targets
For Data Engineering Streaming, you can use different rollover parameters for Amazon S3 and Azure Data Lake Storage Gen2 targets to determine the rollover time or size for each target. For more information, see the Data Engineering Streaming 10.5 User Guide.
SAP S/4HANA resource
For Enterprise Data Catalog, you can extract metadata from the SAP S/4HANA data source. For more information, see the Enterprise Data Catalog 10.5 Scanner Configuration Guide.
Service and Resource Log Collection
For Enterprise Data Catalog, you can collect logs for resources. Additionally, you can collect logs for the following services:
  • Enterprise Data Catalog services such as Catalog Service, Informatica Cluster Service, EBF history, and PhoneHome
  • Apache Solr
  • MongoDB
  • Nomad orchestration service and scan jobs
  • Apache Zookeeper
  • Postgres
You can collect the logs using the REST APIs, the Enterprise Data Catalog interface, or the Catalog Administrator. For more information, see the Catalog Backup and Log Collection Utilities section in the Informatica 10.5 Catalog Administrator Guide or the Informatica 10.5 Enterprise Data Catalog User Guide.
Unique Key Inference
You can generate unique key candidates from the columns in the source data object. You can view the unique key inference results to identify columns that contain duplicate values and to understand data quality issues. You can accept or reject the inferred unique keys. After you accept or reject an inferred unique key, you can reset it to restore its inferred status.
For more information, see the "View Assets" chapter in the Informatica 10.5 Enterprise Data Catalog User Guide.
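Conceptually, a column is a unique key candidate when its distinct-value count equals the row count. The following sketch is a generic illustration of that idea in pandas, under an assumed toy data set; it is not the inference logic that Enterprise Data Catalog uses.

```python
import pandas as pd

# Hypothetical source data object.
df = pd.DataFrame({
    "customer_id": [101, 102, 103, 104],
    "email":       ["a@x.com", "b@x.com", "a@x.com", "c@x.com"],
    "country":     ["US", "US", "DE", "DE"],
})

# A column qualifies as a unique key candidate when every value is distinct.
for column in df.columns:
    distinct, total = df[column].nunique(), len(df)
    if distinct == total:
        print(f"{column}: unique key candidate")
    else:
        print(f"{column}: {total - distinct} duplicate value(s)")
```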
