Table of Contents

  1. Preface
  2. Part 1: Versions 10.5.6 - 10.5.6.x
  3. Part 2: Versions 10.5.5 - 10.5.5.x
  4. Part 3: Versions 10.5.4 - 10.5.4.x
  5. Part 4: Versions 10.5.3 - 10.5.3.x
  6. Part 5: Versions 10.5.2 - 10.5.2.1.x
  7. Part 6: Versions 10.5.1 - 10.5.1.1
  8. Part 7: Versions 10.5 - 10.5.0.1
  9. Part 8: Versions 10.4.1 - 10.4.1.3
  10. Part 9: Versions 10.4 - 10.4.0.2
  11. Part 10: Versions 10.2.2 - 10.2.2 HotFix 1
  12. Part 11: Version 10.2.1
  13. Part 12: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.6.1)

PowerExchange Adapters for Informatica

PowerExchange Adapters for Informatica

This section describes new Informatica adapter features in version 10.2.

PowerExchange for Amazon Redshift

Effective in version 10.2, PowerExchange for Amazon Redshift includes the following new features:
  • You can read data from or write data to the Amazon S3 buckets in the following regions:
    • Asia Pacific (Mumbai)
    • Asia Pacific (Seoul)
    • Canada (Central)
    • China (Beijing)
    • EU (London)
    • US East (Ohio)
  • You can run Amazon Redshift mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which significantly increases performance.
  • You can use AWS Identity and Access Management (IAM) authentication to securely control access to Amazon S3 resources (see the sketch at the end of this section).
  • You can connect to Amazon Redshift Clusters available in Virtual Private Cloud (VPC) through VPC endpoints.
  • You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR cluster.
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide.
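
In Informatica, IAM authentication is configured through connection properties rather than code. As a rough illustration of the underlying mechanism, the following minimal Python sketch shows IAM role-based access to Amazon S3 with boto3; it is not the PowerExchange adapter itself, and the bucket name is a hypothetical placeholder.

    # Minimal sketch (not the PowerExchange adapter): IAM authentication
    # for Amazon S3 access. Assumes the host, such as a cluster node, has
    # an IAM role attached; boto3 resolves the role's temporary credentials
    # automatically, so no access key appears in code or configuration.
    import boto3

    s3 = boto3.client("s3")

    # List staging objects in a bucket that the role's policy permits.
    # The bucket name is a hypothetical placeholder.
    response = s3.list_objects_v2(Bucket="example-redshift-staging", MaxKeys=10)
    for obj in response.get("Contents", []):
        print(obj["Key"])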

PowerExchange for Amazon S3

Effective in version 10.2, PowerExchange for Amazon S3 includes the following new features:
  • You can read data from or write data to the Amazon S3 buckets in the following regions:
    • Asia Pacific (Mumbai)
    • Asia Pacific (Seoul)
    • Canada (Central)
    • China (Beijing)
    • EU (London)
    • US East (Ohio)
  • You can compress data in the following formats when you read data from or write data to Amazon S3 in the native environment and Spark engine:

    Compression format    Read    Write
    Bzip2                 Yes     Yes
    Deflate               No      Yes
    Gzip                  Yes     Yes
    Lzo                   Yes     Yes
    None                  Yes     Yes
    Snappy                No      Yes
  • You can select the type of source from which you want to read data in the Source Type option under the advanced properties for an Amazon S3 data object read operation. You can select Directory or File source types.
  • You can select the format of the data source in the Resource Format option under the Amazon S3 data object properties. You can read data from the following source formats:
    • Binary
    • Flat
    • Avro
    • Parquet
  • You can connect to Amazon S3 buckets available in Virtual Private Cloud (VPC) through VPC endpoints.
  • You can run Amazon S3 mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine.
  • You can choose to overwrite existing files by selecting the Overwrite File(s) If Exists option in the Amazon S3 data object write operation properties (see the PySpark sketch at the end of this section).
  • You can use AWS Identity and Access Management (IAM) authentication to securely control access to Amazon S3 resources.
  • You can filter the metadata to optimize the search performance in the Object Explorer view.
  • You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR cluster.
For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.
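
Several of the behaviors listed above (directory sources, compression on write, and overwriting existing files) have direct open-source Spark equivalents, which may help build intuition for what runs on the Spark engine. The following PySpark sketch is illustrative only: it is not the Informatica mapping itself, the bucket and paths are hypothetical, and it assumes the cluster is already configured for S3 access.

    # Minimal PySpark sketch: read flat (CSV) sources from an S3 directory
    # and write Gzip-compressed Parquet output, overwriting existing files.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("s3-sketch").getOrCreate()

    # Directory source type: read every file under the input prefix.
    # Bucket and paths are hypothetical placeholders.
    df = spark.read.option("header", "true").csv("s3a://example-bucket/input/")

    # Gzip is readable and writable per the table above; mode("overwrite")
    # parallels the Overwrite File(s) If Exists option.
    df.write.mode("overwrite") \
        .option("compression", "gzip") \
        .parquet("s3a://example-bucket/output/")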

PowerExchange for HBase

Effective in version 10.2, PowerExchange for HBase contains the following new features:
  • You can use PowerExchange for HBase to read from sources and write to targets stored in the WASB file system on Azure HDInsight (see the sketch at the end of this section).
  • You can associate a cluster configuration with an HBase connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for HBase 10.2 User Guide.
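
For readers unfamiliar with WASB, the following sketch shows the URI scheme that the WASB file system uses on Azure HDInsight. It is a generic PySpark read rather than PowerExchange for HBase; the container, storage account, and path are hypothetical, and it assumes the cluster already holds the storage account credentials.

    # Minimal sketch of WASB addressing on Azure HDInsight:
    # wasb://<container>@<account>.blob.core.windows.net/<path>
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wasb-sketch").getOrCreate()

    # Container, account, and path are hypothetical placeholders.
    path = "wasb://example-container@exampleaccount.blob.core.windows.net/data/"
    df = spark.read.text(path)
    print(df.count())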

PowerExchange for HDFS

Effective in version 10.2, you can associate a cluster configuration with an HDFS connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for HDFS 10.2 User Guide.

PowerExchange for Hive

Effective in version 10.2, you can associate a cluster configuration with a Hive connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for Hive 10.2 User Guide.

PowerExchange for MapR-DB

Effective in version 10.2, PowerExchange for MapR-DB contains the following new features:
  • You can run MapR-DB mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which significantly increases performance.
  • You can configure dynamic partitioning for MapR-DB mappings that you run on the Spark engine.
  • You can associate a cluster configuration with an HBase connection for MapR-DB. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for MapR-DB 10.2 User Guide.

PowerExchange for Microsoft Azure Blob Storage

Effective in version 10.2, you can read data from or write data to a subdirectory in Microsoft Azure Blob Storage. Use the Blob Container Override and Blob Name Override fields to specify the subdirectory to read from or write to.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 User Guide.
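
Blob Storage has no true directories; a "subdirectory" is a prefix embedded in the blob name, which is effectively what the override fields change. The following sketch shows the idea with the azure-storage-blob Python SDK rather than the PowerExchange adapter; the connection string, container, and blob names are hypothetical placeholders.

    # Minimal sketch (azure-storage-blob SDK): write to and read from a
    # virtual subdirectory by prefixing the blob name.
    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string; supply a real one to run this.
    service = BlobServiceClient.from_connection_string("<connection-string>")
    container = service.get_container_client("example-container")

    # Write under a virtual subdirectory, analogous to an override value
    # that includes a path prefix.
    container.upload_blob(name="subdir/output.csv",
                          data=b"id,name\n1,a\n",
                          overwrite=True)

    # Read the blob back from the same virtual path.
    print(container.download_blob("subdir/output.csv").readall().decode())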

PowerExchange for Microsoft Azure SQL Data Warehouse

Effective in version 10.2, you can run Microsoft Azure SQL Data Warehouse mappings in a Hadoop environment on Kerberos-enabled clusters.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2 User Guide.

PowerExchange for Salesforce

Effective in version 10.2, you can use version 39 of the Salesforce API to create a Salesforce connection and access Salesforce objects.
For more information, see the Informatica PowerExchange for Salesforce 10.2 User Guide.
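
Outside of Informatica, pinning a client to a specific Salesforce API version looks like the following sketch, which uses the open-source simple_salesforce library rather than the PowerExchange adapter; the credentials are placeholders.

    # Minimal sketch (simple_salesforce, not PowerExchange for Salesforce):
    # connect with API version 39.0 and query a standard object.
    from simple_salesforce import Salesforce

    sf = Salesforce(
        username="user@example.com",    # placeholder credentials
        password="<password>",
        security_token="<security-token>",
        version="39.0",                 # request version 39 of the API
    )

    result = sf.query("SELECT Id, Name FROM Account LIMIT 5")
    for record in result["records"]:
        print(record["Id"], record["Name"])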
