Table of Contents


  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Cassandra
  4. PowerExchange for DataSift
  5. PowerExchange for Facebook
  6. PowerExchange for Greenplum
  7. PowerExchange for HBase
  8. PowerExchange for HDFS
  9. PowerExchange for Hive
  10. PowerExchange for JD Edwards EnterpriseOne
  11. PowerExchange for LDAP
  12. PowerExchange for LinkedIn
  13. PowerExchange for Microsoft Azure Blob Storage
  14. PowerExchange for Microsoft Azure SQL Data Warehouse
  15. PowerExchange for Microsoft Dynamics CRM
  16. PowerExchange for MongoDB
  17. PowerExchange for Netezza
  18. PowerExchange for OData
  19. PowerExchange for Salesforce
  20. PowerExchange for SAP NetWeaver
  21. PowerExchange for Teradata Parallel Transporter API
  22. PowerExchange for Twitter
  23. PowerExchange for Web Content-Kapow Katalyst

PowerExchange Adapters for Informatica Release Notes

PowerExchange for HDFS Known Limitations (10.1)

The following table describes known limitations:
CR 461460
Multiple partitions do not work in a high availability cluster when you configure the NameService URI in the HDFS connection and run the mapping in the native environment and on the Blaze engine.
Workaround: Use the active NameNode's fs.defaultFS value in place of the NameService URI in the HDFS connection.
Impact: NameNode high availability failover capabilities are not honored because the mapping runs using the active NameNode URI.
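For example, assuming the NameService URI is hdfs://nameservice1 and the active NameNode runs on a host named nn01.example.com (the host name and the port are illustrative placeholders, not values from the product documentation), the HDFS connection would use the second value instead of the first:
  NameService URI:              hdfs://nameservice1
  Active NameNode fs.defaultFS: hdfs://nn01.example.com:8020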
CR 460921
For the HDInsights distribution, read and write operations on remote complex files do not work in Hive pushdown mode.
Workaround 1: Set fs.defaultFS to HDFS in Hadoop advanced properties.
Impact: Job history is not available for mappings that use the Hadoop connection with fs.defaultFS set to HDFS. You can create two Hadoop connections: use one with fs.defaultFS set to HDFS for complex file mappings, and use the other for the rest of the mappings.
Workaround 2: Set fs.defaultFS to HDFS in hive-site.xml.
Impact: Job history is not available for all the mappings run from the node where fs.defaultFS is set to HDFS in hive-site.xml.
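For Workaround 2, the hive-site.xml entry would resemble the following minimal sketch. The NameNode host and port are placeholders that depend on your cluster; this illustrates the property format only and is not a value taken from the product documentation.
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<namenode_host>:8020</value>
  </property>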
CR 460856
When you configure multiple partitions to read data from a remote Avro or Parquet file, the Data Integration Service and Blaze engine read data only from a single partition. This issue occurs when you use the MapR distribution.
CR 460497
A partitioned mapping that uses a local complex file object reads from and writes only to a single partition when you run the mapping on the Blaze engine.
CR 460280
When you select the Snappy compression codec in the target object, the mapping fails with a Snappy error. This issue occurs when you use the MapR and BigInsights distributions.
Workaround: To use the Snappy codec, replace the default Snappy .jar file in the Informatica server installation and in the Hadoop environment with the updated version. The updated snappy-java-1.1.1.7.jar file is available at the following link: http://mvnrepository.com/artifact/org.xerial.snappy/snappy-java/1.1.1.7
On the machine where you installed the Informatica server, replace the existing .jar file with snappy-java-1.1.1.7.jar at the following path: <Server_Installation>\services\shared\hadoop\<Hadoop_Distribution>\lib
On the machines where you installed and run Hadoop, replace the existing .jar file with snappy-java-1.1.1.7.jar at the following path: <Hadoop_rpm>\services\shared\hadoop\<Hadoop_Distribution>\lib
CR 458668
The object import for the Parquet complex file fails if Unicode characters are present in the file content or the Filename port.
CR 458276
A mapping fails when you read from or write to a local Avro file.
Cause: The fs.defaultFS property is set in one of the configuration files at <Informatica installation directory>/services/shared/hadoop/<hadoop distribution name>/conf.
Workaround: Check the configuration files at <Informatica installation directory>/services/shared/hadoop/<hadoop distribution name>/conf and remove the fs.defaultFS property.
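The entry to look for in those configuration files typically resembles the following property element; the value shown here is an illustrative placeholder, and removing the entire element is what the workaround describes.
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<namenode_host>:8020</value>
  </property>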
CR 458147
A complex file mapping that writes a sequence file to the local system fails on the Blaze engine when the file name does not have the .seq extension.
Workaround: Specify the extension as .seq with the file name in the File Name property of the target object.
CR 457648
A complex file mapping with dynamic partitioning enabled fails when you run the mapping on the Blaze engine and the cluster is Kerberos enabled. In no partitioning mode, the SPN user privileges are considered.
CR 457526
When you run a mapping on the Blaze engine, the tasklet might not run on the node where the data resides, even if that node is free.
CR 455123
When you read from or write to a local complex file, the mapping fails if dynamic partitioning is enabled.
CR 452012
Parameter properties of the complex file object are not displayed correctly in the Parameter Usage dialog box when you create a workflow.
CR 400227
When you write data to a local sequence file with a .seq extension, the Data Integration Service writes each value in the key-value pairs into separate text files. This issue occurs when you run the mapping in a Hive environment.
Workaround: Use a different extension for the sequence file.