Table of Contents

  1. Informatica Bug Tracking System Change
  2. Upgrade Support
  3. PowerExchange for Amazon Redshift
  4. PowerExchange for Amazon S3
  5. PowerExchange for Cassandra
  6. PowerExchange for DataSift
  7. PowerExchange for Facebook
  8. PowerExchange for Greenplum
  9. PowerExchange for HBase
  10. PowerExchange for HDFS
  11. PowerExchange for Hive
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for LDAP
  14. PowerExchange for LinkedIn
  15. PowerExchange for MapR-DB
  16. PowerExchange for Microsoft Azure Blob Storage
  17. PowerExchange for Microsoft Azure SQL Data Warehouse
  18. PowerExchange for Microsoft Dynamics CRM
  19. PowerExchange for MongoDB
  20. PowerExchange for Netezza
  21. PowerExchange for OData
  22. PowerExchange for Salesforce
  23. PowerExchange for SAP NetWeaver
  24. PowerExchange for Tableau
  25. PowerExchange for Teradata Parallel Transporter API
  26. PowerExchange for Twitter
  27. PowerExchange for Web Content-Kapow Katalyst
  28. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for HDFS Fixed Limitations (10.1.1)

Review the Release Notes of previous releases for information about previous fixed limitations.
The following fixed limitations are listed by bug number:

PLAT-13749: A complex file mapping with dynamic partitioning enabled fails when you run it on the Blaze engine against a Kerberos-enabled cluster. In no-partitioning mode, the SPN user privileges are used. (INFA457648)

OCON-528: When you run a mapping on the Blaze engine, a tasklet might not run on the cluster node that holds the data, even when that node is free. (INFA457526)

OCON-510: A complex file mapping that writes a sequence file to the local system fails on the Blaze engine when the file name does not have the .seq extension. (INFA458147)

461460: Multiple partitions do not work in a high availability cluster when you configure the NameService URI in the HDFS connection and run the mapping in the native environment and on the Blaze engine.

460856: When you configure multiple partitions to read data from a remote Avro or Parquet file, the Data Integration Service and the Blaze engine read data from only a single partition. This issue occurs when you use the MapR distribution.

458276: A mapping fails when you read from or write to a local Avro file.
Cause: The fs.defaultFS property is set in one of the configuration files at <Informatica installation directory>/services/shared/hadoop/<hadoop distribution name>/conf.
Workaround: Check the configuration files at <Informatica installation directory>/services/shared/hadoop/<hadoop distribution name>/conf and remove the fs.defaultFS property.
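Locating the offending file by hand can be tedious when the conf directory holds several XML files. The following sketch is an illustration only, not part of the Informatica product: it scans a directory of Hadoop-style XML configuration files and reports which ones define the fs.defaultFS property, so you know which file to edit. The function name and the directory layout it assumes are illustrative.

```python
import os
import xml.etree.ElementTree as ET

def find_fs_defaultfs(conf_dir):
    """Return paths of XML config files under conf_dir that set fs.defaultFS."""
    offenders = []
    for name in sorted(os.listdir(conf_dir)):
        if not name.endswith(".xml"):
            continue
        path = os.path.join(conf_dir, name)
        try:
            root = ET.parse(path).getroot()
        except ET.ParseError:
            continue  # skip files that are not well-formed XML
        # Hadoop config files list <property><name>...</name>...</property> entries
        for prop in root.iter("property"):
            pname = prop.findtext("name")
            if pname is not None and pname.strip() == "fs.defaultFS":
                offenders.append(path)
                break
    return offenders
```

For example, calling find_fs_defaultfs("<Informatica installation directory>/services/shared/hadoop/<hadoop distribution name>/conf") would list each file there that sets the property; remove the fs.defaultFS entry from those files to apply the workaround.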