Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for Google Analytics
  6. PowerExchange for Google BigQuery
  7. PowerExchange for Google Cloud Spanner
  8. PowerExchange for Google Cloud Storage
  9. PowerExchange for Greenplum
  10. PowerExchange for HBase
  11. PowerExchange for HDFS
  12. PowerExchange for Hive
  13. PowerExchange for JDBC V2
  14. PowerExchange for JD Edwards EnterpriseOne
  15. PowerExchange for Kudu
  16. PowerExchange for LDAP
  17. PowerExchange for Microsoft Azure Blob Storage
  18. PowerExchange for Microsoft Azure Cosmos DB SQL API
  19. PowerExchange for Microsoft Azure Data Lake Storage Gen1
  20. PowerExchange for Microsoft Azure Data Lake Storage Gen2
  21. PowerExchange for Microsoft Azure SQL Data Warehouse
  22. PowerExchange for Microsoft Dynamics CRM
  23. PowerExchange for MongoDB
  24. PowerExchange for Netezza
  25. PowerExchange for OData
  26. PowerExchange for Salesforce
  27. PowerExchange for SAP NetWeaver
  28. PowerExchange for Snowflake
  29. PowerExchange for Tableau
  30. PowerExchange for Tableau V3
  31. PowerExchange for Teradata Parallel Transporter API
  32. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Amazon S3 (10.5)

Third-Party Fixed Issues

The following issues are fixed in this release:
OCON-25553
When you run an Amazon S3 mapping on the Spark engine and the Amazon S3 staging bucket name contains a dot (.), the mapping fails with the following error:
Unable to execute HTTP request: Certificate for xxxx doesn't match any of the subject alternative names
This issue occurs when you use the CDP 7.1 distribution.
S3 ticket reference number: 7087613701
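
This error is characteristic of virtual-hosted-style S3 addressing: a bucket name that contains dots yields a hostname such as my.staging.bucket.s3.amazonaws.com, which the wildcard certificate *.s3.amazonaws.com cannot match because a wildcard covers only a single DNS label. As a hedged illustration of the underlying mechanism, not the product fix, the following minimal boto3 sketch uses path-style addressing, which keeps the bucket name out of the hostname (the bucket name is hypothetical):

import boto3
from botocore.config import Config

# Path-style addressing puts the bucket name in the URL path, so the TLS
# certificate only needs to match the regional S3 endpoint hostname.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    config=Config(s3={"addressing_style": "path"}),
)
response = s3.list_objects_v2(Bucket="staging.bucket.with.dots", MaxKeys=5)
for obj in response.get("Contents", []):
    print(obj["Key"])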

Known Issues

The following issues are known in this release:
BDM-11443
When you run an Amazon S3 mapping on the Spark engine to read data from a compressed Amazon S3 file, and the file name does not have an extension that matches its compression format, the mapping fails.
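
Hadoop-based engines typically select a decompression codec from the file name extension, which is why an extensionless compressed file cannot be read. As a hedged illustration only, not a documented workaround, the following boto3 sketch gives a gzip-compressed object a matching .gz suffix through a server-side copy (all names are hypothetical):

import boto3

s3 = boto3.client("s3")

# Hypothetical names: a gzip-compressed object stored without an extension.
bucket = "my-data-bucket"
src_key = "exports/orders"  # contains gzip data but has no .gz suffix

# Server-side copy to a key whose extension matches the compression format,
# then remove the original so only the readable key remains.
s3.copy_object(
    Bucket=bucket,
    Key=src_key + ".gz",
    CopySource={"Bucket": bucket, "Key": src_key},
)
s3.delete_object(Bucket=bucket, Key=src_key)
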
OCON-27818
When you run an Amazon S3 mapping on the Spark engine and the Amazon S3 bucket is in a different AWS account, the mapping fails with the following error:
S3ServiceException:Access Denied,Status 403,Error AccessDenied
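
A 403 of this kind usually means that the bucket policy in the owning account does not grant access to the calling account's principal. As a hedged sketch, assuming boto3 and a hypothetical bucket name, the permission check can be reproduced outside the mapping:

import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket owned by a different AWS account than the
# credentials configured for the mapping.
s3 = boto3.client("s3")
try:
    s3.head_bucket(Bucket="other-account-bucket")
    print("Credentials can reach the bucket.")
except ClientError as e:
    # A 403 here mirrors the AccessDenied error that the mapping reports.
    # The owning account must grant s3:ListBucket and s3:GetObject (and
    # s3:PutObject for targets) to the calling account in its bucket policy.
    print("Access denied:", e.response["Error"]["Code"])
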
OCON-25450
When you create an Amazon S3 data object for the China (Hong Kong) region and run a mapping on the Databricks 5.5 engine, the mapping fails with the following error:
Exception caught when redacting in logging: "null". Log original message instead java.lang.NullPointerException
OCON-25032
When you create an Amazon S3 data object for the China (Hong Kong) region and run a mapping on the Spark engine, the mapping fails with the java.lang.reflect.InvocationTargetException exception.
This issue occurs when you use the EMR 5.29 distribution.
OCON-24683
When you run a mapping in the native environment and read a binary file from a directory, the mapping runs successfully. However, the session log shows the following severe message:
SEVERE: jrt write failed
OCON-24554
When you refresh the source or target schema at runtime in a dynamic mapping, the values that you specify for the delimiter, text qualifier, and escape character for a flat file are not honored and the default values are used instead. This might lead to unexpected results in the target.
OCON-20605
When you run a mapping in the native environment to read a flat file that has Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when writing data to the target.
OCON-17443
When you use the Create Target option to create an Amazon S3 target and select Flat as the Resource Format, fields are not propagated to the target.
Workaround: Create fields manually in the target file and run the mapping.
OCON-12037
When you import an ORC file that contains the Binary data type along with other data types, a single binary port is created instead of multiple ports.
OCON-12035
When you run a mapping on the Spark engine to read or write ORC files that contain the Binary data type, the task fails with the following error message:
(AmazonS3Read, SparkEngine) = java.lang.RuntimeException
OCON-12022
When you run a mapping on the Spark engine to read an ORC file that contains the Timestamp data type, the mapping runs successfully. However, the Data Integration Service truncates nanosecond values that have more than six digits.
OCON-11874
When you run a mapping on the Spark engine to read from a multiline JSON file and write the data to the target, the mapping runs successfully. However, the Data Integration Service does not write the data to the target.
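
Line-oriented JSON readers expect one record per line. As a hedged preprocessing sketch, not a documented workaround, a multiline JSON array can be converted to JSON Lines before the mapping reads it (the file names are hypothetical):

import json

# Hypothetical file names: convert a multiline JSON array into JSON Lines,
# one record per line, which line-oriented readers can split correctly.
with open("input.json") as src:
    records = json.load(src)
with open("output.jsonl", "w") as dst:
    for record in records:
        dst.write(json.dumps(record) + "\n")
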
OCON-10032
When you use the Hortonworks 2.6 distribution, an Amazon S3 mapping fails on the Spark engine when it reads from or writes to an Amazon S3 bucket that uses Version 4 authentication, with the following error message:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: B4AC764FDB8DA642, AWS Error Code: null, AWS Error Message: Bad Request
OCON-10026
When you use the MapR 5.2 secure cluster, an Amazon S3 mapping fails on the Spark engine when it reads from or writes to an Amazon S3 bucket that uses Version 4 authentication, with the following error message:
<CmdExecInProcessTasks-pool-2-thread-29> SEVERE:[Pre_Spark_Task_Command_1][com.informatica.platform.dtm.executor.hadoop.impl.cmdtasks.HadoopFSRmRfTask]
OCON-9765
When you read files within a sub-directory that contains different compression formats, the Data Integration Service does not write the data to the target properly.
OCON-9353
When you create an Amazon S3 data object for the US East (Ohio) region and run a mapping on the Spark engine, the task fails. However, the same mapping runs successfully in the native environment.
OCON-8996
Unable to view the list of files available in a bucket when you expand the bucket name list in the Object Explorer view.
Workaround: To view the list of files available in a bucket, you must select the bucket name instead of expanding the bucket name list in the Object Explorer view.
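
To confirm independently that the files exist while the Object Explorer view cannot list them, a minimal boto3 sketch (the bucket name is hypothetical) enumerates the objects directly:

import boto3

# List the bucket contents outside the Developer tool to confirm that
# the files exist (the bucket name is hypothetical).
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"])
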
OCON-7963
When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the Hortonworks 2.3 distribution.
OCON-7938
When you run an Amazon S3 mapping in the native environment, invalid values in a decimal or bigint field are replaced with zeroes. However, the mapping runs successfully and the session log displays the following error message:
2017-03-13 23:46:08.131 <TASK_140116755179264-READER_1_1_1> SEVERE: [APPSDK_Msg_1762] Data for column [age] of type [bigint] should be a of type [java.lang.Number] or its sub-types.
OCON-7911
When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the MapR 5.2 distribution.
OCON-7743
When you test an Amazon S3 connection from the Administrator console in a domain that uses Kerberos authentication, has database TLS enabled, or has secure communication enabled, the test connection fails with the following error message:
The requested operation could not be performed due to the following error: Connection error: Unable to execute HTTP request:
OCON-6887
If the delimiter in the source and the target is a multibyte character, the PowerExchange for Amazon S3 mapping fails.
OCON-5736
When you run an Amazon S3 mapping that writes to an Amazon S3 target on the Blaze engine, the hash symbol (#) precedes the header in the Amazon S3 target.
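
If the extra hash symbol causes problems downstream, one hedged post-processing option, not a documented fix, is to rewrite the target file without the leading character (the object names are hypothetical):

import boto3

s3 = boto3.client("s3")
bucket, key = "target-bucket", "out/part-00000.csv"  # hypothetical names

# Read the target file, strip the '#' that precedes the header row, and
# write the corrected content back to the same key.
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
if body.startswith("#"):
    s3.put_object(Bucket=bucket, Key=key, Body=body[1:].encode("utf-8"))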
