Table of Contents

  1. Abstract
  2. PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon S3
  4. PowerExchange for Cassandra
  5. PowerExchange for DataSift
  6. PowerExchange for Facebook
  7. PowerExchange for Greenplum
  8. PowerExchange for Google BigQuery
  9. PowerExchange for HBase
  10. PowerExchange for HDFS
  11. PowerExchange for Hive
  12. PowerExchange for JD Edwards EnterpriseOne
  13. PowerExchange for LDAP
  14. PowerExchange for LinkedIn
  15. PowerExchange for MapR-DB
  16. PowerExchange for Microsoft Azure Blob Storage
  17. PowerExchange for Microsoft Azure Data Lake Store
  18. PowerExchange for Microsoft Azure SQL Data Warehouse
  19. PowerExchange for Microsoft Dynamics CRM
  20. PowerExchange for MongoDB
  21. PowerExchange for Netezza
  22. PowerExchange for OData
  23. PowerExchange for Salesforce
  24. PowerExchange for SAP NetWeaver
  25. PowerExchange for Tableau
  26. PowerExchange for Tableau V3
  27. PowerExchange for Teradata Parallel Transporter API
  28. PowerExchange for Twitter
  29. PowerExchange for Web Content-Kapow Katalyst
  30. Informatica Global Customer Support

PowerExchange Adapters for Informatica Release Notes

PowerExchange for Amazon S3 Known Limitations (10.2)

The following table describes known limitations:
Bug
Description
BDM-11443
When you run a mapping on the Spark engine to read data from a compressed Amazon S3 file and the file name does not include an extension that identifies its compression format, the mapping fails. See the pre-check sketch after this table.
BDM-9750
When you run a mapping on the Blaze engine to write data to a target that contains a semicolon in the file name, the mapping fails.
Workaround: Do not use semicolons in the file name (see the pre-check sketch after this table).
OCON-25561
When you run a mapping that reads data from or writes data to a flat file and you select the text qualifier as NONE, the NONE text qualifier is not honored. Instead, the default double quotes text qualifier is written to the target.
OCON-12610
When you create an Amazon S3 data object, the value of the folder path is displayed incorrectly in the Resources tab.
OCON-12394
When you set the Compression Format type as none and run a mapping on the Spark engine to write an Avro file to an Amazon S3 target, the mapping runs successfully. However, the Data Integration Service compresses the target Amazon S3 file using the snappy compression format.
OCON-11798
When you select an encryption type and run a mapping in the native environment to read or write Avro and Parquet files, the mapping runs successfully. However, the encryption type is ignored.
OCON-10806
When you run a mapping to read data from a Parquet source and the precision for the string values is greater than 4000, the mapping runs successfully.
However, the Data Integration Service only writes string values with precision up to 4000 in the target and the remaining data gets truncated.
OCON-10804
When you run a mapping to write data to a Parquet target and you do not connect all the ports in the target, the mapping fails with the following error message:
java.lang.Exception: [MPSVCCMN_10094] The Mapping Service Module failed to run the job with ID [YZ-LZro4EeeVGlU8guu1DA] because of the following error: [LDTM_0072] java.lang.RuntimeException:
OCON-10802
When you run an Amazon S3 mapping to read data from a Parquet file that contains null values, the data preview fails with the following error message:
java.lang.RuntimeException: java.lang.RuntimeException:
OCON-10664
When you run an Amazon S3 mapping on the Spark engine to read from or write to an Amazon S3 cluster that has Version 4 authentication, the mapping fails with the following error message:
error Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request for regions which have V4 authentication
This issue occurs when you use the Hortonworks 2.5 distribution.
OCON-10032
When you run an Amazon S3 mapping on the Spark engine to read from or write to an Amazon S3 cluster that has Version 4 authentication, the mapping fails with the following error message:
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: B4AC764FDB8DA642, AWS Error Code: null, AWS Error Message: Bad Request
This issue occurs when you use the Hortonworks 2.6 distribution.
OCON-10028
When you run an Amazon S3 mapping on the Spark engine to read from or write to an Amazon S3 cluster that has Version 4 authentication, the mapping fails with the following error message:
<CmdExecInProcessTasks-pool-2-thread-29> SEVERE:[Pre_Spark_Task_Command_1] [com.informatica.platform.dtm.executor.hadoop.impl.cmdtasks.HadoopFSRmRfTask]
This issue occurs when you use the IBM BigInsights 4.2 distribution.
OCON-10026
When you run an Amazon S3 mapping on the Spark engine to read from or write to an Amazon S3 cluster that has Version 4 authentication, the mapping fails with the following error message:
<CmdExecInProcessTasks-pool-2-thread-29> SEVERE:[Pre_Spark_Task_Command_1][com.informatica.platform.dtm.executor.hadoop.impl.cmdtasks.HadoopFSRmRfTask]
This issue occurs when you use the MapR 5.2 secure cluster.
OCON-9765
When you read files that use different compression formats from the same sub-directory, the Data Integration Service does not write the data to the target correctly.
OCON-9353
When you create an Amazon S3 data object for the US East (Ohio) region and run a mapping on the Spark engine, the task fails. However, the same mapping runs successfully in the native environment.
OCON-8996
When you expand the bucket name list in the Object Explorer view, you cannot view the list of files available in a bucket.
Workaround: To view the list of files available in a bucket, you must select the bucket name instead of expanding the bucket name list in the Object Explorer view.
OCON-7963
When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the Hortonworks 2.3 distribution.
OCON-7938
When you run an Amazon S3 mapping in the native environment, incorrect values in a decimal or bigint data type field are replaced with zeroes. The mapping runs successfully, but the session log displays the following error message:
2017-03-13 23:46:08.131 <TASK_140116755179264-READER_1_1_1> SEVERE: [APPSDK_Msg_1762] Data for column [age] of type [bigint] should be a of type [java.lang.Number] or its sub-types.
OCON-7911
When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the MapR 5.2 distribution.
OCON-7743
When you connect to Amazon S3 from Informatica Administrator in a Kerberos domain, or in a domain with database TLS or secure communication enabled, the test connection fails with the following error message:
The requested operation could not be performed due to the following error: Connection error: Unable to execute HTTP request:
OCON-7024
An Amazon S3 mapping fails on the Blaze engine when it reads from or writes to an Amazon S3 cluster that uses the IBM BigInsights 4.2 distribution and requires Version 4 authentication.
OCON-6887
If the delimiter in the source and the target is a multibyte character, the Amazon S3 mapping fails.
OCON-5736
When you run an Amazon S3 mapping that writes to an Amazon S3 target on the Blaze engine, the hash symbol (#) precedes the header in the Amazon S3 target.
OCON-293
An Amazon S3 mapping does not read valid rows if there are rows that contain errors in the Amazon S3 source object.
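
Two of the limitations above stem from source or target file names rather than mapping logic: BDM-11443 fails when a compressed file lacks an extension that identifies its compression format, and BDM-9750 fails when a file name contains a semicolon. The following is a minimal pre-check sketch, assuming boto3 is available and AWS credentials are configured; the bucket name, prefix, and magic-byte table are illustrative assumptions, not values documented by Informatica.

import boto3

# Hypothetical pre-check for the file-name limitations listed above (BDM-11443, BDM-9750).
# Magic bytes and the extension a mapping would expect for each compression format.
MAGIC_TO_EXTENSION = {
    b"\x1f\x8b": ".gz",   # gzip
    b"BZh": ".bz2",       # bzip2
}

def flag_problem_objects(bucket, prefix=""):
    """Return (key, reason) pairs for objects that may trigger BDM-11443 or BDM-9750."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    flagged = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):          # skip folder markers
                continue
            if ";" in key:                 # BDM-9750: semicolon in the file name
                flagged.append((key, "file name contains a semicolon"))
                continue
            # BDM-11443: compressed file whose name lacks a matching extension.
            head = s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-3")["Body"].read()
            for magic, extension in MAGIC_TO_EXTENSION.items():
                if head.startswith(magic) and not key.endswith(extension):
                    flagged.append((key, "compressed but missing the " + extension + " extension"))
    return flagged

if __name__ == "__main__":
    # my-example-bucket and input/ are placeholders.
    for key, reason in flag_problem_objects("my-example-bucket", "input/"):
        print(key + ": " + reason)

If the check flags objects, renaming them so that the extension matches the detected compression format, and removing semicolons from target file names, may avoid these two failures.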