PowerExchange Adapters for Informatica
- PowerExchange Adapters for Informatica 10.4.0
| Bug | Description |
| --- | --- |
| BDM-11443 | When you run a mapping on the Spark engine to read data from a compressed Amazon S3 file, and the file name does not have an extension that matches its compression format, the mapping fails. See the compression-detection sketch after this table. |
| OCON-24554 | When you refresh the source or target schema at run time in a dynamic mapping, the values that you specify for the delimiter, text qualifier, and escape character for a flat file are not honored, and the default values are used instead. This might lead to unexpected results in the target. |
| OCON-23273 | When you enable Refresh Schema, the mapping fails to run on the Databricks Spark engine. |
| OCON-20605 | When you run a mapping in the native environment to read a flat file that contains Unicode characters, a space, null values, single quotes, or a value that starts with a dollar sign, the Data Integration Service adds double quotes to the values when it writes the data to the target. |
| OCON-17443 | When you use the Create Target option to create an Amazon S3 target and select Flat as the Resource Format, fields are not propagated to the target. Workaround: Create the fields manually in the target file and run the mapping. |
| OCON-12037 | When you import an ORC file that contains the Binary data type along with other data types, a single binary port is created instead of multiple ports. |
| OCON-12035 | When you run a mapping on the Spark engine to read or write ORC files that contain the Binary data type, the task fails with an error message. |
| OCON-12022 | When you run a mapping on the Spark engine to read an ORC file that contains the Timestamp data type, the mapping runs successfully. However, the Data Integration Service truncates nanosecond values that have more than six digits. See the truncation example after this table. |
| OCON-11874 | When you run a mapping on the Spark engine to read from a multiline JSON file and write the data to the target, the mapping runs successfully. However, the Data Integration Service does not write the data to the target. |
| OCON-10032 | When you use the Hortonworks 2.6 distribution, an Amazon S3 mapping fails on the Spark engine with an error message when it reads from or writes to an Amazon S3 bucket that requires Version 4 authentication. |
| OCON-10028 | When you use the IBM BigInsights 4.2 distribution, an Amazon S3 mapping fails on the Spark engine with an error message when it reads from or writes to an Amazon S3 bucket that requires Version 4 authentication. |
| OCON-10026 | When you use a MapR 5.2 secure cluster, an Amazon S3 mapping fails on the Spark engine with an error message when it reads from or writes to an Amazon S3 bucket that requires Version 4 authentication. |
| OCON-9765 | When you read files in a subdirectory that contains files with different compression formats, the Data Integration Service does not write the data to the target correctly. See the compression-detection sketch after this table. |
| OCON-9353 | When you create an Amazon S3 data object for the US East (Ohio) region and run a mapping on the Spark engine, the task fails. However, the same mapping runs successfully in the native environment. |
| OCON-8996 | You cannot view the list of files in a bucket when you expand the bucket name list in the Object Explorer view. Workaround: To view the list of files in a bucket, select the bucket name instead of expanding the bucket name list in the Object Explorer view. |
| OCON-7963 | When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the Hortonworks 2.3 distribution. |
| OCON-7938 | When you run an Amazon S3 mapping in the native environment, incorrect values in decimal or bigint fields are replaced with zeroes. However, the mapping runs successfully, and the session log displays an error message. |
| OCON-7911 | When you run an Amazon S3 mapping on the Blaze engine to read data from or write data to an Amazon S3 bucket that requires Version 4 authentication, the mapping fails. This issue occurs if you use the MapR 5.2 distribution. |
| OCON-7743 | When you connect to Amazon S3 from the Administrator console in a domain that uses Kerberos authentication, a TLS-enabled domain database, or secure communication, the test connection fails with an error message. |
| OCON-7024 | An Amazon S3 mapping fails on the Blaze engine when it runs on an IBM BigInsights 4.2 cluster and reads from or writes to an Amazon S3 bucket that requires Version 4 authentication. |
| OCON-6887 | If the delimiter in the source and the target is a multibyte character, the PowerExchange for Amazon S3 mapping fails. |
| OCON-5736 | When you run an Amazon S3 mapping that writes to an Amazon S3 target on the Blaze engine, a hash symbol (#) precedes the header in the Amazon S3 target. See the header-cleanup sketch after this table. |
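
BDM-11443 and OCON-9765 both trace back to files whose names do not reveal their compression format. A minimal workaround sketch, assuming the files are accessible in a staging directory before the mapping reads them: detect the format from each file's magic bytes and rename the file so the extension matches. The helper name, the paths, and the magic-byte table are illustrative assumptions, not an Informatica API.

```python
import os

# Leading bytes that identify common compression formats.
MAGIC_BYTES = {
    b"\x1f\x8b": ".gz",               # gzip
    b"BZh": ".bz2",                   # bzip2
    b"\x28\xb5\x2f\xfd": ".zst",      # zstandard
    b"\x04\x22\x4d\x18": ".lz4",      # lz4 frame
}

def add_compression_extension(path: str) -> str:
    """Rename a file so its extension matches the detected compression format."""
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, ext in MAGIC_BYTES.items():
        if header.startswith(magic) and not path.endswith(ext):
            new_path = path + ext
            os.rename(path, new_path)
            return new_path
    return path  # extension already matches, or the format is not recognized

if __name__ == "__main__":
    # Hypothetical staged copy of the S3 object, before the mapping runs.
    print(add_compression_extension("/staging/input_file"))
```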
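To make the OCON-12022 truncation concrete, here is a plain-Python illustration (not Informatica code) of how fractional seconds beyond six digits are lost:

```python
def truncate_to_micros(nanos: int) -> int:
    """Keep microsecond precision; drop the last three nanosecond digits."""
    return nanos // 1000 * 1000

frac = 123456789                    # .123456789 seconds in the ORC source
print(truncate_to_micros(frac))     # 123456000, which surfaces as .123456
```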
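For OCON-5736, one possible cleanup, sketched under the assumption that the target file can be post-processed after the mapping completes (this is not an Informatica utility, and the file path is hypothetical), is to strip the hash symbol from the header row:

```python
def strip_header_hash(path: str) -> None:
    """Remove a leading '#' from the first line of a delimited target file."""
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()
    if lines and lines[0].startswith("#"):
        lines[0] = lines[0][1:]          # drop only the prepended hash
        with open(path, "w", encoding="utf-8") as f:
            f.writelines(lines)

strip_header_hash("/staging/target_part_000.csv")  # hypothetical file name
```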