
Connector Release Notes
Microsoft Azure Data Lake Store V3 Connector Known Limitations

The following table describes known limitations:
CR
Description
IF-12413
When you edit metadata in the Fields tab in a mapping, all native data types change to Bigint, and you cannot change the scale and precision of data types except string.
CCON-18339
When you run a mapping to read data from or write data to Microsoft Azure Data Lake Store, by default, the scale of a double data type is set to 0 in the target file. This might result in data loss.
CCON-17950
When you create a new object at runtime and use the %T time format in the object name, the name of the created object is displayed in the %H_%M_%S format.
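For example (target_%T is a hypothetical object name), an object created at 14:30:59 would appear as:

```
Object name entered:  target_%T
Object name created:  target_14_30_59
```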
CCON-17938
When you write Avro, JSON, and Parquet file formats and create a new object at runtime, special characters in the object name are not honored.
CCON-16116
When you run a task to read data from or write data to Microsoft Azure Data Lake Store, the task runs successfully. However, the session log displays unnecessary messages. Example:
"[ERROR] Staging file deletion failed with Exception "
CCON-14117
When you write a Parquet file to a target, the data is truncated if the Parquet source file contains values longer than 4000 characters, even after you increase the precision of the data type.
CCON-13017
When you run a mapping to read a source that contains special characters in the field names and write data to a Microsoft Azure Data Lake Store V3 target, the mapping fails.
CCON-12645
When you use Create Target to write a JSON file, the boolean values are written as 1 or 0 instead of true or false as in the source file.
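For example (is_active is a hypothetical field name), a source record and the record written to the target might look like:

```
Source:  {"is_active": true}
Target:  {"is_active": 1}
```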
CCON-12638
When you write a JSON file of size 1 GB or more, the task fails with a Java heap space error.
Workaround: Set the JVM options for type DTM to increase the -Xms and -Xmx values in the system configuration details of the Secure Agent.
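As an illustration only (the heap sizes are assumed values, not recommendations; size them for your data volume), the DTM JVM options in the Secure Agent system configuration might be set as:

```
JVMOption1=-Xms1024m
JVMOption2=-Xmx4096m
```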
CCON-12614
When you write a JSON file in which column names include Unicode characters, the task fails.
CCON-12547
When you read a JSON file from a Microsoft Azure Data Lake Store source and write the JSON file to a Microsoft Azure Data Lake Store target, the precision of the Decimal data type field is changed in the target file.
CCON-12522
When you write an Avro or a Parquet file that contains null values, the mapping task writes incorrect data to the target for fields that contain null values.
CCON-12331
When you use Create Target to write an Avro file, the schema is created with primitive data types without providing an option to allow null values.
Workaround: Manually edit the schema to allow null values as required. For example:
{"type":"record","name":"S3_Avro_CT_N","fields":[
{ "name":"c_custkey" , "type":["int","null"]},
{ "name":"c_name" , "type":"string"},
{ "name":"c_nationkey" , "type":["long","null"]}
]}
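A quick way to sanity-check a hand-edited schema is to confirm it is still well-formed JSON and that the intended fields declare a union type that includes "null" (note that every field entry except the last needs a trailing comma). A minimal sketch, using the field names from the example schema:

```python
import json

schema_text = '''
{"type":"record","name":"S3_Avro_CT_N","fields":[
{ "name":"c_custkey" , "type":["int","null"]},
{ "name":"c_name" , "type":"string"},
{ "name":"c_nationkey" , "type":["long","null"]}
]}
'''

# Raises json.JSONDecodeError if the edit broke the schema's JSON syntax.
schema = json.loads(schema_text)

# A field is nullable when its type is a union (a JSON list) containing "null".
nullable = {f["name"] for f in schema["fields"]
            if isinstance(f["type"], list) and "null" in f["type"]}
print(sorted(nullable))  # ['c_custkey', 'c_nationkey']
```

This checks only JSON well-formedness and the nullable unions, not full Avro schema validity.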
CCON-11978
Microsoft Azure Data Lake Store V3 Connector displays the following common error message for all failed mappings:
The Integration Service found Informatica Hadoop distribution directory [/data/home/cloudqa/staging1/apps/Data_Integration_Server/52.0.16/ICS/main/distros/cloudera_cdh5u8/lib and /conf] for Hadoop class loader.
CCON-7728
When you write data to a Microsoft Azure Data Lake Store V3 target and select Tab as the delimiter in the formatting options, you cannot preview data.


Updated August 02, 2020