Microsoft Azure Blob Storage V3 Connector

Troubleshooting a mapping or mapping task

[ERROR] Exception: java.io.IOException: Too many open files
When you run a mapping on a Linux machine to read a large file, the mapping might fail with the following error:
[ERROR] Exception: java.io.IOException: Too many open files
To resolve this issue, perform the following steps. You can then apply and verify the new limits as shown in the example after the steps:
  1. Increase the value of file-max, which is the maximum number of file descriptors enforced at the kernel level.
    To change the file descriptor setting, edit the kernel parameter file /etc/sysctl.conf and add fs.file-max=[new value] to it.
    For example:
    # vi /etc/sysctl.conf
    fs.file-max = 400000
  2. Set the ulimit. The ulimit value must be less than the file-max value.
    To change the ulimit setting, edit the file /etc/security/limits.conf and set the hard and soft limits in it.
    For example:
    # vi /etc/security/limits.conf
    * soft nofile 40000
    * hard nofile 40000
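After you edit both files, the following commands show one way to apply and verify the new limits. This is a minimal sketch that assumes a typical Linux shell; the values shown earlier are examples only:
    # sysctl -p
    # cat /proc/sys/fs/file-max
    # ulimit -n
The sysctl -p command reloads the kernel parameters from /etc/sysctl.conf, cat /proc/sys/fs/file-max displays the current kernel-level limit, and ulimit -n displays the open file limit for the current session. The limits.conf settings take effect only for new login sessions, so you might need to log in again and restart the Secure Agent process before it picks up the new limit.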
When I write a JSON file, the mapping task fails with a Java heap space error.
When you write a JSON file of size 1 GB or more, the task fails with a Java heap space error.
Set the JVM options for type DTM to increase the -Xms and -Xmx values in the system configuration details of the Secure Agent.
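For example, you might set values similar to the following for the DTM JVM options in the Secure Agent system configuration. These values are assumptions for illustration only; size them according to the available memory on the Secure Agent machine and the size of the JSON files that you write:
    -Xms1024m
    -Xmx4096m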
When I use the option to create a new target at runtime to write an Avro file, the schema is created with primitive data types and does not provide an option to include null values.
You must manually edit the schema to allow null values as required. For example:
{"type":"record","name":"Azure_Avro_CT_N","fields":[
{"name":"c_custkey" , "type":["int","null"]},
{"name":"c_name" , "type":"string"},
{"name":"c_address" , "type":"string"},
{"name":"c_nationkey" , "type":["long","null"]}
The same error message is displayed for every failed mapping.
You can verify the detailed error message for each mapping in the session log.
