If you create a password file to access a Sqoop target, you must ensure that the password file exists on HDFS for the mapping to run successfully.
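For example, you might verify that the file is present on HDFS before you run the mapping. The following sketch is only an illustration; it assumes the hdfs command-line client is available on the machine that runs the check, and the password file path is a placeholder for your own location.

```python
# Hypothetical pre-flight check: confirm the Sqoop password file exists on HDFS
# before running the mapping. The path below is an example only.
import subprocess

def hdfs_file_exists(hdfs_path: str) -> bool:
    """Return True if the path exists on HDFS (uses `hdfs dfs -test -e`)."""
    result = subprocess.run(
        ["hdfs", "dfs", "-test", "-e", hdfs_path],
        capture_output=True,
    )
    return result.returncode == 0

password_file = "/user/infa/sqoop/target.password"  # example HDFS location
if not hdfs_file_exists(password_file):
    raise SystemExit(
        f"Password file {password_file} not found on HDFS; the Sqoop mapping will fail."
    )
```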
If a column name or table name contains a special character, the Sqoop export process fails.
If you configure the Maintain Row Order property for a Sqoop target, the Data Integration Service ignores the property.
When you run a Sqoop mapping on the Blaze engine, verify that you have not deleted any target port from the mapping. Otherwise, the mapping fails.
When you export null data to a Microsoft SQL Server column that is defined as not null, the Data Integration Service fails the Sqoop mapping on the Blaze engine instead of rejecting and writing the null data to the bad file.
When you write data to an Oracle target through Sqoop and run the mapping on the Blaze or Spark engine, Sqoop treats the owner name as case sensitive.
If you configure the --username or --password argument in a JDBC connection or mapping, Sqoop ignores the arguments.
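As an illustration of this behavior, the sketch below shows a hypothetical helper that strips --username and --password entries from an additional Sqoop arguments list, since those flags have no effect; under the assumption that the credentials are taken from the user name and password defined in the JDBC connection, the helper name and sample values are not part of the product.

```python
# Hypothetical illustration: --username and --password entries in the Sqoop
# arguments are ignored, so they can be removed without changing behavior.
IGNORED_FLAGS = {"--username", "--password"}

def strip_ignored_args(sqoop_args: list[str]) -> list[str]:
    """Remove --username/--password flags and their values from a Sqoop argument list."""
    cleaned, skip_next = [], False
    for arg in sqoop_args:
        if skip_next:
            skip_next = False      # drop the value that followed the ignored flag
            continue
        if arg in IGNORED_FLAGS:
            skip_next = True       # drop the flag itself and mark its value
            continue
        cleaned.append(arg)
    return cleaned

# Example: the credential flags are removed; other arguments such as -m 4 are kept.
print(strip_ignored_args(["--username", "dbuser", "--password", "secret", "-m", "4"]))
```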