Consider the following rules and guidelines when you configure a Sqoop target in a mapping:
If a column name or table name contains a special character, the Sqoop export process fails.
If you configure the Maintain Row Order property for a Sqoop target, the Data Integration Service ignores the property.
If a mapping contains a Sqoop source, an Aggregator transformation, and a flat file target, you must disable the Maintain Row Order property for the target. Otherwise, the mapping fails.
When you run a Sqoop mapping on the Blaze engine, verify that you have not deleted any target port from the mapping. Otherwise, the mapping fails.
When you export null data to a Microsoft SQL Server column that is defined as not null, the Data Integration Service fails the Sqoop mapping on the Blaze engine instead of rejecting and writing the null data to the bad file.
When you write data to an Oracle target through Sqoop and run the mapping on the Blaze or Spark engine, Sqoop treats the owner name as case sensitive.
A mapping that contains an Update Strategy transformation cannot use a Sqoop-enabled JDBC connection to write to a target. To run the mapping, disable the Sqoop connector in the Write transformation.
Sqoop uses the values that you configure in the User Name and Password fields of the JDBC connection. If you configure the --username or --password argument in a JDBC connection or mapping, Sqoop ignores the arguments. If you create a password file to access a database, Sqoop ignores the password file.
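As a sketch of this precedence (the connection field values and extra arguments below are illustrative, not taken from any real configuration):

```shell
# JDBC connection fields (illustrative values):
#   User Name: sqoop_user        <- Sqoop authenticates with this value
#   Password:  ********          <- and this value
#
# Sqoop arguments configured on the connection or mapping:
#   --username other_user              <- ignored by Sqoop
#   --password secret                  <- ignored by Sqoop
#   --password-file /user/creds.txt    <- ignored by Sqoop
#   --num-mappers 4                    <- honored
```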
When you write data to a Vertica target through Sqoop, the --batch argument is required.
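For example, the Sqoop arguments for a Vertica target might look like the following fragment (the mapper count is an illustrative value; only --batch is required):

```shell
--batch --num-mappers 4
```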