Rules and Guidelines for Sqoop Targets

Consider the following rules and guidelines when you configure a Sqoop target in a mapping:
  • If you create a password file to access a Sqoop target, the password file must exist on HDFS for the mapping to run successfully (see the sketch after this list).
  • If a column name or table name contains a special character, the Sqoop export process fails.
  • If you configure the Maintain Row Order property for a Sqoop target, the Data Integration Service ignores the property.
  • When you run a Sqoop mapping on the Blaze engine, do not delete any target port from the mapping. If a target port is missing, the mapping fails.
  • When you export null data to a Microsoft SQL Server column that is defined as not null, the Data Integration Service fails the Sqoop mapping on the Blaze engine instead of rejecting and writing the null data to the bad file.
  • When you write data to an Oracle target through Sqoop and run the mapping on the Blaze or Spark engine, Sqoop treats the owner name as case sensitive.
  • If you configure the --username or --password argument in a JDBC connection or mapping, Sqoop ignores the arguments.
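
For reference, the following shell commands are a minimal sketch of staging a password file on HDFS and pointing Sqoop at it. The file name, the path /user/etl/sqoop.pwd, and the password value are hypothetical placeholders, not values from this guide:

    # Write the password without a trailing newline; Sqoop's --password-file
    # argument reads the entire file contents as the password.
    echo -n "MyDbPassword" > sqoop.pwd

    # Stage the file on HDFS and restrict read access to the owner.
    hdfs dfs -put sqoop.pwd /user/etl/sqoop.pwd
    hdfs dfs -chmod 400 /user/etl/sqoop.pwd

    # In the Sqoop arguments of the JDBC connection, reference the file:
    #   --password-file /user/etl/sqoop.pwd
    # Do not add --username or --password; Sqoop ignores those arguments, and
    # the user name and password come from the JDBC connection properties.

Because the path is resolved at run time, the password file must already exist on HDFS when the mapping runs.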


Updated July 03, 2018