Rules and Guidelines for Sqoop Targets

Consider the following rules and guidelines when you configure a Sqoop target in a mapping:
  • If a column name or table name contains a special character, the Sqoop export process fails.
  • If you configure the Maintain Row Order property for a Sqoop target, the Data Integration Service ignores the property.
  • If a mapping contains a Sqoop source, an Aggregator transformation, and a flat file target, you must disable the Maintain Row Order property for the target. Otherwise, the mapping fails.
  • When you run a Sqoop mapping on the Blaze engine, verify that you have not deleted any target port from the mapping. Otherwise, the mapping fails.
  • When you export null data to a Microsoft SQL Server column that is defined as not null, the Data Integration Service fails the Sqoop mapping on the Blaze engine instead of rejecting the null data and writing it to the bad file.
  • When you write data to an Oracle target through Sqoop and run the mapping on the Blaze or Spark engine, Sqoop treats the owner name as case sensitive.
  • Sqoop uses the values that you configure in the User Name and Password fields of the JDBC connection. If you configure the --username or --password argument in a JDBC connection or mapping, Sqoop ignores the arguments. If you create a password file to access a database, Sqoop also ignores the password file. See the first example after this list.
  • When you write data to a Vertica target through Sqoop, you must configure the --batch argument, as shown in the second example after this list.
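
For example, suppose that the JDBC connection defines the User Name and Password fields for the database. Entering the following Sqoop arguments has no effect, because Sqoop ignores the arguments in favor of the connection fields. The user name and password values shown here are hypothetical and appear for illustration only:

    --username jdoe --password secret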
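
For a Vertica target, a minimal Sqoop arguments entry that satisfies the --batch guideline might look like the following. This sketch assumes an otherwise complete JDBC connection to Vertica:

    --batch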
