Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings
  4. Sources
  5. Targets
  6. Transformations
  7. Data Preview
  8. Cluster Workflows
  9. Profiles
  10. Monitoring
  11. Hierarchical Data Processing
  12. Hierarchical Data Processing Configuration
  13. Hierarchical Data Processing with Schema Changes
  14. Intelligent Structure Models
  15. Stateful Computing
  16. Appendix A: Connections
  17. Appendix B: Data Type Reference
  18. Appendix C: Function Reference

Sqoop Mappings in a Hadoop Environment


You can use a JDBC connection that is enabled for Sqoop connectivity to import a Sqoop source or a Sqoop target and create a mapping. You can run Sqoop mappings on the Blaze and Spark engines.
If you use Cloudera Connector Powered by Teradata or Hortonworks Connector for Teradata, you can run the mappings on the Blaze or Spark engine. If you use MapR Connector for Teradata, you can run the mappings only on the Spark engine.
In the mapping, you can specify additional Sqoop arguments and disable the Sqoop connector.
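The additional arguments you specify are standard Apache Sqoop command-line options, which Sqoop appends to the generated import or export command. As a sketch, a value entered in the additional Sqoop arguments field might look like the following; the mapper count and the split column name EMP_ID are hypothetical examples, and the connection details come from the JDBC connection rather than from these arguments:

```
--num-mappers 4 --split-by EMP_ID --fetch-size 1000
```

Here --num-mappers controls the degree of parallelism, --split-by names the column Sqoop uses to partition the data across mappers, and --fetch-size sets the number of rows fetched from the database per round trip.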
If you add or delete a Type 4 JDBC driver .jar file required for Sqoop connectivity from the externaljdbcjars directory, the changes take effect after you restart the Data Integration Service. If you run the mapping on the Blaze engine, the changes take effect after you restart both the Data Integration Service and the Blaze Grid Manager. You do not need to restart the Data Integration Service and the Blaze Grid Manager before the first mapping run; the restart is required only for subsequent mapping runs.


Updated July 10, 2020