Sqoop Mappings in a Hadoop Environment

After you enable Sqoop in a JDBC connection and import a Sqoop source or Sqoop target, you can create a mapping. You can then run the Sqoop mapping in the Hadoop run-time environment with a Hadoop connection. You can run Sqoop mappings on the Blaze, Spark, and Hive engines.
On the Spark engine, you can run Sqoop mappings only when you read data from or write data to Oracle databases. If you use the Cloudera Connector Powered by Teradata or the Hortonworks Connector for Teradata, you must run the mappings on the Blaze engine.
In the mapping, you can specify additional Sqoop arguments and disable the Sqoop connector.
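For example, additional Sqoop arguments are standard Apache Sqoop command-line options entered as a space-separated string. The following fragment is an illustrative sketch only; the column name and values are hypothetical, not defaults from this guide:

```
--num-mappers 8 --split-by employee_id --fetch-size 1000
```

Here, --split-by names a column used to partition the extraction across mappers, and --num-mappers controls the degree of parallelism. Choose a --split-by column with evenly distributed values so that the mappers receive balanced workloads.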


Updated July 03, 2018