Table of Contents

  1. Preface
  2. Part 1: Version 10.4.1
  3. Part 2: Version 10.4.0
  4. Part 3: Version 10.2.2
  5. Part 4: Version 10.2.1
  6. Part 5: Version 10.2
  7. Part 6: Version 10.1.1
  8. Part 7: Version 10.1

Release Guide (10.4.1.1)

Sqoop Connectivity for Relational Sources and Targets

Effective in version 10.1, you can use Sqoop to transfer data between relational databases and HDFS through MapReduce programs. You can use Sqoop to import data into HDFS and to export data back to a relational database. When you use Sqoop, you do not need to install the relational database client software on any node in the Hadoop cluster.
To use Sqoop, you must configure Sqoop properties in a JDBC connection and run the mapping in the Hadoop environment. You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database. For example, you can configure Sqoop connectivity for the following databases:
  • Aurora
  • IBM DB2
  • IBM DB2 for z/OS
  • Greenplum
  • Microsoft SQL Server
  • Netezza
  • Oracle
  • Teradata
You can also run a profile on data objects that use Sqoop in the Hive run-time environment.
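To illustrate the kind of properties involved, the sketch below assembles a Sqoop import command line from JDBC connection settings. This is a minimal, hypothetical example: the function and parameter names are illustrative only and are not an Informatica or Sqoop API, although the command-line flags shown are standard Sqoop import arguments.

```python
# Hypothetical sketch: build a Sqoop import command from JDBC
# connection settings. The function and its parameters are
# illustrative, not an Informatica API.

def build_sqoop_import(jdbc_url, username, table, target_dir, mappers=4):
    """Return the Sqoop CLI arguments for importing one table into HDFS."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,          # JDBC-compliant connection string
        "--username", username,         # database user
        "--table", table,               # source table in the relational database
        "--target-dir", target_dir,     # HDFS destination directory
        "--num-mappers", str(mappers),  # parallel map tasks for the import
    ]

cmd = build_sqoop_import(
    "jdbc:oracle:thin:@//dbhost:1521/ORCL", "etl_user",
    "CUSTOMERS", "/data/customers")
print(" ".join(cmd))
```

A corresponding export would swap `sqoop import` for `sqoop export` and replace `--target-dir` with `--export-dir`, pointing at the HDFS directory whose contents should be written back to the database table.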
For more information, see the Informatica 10.1 Big Data Management User Guide.
