Verify Run-time Drivers

Verify run-time drivers for mappings that access JDBC-compliant databases in the Hadoop environment. Use any Type 4 JDBC driver that the database vendor recommends. A minimal connection test, shown after the steps below, can confirm that a downloaded driver loads and connects.
  1. Download Type 4 JDBC drivers associated with the JDBC-compliant databases that you want to access.
  2. To use Sqoop TDCH Cloudera Connector Powered by Teradata, perform the following tasks:
    • Download all .jar files in the Cloudera Connector Powered by Teradata package from the following location: http://www.cloudera.com/downloads.html. The package has the following naming convention: sqoop-connector-teradata-<version>.tar
      If you use Cloudera CDH version 6.x, you must download the sqoop-connector-teradata-1.7c6.jar file.
    • Download terajdbc4.jar and tdgssconfig.jar from the following location: http://downloads.teradata.com/download/connectivity/jdbc-driver
      If you use Cloudera CDH version 6.x, you must also download the junit-4.11.jar file.
  3. To optimize the Sqoop mapping performance on the Spark engine while writing data to an HDFS complex file target of the Parquet format, download the following .jar files:
  4. Copy all of the .jar files to the following directory on the machine where the Data Integration Service runs:
    <Informatica installation directory>\externaljdbcjars
    Changes take effect after you recycle the Data Integration Service. At run time, the Data Integration Service copies the .jar files to the Hadoop distribution cache so that the .jar files are accessible to all nodes in the cluster.
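If you prefer to script step 4, the following sketch copies downloaded driver .jar files into the externaljdbcjars directory. It is a minimal example, not part of the product: the download folder and installation directory shown are assumptions, so substitute the paths for your environment and then recycle the Data Integration Service as described above.

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class CopyDriverJars {
        public static void main(String[] args) throws IOException {
            // Assumed locations; replace with your download folder and
            // Informatica installation directory.
            Path downloads = Paths.get("/tmp/jdbc-drivers");
            Path target = Paths.get("/opt/Informatica/externaljdbcjars");
            Files.createDirectories(target);

            // Copy every downloaded driver .jar into externaljdbcjars.
            try (DirectoryStream<Path> jars = Files.newDirectoryStream(downloads, "*.jar")) {
                for (Path jar : jars) {
                    Files.copy(jar, target.resolve(jar.getFileName()),
                            StandardCopyOption.REPLACE_EXISTING);
                }
            }
            // Recycle the Data Integration Service afterward so that it
            // picks up the new .jar files.
        }
    }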
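To confirm that a downloaded Type 4 driver loads and connects (the connection test referenced before step 1), you can run a small standalone test outside Informatica. The following Java sketch uses the Teradata driver as an example; the host, database, and credentials are placeholders, and terajdbc4.jar (plus tdgssconfig.jar for older driver versions) must be on the classpath.

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class VerifyType4Driver {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; replace with values for your
            // environment. Type 4 JDBC drivers register themselves, so no
            // Class.forName() call is required with JDBC 4.0 and later.
            String url = "jdbc:teradata://td-host.example.com/DATABASE=sales";

            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                DatabaseMetaData meta = conn.getMetaData();
                System.out.println("Connected with " + meta.getDriverName()
                        + " " + meta.getDriverVersion());
            }
        }
    }

Compile the class and run it with the driver .jar files on the classpath. If the program prints the driver name and version, the run-time driver is usable.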
