You can use the JDBC connection to connect to JDBC-compliant databases and to run mappings on the Spark engine. You can also enable Sqoop in the JDBC connection to run mappings on the Blaze and Spark engines.
You can use the DataDirect drivers shipped with Informatica to import metadata from JDBC-compliant databases. If a DataDirect driver is not available for a specific database, download the JDBC driver that the database vendor recommends. Note that the DataDirect JDBC drivers shipped with Informatica are not licensed for Sqoop mappings. To run JDBC mappings on the Spark engine, or to run Sqoop mappings on the Blaze or Spark engine, use the vendor-recommended JDBC driver.
To connect to sources, targets, or lookups in JDBC-compliant databases, download any Type 4 JDBC driver that the database vendor recommends.
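For reference, the following minimal sketch shows how a Type 4 driver connects to a database from a standalone Java program. Because a Type 4 driver is pure Java, the driver .jar file on the classpath is all that the machine needs; no native database client is required. The PostgreSQL driver, JDBC URL, and credentials shown here are illustrative assumptions only; substitute the driver and connection values that your database vendor documents.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Type4DriverCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection values; replace with your own database details.
            String url = "jdbc:postgresql://dbhost:5432/sales";
            String user = "dbuser";
            String password = "dbpassword";

            // JDBC 4.0 and later Type 4 drivers register themselves automatically
            // when the driver .jar file is on the classpath, so no Class.forName()
            // call is needed.
            try (Connection conn = DriverManager.getConnection(url, user, password);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println("Driver connected successfully: " + rs.getInt(1));
            }
        }
    }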
To import metadata in the Developer tool, place the JDBC driver .jar files in the following directory on the Developer tool machine:
To optimize Sqoop mapping performance on the Spark engine when you write data to an HDFS complex file target in Parquet format, download the following .jar files:
Changes take effect after you recycle the Data Integration Service. At run time, the Data Integration Service copies the .jar files to the Hadoop distribution cache so that the .jar files are accessible to all nodes in the cluster.