Effective in version 10.2.1, you can use the following new Sqoop features:
Support for MapR Connector for Teradata
You can use MapR Connector for Teradata to read data from or write data to Teradata on the Spark engine. MapR Connector for Teradata is a specialized Sqoop connector that is based on Teradata Connector for Hadoop (TDCH). When you run Sqoop mappings on the Spark engine, the Data Integration Service invokes the connector by default.
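For orientation, the standalone Sqoop command-line equivalent of such a read is sketched below. In Big Data Management you configure these values on the Teradata data object and its JDBC connection rather than typing a command; the host, database, user, and table names here are placeholders, not values from this guide:

```shell
# Hypothetical standalone Sqoop import from Teradata (placeholders throughout).
# On the Spark engine, the Data Integration Service supplies the equivalent
# arguments to the TDCH-based connector for you.
sqoop import \
  --connect jdbc:teradata://td-host.example.com/DATABASE=sales \
  --username etl_user \
  --table ORDERS \
  --num-mappers 4
```

This is a site-specific configuration sketch and is not runnable outside a cluster with Teradata connectivity.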
For more information, see the Informatica Big Data Management 10.2.1 User Guide.
Spark engine optimization for Sqoop pass-through mappings
When you run a Sqoop pass-through mapping on the Spark engine, the Data Integration Service optimizes mapping performance in the following scenarios:
- You read data from a Sqoop source and write data to a Hive target that uses the Text format.
- You read data from a Sqoop source and write data to an HDFS target that uses the Flat, Avro, or Parquet format.
For more information, see the Informatica Big Data Management 10.2.1 User Guide.
Spark engine support for high availability and security features
Sqoop honors the high availability and security features that the Spark engine supports, such as Kerberos keytab login and KMS encryption.
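As an illustration, a Kerberos keytab login on a cluster node is typically a kinit against a keytab file rather than an interactive password prompt. The principal name and keytab path below are placeholders for your environment:

```shell
# Obtain a Kerberos ticket-granting ticket from a keytab file
# (placeholder principal and path; no password prompt).
kinit -kt /etc/security/keytabs/etl_user.keytab etl_user@EXAMPLE.COM

# List the cached tickets to confirm the login succeeded.
klist
```

This is an environment-specific sketch; it requires a reachable KDC and a valid keytab for the principal.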
For more information, see the "Data Integration Service" chapter in the Informatica 10.2.1 Application Services Guide and the "infacmd dis Command Reference" chapter in the Informatica 10.2.1 Command Reference Guide.
Spark engine support for Teradata data objects
If you use a Teradata data object and run the mapping on the Spark engine on a Hortonworks or Cloudera cluster, the Data Integration Service runs the mapping through Sqoop.
If you use a Hortonworks cluster, the Data Integration Service invokes Hortonworks Connector for Teradata at run time. If you use a Cloudera cluster, the Data Integration Service invokes Cloudera Connector Powered by Teradata at run time.
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.2.1 User Guide.