PowerExchange for Teradata Parallel Transporter API Overview
You can use PowerExchange for Teradata Parallel Transporter API to read data from or write data to a Teradata database. You can run Teradata mappings in the native or non-native environment. In the non-native environment, the mappings run on the Blaze or Spark engine.
PowerExchange for Teradata Parallel Transporter API uses different integration methods based on the environment in which you run the mappings.
Integration with Teradata Parallel Transporter API
When you run Teradata mappings in the native environment to read or load data in bulk, PowerExchange for Teradata Parallel Transporter API integrates with Teradata Parallel Transporter API (Teradata PT API). The Data Integration Service uses the Teradata PT API infrastructure to connect to Teradata.
The Data Integration Service uses the Export system operator to read data. The Export operator extracts large volumes of data from Teradata tables and views.
The Data Integration Service uses one of the following Teradata PT API system operators to load data in bulk:
Load
Loads data in bulk into an empty Teradata table.
Stream
Performs insert, update, upsert, and delete operations against Teradata tables in near real time.
Update
Performs insert, update, upsert, and delete operations against Teradata tables.
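The operator selection above maps to the TD_SYSTEM_OPERATOR attribute that a Teradata PT API client sets on its connection. The following C++ sketch shows the general shape of a bulk load through the Load operator, based on the pattern in Teradata PT API sample programs. It is illustrative only: the system name, credentials, table names, and row buffer are placeholders, and the code will not compile or run without the Teradata PT API client libraries and headers installed.

```cpp
// Hypothetical sketch of a Teradata PT API bulk load (Load operator).
// Requires the Teradata PT API client package; all values are placeholders.
#include <connection.h>
#include <schema.h>

using namespace teradata::client::API;

int main() {
    Connection *conn = new Connection();

    // Select the system operator: TD_LOAD here; TD_UPDATE or TD_STREAM
    // would be set instead for Update or Stream jobs.
    conn->AddAttribute(TD_SYSTEM_OPERATOR, TD_LOAD);
    conn->AddAttribute(TD_TDP_ID,        (char *) "tdsystem");
    conn->AddAttribute(TD_USER_NAME,     (char *) "etl_user");
    conn->AddAttribute(TD_USER_PASSWORD, (char *) "secret");
    conn->AddAttribute(TD_TARGET_TABLE,  (char *) "sales.orders");
    conn->AddAttribute(TD_LOG_TABLE,     (char *) "sales.orders_log");

    // Describe the input row layout.
    Schema *schema = new Schema((char *) "input");
    schema->AddColumn((char *) "order_id", TD_INTEGER, 4);
    conn->AddSchema(schema);

    // Start the job, send rows, then commit and clean up.
    if (conn->Initiate() == TD_Error) {
        delete conn;
        return 1;
    }
    char row[4] = {0, 0, 0, 1};   // placeholder row buffer
    conn->PutRow(row, sizeof(row));
    conn->EndAcquisition();
    conn->ApplyRows();
    conn->Terminate();

    delete conn;
    return 0;
}
```

The Data Integration Service performs the equivalent of these steps internally; the sketch only illustrates how the operator choice (Load, Stream, or Update) is a connection attribute rather than a separate API.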
Integration with Sqoop and Hortonworks Connector for Teradata
When you run Teradata mappings on a Hortonworks cluster and on the Blaze or Spark engine, the Data Integration Service invokes Hortonworks Connector for Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.
Integration with Sqoop and Cloudera Connector Powered by Teradata
When you run Teradata mappings on a Cloudera cluster and on the Blaze or Spark engine, the Data Integration Service invokes Cloudera Connector Powered by Teradata at run time. The Data Integration Service then runs the mapping through Sqoop.
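In both the Hortonworks and Cloudera cases, the work that Sqoop hands to the connector resembles a Sqoop import or export against a Teradata JDBC URL. The following command-line fragment is a hypothetical illustration: the host, database, table, paths, and mapper count are placeholders, and the connection-manager class names shown are assumptions about how the Hortonworks and Cloudera connectors are typically registered, so check the documentation for your connector version.

```shell
# Hypothetical Sqoop import through a Teradata connector.
# All hosts, credentials, tables, and paths are placeholders.
sqoop import \
  --connection-manager org.apache.sqoop.teradata.TeradataConnManager \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username etl_user \
  --password-file /user/etl/.td_password \
  --table ORDERS \
  --target-dir /user/etl/orders \
  --num-mappers 4
```

On a Cloudera cluster, the Cloudera Connector Powered by Teradata supplies its own connection manager class in place of the one shown above; when the connector is properly installed, Sqoop can often select it automatically and the --connection-manager option can be omitted.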