Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Teradata Parallel Transporter API
  3. PowerExchange for Teradata Parallel Transporter API Configuration
  4. PowerExchange for Teradata Parallel Transporter Connections
  5. PowerExchange for Teradata Parallel Transporter API Data Objects
  6. PowerExchange for Teradata Parallel Transporter API Mappings
  7. Teradata Parallel Transporter API Run-time Processing
  8. Data Type Reference

PowerExchange for Teradata Parallel Transporter API User Guide

Verifying the Teradata Connector for Hadoop Prerequisites

Before you install and configure the Teradata Connector for Hadoop (TDCH) and run a mapping on the Hive engine, perform the following tasks:
  1. Verify that the Data Integration Service user has the MapR ticket available on all the nodes of the Hadoop cluster.
  2. Create a user account in the Teradata Developer Exchange website so that you can download the TDCH installation package from the website.
  3. To read data by using TDCH, verify that the following conditions are met:
    • The cluster contains the /user/yarn/ folder in the HDFS file system. If the folder does not exist, you must create it.
    • If the owner of the /user/yarn/ folder is not yarn, the user who runs the mapping has rwxrwxrwx permissions on the /user/yarn/ folder.
  4. Ensure that the Hive client is configured on all the nodes of the Hadoop cluster. To verify, run the hive command from each node.
  5. If the Hadoop cluster is enabled for Kerberos, ensure that the Data Integration Service user has a valid Kerberos ticket on all the nodes of the cluster. If the ticket is not valid, the mapping fails. To generate a valid ticket, run the kinit command on each node of the cluster.
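Steps 3 through 5 can be scripted as a pre-flight check that you run on each node. The following is a minimal sketch: the /user/yarn/ path, the hive command, and the kinit command come from the steps above, but the check_cmd helper and the guard logic are illustrative assumptions, not part of the product.

```shell
# Report whether a required CLI tool is on this node's PATH.
# (check_cmd is a hypothetical helper, not part of TDCH or Informatica.)
check_cmd() {
  command -v "$1" >/dev/null 2>&1 && echo "found: $1" || echo "missing: $1"
}

# Step 3: verify that /user/yarn/ exists in HDFS, creating it if needed,
# then list the folder to inspect its owner and permissions.
if [ "$(check_cmd hdfs)" = "found: hdfs" ]; then
  hdfs dfs -test -d /user/yarn/ || hdfs dfs -mkdir -p /user/yarn/
  hdfs dfs -ls -d /user/yarn/
  # If the owner is not yarn, grant rwxrwxrwx so the mapping user can write:
  # hdfs dfs -chmod 777 /user/yarn/
fi

# Step 4: confirm that the Hive client is configured on this node.
if [ "$(check_cmd hive)" = "found: hive" ]; then
  hive --version
fi

# Step 5: on a Kerberos-enabled cluster, check for a valid ticket.
# klist -s is silent and returns nonzero when no valid ticket exists.
if [ "$(check_cmd klist)" = "found: klist" ]; then
  klist -s && echo "valid Kerberos ticket present" \
           || echo "no valid ticket; run kinit as the Data Integration Service user"
fi
```

Run the script as the Data Integration Service user on every node of the cluster, since each check verifies only the node it runs on.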


Updated October 05, 2018