Table of Contents

  1. Preface
  2. Introduction to Hive Connector
  3. Hive connections
  4. Mappings and mapping tasks with Hive Connector
  5. Migrating a mapping
  6. Data type reference
  7. Troubleshooting

Hive Connector

Access the DFS for Hive on Cloudera Data Platform and Cloudera Data Warehouse using mappings

To access the distributed file system (DFS) for Hive endpoints on Cloudera Data Platform and Cloudera Data Warehouse in Hive mappings, configure the required DFS properties in the core-site.xml file.
To access the Amazon S3 file system, specify the access key, the secret key, and the S3A file system implementation class in the core-site.xml file:
fs.s3a.<bucket_name>.access.key=<access key value>
fs.s3a.<bucket_name>.secret.key=<secret key value>
fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
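The key=value pairs above are a shorthand. In the core-site.xml file itself, each entry is declared as a standard Hadoop XML property element. A minimal sketch, where the bucket name and credential values are placeholders you replace with your own:

```xml
<configuration>
  <!-- Per-bucket S3 credentials; my-bucket is a placeholder bucket name -->
  <property>
    <name>fs.s3a.my-bucket.access.key</name>
    <value>your-access-key-value</value>
  </property>
  <property>
    <name>fs.s3a.my-bucket.secret.key</name>
    <value>your-secret-key-value</value>
  </property>
  <!-- File system implementation class for the s3a:// scheme -->
  <property>
    <name>fs.s3a.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
</configuration>
```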
To access the Azure Data Lake Storage Gen2 file system, specify the authentication type, authentication provider, client ID, client secret, and client endpoint in the core-site.xml file:
fs.azure.account.auth.type=<Authentication type>
fs.azure.account.oauth.provider.type=<Authentication_provider>
fs.azure.account.oauth2.client.id=<Client_ID>
fs.azure.account.oauth2.client.secret=<Client-secret>
fs.azure.account.oauth2.client.endpoint=<ADLS Gen2 endpoint>
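These Azure Data Lake Storage Gen2 entries are likewise declared as XML property elements in core-site.xml. A sketch assuming OAuth 2.0 client-credentials authentication with the ClientCredsTokenProvider class shipped in the hadoop-azure library; the tenant ID, client ID, and client secret values are placeholders:

```xml
<configuration>
  <property>
    <name>fs.azure.account.auth.type</name>
    <value>OAuth</value>
  </property>
  <!-- Assumes the client-credentials token provider from hadoop-azure -->
  <property>
    <name>fs.azure.account.oauth.provider.type</name>
    <value>org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider</value>
  </property>
  <property>
    <name>fs.azure.account.oauth2.client.id</name>
    <value>your-application-client-id</value>
  </property>
  <property>
    <name>fs.azure.account.oauth2.client.secret</name>
    <value>your-client-secret</value>
  </property>
  <!-- OAuth token endpoint for your Azure AD tenant; tenant ID is a placeholder -->
  <property>
    <name>fs.azure.account.oauth2.client.endpoint</name>
    <value>https://login.microsoftonline.com/your-tenant-id/oauth2/token</value>
  </property>
</configuration>
```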
