You can select one of the following authentication types: None, LDAP, or Kerberos.
JDBC URL
The JDBC URL to connect to Hive. Specify the format based on your requirement.
JDBC Driver
The JDBC driver class to connect to Hive.
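For reference, the standard Apache Hive JDBC URL format and driver class look like the following. The host, port, and database are placeholders, and the values for your cluster may differ:

```
jdbc:hive2://<host>:<port>/<database>
org.apache.hive.jdbc.HiveDriver
```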
Username
The user name to connect to Hive in LDAP or None mode.
Password
The password to connect to Hive in LDAP or None mode.
Principal Name
The principal name to connect to Hive through Kerberos authentication.
Impersonation Username
The user name of the user that the Secure Agent impersonates to run mappings on a Hadoop cluster. You can configure user impersonation to enable different users to run mappings or connect to Hive. The impersonation name is required for the Hadoop connection if the Hadoop cluster uses Kerberos authentication.
Keytab Location
The path and file name of the keytab file for Kerberos login.
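Before configuring Kerberos authentication, it can help to confirm that the keytab file contains the expected principal. A hypothetical check using the standard MIT Kerberos tools; the keytab path and principal name are placeholders:

```shell
# List the principals stored in the keytab (path is a placeholder).
klist -kt /path/to/hive.keytab

# Optionally verify that the keytab can obtain a ticket for the principal.
kinit -kt /path/to/hive.keytab hive/node.example.com@EXAMPLE.COM
```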
Configuration Files Path
The directory that contains the Hadoop configuration files for the client.
Copy the site.xml files from the Hadoop cluster to a folder on the Linux machine, and specify the path in this field before you use the connection in a mapping to access Hive on a Hadoop cluster.
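As an illustration, the client configuration files could be gathered as follows. The host name and paths are hypothetical, and the exact set of site.xml files depends on your Hadoop distribution:

```shell
# Create a local folder for the Hadoop client configuration files.
mkdir -p /opt/infa/hadoop-conf

# Copy the *-site.xml files from a cluster node (hypothetical host and paths).
scp user@cluster-node:/etc/hadoop/conf/*-site.xml /opt/infa/hadoop-conf/
scp user@cluster-node:/etc/hive/conf/hive-site.xml /opt/infa/hadoop-conf/
```

You would then enter the local folder path, such as /opt/infa/hadoop-conf, in this field.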
DFS URI
The URI to access the Distributed File System (DFS), such as Amazon S3, Microsoft Azure Data Lake Storage, or HDFS.
For mappings in advanced mode that run on an advanced cluster, Azure Data Lake Storage Gen2 is supported on the Azure HDInsight cluster.
Based on the DFS you want to access, specify the required storage and bucket name.
For example, for HDFS, refer to the value of the fs.defaultFS property in the core-site.xml file of the Hadoop cluster and enter the same value in this field.
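For instance, the relevant entry in core-site.xml might look like the following; the NameNode host and port are placeholders:

```xml
<!-- Excerpt from core-site.xml on the Hadoop cluster (placeholder values). -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```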
DFS Staging Directory
The staging directory in the Hadoop cluster where the Secure Agent stages the data. You must have full permissions for the DFS staging directory.
You can specify a transparently encrypted folder as the staging directory.
Hive Staging Database
The Hive database where external or temporary tables are created. You must have full permissions for the Hive staging database.
Additional Properties
The additional properties required to access the DFS.
Applies to mappings in advanced mode.
Configure the property as follows:
<DFS property name>=<value>;<DFS property name>=<value>
To access the Amazon S3 file system, specify the access key, secret key, and the Amazon S3 property name, each separated by a semicolon:
To access the Azure Data Lake Storage Gen2 file system, specify the authentication type, authentication provider, client ID, client secret, and the client endpoint, each separated with a semicolon:
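The examples above can be sketched as follows. The property names are the standard Hadoop s3a and abfs names; the exact names the connector expects are an assumption here, and the key, ID, and endpoint values are placeholders. The small parser is a hypothetical helper that only illustrates the semicolon-delimited format; the connector performs the real parsing.

```python
# Example property strings in the '<DFS property name>=<value>;...' format.
# Standard Hadoop s3a/abfs property names (assumed); values are placeholders.
S3_PROPS = ("fs.s3a.access.key=<access key>;"
            "fs.s3a.secret.key=<secret key>")
ADLS_GEN2_PROPS = (
    "fs.azure.account.auth.type=OAuth;"
    "fs.azure.account.oauth.provider.type="
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider;"
    "fs.azure.account.oauth2.client.id=<client ID>;"
    "fs.azure.account.oauth2.client.secret=<client secret>;"
    "fs.azure.account.oauth2.client.endpoint=<client endpoint>")

def parse_dfs_properties(prop_string):
    """Split a semicolon-delimited property string into a name-value dict."""
    properties = {}
    for pair in prop_string.split(";"):
        if pair:  # tolerate a trailing semicolon
            name, _, value = pair.partition("=")
            properties[name.strip()] = value.strip()
    return properties

print(parse_dfs_properties(S3_PROPS))
```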
*These fields are mandatory parameters.
Updated November 30, 2022