Cloud Data Integration Current Version
Connection Name
Name of the Hadoop Files V2 connection.
Description
Description of the connection. The description cannot exceed 765 characters.
Type
Type of connection. Select Hadoop Files V2.
Runtime Environment
The name of the runtime environment where you want to run the tasks.
User Name
Required to read data from or write data to HDFS. Enter a user name that has access to the single-node HDFS location.
NameNode URI
The URI to access HDFS.
Use the following format to specify the name node URI in Cloudera, Amazon EMR, and Hortonworks distributions:
If the Hadoop cluster is configured for high availability, you must copy the fs.defaultFS value from the core-site.xml file and append / to specify the name node URI.
For example, the following snippet shows the fs.defaultFS value in a sample core-site.xml file:
In the snippet, the fs.defaultFS value, with / appended, gives the corresponding name node URI.
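The snippet referenced above is not reproduced in this page. A representative core-site.xml entry might look like the following; the nameservice name nameservice1 is an assumption for illustration, not a value from the original document:

```xml
<!-- Illustrative core-site.xml fragment. "nameservice1" is an assumed
     high-availability nameservice name; substitute your cluster's value. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://nameservice1</value>
</property>
```

With this value, appending / gives the name node URI hdfs://nameservice1/.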
Specify either the name node URI or the local path. Do not specify the name node URI if you want to read data from or write data to a local file system path.
Local Path
A local file system path to read data from and write data to. Read the following conditions to specify the local path:
Default value for Local Path is NA.
Configuration Files Path
The directory that contains the Hadoop configuration files.
Copy the core-site.xml, hdfs-site.xml, and hive-site.xml files from the Hadoop cluster and add them to a folder on the Linux box.
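The copy step above can be sketched as follows. This is a minimal example, assuming a cluster host named namenode.example.com and the conventional /etc/hadoop/conf source path; the real copy commands are shown as comments, and placeholder files stand in for the actual configuration files:

```shell
# Hypothetical staging of Hadoop configuration files for the runtime machine.
# Host name, user, and source paths are assumptions, not from the original doc.
CONF_DIR=/tmp/hadoop-conf-demo
mkdir -p "$CONF_DIR"

# On a real cluster you would copy the files over, for example:
#   scp user@namenode.example.com:/etc/hadoop/conf/core-site.xml "$CONF_DIR"/
#   scp user@namenode.example.com:/etc/hadoop/conf/hdfs-site.xml "$CONF_DIR"/
#   scp user@namenode.example.com:/etc/hive/conf/hive-site.xml  "$CONF_DIR"/
# This sketch creates empty placeholders to show the expected layout:
for f in core-site.xml hdfs-site.xml hive-site.xml; do
  touch "$CONF_DIR/$f"
done
ls "$CONF_DIR"
```

The resulting folder is what you would point the Configuration Files Path property at.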
Keytab File
The file that contains encrypted keys and Kerberos principals to authenticate the machine.
Users assigned the superuser privilege can perform all the tasks that a user with the administrator privilege can perform.
You can enable different users to run mappings in a Hadoop cluster that uses Kerberos authentication or connect to sources and targets that use Kerberos authentication. To enable different users to run mappings or connect to big data sources and targets, you must configure user impersonation.
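In Hadoop, user impersonation is typically enabled through proxyuser settings in the cluster's core-site.xml. The following is a hedged sketch, assuming the agent process runs as a user named infaagent; the user name and the wildcard host/group scopes are assumptions, not values from this document:

```xml
<!-- Illustrative core-site.xml proxyuser entries for impersonation.
     "infaagent" and the "*" wildcards are assumptions; restrict the
     hosts and groups values in production. -->
<property>
  <name>hadoop.proxyuser.infaagent.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.infaagent.groups</name>
  <value>*</value>
</property>
```

With these entries, the infaagent user can submit work to the cluster on behalf of other users.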