Table of Contents


  1. Preface
  2. Informatica Developer
  3. The Model Repository
  4. Searches in Informatica Developer
  5. Connections
  6. Physical Data Objects
  7. Flat File Data Objects
  8. Logical View of Data
  9. Viewing Data
  10. Application Deployment
  11. Application Patch Deployment
  12. Application Patch Deployment Example
  13. Continuous Integration and Continuous Delivery (CI/CD)
  14. Object Import and Export
  15. Appendix A: Data Type Reference
  16. Appendix B: Keyboard Shortcuts
  17. Appendix C: Connection Properties

Developer Tool Guide

HDFS Connection Properties

Use a Hadoop File System (HDFS) connection to access data in the Hadoop cluster. The HDFS connection is a file system type connection. You can create and manage an HDFS connection in the Administrator tool, Analyst tool, or the Developer tool. HDFS connection properties are case sensitive unless otherwise noted.
The order of the connection properties might vary depending on the tool where you view them.
The following table describes HDFS connection properties:

Name
    Name of the connection. The name is not case sensitive and must be unique within the domain. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
    ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /

ID
    String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.

Description
    The description of the connection. The description cannot exceed 765 characters.

Location
    The domain where you want to create the connection. Not valid for the Analyst tool.

Type
    The connection type. Default is Hadoop File System.

User Name
    User name to access HDFS.

NameNode URI
    The URI to access the storage system. You can find the value for fs.defaultFS in the core-site.xml configuration set of the cluster configuration.
    If you create connections when you import the cluster configuration, the NameNode URI property is populated by default, and it is updated each time you refresh the cluster configuration. If you manually set this property or override the value, the refresh operation does not update this property.
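For reference, the fs.defaultFS entry in core-site.xml typically looks like the following. This is a minimal illustrative sketch; the host name namenode.example.com and port 8020 are placeholders, not values from your cluster:

    <!-- core-site.xml (illustrative): fs.defaultFS supplies the NameNode URI -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://namenode.example.com:8020</value>
    </property>

In this example, you would set the NameNode URI property of the HDFS connection to hdfs://namenode.example.com:8020.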

Accessing Multiple Storage Types

Use the NameNode URI property in the connection parameters to connect to various storage types. The following table lists each storage type and the NameNode URI format for that storage type:

HDFS
    hdfs://<namenode>:<port>
    where:
      • <namenode> is the host name or IP address of the NameNode.
      • <port> is the port on which the NameNode listens for remote procedure calls (RPC).
    Use hdfs://<nameservice> when the NameNode is configured for high availability (see the example below).

MapR-FS
    maprfs:///

WASB in HDInsight
    wasb://<container_name>@<account_name>.blob.core.windows.net/<path>
    where:
      • <container_name> identifies a specific Azure Storage Blob container. <container_name> is optional.
      • <account_name> identifies the Azure Storage Blob object.
    Example:
    wasb://infabdmoffering1storage.blob.core.windows.net/infabdmoffering1cluster/mr-history

ADLS in HDInsight
    adl://home
When you create a cluster configuration from an Azure HDInsight cluster, the cluster configuration uses either ADLS or WASB as the primary storage. You cannot create a cluster configuration with ADLS or WASB as the secondary storage. You can edit the NameNode URI property in the HDFS connection to connect to a local HDFS location.
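For the NameNode high availability case, fs.defaultFS in core-site.xml points to the nameservice rather than to a single NameNode host. The following sketch is illustrative only; the nameservice name mycluster is a placeholder that the cluster administrator defines in hdfs-site.xml:

    <!-- core-site.xml (illustrative): high-availability cluster with nameservice "mycluster" -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>

With this configuration, set the NameNode URI property of the HDFS connection to hdfs://mycluster instead of a <namenode>:<port> pair.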
