Table of Contents

  1. Preface
  2. Part 1: Version 10.5.1 - 10.5.1.1
  3. Part 2: Versions 10.5 - 10.5.0.1
  4. Part 3: Versions 10.4.1 - 10.4.1.3
  5. Part 4: Versions 10.4 - 10.4.0.2
  6. Part 5: Versions 10.2.2 - 10.2.2 HotFix 1
  7. Part 6: Version 10.2.1
  8. Part 7: Version 10.2 - 10.2 HotFix 2

What's New and Changed (10.5.1.1)

infacmd isp createConnection

This section lists new, changed, and removed Hadoop connection options for the infacmd isp createConnection command in 10.2.

Hadoop Connection Options

The following table describes new Hadoop connection options available in 10.2:
  • clusterConfigId - The cluster configuration ID associated with the Hadoop cluster.
  • blazeJobMonitorURL - The host name and port number for the Blaze Job Monitor.
  • rejDirOnHadoop - Enables hadoopRejDir. Used to specify a location to move reject files when you run mappings.
  • hadoopRejDir - The remote directory where the Data Integration Service moves reject files when you run mappings. Enable the reject directory using rejDirOnHadoop.
  • sparkEventLogDir - An optional HDFS file path of the directory that the Spark engine uses to log events.
  • sparkYarnQueueName - The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster.
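You set these options with the -o argument of infacmd isp createConnection. The following is a minimal sketch; the domain name, user, password, connection name, and option values are placeholders, not values from this document:

```shell
# Sketch: create a Hadoop connection with some of the new 10.2 options.
# MyDomain, admin, hadoop_conn, and the option values are hypothetical.
infacmd.sh isp createConnection -dn MyDomain -un admin -pd password \
    -cn hadoop_conn -cid hadoop_conn -ct HADOOP \
    -o "clusterConfigId='my_cluster_config' \
        sparkYarnQueueName='root.default' \
        sparkEventLogDir='/spark/eventlog'"
```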
The following table describes Hadoop connection options that are renamed in 10.2:
  • blazeYarnQueueName (previously cadiAppYarnQueueName) - The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. The name is case sensitive.
  • blazeExecutionParameterList (previously cadiExecutionParameterList) - Custom properties that are unique to the Blaze engine.
  • blazeMaxPort (previously cadiMaxPort) - The maximum value for the port number range for the Blaze engine.
  • blazeMinPort (previously cadiMinPort) - The minimum value for the port number range for the Blaze engine.
  • blazeUserName (previously cadiUserName) - The owner of the Blaze service and Blaze service logs.
  • blazeStagingDirectory (previously cadiWorkingDirectory) - The HDFS file path of the directory that the Blaze engine uses to store temporary files.
  • hiveStagingDatabaseName (previously databaseName) - Namespace for Hive staging tables.
  • impersonationUserName (previously hiveUserName) - Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment.
  • sparkStagingDirectory (previously SparkHDFSStagingDir) - The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs.
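When you script against an upgraded domain, use the new option names in infacmd isp updateConnection. A minimal sketch, with hypothetical domain, user, connection, and path values:

```shell
# Sketch: update an existing Hadoop connection using the renamed options
# (e.g. blazeStagingDirectory, formerly cadiWorkingDirectory).
# MyDomain, admin, hadoop_conn, and the values are hypothetical.
infacmd.sh isp updateConnection -dn MyDomain -un admin -pd password \
    -cn hadoop_conn \
    -o "blazeStagingDirectory='/blaze/staging' blazeUserName='blaze'"
```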
The following table describes Hadoop connection options that are removed from the UI and imported into the cluster configuration:
  • RMAddress - The service within Hadoop that submits requests for resources or spawns YARN applications. Imported into the cluster configuration as the property yarn.resourcemanager.address.
  • defaultFSURI - The URI to access the default Hadoop Distributed File System. Imported into the cluster configuration as the property fs.defaultFS or fs.default.name.
The following table describes Hadoop connection options that are deprecated in 10.2 and are no longer available in the UI:
  • metastoreDatabaseDriver* - Driver class name for the JDBC data store.
  • metastoreDatabasePassword* - The password for the metastore user name.
  • metastoreDatabaseURI* - The JDBC connection URI used to access the data store in a local metastore setup.
  • metastoreDatabaseUserName* - The metastore database user name.
  • metastoreMode* - Controls whether to connect to a remote metastore or a local metastore.
  • remoteMetastoreURI* - The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property hive.metastore.uris.
  • jobMonitoringURL - The URL for the MapReduce JobHistory server.
* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values you set in a previous release are saved in the repository, but they do not appear in the connection properties.
The following properties are dropped. If they appear in connection strings, they will have no effect:
  • hadoopClusterInfoExecutionParametersList
  • passThroughSecurityEnabled
  • hiverserver2Enabled
  • hiveInfoExecutionParametersList
  • cadiPassword
  • sparkMaster
  • sparkDeployMode
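After an upgrade, you can inspect which options remain set on a connection with infacmd isp listConnectionOptions. A sketch, with hypothetical domain, user, and connection names:

```shell
# Sketch: list the options currently set on a connection to confirm that
# dropped properties no longer take effect. All names are hypothetical.
infacmd.sh isp listConnectionOptions -dn MyDomain -un admin -pd password \
    -cn hadoop_conn
```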

HBase Connection

The following table describes HBase connection options that are removed from the connection and imported into the cluster configuration:
  • ZOOKEEPERHOSTS - Name of the machine that hosts the ZooKeeper server.
  • ZOOKEEPERPORT - Port number of the machine that hosts the ZooKeeper server.
  • ISKERBEROSENABLED - Enables the Informatica domain to communicate with the HBase master server or region server that uses Kerberos authentication.
  • hbaseMasterPrincipal - Service Principal Name (SPN) of the HBase master server.
  • hbaseRegionServerPrincipal - Service Principal Name (SPN) of the HBase region server.

Hive Connection

The following table describes Hive connection options that are removed from the connection and imported into the cluster configuration:
  • defaultFSURI - The URI to access the default Hadoop Distributed File System.
  • jobTrackerURI - The service within Hadoop that submits the MapReduce tasks to specific nodes in the cluster.
  • hiveWarehouseDirectoryOnHDFS - The absolute HDFS file path of the default database for the warehouse that is local to the cluster.
  • metastoreExecutionMode - Controls whether to connect to a remote metastore or a local metastore.
  • metastoreDatabaseURI - The JDBC connection URI used to access the data store in a local metastore setup.
  • metastoreDatabaseDriver - Driver class name for the JDBC data store.
  • metastoreDatabaseUserName - The metastore database user name.
  • metastoreDatabasePassword - The password for the metastore user name.
  • remoteMetastoreURI - The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property hive.metastore.uris.

HBase Connection Options for MapR-DB

The ISKERBEROSENABLED connection option is obsolete and is imported into the cluster configuration.
