Table of Contents


  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Azure HDInsight Advanced Properties for the Create Cluster Task

The following table describes the Advanced properties for a Microsoft Azure HDInsight cluster:
Cluster Name
Name of the cluster to create.
Azure Cluster Type
Type of the cluster to be created.
Choose one of the options in the drop-down list.
Default is Hadoop.
HDInsight Version
HDInsight version to run on the cluster. Enter the HDInsight version tag string to designate the version.
Default is the latest version supported.
Azure Cluster Location
Use the drop-down list to choose the location in which to create the cluster.
Head Node VM Size
Size of the head node instance to create.
Default is Standard_D12_v2.
Number of Worker Node Instances
Number of worker node instances to create in the cluster.
Default is 2.
Worker Node VM Size
Size of the worker node instance to create.
Default is Standard_D13_v2.
Default Storage Type
Primary storage type to be used for the cluster.
Choose one of the following options:
  • Azure Data Lake Store
  • Azure BLOB storage account
Default is Azure BLOB storage account.
Default Storage Container or Root Mount Path
Default container for data. Type one of the following paths:
  • For ADLS storage, type the path to the storage.
  • For blob storage, type the path to the container.
Log Location
Optional. Path to the directory to store workflow event logs.
Attach External Hive Metastore
If you select this option and you configured an external Hive metastore in the cloud provisioning configuration, the workflow attaches that metastore to the cluster.
Bootstrap JSON String
JSON statement to run during cluster creation. You can use this statement to configure cluster details. For example, you could configure Hadoop properties on the cluster, add tags to cluster resources, or run script actions.
Choose one of the following methods to populate the property:
  • Type the JSON statement. Use the following format:
    { "core-site" : { "<sample_property_key1>": "<sample_property_val1>", "<sample_property_key2>": "<sample_property_val2>" }, "tags": { "<tag_key>": "<tag_val>" }, "scriptActions": [ { "name": "setenvironmentvariable", "uri": "scriptActionUri", "parameters": "headnode" } ] }
  • Provide a path to a file that contains the JSON statement.
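For illustration, a filled-in bootstrap JSON string that sets one core-site property, tags the cluster resources, and runs a script action might look like the following sketch. The property value, tag names, script action name, and script URI shown here are hypothetical placeholders chosen for this example, not values that the Create Cluster task requires:
    {
      "core-site": {
        "fs.trash.interval": "1440"
      },
      "tags": {
        "environment": "development",
        "project": "sales-analytics"
      },
      "scriptActions": [
        {
          "name": "installpythonpackages",
          "uri": "https://<storage_account>.blob.core.windows.net/scripts/install-packages.sh",
          "parameters": "headnode"
        }
      ]
    }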
