Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Advanced Properties for Azure HDInsight

Set the advanced properties for an Azure HDInsight cluster.
The following table describes the advanced properties for a Microsoft Azure HDInsight cluster:

Cluster Name
    Name of the cluster to create.

Azure Cluster Type
    Type of the cluster to create. Choose one of the options in the drop-down list. Default is Hadoop.

HDInsight version
    HDInsight version to run on the cluster. Enter the HDInsight version tag string to designate the version. Default is the latest supported version.

Azure Cluster Location
    Use the drop-down list to choose the location in which to create the cluster.

Head Node VM Size
    Size of the head node instance to create. Default is Standard_D12_v2.

Number of Worker Node Instances
    Number of worker node instances to create in the cluster. Default is 2.

Worker Node VM Size
    Size of the worker node instances to create. Default is Standard_D13_v2.

Default Storage Type
    Primary storage type to use for the cluster. Choose one of the following options:
      • Azure Data Lake Store
      • Azure BLOB storage account
    Default is BLOB storage.

Default Storage Container or Root Mount Path
    Default container for data. Type one of the following paths:
      • For ADLS storage, type the path to the storage. For example, type storage-name or storage-name/folder-name.
      • For blob storage, type the path to the container in the format /path/.

Log Location
    Optional. Path to the directory that stores workflow event logs. Default is /app-logs.

Attach External Hive Metastore
    If you select this option, the workflow attaches an external Hive metastore to the cluster, provided that you configured an external Hive metastore in the cloud provisioning configuration.

Bootstrap JSON String
    JSON statement to run during cluster creation. You can use this statement to configure cluster details. For example, you can configure Hadoop properties on the cluster, add tags to cluster resources, or run script actions. Choose one of the following methods to populate the property:
      • Type the JSON statement. Use the following format:
        {
          "core-site": {
            "<sample_property_key1>": "<sample_property_val1>",
            "<sample_property_key2>": "<sample_property_val2>"
          },
          "tags": { "<tag_key>": "<tag_val>" },
          "scriptActions": [
            { "name": "setenvironmentvariable", "uri": "scriptActionUri", "parameters": "headnode" }
          ]
        }
      • Provide the path to a file that contains the JSON statement, as shown in the sketch after this table. Format:
        file://<path_to_bootstrap_file>
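
For example, the following is a minimal sketch of a bootstrap file. The Hadoop property, tag, and script action values are hypothetical illustrations (JSON does not allow comments, so they cannot be flagged inline); substitute the properties, tags, and script actions that your cluster requires:

    {
      "core-site": {
        "fs.trash.interval": "60"
      },
      "tags": {
        "environment": "development"
      },
      "scriptActions": [
        {
          "name": "installdependencies",
          "uri": "https://example.com/scripts/install-dependencies.sh",
          "parameters": "headnode"
        }
      ]
    }

If you save the statement to a hypothetical path such as /opt/informatica/bootstrap.json on the Data Integration Service machine, you can reference it in the Bootstrap JSON String property as file:///opt/informatica/bootstrap.json.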

Using ADLS Gen2 Resources with a Cluster Workflow

When a cluster workflow accesses ADLS Gen2 resources, the Data Integration Service machine must run in the same time zone as the cluster.
Set the TZ environment variable on each Data Integration Service machine to the time zone of the cluster, and then restart the domain.
To set the time zone, run the following command on the Data Integration Service machine:
setenv TZ <cluster time zone>
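
For example, if the cluster runs in the US Eastern time zone, the command might look like the following sketch. The time zone value is an illustration; use the value that matches your cluster. The setenv syntax assumes a csh-family shell; in a bash-family shell, the equivalent is export TZ=America/New_York.

    # Example only: replace America/New_York with the cluster's time zone
    setenv TZ America/New_York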
