Table of Contents

  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

General Options

The following table describes the general options that you can set for a Databricks cluster:
Cluster Name
    Name of the cluster to create.

Databricks Runtime Version
    The Databricks version to run on the cluster. Default is the latest supported version. To enter a version manually, select (Assign to Task Input). The Developer tool creates a Create Cluster task input called Databricks Runtime Version. Set the value of the input to the Databricks version tag string. For example, set the value to 5.5.x-scala2.11 to use Databricks version 5.5.

Python Version
    The Python version to run if the mapping includes a Python transformation. Default is the latest supported version.

Job Cluster
    Select to create a Databricks job cluster. Default is false.

Warm Pool
    Select to create the Databricks cluster from a warm pool instance. Default is false.

Pool ID
    Value of the DatabricksInstancePoolId property that identifies the warm pool resource. To retrieve the value of this property, open the warm pool in the Databricks workspace and browse to Configuration > Tags. Required when Warm Pool is selected.

Driver Type
    The type of node to use for the driver node. Default is the worker type ID.

Worker Type
    The type of node to use for the worker nodes.

Workers
    The number of worker nodes to create for the cluster. Ignored when the cluster is configured to scale automatically. Default is 1.

Autoscale
    Automatically scales the number of worker nodes based on workload.

Min Workers
    The minimum number of worker nodes to use when the cluster scales automatically. Default is 0.

Max Workers
    The maximum number of worker nodes to use when the cluster scales automatically. Default is 1.
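The options above correspond roughly to fields in the Databricks Clusters API create-cluster request. The sketch below is illustrative only, not Informatica's internal implementation; the function name and parameters are hypothetical, and the sample node type and version strings are assumptions:

```python
def build_cluster_spec(name, runtime_version, worker_type,
                       driver_type=None, workers=1,
                       autoscale=None, pool_id=None):
    """Hypothetical sketch: map the general options to a
    Databricks Clusters API create-cluster request body."""
    spec = {
        "cluster_name": name,              # Cluster Name
        "spark_version": runtime_version,  # Databricks Runtime Version tag string
        "node_type_id": worker_type,       # Worker Type
        # Driver Type defaults to the worker type ID when not set.
        "driver_node_type_id": driver_type or worker_type,
    }
    if pool_id:
        # Warm Pool: create the cluster from an existing instance pool.
        spec["instance_pool_id"] = pool_id  # Pool ID (DatabricksInstancePoolId)
        # Pooled clusters take their node types from the pool.
        spec.pop("node_type_id")
        spec.pop("driver_node_type_id")
    if autoscale:
        # Autoscale: the Workers value is ignored; Min/Max Workers apply.
        min_workers, max_workers = autoscale
        spec["autoscale"] = {"min_workers": min_workers,
                             "max_workers": max_workers}
    else:
        spec["num_workers"] = workers  # Workers
    return spec

# Example: an autoscaling cluster on Databricks 5.5 (defaults: min 0, max 1).
spec = build_cluster_spec("dei-cluster", "5.5.x-scala2.11",
                          worker_type="i3.xlarge", autoscale=(0, 1))
```

Note how the sketch mirrors the table's defaults: omitting Driver Type reuses the worker type, and enabling autoscale drops the fixed worker count in favor of the min/max bounds.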
