Table of Contents

  1. Preface
  2. Introduction to Data Engineering Administration
  3. Authentication
  4. Running Mappings on a Cluster with Kerberos Authentication
  5. Authorization
  6. Cluster Configuration
  7. Cloud Provisioning Configuration
  8. Data Integration Service Processing
  9. Appendix A: Connections Reference
  10. Appendix B: Monitoring REST API

Databricks Cloud Provisioning Configuration Properties

The properties in the Databricks cloud provisioning configuration enable the Data Integration Service to contact and create resources on the Databricks cloud platform.
The Databricks cloud provisioning configuration includes the following properties:

Name
  Name of the cloud provisioning configuration. Because the Administrator tool lists cloud provisioning configuration objects with other connections, use a naming convention such as "CPC" as part of the object name to help identify it.

ID
  The cluster ID of the Databricks cluster.

Description
  Optional description of the cloud provisioning configuration.

Databricks domain
  Domain name of the Databricks deployment.

Databricks token ID
  The token ID created within Databricks that is required for authentication. If the token has an expiration date, verify that you get a new token from the Databricks administrator before it expires.

Advanced Properties
  Advanced properties that are unique to the Databricks cloud provisioning configuration.
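For illustration, a completed configuration might look like the following sketch. All values are hypothetical placeholders, not defaults; the "CPC" prefix follows the naming convention suggested above:

```
Name:                 CPC_Databricks_Prod
ID:                   <Databricks cluster ID>
Description:          Provisioning configuration for the production Databricks workspace
Databricks domain:    <your-workspace-domain>
Databricks token ID:  <token provided by the Databricks administrator>
```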

Advanced Properties

Configure the following properties in the Advanced Properties field of the Databricks cloud provisioning configuration:
infaspark.pythontx.exec
Required to run a Python transformation on the Databricks Spark engine. Set to the location of the Python executable binary on the worker nodes in the Databricks cluster.
When you provision the cluster at run time, set this property in the Databricks cloud provisioning configuration. Otherwise, set it on the Databricks connection.
For example, set to:
infaspark.pythontx.exec=/databricks/python3/bin/python3
infaspark.pythontx.executorEnv.PYTHONHOME
Required to run a Python transformation on the Databricks Spark engine. Set to the location of the Python installation directory on the worker nodes in the Databricks cluster.
When you provision the cluster at run time, set this property in the Databricks cloud provisioning configuration. Otherwise, set it on the Databricks connection.
For example, set to:
infaspark.pythontx.executorEnv.PYTHONHOME=/databricks/python3
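The two advanced properties are related: infaspark.pythontx.exec normally points at the bin/python3 binary inside the directory given by infaspark.pythontx.executorEnv.PYTHONHOME. A minimal consistency check, using the example values from this section, might look like this sketch:

```python
import os

# Paths taken from the examples above:
# infaspark.pythontx.executorEnv.PYTHONHOME -> Python installation directory
# infaspark.pythontx.exec                   -> Python executable binary
pythonhome = "/databricks/python3"
python_exec = "/databricks/python3/bin/python3"

# The executable is expected to be the "bin/python3" binary under PYTHONHOME,
# so the two property values should agree with each other.
expected_exec = os.path.join(pythonhome, "bin", "python3")
print(expected_exec == python_exec)  # True when the two properties are consistent
```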
