Table of Contents

  1. Preface
  2. Introduction to Big Data Management Administration
  3. Authentication
  4. Running Mappings on a Cluster with Kerberos Authentication
  5. Authorization
  6. Cluster Configuration
  7. Cloud Provisioning Configuration
  8. Data Integration Service Processing
  9. Connections
  10. Multiple Blaze Instances on a Cluster
  11. Monitoring REST API

Big Data Management Administrator Guide

Create the Import File

To import the cluster configuration from a file, you must create an archive (.zip or .tar) file that contains an .xml file with the cluster properties.
To create the .xml file, get the required information from the Databricks administrator. You can give the file any name and store it locally.
The following table describes the properties required to import the cluster information:

Property Name    Description
cluster_name     Name of the Databricks cluster.
cluster_id       The cluster ID of the Databricks cluster.
baseURL          URL to access the Databricks cluster.
accesstoken      The token ID created within Databricks that is required for authentication.
Optionally, you can include other properties specific to the Databricks environment.
When you complete the .xml file, compress it into a .zip or .tar file for import.
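You can also script these steps. The following Python sketch is illustrative only: it assumes the hypothetical file names databricks_import.xml and databricks_import.zip and uses placeholder property values, and shows one way to generate the .xml file and compress it into a .zip archive. Replace the placeholder values with the details that you get from the Databricks administrator.

import xml.etree.ElementTree as ET
import zipfile

# Placeholder values; replace them with details from the Databricks administrator.
properties = {
    "cluster_name": "my_cluster",
    "cluster_id": "0926-294544-bckt123",
    "baseURL": "https://provide.adatabricks.net/",
    "accesstoken": "dapicf76c2d4567c6sldn654fe875936e778",
}

# Build the <configuration> document with one <property> element per entry.
configuration = ET.Element("configuration")
for name, value in properties.items():
    prop = ET.SubElement(configuration, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

# Write the .xml file, then compress it into a .zip archive for import.
ET.ElementTree(configuration).write(
    "databricks_import.xml", encoding="UTF-8", xml_declaration=True
)
with zipfile.ZipFile("databricks_import.zip", "w") as archive:
    archive.write("databricks_import.xml")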

Sample Import File

The following text shows a sample import file with the required properties:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>cluster_name</name>
        <value>my_cluster</value>
    </property>
    <property>
        <name>cluster_id</name>
        <value>0926-294544-bckt123</value>
    </property>
    <property>
        <name>baseURL</name>
        <value>https://provide.adatabricks.net/</value>
    </property>
    <property>
        <name>accesstoken</name>
        <value>dapicf76c2d4567c6sldn654fe875936e778</value>
    </property>
</configuration>
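Before you import the archive, you can optionally confirm that the base URL, access token, and cluster ID are valid. The following Python sketch is a minimal example, not part of the import procedure: it assumes the requests library is installed, that the workspace exposes the Databricks REST API 2.0 Clusters endpoint (api/2.0/clusters/get), and it reuses the placeholder values from the sample file.

import requests

# Placeholder values copied from the import file; replace them with your own.
base_url = "https://provide.adatabricks.net/"
access_token = "dapicf76c2d4567c6sldn654fe875936e778"
cluster_id = "0926-294544-bckt123"

# Query the Databricks Clusters API with the token to confirm the details resolve.
response = requests.get(
    base_url.rstrip("/") + "/api/2.0/clusters/get",
    headers={"Authorization": "Bearer " + access_token},
    params={"cluster_id": cluster_id},
)
response.raise_for_status()
print("Cluster state:", response.json().get("state"))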
