Table of Contents


  1. Preface
  2. Introduction to Informatica Data Engineering Integration
  3. Mappings
  4. Mapping Optimization
  5. Sources
  6. Targets
  7. Transformations
  8. Python Transformation
  9. Data Preview
  10. Cluster Workflows
  11. Profiles
  12. Monitoring
  13. Hierarchical Data Processing
  14. Hierarchical Data Processing Configuration
  15. Hierarchical Data Processing with Schema Changes
  16. Intelligent Structure Models
  17. Blockchain
  18. Stateful Computing
  19. Appendix A: Connections Reference
  20. Appendix B: Data Type Reference
  21. Appendix C: Function Reference

Create the JSON File


Use a JSON file to configure cluster properties when you want to create and use an easily edited set of properties.
  1. Ask the Databricks administrator to export the existing cluster configuration as a JSON file.
  2. Edit the parameters in the file to reflect the required configuration changes.
  3. Verify that the file contains mandatory parameters. See Using a JSON File to Configure Cluster Creation Properties.
  4. Save the file on all machines that host the Data Integration Service. For example:
    <Informatica home directory>/data/Informatica/createClusterJson.json
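The verification in step 3 can be scripted. The following is a minimal sketch in Python, assuming the mandatory parameters match the ones that appear in both examples below; the helper name and the exact parameter list are illustrative, so confirm them against Using a JSON File to Configure Cluster Creation Properties.

```python
import json

# Assumed mandatory parameters, based on the example configurations below.
# Verify the actual list against the cluster creation properties reference.
REQUIRED_KEYS = {"cluster_name", "spark_version", "num_workers", "node_type_id"}

def validate_cluster_config(path):
    """Parse a cluster-configuration JSON file and check mandatory parameters.

    Raises a ValueError that names any missing parameters; json.load raises
    its own error if the file is not well-formed JSON.
    """
    with open(path) as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"missing mandatory parameters: {sorted(missing)}")
    return config
```

Run the helper against the saved file path from step 4 before pointing the Data Integration Service at it; a malformed or incomplete file fails fast with a clear message instead of failing at cluster-creation time.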
The following examples contain the mandatory parameters as well as others that you might want to include. Notice that each example uses the "custom_tags" parameter to pass custom tags.
Example of a JSON file for Databricks on the AWS platform:
{
    "cluster_name": "SampleCluster",
    "spark_version": "7.3.x-scala2.12",
    "num_workers": 1,
    "node_type_id": "i3.xlarge",
    "driver_node_type_id": "i3.xlarge",
    "custom_tags": {
        "Infa_Generated_Tag": "Infa_Generated_Tag"
    },
    "spark_env_vars": {
        "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
    },
    "enable_elastic_disk": false,
    "aws_attributes": {
        "ebs_volume_count": 3,
        "zone_id": "us-east-1e",
        "ebs_volume_size": 100,
        "spot_bid_price_percent": 100,
        "first_on_demand": 1
    }
}
Example of a JSON file for Databricks on the Azure platform:
{
    "cluster_name": "SampleCluster",
    "spark_version": "7.3.x-scala2.12",
    "num_workers": 1,
    "node_type_id": "Standard_DS3_v2",
    "driver_node_type_id": "Standard_DS3_v2",
    "custom_tags": {
        "Infa_Generated_Tag": "Infa_Generated_Tag"
    },
    "spark_env_vars": {
        "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
    },
    "enable_elastic_disk": false
}
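Step 2 above, editing the exported configuration, can also be done programmatically. The following is a hedged sketch: the helper name and file paths are illustrative, and it only applies the "custom_tags" and "spark_env_vars" entries that both examples share.

```python
import json

def add_infa_tags(src_path, dest_path):
    """Load an exported Databricks cluster configuration, add the custom tag
    and Spark environment variable shown in the examples above, and save a
    pretty-printed copy for the Data Integration Service to use."""
    with open(src_path) as f:
        config = json.load(f)
    # "Infa_Generated_Tag" matches the custom_tags entry in both examples.
    config.setdefault("custom_tags", {})["Infa_Generated_Tag"] = "Infa_Generated_Tag"
    config.setdefault("spark_env_vars", {})["PYSPARK_PYTHON"] = \
        "/databricks/python3/bin/python3"
    with open(dest_path, "w") as f:
        json.dump(config, f, indent=4)
    return config
```

Using setdefault preserves any tags or environment variables the administrator already set in the exported file, rather than overwriting those sections wholesale.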

