Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Mappings in the Hadoop Environment
  4. Mapping Sources in the Hadoop Environment
  5. Mapping Targets in the Hadoop Environment
  6. Mapping Transformations in the Hadoop Environment
  7. Processing Hierarchical Data on the Spark Engine
  8. Configuring Transformations to Process Hierarchical Data
  9. Processing Unstructured and Semi-structured Data with an Intelligent Structure Model
  10. Stateful Computing on the Spark Engine
  11. Monitoring Mappings in the Hadoop Environment
  12. Mappings in the Native Environment
  13. Profiles
  14. Native Environment Optimization
  15. Cluster Workflows
  16. Connections
  17. Data Type Reference
  18. Function Reference
  19. Parameter Reference

Big Data Management User Guide

How to Develop a Mapping to Process Hierarchical Data

Develop a mapping with complex ports, operators, and functions to process hierarchical data on the Spark engine.
The tasks, and the order in which you perform them, depend on the mapping scenario.
The following list outlines the high-level tasks to develop and run a mapping to read, write, and process hierarchical data in complex files.
Create an HDFS connection.
Create a Hadoop Distributed File System (HDFS) connection to access data in complex files stored in HDFS. You can create and manage an HDFS connection in the Administrator tool or the Developer tool.
Create a complex file data object.
  1. Create a complex file data object to represent the complex files in HDFS as sources or targets.
    The Developer tool creates the read and write operations when you create the complex file data object.
  2. Configure the complex file data object properties.
  3. In the read and write operations, enable the column file properties to project columns in the complex files as complex data types.
Create a mapping and add mapping objects.
  1. Create a mapping, and add Read and Write transformations.
    To read from and write to a complex file, add Read and Write transformations based on the complex file data object.
    To write to an Avro or Parquet file, you can also create a complex file target from an existing transformation in the mapping.
  2. Based on the mapping logic, add other transformations that are supported on the Spark engine.
Generate struct data.
Based on the mapping scenario, use one of the hierarchical conversion wizards to generate struct data. You can also perform the following steps manually:
Create or import complex data type definitions for struct ports.
  1. Create or import complex data type definitions that represent the schema of the struct data.
    The complex data type definitions are stored in the type definition library, which is a Model repository object. The default name of the type definition library is Type_Definition_Library.
  2. If a mapping uses one or more mapplets, rename the type definition libraries in the mapping and the mapplets to ensure that the names are unique.
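For example, expressions and type configurations reference a complex data type definition by its fully qualified name, which is why the library names must be unique. A minimal sketch, where typeAddress is an illustrative type definition name:

  :Type.Type_Definition_Library.typeAddress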
Create and configure struct ports in transformations.
  1. Create ports in transformations and assign the struct complex data type.
  2. Specify the type configuration for the struct ports.
    You must reference a complex data type definition for the struct port.
  3. Create expressions with complex functions to generate struct data.
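For example, the following sketch uses the STRUCT_AS function to generate struct data that conforms to a complex data type definition. The address_struct port and the typeAddress definition are illustrative names:

  STRUCT_AS(:Type.Type_Definition_Library.typeAddress, address_struct)

To build a struct directly from primitive ports, you can instead use the STRUCT function, which takes element name and value pairs, for example STRUCT('city', city, 'state', state). Verify the exact signatures in the Function Reference chapter for your version.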
Modify struct data.
You can convert struct data to relational or hierarchical data. If the struct data contains elements of primitive data types, you can extract the elements as relational data. If the struct data contains elements of complex data types, you can extract the elements as hierarchical data. Based on the mapping scenario, use one of the hierarchical conversion wizards to modify struct data. You can also perform the following steps manually:
  1. Create output ports with port properties that match the element of the struct data that you want to extract.
  2. Create expressions with complex operators or complex functions to modify the struct data.
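For example, the following sketches use the dot operator to extract elements from a struct port named address_struct (the port and element names are illustrative):

  address_struct.city
  address_struct.geo.latitude

The first expression extracts a primitive element that you can link to a relational output port. The second chains the dot operator to extract an element of a nested struct as hierarchical data.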
Generate array data.
  1. Create ports in transformations and assign the array complex data type.
  2. Specify the type configuration for the array ports.
  3. Create expressions with complex functions to generate array data.
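For example, the following sketch uses the ARRAY function to generate an array from ports of the same primitive data type (the port names are illustrative):

  ARRAY(phone_home, phone_work, phone_mobile)

In an Aggregator transformation, you can instead use the COLLECT_LIST aggregate function, which returns an array of the values in a group, for example COLLECT_LIST(order_id).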
Modify array data.
You can convert array data to relational or hierarchical data. If the array data contains elements of primitive data types, you can extract the elements as relational data. If the array data contains elements of complex data types, you can extract the elements as hierarchical data. Based on the mapping scenario, use one of the hierarchical conversion wizards to modify array data. You can also perform the following steps manually:
  1. Create output ports with port properties that match the element of the array data that you want to extract.
  2. Create expressions with complex operators or complex functions to modify the array data.
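For example, the following sketches modify an array port named phones (an illustrative name). The subscript operator extracts an element by index, assuming zero-based indexing, and the SIZE function returns the number of elements in the array:

  phones[0]
  SIZE(phones)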
Configure the transformations.
Link the ports and configure the transformation properties based on the mapping logic.
Configure the mapping to run on the Spark engine.
Configure the following mapping run-time properties:
  1. Select Hadoop as the validation environment and Spark as the engine.
  2. Select Hadoop as the execution environment and select a Hadoop connection.
Validate and run the mapping on the Spark engine.
  1. Validate the mapping and fix any errors.
  2. Optionally, view the Spark engine execution plan to debug the logic.
  3. Run the mapping.
