After you create an HBase data object operation, you can add the operation to a mapping to read data from or write data to MapR-DB.
To read data from a MapR-DB table, add an HBase data object read operation as a Read transformation in the mapping. To write data to a MapR-DB table, add an HBase data object write operation as a Write transformation in the mapping.
When you configure the data object columns, you can read the data from all columns in a column family into a single column as binary data. Use a Data Processor transformation to convert the binary data into the required data types.
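For context, MapR-DB binary tables expose the standard HBase client API, so the column-family layout that the data object maps into a single binary column can be inspected with a plain scan. The following sketch is illustrative only and is not part of the Developer tool workflow; the table path /tables/customer and the column family cf are hypothetical names used for the example.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

import java.util.Map;

public class ColumnFamilyScan {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("/tables/customer"))) {
            Scan scan = new Scan();
            // Limit the scan to a single column family; all of its columns are returned.
            scan.addFamily(Bytes.toBytes("cf"));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result row : scanner) {
                    // Each column in the family arrives as a qualifier/value byte pair,
                    // which is the raw binary content that a Data Processor transformation
                    // would later parse into typed fields.
                    Map<byte[], byte[]> columns = row.getFamilyMap(Bytes.toBytes("cf"));
                    for (Map.Entry<byte[], byte[]> col : columns.entrySet()) {
                        System.out.printf("cf:%s = %s%n",
                                Bytes.toString(col.getKey()),
                                Bytes.toStringBinary(col.getValue()));
                    }
                }
            }
        }
    }
}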
You can validate and run mappings in the native environment or in the Hadoop environment. When you run a mapping in the native environment, the Data Integration Service runs the mapping from the Developer tool. When you run a mapping on a Hadoop cluster, you can select the Blaze or Spark engine, and the Data Integration Service pushes the mapping to the selected engine for processing.
You can deploy the mapping and run it in the selected environment. You can run standalone mappings or mappings that are part of a workflow.