Update Strategy Transformation on the Spark Engine

Some processing rules for the Spark engine differ from the processing rules for the Data Integration Service.

General Restrictions for Hive Targets

You can use the Update Strategy transformation on the Hadoop distributions that support Hive ACID.
You can also use the Update Strategy transformation to write mapping results to JDBC-compliant relational targets.
The Update Strategy transformation does not forward rejected rows to the next transformation when the target is a Hive table or JDBC-compliant table.
If the Update Strategy transformation receives multiple update rows for the same primary key value, the transformation selects one random row to update the target.
If multiple Update Strategy transformations write to different instances of the same target, the target data might be unpredictable.
If the mapping runs on the Spark engine, you can choose the Use Hive Merge option. The option has the following restrictions:
  • A single row for delete or update cannot match multiple rows in the target. When the mapping violates this restriction, the mapping fails with a runtime error.
  • If you configure the Update Strategy expression to update partitioning or bucketing columns, the mapping ignores the Hive MERGE option and does not update the columns.
The Developer tool and the Data Integration Service do not validate against these restrictions. If the expression or the mapping violates these restrictions, the mapping might run, but the results will not be as expected.
Hive targets always perform Update as Update operations. Hive targets do not support Update Else Insert or Update as Insert.
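The Use Hive Merge option corresponds to the HiveQL MERGE statement. The following sketch is illustrative only; the table names, columns, and operation codes are hypothetical and not part of the product. It shows why a single delete or update row that matches multiple target rows causes a failure: Hive enforces MERGE cardinality and raises an error in that case.
-- Illustrative HiveQL MERGE against an ACID target (hypothetical names).
-- If a single row in customer_updates matches more than one row in
-- customer_target on customer_id, Hive fails with a cardinality violation.
-- The UPDATE clause cannot change partitioning or bucketing columns.
MERGE INTO customer_target AS t
USING customer_updates AS s
ON t.customer_id = s.customer_id
WHEN MATCHED AND s.op_code = 'D' THEN DELETE
WHEN MATCHED AND s.op_code = 'U' THEN UPDATE SET name = s.name, city = s.city
WHEN NOT MATCHED THEN INSERT VALUES (s.customer_id, s.name, s.city);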

Using Hierarchical Data in Update Strategy Expressions

You can configure the expression to use the primitive types in a complex data field.
For example, you have a nested array of struct data types defined by the following statement:
addresses array<struct<city:varchar(15), apartment:smallint, pincode:int>>
Because the city column is a primitive type, you can use it in the update strategy expression. For example:
IIF( (addresses[0].city = 'NEW YORK'), DD_INSERT, DD_UPDATE)
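For context, the following is a sketch of a Hive table declaration that contains such a nested column. The table name and the customer_id column are hypothetical and used only for illustration:
-- Hypothetical table with a nested array of struct column.
CREATE TABLE customer_addresses (
  customer_id INT,
  addresses ARRAY<STRUCT<city:VARCHAR(15), apartment:SMALLINT, pincode:INT>>
)
STORED AS ORC;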

Mapping Validation

Update Strategy transformation output fields must exactly match the target input fields.
Mapping validation fails in the following situations:
  • The Update Strategy transformation is connected to more than one target.
  • The Update Strategy transformation is not located immediately before the target.
  • The Update Strategy transformation target is an external ACID table.
  • The target does not contain a connected primary key.
  • The property to enable truncation of the target table at run time is selected.
  • One of the following target strategies for the target table at run time is selected:
    • Create or replace the target table
    • ApplyNewColumns
    • ApplyNewSchema
    • Fail
The mapping fails in the following situations when the target is a Hive target:
  • The target table is not enabled for transactions.
  • The target is not a bucketed ORC table.

Using Hive Target Tables

To use a Hive target table with an Update Strategy transformation, you must create the Hive target table with the following clause in the Hive Data Definition Language:
TBLPROPERTIES ("transactional"="true")
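For example, a hypothetical ACID target table (the name, columns, and bucket count are illustrative only) could be created as follows. The table is bucketed and stored as ORC because the mapping validation rules above also require a bucketed ORC target, and it is a managed table because external ACID targets fail validation:
-- Hypothetical transactional Hive target: managed, bucketed, ORC.
CREATE TABLE customer_target (
  customer_id INT,
  name VARCHAR(50),
  city VARCHAR(25)
)
CLUSTERED BY (customer_id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ("transactional"="true");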
To use an Update Strategy transformation with a Hive target, verify that the following properties are configured in the hive-site.xml configuration set associated with the Hadoop connection:
  • hive.support.concurrency = true
  • hive.enforce.bucketing = true
  • hive.exec.dynamic.partition.mode = nonstrict
  • hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
  • hive.compactor.initiator.on = true
  • hive.compactor.worker.threads = 1
