Table of Contents

  1. Preface
  2. Informatica Developer
  3. The Model Repository
  4. Searches in Informatica Developer
  5. Connections
  6. Physical Data Objects
  7. Flat File Data Objects
  8. Logical View of Data
  9. Viewing Data
  10. Application Deployment
  11. Application Patch Deployment
  12. Application Patch Deployment Example
  13. Continuous Integration and Continuous Delivery (CI/CD)
  14. Object Import and Export
  15. Appendix A: Data Type Reference
  16. Appendix B: Keyboard Shortcuts
  17. Appendix C: Connection Properties

Developer Tool Guide

Customized Data Object Write Properties

The Data Integration Service uses write properties when it writes data to relational resources. To edit write properties, select the Input transformation in the Write view, and then select the Advanced properties.
The following table describes the write properties that you configure for customized data objects:
Truncate Hive Target Partition
Overwrites the partition in the Hive target in which the data is being inserted. To enable this option, you must also select the option to truncate target tables.
You can truncate external partitioned tables and non-partitioned tables.
Default is disabled.
Load type
Type of target loading. Select Normal or Bulk.
If you select Normal, the Data Integration Service loads targets normally. You can choose Bulk when you load to DB2, Sybase, Oracle, or Microsoft SQL Server. If you specify Bulk for other database types, the Data Integration Service reverts to a normal load. Bulk loading can increase mapping performance, but it limits the ability to recover because no database logging occurs. When you write to an Oracle target with bulk loading, you can optimize performance by disabling constraints in the Oracle database.
Choose Normal mode if the mapping contains an Update Strategy transformation. If you choose Normal and the Microsoft SQL Server target name includes spaces, configure the following environment SQL in the connection object:
SET QUOTED_IDENTIFIER ON
Update override
Overrides the default UPDATE statement for the target.
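For example, an update override is typically a full UPDATE statement. The following is an illustrative sketch only; the table and column names are examples, and it assumes the :TU.<port> convention for referencing target ports, which may not apply to your target:
-- Illustrative update override; table, columns, and the :TU convention are assumptions.
UPDATE T_CUSTOMERS
SET CUST_NAME = :TU.CUST_NAME,
    CUST_CITY = :TU.CUST_CITY
WHERE CUST_ID = :TU.CUST_ID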
Delete
Deletes all rows flagged for delete.
Default is enabled.
Insert
Inserts all rows flagged for insert.
Default is enabled.
Target Schema Strategy
Type of target schema strategy for the relational or Hive target table.
You can select one of the following target schema strategies:
  • RETAIN - Retain existing target schema. The Data Integration Service retains the existing target schema.
  • CREATE - Create or replace table at run time. The Data Integration Service drops the target table at run time and replaces it with a table based on a target table that you identify.
  • Assign Parameter - Assign a parameter to represent the value for the target schema strategy, and then change the parameter at run time.
DDL query for create or replace
The DDL query that the Data Integration Service uses to create or replace the target table.
This option applies when you select the CREATE - Create or replace table at run time target schema strategy.
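For example, a DDL query for this strategy might be a simple CREATE TABLE statement. The following is an illustrative sketch only; the table name and column definitions are examples, not values from this guide:
-- Illustrative DDL query; adjust the table name and columns to match the target.
CREATE TABLE T_CUSTOMERS (
    CUST_ID INTEGER,
    CUST_NAME VARCHAR(120),
    CUST_CITY VARCHAR(60)
)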
Truncate target table
Truncates the target before it loads data.
Default is disabled.
Update strategy
Update strategy for existing rows. You can select one of the following strategies:
  • Update as update. The Data Integration Service updates all rows flagged for update.
  • Update as insert. The Data Integration Service inserts all rows flagged for update. You must also select the Insert target option.
  • Update else insert. The Data Integration Service updates rows flagged for update if they exist in the target, and then inserts any remaining rows marked for insert. You must also select the Insert target option.
PreSQL
SQL command the Data Integration Service runs against the target database before it reads the source. The Developer tool does not validate the SQL.
PostSQL
SQL command the Data Integration Service runs against the target database after it writes to the target. The Developer tool does not validate the SQL.
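For example, when you bulk load to an Oracle target, you might use PreSQL to disable a constraint before the load and PostSQL to re-enable it afterward. The following commands are an illustrative sketch only; the table and constraint names are examples:
-- PreSQL: disable the constraint before the load (illustrative names).
ALTER TABLE T_CUSTOMERS DISABLE CONSTRAINT FK_CUST_REGION
-- PostSQL: re-enable the constraint after the load completes.
ALTER TABLE T_CUSTOMERS ENABLE CONSTRAINT FK_CUST_REGION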
