Advanced session properties

Advanced session properties are optional properties that you can configure in mapping tasks, dynamic mapping tasks, and Visio templates. Use caution when you configure advanced session properties. The properties are based on PowerCenter advanced session properties and might not be appropriate for use with all tasks.
You can configure the following types of advanced session properties:
  • General
  • Performance
  • Advanced
  • Error handling
Advanced mode uses a different set of advanced session properties. Mappings in SQL ELT mode don't use advanced session properties.

General options

The following general options are available:
Write Backward Compatible Session Log File
Writes the session log to a file.
Session Log File Name
Name for the session log. Use any valid file name.
You can customize the session log file name in one of the following ways:
  • Using a static name. A static log file name is a simple static string with or without a file extension.
    If you use a static name, the log file name is appended with a sequence number each time the task runs, for example samplelog.1, samplelog.2. When the maximum number of log files is reached, the numbering sequence begins a new cycle.
  • Using a dynamic name. A log file name is dynamic when it includes a parameter defined in a parameter file or a system variable. You can include any of the following system variables:
    • $CurrentTaskName. Replaced with the task name.
    • $CurrentTime. Replaced with the current time.
    • $CurrentRunId. Replaced with the run ID for the current job.
    If you use a dynamic name, the file name is unique for every task run. The Maximum Number of Log Files property is not applied. To purge old log files, delete the files manually.
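For example, the following sketch (not Informatica code) shows how a dynamic log file name built from the system variables above could resolve to a unique name for each task run. The helper function and the timestamp format are illustrative assumptions.
```python
from datetime import datetime

# Sketch only (not Informatica code): how a dynamic log file name built from
# the system variables above could resolve to a unique name per run. The
# helper and the timestamp format are illustrative assumptions.
def resolve_log_file_name(pattern: str, task_name: str, run_id: int) -> str:
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return (pattern
            .replace("$CurrentTaskName", task_name)
            .replace("$CurrentTime", timestamp)
            .replace("$CurrentRunId", str(run_id)))

print(resolve_log_file_name("$CurrentTaskName_$CurrentTime_$CurrentRunId.log",
                            "mt_load_orders", 120045))
# e.g. mt_load_orders_20250115_093000_120045.log (timestamp varies per run)
```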
Session Log File Directory
Directory where the session log is saved. Use a directory local to the Secure Agent to run the task.
By default, the session log is saved to the following directory:
<Secure Agent installation directory>/apps/Data_Integration_Server/logs
$Source Connection Value
Source connection name for Visio templates.
$Target Connection Value
Target connection name for Visio templates.
Source File Directory
Source file directory path. Use for flat file connections only.
Target File Directory
Target file directory path. Use for flat file connections only.
Treat Source Rows as
When the task reads source data, it marks each row with an indicator that specifies the target operation to perform when the row reaches the target. Use one of the following options:
  • Insert. All rows are marked for insert into the target.
  • Update. All rows are marked for update in the target.
  • Delete. All rows are marked for delete from the target.
  • Data Driven. The task uses the Update Strategy object in the data flow to mark the operation for each source row.
Commit Type
Commit type to use. Use one of the following options:
  • Source. The task performs commits based on the number of source rows.
  • Target. The task performs commits based on the number of target rows.
  • User Defined. The task performs commits based on the commit logic defined in the Visio template.
When you do not configure a commit type, the task performs a target commit.
Commit Interval
Interval in rows between commits.
When you do not configure a commit interval, the task commits every 10,000 rows.
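As an illustration of how the commit type and commit interval interact, the following sketch mimics a target-based commit issued every 10,000 target rows. The write_row and commit callables are stand-ins, not connector APIs.
```python
# Illustrative sketch only: a target-based commit issued every 10,000 target
# rows. write_row and commit are stand-ins, not connector APIs.
COMMIT_INTERVAL = 10_000

def load(rows, write_row, commit):
    written = 0
    for row in rows:
        write_row(row)
        written += 1
        if written % COMMIT_INTERVAL == 0:  # commit based on target row count
            commit()
    commit()  # final commit for any remaining rows

rows_written = []
load(range(25_000), write_row=rows_written.append,
     commit=lambda: print(f"commit at {len(rows_written)} rows written"))
# prints commits at 10,000, 20,000, and 25,000 rows
```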
Commit on End of File
Commits data at the end of the file.
Rollback Transactions on Errors
Rolls back the transaction at the next commit point when the task encounters a non-fatal error.
When the task encounters a transformation error, it rolls back the transaction if the error occurs after the effective transaction generator for the target.
Java Classpath
Java classpath to use.
The Java classpath is added to the beginning of the system classpath when the task runs.
Use this option when you use third-party Java packages, built-in Java packages, or custom Java packages in a Java transformation.

Performance settings

The following performance settings are available:
DTM Buffer Size
Amount of memory allocated to the task from the DTM process.
By default, a minimum of 12 MB is allocated to the buffer at run time.
Use one of the following options:
  • Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
  • A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
You might increase the DTM buffer size in the following circumstances:
  • When a task contains large amounts of character data, increase the DTM buffer size to 24 MB.
  • When a task contains n partitions, increase the DTM buffer size to at least n times the value for the task with one partition.
  • When a source contains a large binary object with a precision larger than the allocated DTM buffer size, increase the DTM buffer size so that the task does not fail.
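The sizing guidance above can be turned into a rough calculation. The following helper is an approximation based only on the guidelines listed here, not a formula from the product.
```python
# Rough sizing helper based only on the guidelines above, not a product
# formula: scale the single-partition buffer size by the partition count and
# keep at least the 12 MB minimum that is allocated by default.
MIN_DTM_BUFFER_MB = 12

def suggested_dtm_buffer_mb(single_partition_mb: float, partitions: int) -> float:
    return max(MIN_DTM_BUFFER_MB, single_partition_mb * partitions)

# Example: 24 MB for heavy character data, scaled for 4 partitions.
print(suggested_dtm_buffer_mb(24, 4))  # -> 96
```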
Incremental Aggregation
Performs incremental aggregation for tasks based on Visio templates.
Reinitialize Aggregate Cache
Overwrites existing aggregate files for a task that performs incremental aggregation.
Enable High Precision
Processes the Decimal data type to a precision of 28.
Session Retry on Deadlock
The task retries a write on the target when a deadlock occurs.
SQL ELT Optimization
Type of SQL ELT optimization. Use one of the following options:
  • None. The task processes all transformation logic for the task.
  • To Source. The task pushes as much of the transformation logic to the source database as possible.
  • To Target. The task pushes as much of the transformation logic to the target database as possible.
  • Full. The task pushes as much of the transformation logic to the source and target databases as possible. The task processes any transformation logic that it cannot push to a database.
  • $$PushdownConfig. The task uses the SQL ELT optimization type specified in the user-defined parameter file for the task.
    When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file.
When you use SQL ELT optimization, do not use the Error Log Type property.
The SQL ELT optimization functionality varies depending on the support available for the connector. For more information, see the help for the appropriate connector.
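If you choose $$PushdownConfig, the optimization type is resolved from a user-defined parameter at run time. The following sketch only validates a resolved value against the options listed above; the parameter file syntax mentioned in the comment is an assumption, so confirm it against the parameter file documentation.
```python
# Sketch only (not product code): validating the value that a user-defined
# parameter such as $$PushdownConfig might resolve to before the task runs.
# A parameter file entry could look like "$$PushdownConfig=Full"; that syntax
# is an assumption here, so confirm it against the parameter file docs.
VALID_PUSHDOWN_TYPES = {"None", "To Source", "To Target", "Full"}

def resolve_pushdown_type(parameter_value: str) -> str:
    if parameter_value not in VALID_PUSHDOWN_TYPES:
        raise ValueError(f"Unexpected SQL ELT optimization type: {parameter_value!r}")
    return parameter_value

print(resolve_pushdown_type("Full"))  # -> Full
```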
Create Temporary View
Allows the task to create temporary view objects in the database when it pushes the task to the database.
Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation.
You can also use this option for a task based on a Visio template that includes a lookup with a lookup source filter.
Create Temporary Sequence
Allows the task to create temporary sequence objects in the database.
Use when the task is based on a Visio template that includes a Sequence Generator transformation.
Enable cross-schema SQL ELT optimization
Enables SQL ELT optimization for tasks that use source or target objects associated with different schemas within the same database.
To see if cross-schema SQL ELT optimization is applicable to the connector you use, see the help for the relevant connector.
This property is enabled by default.
Allow SQL ELT Optimization for User Incompatible Connections
Indicates that the database user of the active database has read permission on idle databases.
If you indicate that the database user of the active database has read permission on idle databases, and it does not, the task fails.
If you do not indicate that the database user of the active database has read permission on idle databases, the task does not push transformation logic to the idle databases.
Session Sort Order
Order to use to sort character data for the task.

Advanced options

The following advanced options are available:
Constraint Based Load Ordering
Currently not used in Informatica Intelligent Cloud Services.
Cache Lookup() Function
Caches lookup functions in Visio templates with unconnected lookups. Overrides lookup configuration in the template.
By default, the task performs lookups on a row-by-row basis, unless otherwise specified in the template.
Default Buffer Block Size
Size of buffer blocks used to move data and index caches from sources to targets. By default, the task determines this value at run time.
Use one of the following options:
  • Auto. Enter Auto to use automatic memory settings. When you use Auto, configure Maximum Memory Allowed for Auto Memory Attributes.
  • A numeric value. Enter the numeric value that you want to use. The default unit of measure is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
The task must have enough buffer blocks to initialize. The minimum number of buffer blocks must be greater than the total number of Source Qualifiers, Normalizers for COBOL sources, and targets.
The number of buffer blocks in a task = DTM Buffer Size / Buffer Block Size. Default settings create enough buffer blocks for 83 sources and targets. If the task contains more than 83, you might need to increase DTM Buffer Size or decrease Default Buffer Block Size.
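The formula above lends itself to a quick check. The following sketch uses illustrative sizes only; it is not tied to product defaults.
```python
# Quick check of the formula above: buffer blocks = DTM Buffer Size / Buffer
# Block Size, and the block count must exceed the total number of Source
# Qualifiers, Normalizers for COBOL sources, and targets. Sizes here are
# illustrative only, not product defaults.
def has_enough_buffer_blocks(dtm_buffer_bytes: int, block_size_bytes: int,
                             sources_and_targets: int) -> bool:
    blocks = dtm_buffer_bytes // block_size_bytes
    return blocks > sources_and_targets

# Example: a 24 MB DTM buffer with 256 KB blocks yields 96 blocks.
print(has_enough_buffer_blocks(24 * 1024 * 1024, 256 * 1024, 83))  # True
```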
Line Sequential Buffer Length
Number of bytes that the task reads for each line. Increase this setting from the default of 1024 bytes if source flat file records are larger than 1024 bytes.
Maximum Memory Allowed for Auto Memory Attributes
Maximum memory allocated for automatic cache when you configure the task to determine the cache size at run time.
You enable automatic memory settings by configuring a value for this attribute. Enter a numeric value. The default unit is bytes. Append KB, MB, or GB to the value to specify a different unit of measure. For example, 512MB.
If the value is set to zero, the task uses default values for memory attributes that you set to auto.
Maximum Percentage of Total Memory Allowed for Auto Memory Attributes
Maximum percentage of memory allocated for automatic cache when you configure the task to determine the cache size at run time. If the value is set to zero, the task uses default values for memory attributes that you set to auto.
Additional Concurrent Pipelines for Lookup Cache Creation
Restricts the number of pipelines that the task can create concurrently to pre-build lookup caches. You can configure this property when the Pre-build Lookup Cache property is enabled for a task or transformation.
When the Pre-build Lookup Cache property is enabled, the task creates a lookup cache before the Lookup receives the data. If the task has multiple Lookups, the task creates an additional pipeline for each lookup cache that it builds.
To configure the number of pipelines that the task can create concurrently, select one of the following options:
  • Auto. The task determines the number of pipelines it can create at run time.
  • Numeric value. The task can create the specified number of pipelines to create lookup caches.
Custom Properties
Configure custom properties for the task. You can override the custom properties that the task uses after the job has started. The task also writes the override value of the property to the session log.
Pre-build Lookup Cache
Allows the task to build the lookup cache before the Lookup receives the data. The task can build multiple lookup cache files at the same time to improve performance.
You can configure this option in a Visio template or in a task. The task uses the task-level setting if you configure the Lookup option as Auto for a Visio template.
Configure one of the following options:
  • Always allowed. The task can build the lookup cache before the Lookup receives the first source row. The task creates an additional pipeline to build the cache.
  • Always disallowed. The task cannot build the lookup cache before the Lookup receives the first row.
When you use this option, configure the Additional Concurrent Pipelines for Lookup Cache Creation property. The task can pre-build the lookup cache if this property is greater than zero.
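Conceptually, pre-building lookup caches with a limited number of additional pipelines resembles running the cache builds through a bounded worker pool. The following sketch is an analogy only; build_cache and the lookup names are made up.
```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual analogy only: capping how many lookup caches are built at the
# same time, similar in spirit to Additional Concurrent Pipelines for Lookup
# Cache Creation. build_cache and the lookup names are made up.
def prebuild_lookup_caches(lookups, build_cache, max_pipelines: int):
    if max_pipelines <= 0:
        return []  # with no additional pipelines, caches are not pre-built
    with ThreadPoolExecutor(max_workers=max_pipelines) as pool:
        return list(pool.map(build_cache, lookups))

# Example: pre-build caches for three Lookups using at most two pipelines.
caches = prebuild_lookup_caches(
    ["LKP_CUSTOMER", "LKP_PRODUCT", "LKP_GEO"],
    build_cache=lambda name: f"cache for {name}",
    max_pipelines=2)
print(caches)
```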
DateTime Format String
Date time format for the task. You can specify seconds, milliseconds, microseconds, or nanoseconds.
To specify seconds, enter MM/DD/YYYY HH24:MI:SS.
To specify milliseconds, enter MM/DD/YYYY HH24:MI:SS.MS.
To specify microseconds, enter MM/DD/YYYY HH24:MI:SS.US.
To specify nanoseconds, enter MM/DD/YYYY HH24:MI:SS.NS.
By default, the format specifies microseconds, as follows: MM/DD/YYYY HH24:MI:SS.US.
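To see what the default format produces, the following sketch renders a sample timestamp with Python's strftime as an approximation of MM/DD/YYYY HH24:MI:SS.US, where HH24 is the 24-hour clock and US is microseconds as described above.
```python
from datetime import datetime

# Illustration only: the default format MM/DD/YYYY HH24:MI:SS.US rendered with
# Python's strftime as an approximation (%H is the 24-hour clock, %f is
# microseconds).
sample = datetime(2025, 1, 15, 14, 30, 59, 123456)
print(sample.strftime("%m/%d/%Y %H:%M:%S.%f"))  # -> 01/15/2025 14:30:59.123456
```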
Pre 85 Timestamp Compatibility
Do not use with Data Integration.

Error handling

The following error handling options are available:
Stop on Errors
Indicates how many non-fatal errors the task can encounter before it stops the session. Non-fatal errors include reader, writer, and DTM errors.
Enter the number of non-fatal errors you want to allow before stopping the session. The task maintains an independent error count for each source, target, and transformation. If you specify 0, non-fatal errors do not cause the session to stop.
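The following sketch illustrates the counting behavior described above: each source, target, and transformation keeps its own non-fatal error count, and a threshold of 0 disables stopping. It is a conceptual model, not the task engine's logic.
```python
from collections import defaultdict

# Conceptual model only: each source, target, and transformation keeps an
# independent non-fatal error count, and a Stop on Errors value of 0 means
# the session never stops because of non-fatal errors.
class ErrorCounter:
    def __init__(self, stop_on_errors: int):
        self.stop_on_errors = stop_on_errors
        self.counts = defaultdict(int)

    def record(self, object_name: str) -> bool:
        """Record a non-fatal error and return True if the session should stop."""
        self.counts[object_name] += 1
        return (self.stop_on_errors > 0
                and self.counts[object_name] >= self.stop_on_errors)

counter = ErrorCounter(stop_on_errors=5)
print(counter.record("tgt_orders"))  # False until the fifth error on tgt_orders
```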
Override Tracing
Overrides tracing levels set at the object level.
On Stored Procedure Error
Determines the behavior when a task based on a Visio template encounters pre-session or post-session stored procedure errors. Use one of the following options:
  • Stop Session. The task stops when errors occur while executing a pre-session or post-session stored procedure.
  • Continue Session. The task continues regardless of errors.
By default, the task stops.
On Pre-Session Command Task Error
Determines the behavior when a task that includes pre-session shell commands encounters errors. Use one of the following options:
  • Stop Session. The task stops when errors occur while executing pre-session shell commands.
  • Continue Session. The task continues regardless of errors.
By default, the task stops.
On Pre-Post SQL Error
Determines the behavior when a task that includes pre-session or post-session SQL encounters errors:
  • Stop Session. The task stops when errors occur while executing pre-session or post-session SQL.
  • Continue. The task continues regardless of errors.
By default, the task stops.
Error Log Type
Specifies the type of error log to create. You can specify flat file or no log. Default is none.
You cannot log row errors from XML file sources. You can view the XML source errors in the session log.
Do not use this property when you use the SQL ELT Optimization property.
Error Log File Directory
Specifies the directory where errors are logged. By default, the error log file directory is $PMBadFilesDir\.
Error Log File Name
Specifies the error log file name. By default, the error log file name is PMError.log.
Log Row Data
Specifies whether or not to log transformation row data. When you enable error logging, the task logs transformation row data by default. If you disable this property, n/a or -1 appears in transformation row data fields.
Log Source Row Data
Specifies whether or not to log source row data. By default, the check box is clear and source row data is not logged.
Data Column Delimiter
Delimiter for string type source row data and transformation group row data. By default, the task uses a pipe ( | ) delimiter.
Verify that you do not use the same delimiter for the row data as the error logging columns. If you use the same delimiter, you may find it difficult to read the error log file.
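The following sketch shows why the delimiters should differ. The sample row data is made up; real error log layouts vary by task.
```python
# Illustration only: splitting logged transformation row data on the default
# pipe delimiter. The sample line is made up; real error log layouts vary.
row_data = "1001|Smith|n/a|-1"
print(row_data.split("|"))  # -> ['1001', 'Smith', 'n/a', '-1']
# If the error log's own columns used the same character, this split could no
# longer tell row data apart from the logging metadata.
```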
