
Configuring runtime options
On the Runtime Options page, configure optional runtime options for the task. Expand each area to see the options you can configure.
  1. Specify whether to run the task on a schedule or without a schedule.
    Choose one of the following options:
    • If you want to run the task on a schedule, click Run on a schedule. Select the schedule you want to use, or click New to create a schedule.
    • If you want to run the task without a schedule, click Do not run on a schedule.
  2. Configure email notification options for the task.
  3. Optionally, if the mapping contains a Source transformation that incrementally loads files, you can configure the time that the task uses to identify which files to load. By default, the task loads files modified after the last job. To load files from a different time, choose one of the following override options:
    • Full load of all files to load all of the files in the source directory. After you select this option, the mapping task might display the wrong date and time for the last load time, but the mapping task does perform a full load.
    • Load files from selected time to load the files modified after the date and time that you select.
    If you want to run one job that reprocesses source files without affecting future jobs, run the mapping task with the advanced options configured to reprocess incrementally-loaded source files. For more information about reprocessing jobs, see Reprocessing incrementally-loaded source files.
  4. Optionally, in the Advanced Options area, configure the advanced options that are displayed for your connection:
    Pre-Processing Commands
      Commands to run before the task.
    Post-Processing Commands
      Commands to run after the task completes.
    Maximum Number of Log Files
      Number of session log files and import log files to retain. By default, Data Integration stores each type of log file for 10 runs before it overwrites the log files for new runs.
      If a dollar sign ($) is present in a custom session log file name, for example, MyLog_$CurrentTime, the file name is dynamic. If you customize the session log file name using a dynamic name, this property doesn't apply. To purge old log files, delete the files manually.
    Schema Change Handling
      Determines how Data Integration picks up changes to the object schema. Select one of the following options:
      • Asynchronous. Data Integration refreshes the schema when you update the mapping or mapping task, and after an upgrade.
      • Dynamic. Data Integration refreshes the schema every time the task runs.
      Default is Asynchronous.
    Schema Mismatch Handling
      Determines how Data Integration responds when records in a file don't conform with the schema. This setting is available when schema validation is enabled in the Source transformation. Select one of the following options:
      • Skip mismatched files and continue. Data Integration skips the entire file if at least one record in the file doesn't conform with the schema. Processing continues for all the remaining files.
      • Stop on first mismatched file. Data Integration stops processing when it encounters a file with at least one record that doesn't conform with the schema.
      Default is "Skip mismatched files and continue." For more information, see Schema mismatch handling.
      Schema mismatch handling doesn't apply to mapping tasks that are based on mappings in SQL ELT mode.
    Dynamic Schema Handling
      Determines how Data Integration applies schema changes from upstream transformations to the target object. Available when the schema change handling is dynamic and the field mapping is automatic.
      For each target, select how Data Integration updates the target schema. The options available are based on the target connection. For more information, see Schema change handling or the help for the appropriate connector.
    Not all options appear if the mapping task is based on a mapping in advanced mode.
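    A pre-processing command is an operating-system command that runs before the task starts. As a minimal, hypothetical sketch (every path and file name here is invented for illustration, not taken from the product), a pre-processing command might stage the newest landing file into the directory that the task's Source transformation reads:

    ```shell
    # Hypothetical pre-processing step: stage the most recent extract into
    # the source directory before the task reads it. Paths are examples only.
    SRC_DIR=/tmp/demo/source
    LANDING=/tmp/demo/landing

    mkdir -p "$SRC_DIR" "$LANDING"
    printf 'id,name\n1,Ada\n' > "$LANDING/orders_20240115.csv"

    # Copy the most recently modified landing file to the fixed name that
    # the mapping's source object points at.
    latest=$(ls -t "$LANDING"/*.csv | head -n 1)
    cp "$latest" "$SRC_DIR/orders.csv"
    ```

    A post-processing command could follow the same pattern in reverse, for example archiving the processed file after the task completes.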
  5. Optionally, to create a parameter file based on the parameters and default values specified in the mapping on which the task is based, click Download Parameter File Template within the parameter file location. For more information about parameter file templates, see Mappings.
  6. Optionally, if the mapping task contains parameters, you can use parameter values from a parameter file. Choose one of the following options:
    • To use a parameter file on a local machine, select Local. Use this option if the task is based on a mapping in advanced mode. Enter the following information:
      Parameter File Directory
        In a Secure Agent runtime environment: path for the directory that contains the parameter file, excluding the parameter file name. The Secure Agent must be able to access the directory. You can use an absolute file path or a path relative to one of the following $PM system variables:
        • $PMRootDir
        • $PMTargetFileDir
        • $PMSourceFileDir
        • $PMLookupFileDir
        • $PMCacheDir
        • $PMSessionLogDir
        • $PMExtProcDir
        • $PMTempDir
        By default, Data Integration uses the following parameter file directory:
        <Secure Agent installation directory>/apps/Data_Integration_Server/data/userparameters
        For mappings in advanced mode, Data Integration uses the following parameter file directory by default:
        <Secure Agent installation directory>/apps/data/userparameters
        In a serverless runtime environment: you can use only mounted directories configured in the data disk, or their subdirectories, for the following $PM system variables:
        • $PMLookupFileDir
        • $PMBadFileDir
        • $PMCacheDir
        • $PMStorageDir
        • $PMTargetFileDir
        • $PMSourceFileDir
        • $PMExtProcDir
        • $PMTempDir
        For more information about these system variables, see Runtime Environments in the Administrator help.
      Parameter File Name
        Name of the file that contains the definitions and values of user-defined parameters used in the task. You can provide the file name or the relative path and file name in this field.
        For the CDC and select mainframe and midrange connectors, you can specify a parameter file that includes connection overrides. In the parameter file, set the connection overrides for the parameters in the format of name=value pairs, using a semicolon (;) as the separator. For example:
        $<ParameterName>="User Name=jdoe;Password=mypassword"
    • To use a cloud-hosted file, select Cloud Hosted. Use this option if the task runs in a serverless runtime environment. Enter the following information about the file:
      Connection
        Connection where the parameter file is stored. You can use the following connection types:
        • Amazon S3
        • Google Storage V2
        • Azure Data Lake Store Gen2
      Object
        Name of the file that contains the definitions and values of user-defined parameters used in the task.
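    A local parameter file is a plain-text file of name=value pairs. The sketch below is illustrative only; the parameter names, values, and connection override are invented, and the parameters in your file must match those defined in the mapping:

    ```
    $$SourceDirectory=/data/incoming
    $$LoadDate=2024-01-15
    $CDCConnectionOverride="User Name=jdoe;Password=mypassword"
    ```

    The last line follows the connection-override format described above for CDC connectors: multiple attributes in one quoted value, separated by semicolons.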
  7. If the parameter file is stored on a local machine and the task runs a mapping in advanced mode, you can click Download Parameter File to preview the parameter file that the task will use. In a serverless runtime environment, the parameter file must be on a data disk.
  8. If this option is visible, choose whether to run the task in standard or verbose execution mode.
    If you select verbose mode, the mapping generates additional data in the logs that you can use for troubleshooting. Select verbose execution mode only for troubleshooting purposes, because the amount of data it generates impacts performance.
    This option does not appear if the mapping task is based on a mapping in advanced mode.
  9. Optionally, configure SQL ELT optimization.
    If you use the runtime options to configure SQL ELT optimization for a mapping task that is based on a mapping in advanced mode, you can't enable CLAIRE-powered runtime strategies on the Runtime Strategy page.
    Configure the following SQL ELT optimization properties:
    SQL ELT Optimization Type
      Type of SQL ELT optimization. Use one of the following options:
      • None. The task processes all transformation logic for the task.
      • To Source. The task pushes as much of the transformation logic to the source database as possible. Not available in advanced mode.
      • To Target. The task pushes as much of the transformation logic to the target database as possible. Not available in advanced mode.
      • Full. The task pushes all of the transformation logic to the source and target databases. The source and target schema must be the same.
      • $$PushdownConfig. The task uses the SQL ELT optimization type specified in the user-defined parameter file for the task. When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file.
      When you use SQL ELT optimization, do not use the Error Log Type advanced session property.
      SQL ELT optimization functionality varies depending on the support available for the connector. For more information, see the help for the appropriate connector.
      SQL ELT optimization doesn't apply to mapping tasks that are based on mappings in SQL ELT mode because mappings in SQL ELT mode are automatically configured to push transformation logic to the cloud data warehouse.
    Optimization Context Type
      Provides context about the mapping configuration for SQL ELT optimization. If you select an option other than None, Data Integration constructs a single query for SQL ELT optimization by combining multiple targets in the mapping based on the target configurations. If you select None, the query is not optimized. If Data Integration cannot apply the selected context, it uses the default SQL ELT optimization behavior.
      Select one of the following options:
      • None
      • SCD Type 2 merge
      • Multi-insert
      Not available in advanced mode. For more information, see the help for the appropriate connector.
    SQL ELT Optimization Fallback Option
      If full SQL ELT optimization is not available for the connection, choose how Data Integration handles SQL ELT optimization. Choose one of the following options:
      • Partial SQL ELT. Default. Data Integration pushes as much transformation logic as possible to the source and target database. The task processes any transformation logic that it can't push to a database.
      • Non SQL ELT. The task runs with no SQL ELT optimization.
      • Fail Task. Data Integration fails the task.
      Not available in advanced mode.
    Create Temporary View
      Allows the task to create temporary view objects in the database when it pushes the task to the database. Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. Disabled when the SQL ELT optimization type is None.
    Create Temporary Sequence
      Allows the task to create temporary sequence objects in the database. Disabled when the SQL ELT optimization type is None.
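    If you select $$PushdownConfig, the optimization type is read from the task's parameter file at run time. As a hedged sketch (the value shown is one plausible setting; check the help for the exact values your connector supports), the parameter file entry might look like:

    ```
    $$PushdownConfig=Full
    ```

    This lets you switch the optimization type between runs by editing the parameter file instead of editing the task.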
  10. Optionally, if the task runs in a serverless runtime environment, configure serverless usage properties.
  11. Optionally, for mappings in advanced mode, run CLAIRE tuning to get a set of recommended properties and values for Spark session properties.
  12. If necessary, add other session properties under Advanced Session Properties:
    1. Click New Session Property.
    2. Select a session property from the drop-down list.
    3. Configure the session property value.
  13. Choose whether to enable cross-schema SQL ELT optimization.
    Cross-schema SQL ELT optimization doesn't apply to mapping tasks that are based on mappings in SQL ELT mode.
  14. If you want to run multiple instances of the task at the same time, enable simultaneous runs of the mapping task.
    Some mapping features might produce unexpected results in simultaneous task runs.
