Tasks

Configuring a schedule and advanced options

On the Schedule page, you specify whether to run a mapping task manually or schedule it to run at a specific time or interval. You can create a schedule or use an existing schedule. You can also configure email notification and advanced options for the task on the Schedule page.
  1. To specify whether to run the task on a schedule or without a schedule, choose one of the following options:
    • If you want to run the task on a schedule, click Run this task on schedule. Select the schedule you want to use, or click New to create a schedule.
    • If you want to run the task without a schedule, click Do not run this task on a schedule.
  2. Configure email notification options for the task.
  3. Optionally, if the mapping contains a Source transformation that incrementally loads files, you can configure the time that the task uses to identify files to load. By default, the task loads files modified after the last load time. To load files from a different time, choose one of the following options:
    • To load all of the files in the source directory, reset the last load time.
    • To load the files modified after a specific date and time, enter the date and time in the format MM DD YYYY HH:MM.
    In the Incremental File Load properties section, use the arrow icon to reset the last load time, or enter a specific load date and time in the text field.
    If you want to run a single job that reprocesses source files without affecting future jobs, run the mapping task with the advanced options configured to reprocess incrementally-loaded source files. For more information about reprocessing jobs, see Reprocessing incrementally-loaded source files.
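    For example, assuming the MM DD YYYY HH:MM format described above, you would enter the following to load files modified after 9:30 AM on July 15, 2024 (the date itself is only illustrative):

    ```text
    07 15 2024 09:30
    ```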
  4. Optionally, enter the following advanced options:
    Pre-Processing Commands
      Commands to run before the task.
    Post-Processing Commands
      Commands to run after the task completes.
    Maximum Number of Log Files
      Number of session log files and import log files to retain. By default, Data Integration stores each type of log file for 10 runs before it overwrites the log files for new runs.
      If a dollar sign ($) is present in a custom session log file name, for example, MyLog_$CurrentTime, the file name is dynamic. If you customize the session log file name using a dynamic name, the Maximum Number of Log Files property doesn't apply. To purge old log files, delete the files manually.
    Schema Change Handling
      Determines how Data Integration picks up changes to the object schema. Select one of the following options:
      • Asynchronous. Data Integration refreshes the schema when you update the mapping or mapping task, and after an upgrade.
      • Dynamic. Data Integration refreshes the schema every time the task runs.
      Default is Asynchronous.
    Schema Mismatch Handling
      Determines how Data Integration responds when records in a file don't conform to the schema. This setting is available when schema validation is enabled in the Source transformation. Select one of the following options:
      • Skip mismatched files and continue. Data Integration skips the entire file if at least one record in the file doesn't conform to the schema. Processing continues for all remaining files.
      • Stop on first mismatched file. Data Integration stops processing when it encounters a file with at least one record that doesn't conform to the schema.
      Default is "Skip mismatched files and continue." For more information, see Schema mismatch handling.
      Schema mismatch handling doesn't apply to mapping tasks that are based on mappings in SQL ELT mode.
    Dynamic Schema Handling
      Determines how Data Integration applies schema changes from upstream transformations to the target object. Available when the schema change handling is dynamic and the field mapping is automatic.
      For each target, select how Data Integration updates the target schema. The available options are based on the target connection. For more information, see Schema change handling or the help for the appropriate connector.
    Not all options appear if the mapping task is based on a mapping in advanced mode.
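    As an illustration of the Pre-Processing Commands field, a task might stage its source file before the run begins. This is a hypothetical sketch: the directories, file names, and the staging step itself are assumptions for the example, not Data Integration defaults.

    ```shell
    # Hypothetical pre-processing commands for a mapping task.
    # All paths and file names are examples only.

    # Stage today's extract into the directory the mapping reads from.
    mkdir -p /tmp/demo/srcfiles
    printf 'id,amount\n1,100\n' > /tmp/demo/incoming_orders.csv
    cp /tmp/demo/incoming_orders.csv /tmp/demo/srcfiles/orders.csv
    ```

    A matching post-processing command could archive or delete the staged file after the task completes.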
  5. Optionally, if the mapping task contains parameters, you can use parameter values from a parameter file. Choose one of the following options:
    • To use a parameter file on a local machine, select Local. Use this option if the task is based on a mapping in advanced mode. Enter the following information:
      Parameter File Directory
        In a Secure Agent runtime environment: the path of the directory that contains the parameter file, excluding the parameter file name. The Secure Agent must be able to access the directory.
        You can use an absolute file path or a path relative to one of the following $PM system variables:
        • $PMRootDir
        • $PMTargetFileDir
        • $PMSourceFileDir
        • $PMLookupFileDir
        • $PMCacheDir
        • $PMSessionLogDir
        • $PMExtProcDir
        • $PMTempDir
        By default, Data Integration uses the following parameter file directory:
        <Secure Agent installation directory>/apps/Data_Integration_Server/data/userparameters
        For mappings in advanced mode, Data Integration uses the following parameter file directory by default:
        <Secure Agent installation directory>/apps/data/userparameters
        In a serverless runtime environment: you can use only mounted directories configured in the data disk, or their subdirectories, for the following $PM system variables:
        • $PMLookupFileDir
        • $PMBadFileDir
        • $PMCacheDir
        • $PMStorageDir
        • $PMTargetFileDir
        • $PMSourceFileDir
        • $PMExtProcDir
        • $PMTempDir
        For more information about these system variables, see Runtime Environments in the Administrator help.
      Parameter File Name
        Name of the file that contains the definitions and values of user-defined parameters used in the task. You can provide the file name or the relative path and file name in this field.
    • To use a cloud-hosted file, select Cloud Hosted. Use this option if the task runs in a serverless runtime environment. Enter the following information about the file:
      Connection
        Connection where the parameter file is stored. You can use the following connection types:
        • Amazon S3
        • Google Storage V2
        • Azure Data Lake Store Gen2
      Object
        Name of the file that contains the definitions and values of user-defined parameters used in the task.
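    As a sketch of what a local parameter file can look like, here is a minimal sectioned example. The project, folder, task, parameter names, and values are all hypothetical; see the parameter files documentation for the exact format your version supports. In the sectioned format, values in a task-specific section typically override values in the [Global] section for that task.

    ```text
    #USE_SECTIONS
    [Global]
    $$defaultSourceDir=/data/files

    [Demo Project].[Demo Folder].[mt_load_orders]
    $$srcConnection=Demo_S3_Conn
    $$runDate=2024-07-15
    ```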
  6. Optionally, if you want to create a parameter file based on the parameters and default values specified in the mapping on which the task is based, click Download Parameter File Template.
    For more information about parameter file templates, see Mappings.
  7. Optionally, if the parameter file is stored on a local machine and the task runs a mapping in advanced mode, you can click Download Parameter File to preview the parameter file that the task will use.
    In a serverless runtime environment, the parameter file must be on a data disk.
  8. Choose whether to run the task in standard or verbose execution mode.
    If you select verbose mode, the mapping generates additional data in the logs that you can use for troubleshooting. Select verbose execution mode only for troubleshooting purposes, because the amount of data it generates impacts performance.
    This option does not appear if the mapping task is based on a mapping in advanced mode.
  9. Optionally, configure the following SQL ELT optimization properties:
    SQL ELT Optimization
      Type of SQL ELT optimization. Use one of the following options:
      • None. The task processes all transformation logic for the task.
      • To Source. The task pushes as much of the transformation logic to the source database as possible. Not available in advanced mode.
      • To Target. The task pushes as much of the transformation logic to the target database as possible. Not available in advanced mode.
      • Full. The task pushes all of the transformation logic to the source and target databases, provided the source and target schemas are the same.
      • $$PushdownConfig. The task uses the SQL ELT optimization type specified in the user-defined parameter file for the task. When you use $$PushdownConfig, ensure that the user-defined parameter is configured in the parameter file.
      When you use SQL ELT optimization, do not use the Error Log Type advanced session property.
      SQL ELT optimization functionality varies depending on the support available for the connector. For more information, see the help for the appropriate connector.
      SQL ELT optimization doesn't apply to mapping tasks that are based on mappings in SQL ELT mode, because those mappings are automatically configured to push transformation logic to the cloud data warehouse.
    Optimization Context Type
      Provides context about the mapping configuration for SQL ELT optimization. If you select an option other than None, Data Integration constructs a single query for SQL ELT optimization by combining multiple targets in the mapping based on the target configurations. If you select None, the query is not optimized. If Data Integration cannot apply the selected context, it uses the default SQL ELT optimization behavior.
      Select one of the following options:
      • None
      • SCD Type 2 merge
      • Multi-insert
      Not available in advanced mode. For more information, see the help for the appropriate connector.
    SQL ELT Optimization Fallback Option
      Determines how Data Integration handles SQL ELT optimization if full SQL ELT optimization is not available for the selected connection. Choose one of the following options:
      • Partial SQL ELT. Data Integration pushes as much transformation logic as possible to the source and target database. The task processes any transformation logic that it can't push to a database.
      • Non SQL ELT. The task runs with no SQL ELT optimization.
      • Fail Task. Data Integration fails the task.
      Default is Partial SQL ELT. Not available in advanced mode.
    Create Temporary View
      Allows the task to create temporary view objects in the database when it pushes the task to the database. Use when the task includes an SQL override in the Source Qualifier transformation or Lookup transformation. You can also use this option for a task based on a Visio template that includes a lookup with a lookup source filter.
      Disabled when the SQL ELT optimization type is None.
    Create Temporary Sequence
      Allows the task to create temporary sequence objects in the database. Use when the task is based on a Visio template that includes a Sequence Generator transformation.
      Disabled when the SQL ELT optimization type is None.
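    If you select $$PushdownConfig for the SQL ELT Optimization property, define that parameter in the task's parameter file. A minimal hypothetical entry follows; the section names are assumptions for the example, and the supported parameter values depend on the connector:

    ```text
    #USE_SECTIONS
    [Demo Project].[Demo Folder].[mt_load_orders]
    $$PushdownConfig=Full
    ```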
  10. Optionally, if the task runs in a serverless runtime environment, configure serverless usage properties.
  11. Optionally, configure advanced session properties:
    1. Click Add.
    2. Select an advanced session property.
    3. Configure the advanced session property.
  12. Choose whether to enable cross-schema SQL ELT optimization.
    Cross-schema SQL ELT optimization doesn't apply to mapping tasks that are based on mappings in SQL ELT mode.
  13. If you want to run multiple instances of the task at the same time, enable simultaneous runs of the mapping task.
    Some mapping features might produce unexpected results in simultaneous task runs.
  14. Click Finish.
