Table of Contents

  1. Preface
  2. Analyst Service
  3. Catalog Service
  4. Content Management Service
  5. Data Integration Service
  6. Data Integration Service Architecture
  7. Data Integration Service Management
  8. Data Integration Service Grid
  9. Data Integration Service REST API
  10. Data Integration Service Applications
  11. Data Privacy Management Service
  12. Enterprise Data Preparation Service
  13. Interactive Data Preparation Service
  14. Informatica Cluster Service
  15. Mass Ingestion Service
  16. Metadata Access Service
  17. Metadata Manager Service
  18. Model Repository Service
  19. PowerCenter Integration Service
  20. PowerCenter Integration Service Architecture
  21. High Availability for the PowerCenter Integration Service
  22. PowerCenter Repository Service
  23. PowerCenter Repository Management
  24. PowerExchange Listener Service
  25. PowerExchange Logger Service
  26. SAP BW Service
  27. Search Service
  28. System Services
  29. Test Data Manager Service
  30. Test Data Warehouse Service
  31. Web Services Hub
  32. Application Service Upgrade
  33. Appendix A: Application Service Databases
  34. Appendix B: Connecting to Databases from Windows
  35. Appendix C: Connecting to Databases from UNIX or Linux
  36. Appendix D: Updating the DynamicSections Parameter of a DB2 Database

Application Service Guide

Configure Source and Output File Directories for Multiple Nodes

When the Data Integration Service runs on primary and back-up nodes or on a grid, DTM instances can run jobs on each node with the compute role. Each DTM instance must be able to access the source and output file directories. To run mappings that manage metadata changes in flat file sources, each Data Integration Service process must be able to access the source file directories.
When you configure the source and output file directories for a Data Integration Service that runs on multiple nodes, consider the following guidelines:
  • You can configure the Source Directory property to use a shared directory to create one directory for source files.
    If you run mappings that manage metadata changes in flat file sources and the Data Integration Service grid is configured to run jobs in separate remote processes, you must configure the Source Directory property to use a shared directory.
    If you run other types of mappings, or if you run mappings that manage metadata changes in flat file sources on any other Data Integration Service grid configuration, you can configure a different source directory for each node with the compute role. Replicate all source files in all of the source directories, as shown in the sketch after this list.
  • If you run mappings that use a persistent lookup cache, you must configure the Cache Directory property to use a shared directory. If no mappings use a persistent lookup cache, you can configure a different cache directory for each node with the compute role.
  • You can configure the Target Directory, Temporary Directories, and Reject File Directory properties to use different directories for each node with the compute role.
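If you configure a different source directory on each node with the compute role, replicate the source files to every node before you run the mappings. The following sketch is one possible way to do that; it assumes the source directory is /data/infa/source on every node and that the compute node host names node2 and node3 are placeholders reachable over SSH. Adjust the paths and host names for your environment.

  # Hypothetical sketch: copy the source files from the current node to the
  # same local source directory on each of the other compute nodes.
  for host in node2 node3; do
    rsync -av /data/infa/source/ "$host":/data/infa/source/
  done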
To configure a shared directory, configure the directory in the Execution Options on the Properties view. You can configure a shared directory for the home directory so that all source and output file directories use the same shared home directory. Or, you can configure a shared directory for a specific source or output file directory. Remove any overridden values for the same execution option on the Compute view.
To configure different directories for each node with the compute role, configure the directory in the Execution Options on the Compute view.
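If you prefer to script these settings rather than use the Administrator tool, you might use the infacmd dis UpdateServiceOptions command to set a service-level execution option (the equivalent of the Properties view) and the infacmd dis UpdateComputeOptions command to override an option for a single node with the compute role (the equivalent of the Compute view). The following sketch is illustrative only: the domain, user, service, and node names are placeholders, and the exact execution option names are assumptions that you should verify against the infacmd Command Reference for your version.

  # Hypothetical sketch: point the source directory at a shared location
  # for the whole Data Integration Service (Properties view equivalent).
  infacmd dis UpdateServiceOptions -dn MyDomain -un Administrator -pd '<password>' \
    -sn DIS_Service -o ExecutionOptions.SourceDirectory=/mnt/shared/infa/source

  # Hypothetical sketch: override the temporary directories on one node
  # with the compute role (Compute view equivalent).
  infacmd dis UpdateComputeOptions -dn MyDomain -un Administrator -pd '<password>' \
    -sn DIS_Service -nn node2 -o ExecutionOptions.TemporaryDirectories=/local/infa/temp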
