Troubleshooting a Grid

I enabled a Data Integration Service that runs on a grid, but one of the service processes failed to start.
When you enable a Data Integration Service that runs on a grid, a service process starts on each node in the grid that has the service role. A service process might fail to start for the following reasons:
  • The node does not have the service role.
    Enable the service role on the node, and then enable the service process running on that node.
  • Another process running on the machine is using the HTTP port number assigned to the service process.
    On the Processes view for the Data Integration Service, enter a unique HTTP port number for the service process. Then, enable the service process running on that node. A sketch after this list shows one way to check whether the port is already in use.
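Before you re-enable the service process, you can verify from the node itself whether another process already holds the HTTP port. The following is a minimal sketch that uses only the Python standard library. The port number 8095 is a placeholder, not a value from this guide, so substitute the HTTP port shown on the Processes view.

import socket

def port_in_use(port, host="0.0.0.0"):
    # Try to bind the port. If the bind fails, the port is already taken
    # on this node (or the current user lacks permission to bind it).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        try:
            sock.bind((host, port))
            return False
        except OSError:
            return True

# 8095 is a placeholder. Use the HTTP port assigned to the service process.
if port_in_use(8095):
    print("Port 8095 is in use. Enter a unique HTTP port number.")
else:
    print("Port 8095 is free. Enable the service process on this node.")

Run the check on the node where the service process failed to start, because the port conflict is local to that machine.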
A job failed to run on a Data Integration Service grid. Which logs do I review?
If the Data Integration Service grid is configured to run jobs in the service process or in separate local processes, review the following logs in this order:
  1. Job log accessible from the Monitor tab.
    Includes log events about how the DTM instance runs the job.
  2. Data Integration Service log accessible from the Service view of the Logs tab.
    Includes log events about service configuration, processing, and failures. A sketch after this list shows one way to filter the failure events from a saved copy of this log.
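If the service log is long, you can narrow the review to failure events. The following is a minimal sketch. It assumes that you saved the Data Integration Service log as a plain-text file and that failure events carry a severity prefix such as ERROR or FATAL, in the same style as the INFO prefix on the messages shown later in this section. The severity tags and the file name are assumptions, not values from this guide.

import sys

# Severity tags to surface. Adjust these to match the severities that
# appear in your saved log file; the tags here are assumptions.
FAILURE_TAGS = ("ERROR", "FATAL")

def failure_events(log_path):
    # Yield the lines whose first word is one of the failure tags.
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if line.lstrip().startswith(FAILURE_TAGS):
                yield line.rstrip()

if __name__ == "__main__":
    # Usage: python scan_dis_log.py <saved_log_file>
    for event in failure_events(sys.argv[1]):
        print(event)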
If the Data Integration Service grid is configured to run jobs in separate remote processes, additional components write log files. Review the following logs in this order:
  1. Job log accessible from the Monitor tab.
    Includes log events about how the DTM instance runs the job.
  2. Data Integration Service log accessible from the Service view of the Logs tab.
    Includes log events about service configuration, processing, and failures. The Data Integration Service log includes the following message, which indicates the host name and port number of the master compute node (a sketch after this list shows one way to extract these values from a saved copy of the log):
    INFO: [GRIDCAL_0204] The Integration Service [<MyDISName>] elected a new master compute node [<HostName>:<PortNumber>].
  3. Master compute node log accessible in the cadi_services_0.log file located in the log directory configured for the master compute node.
    Includes log events written by the Service Manager on the master compute node about managing the grid of compute nodes and orchestrating worker service process requests. The master compute node logs are not accessible from the Administrator tool.
  4. Resource Manager Service log accessible from the Service view of the Logs tab.
    Includes log events about service configuration and processing and about nodes with the compute role that register with the service.
  5. Container management log accessible from the Domain view of the Logs tab. Select Container Management for the category.
    Includes log events about how the Service Manager manages containers on nodes with the compute role.
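To review the master compute node log, you need the host name reported in the GRIDCAL_0204 message. The following is a minimal sketch that pulls the host name and port number out of a saved copy of the Data Integration Service log. The regular expression mirrors the message shown in step 2; the file name in the usage comment is a placeholder.

import re

# The pattern mirrors the GRIDCAL_0204 message shown in step 2:
# INFO: [GRIDCAL_0204] The Integration Service [<MyDISName>] elected a new
# master compute node [<HostName>:<PortNumber>].
GRIDCAL_0204 = re.compile(
    r"\[GRIDCAL_0204\].*elected a new master compute node \[([^\]:]+):(\d+)\]"
)

def master_compute_node(log_path):
    # Return (host, port) from the most recent GRIDCAL_0204 event, or None.
    result = None
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = GRIDCAL_0204.search(line)
            if match:
                result = (match.group(1), int(match.group(2)))
    return result

# Example with a placeholder file name:
# print(master_compute_node("dis_service_log.txt"))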
A mapping that ran in a separate remote process has an incomplete log file.
When a mapping runs on a Data Integration Service grid configured to run jobs in separate remote processes, the Data Integration Service writes two files for the mapping log. The worker service process that optimizes and compiles the mapping on the service node writes log events to one log file. The DTM process that runs the mapping on the compute node writes log events to another log file. When you access the mapping log, the Data Integration Service consolidates the two files into a single log file.
A mapping log might be incomplete for the following reasons:
  • The mapping is still running.
    When a DTM process finishes running a mapping, it sends the log file to the master Data Integration Service process. No DTM messages appear in the mapping log until the entire mapping is complete. To resolve the issue, you can wait until the mapping completes before accessing the log. Or, you can find the log file that the DTM process temporarily writes on the worker compute node.
  • The mapping has completed, but the DTM process failed to send the complete log file to the master Data Integration Service process.
    The DTM process might fail to send the complete DTM log because of a network error or because the worker compute node unexpectedly shut down. The DTM process sends the log file to the Data Integration Service process in multiple sections. The DTM section of the log begins and ends with the following lines:
    ###
    ### <MyWorkerComputeNodeName>
    ###
    ### Start Grid Task [gtid-1443479776986-1-79777626-99] Segment [s0] Tasklet [t-0] Attempt [1]
    ....
    ### End Grid Task [gtid-1443479776986-1-79777626-99] Segment [s0] Tasklet [t-0] Attempt [1]
    If these lines are not included in the mapping log, or if the beginning line is included but not the ending line, then the DTM process failed to send the complete log file. A sketch after this list shows one way to check a saved mapping log for these markers. To resolve the issue, you can find the DTM log files written to the following directory on the node where the master Data Integration Service process runs:
    <Informatica installation directory>/logs/<node name>/services/DataIntegrationService/disLogs/logConsolidation/<mappingName>_<jobID>_<timestamp>
    If the job ID folder is empty, find the log file that the DTM process temporarily writes on the worker compute node.
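To check whether a saved mapping log contains the complete DTM section, you can look for the Start Grid Task and End Grid Task markers described above. The following is a minimal sketch; the marker patterns mirror the lines shown above, and the log file name is supplied on the command line.

import re
import sys

# The marker patterns mirror the DTM section boundaries shown above.
START_MARKER = re.compile(r"### Start Grid Task \[(gtid-[^\]]+)\]")
END_MARKER = re.compile(r"### End Grid Task \[(gtid-[^\]]+)\]")

def dtm_section_status(log_path):
    # Collect the grid task IDs that have a Start line and an End line.
    starts, ends = set(), set()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for pattern, bucket in ((START_MARKER, starts), (END_MARKER, ends)):
                match = pattern.search(line)
                if match:
                    bucket.add(match.group(1))
    if not starts and not ends:
        return "No DTM section. The mapping is still running or the DTM log was not received."
    if starts - ends:
        return "Incomplete DTM section. No End Grid Task line for: " + ", ".join(sorted(starts - ends))
    return "Complete DTM section. Start and End Grid Task lines are present."

if __name__ == "__main__":
    # Usage: python check_mapping_log.py <saved_mapping_log>
    print(dtm_section_status(sys.argv[1]))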
To find the temporary DTM log file on the worker compute node, find the following message in the first section of the mapping log:
INFO: [GCL_5] The grid task [gtid-1443479776986-1-79777626-99] cluster logs can be found at [./1443479776986/taskletlogs/gtid-1443479776986-1-79777626-99].
The listed directory is a subdirectory of the following default log directory configured for the worker compute node:
<Informatica installation directory>/logs/<node name>/dtmLogs/
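The following is a minimal sketch that combines the directory from the GCL_5 message with the log directory of the worker compute node to produce the full path of the temporary DTM log directory. It reads a saved copy of the first section of the mapping log; the file name and the example log directory in the usage comment are placeholders, so substitute the directory configured for your compute node.

import os
import re

# The pattern mirrors the GCL_5 message shown above:
# INFO: [GCL_5] The grid task [...] cluster logs can be found at [./<relative path>].
GCL_5 = re.compile(r"\[GCL_5\].*cluster logs can be found at \[([^\]]+)\]")

def temp_dtm_log_dir(mapping_log_path, compute_node_log_dir):
    # Return the resolved path of the temporary DTM log directory, or None.
    with open(mapping_log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = GCL_5.search(line)
            if match:
                relative = match.group(1)
                if relative.startswith("./"):
                    relative = relative[2:]
                return os.path.join(compute_node_log_dir, relative)
    return None

# Example with placeholder paths. The second argument is the log directory
# configured for the worker compute node, by default
# <Informatica installation directory>/logs/<node name>/dtmLogs/.
# print(temp_dtm_log_dir("mapping_log_first_section.txt",
#                        "/opt/Informatica/logs/node02/dtmLogs"))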
