Troubleshooting Data Archive Projects

Extraction fails with various Oracle errors.
The archive job fails due to known limitations of the Oracle JDBC drivers, or the archive job succeeds but produces unexpected results. The following example scenarios can occur due to Oracle JDBC driver limitations:
  • You receive a java.sql.SQLException protocol violation exception.
  • You receive an exception that OALL8 is in an inconsistent state.
  • When you run an archive cycle for an entity with a Run Procedure step, an exception occurs due to an incomplete import. The job did not import the CLOB column, even though the job log shows that the column was imported successfully. The applimation.log file includes an exception that an SQL statement is empty or null.
  • The connection to an Oracle database fails.
  • The Copy to Destination step shows an exception that the year is out of range.
  • The select and insert job fails with an ORA-600 exception.
  • The residual tables job fails with an ORA-00942 exception that a table or view does not exist.
To resolve the errors, use a different Oracle JDBC driver version. You can find additional Oracle JDBC drivers in the following directory:
<Data Archive installation directory>/optional
For more information, see Knowledge Base article 109857.
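The driver swap can be sketched as a shell session. Everything below runs in a throwaway sandbox that mimics the install layout, so it is safe to execute as-is; the webapp/WEB-INF/lib path and the ojdbc*.jar file names are assumptions, so verify them against your own installation before doing this for real:

```shell
#!/bin/sh
set -e
# Sketch only: the sandbox simulates a Data Archive layout.
# On a real system, point ILM_HOME at your installation instead;
# the webapp lib path below is an assumption -- verify it.
ILM_HOME="$(mktemp -d)"                        # stand-in for the install dir
LIB_DIR="$ILM_HOME/webapp/WEB-INF/lib"         # assumed driver location
mkdir -p "$ILM_HOME/optional" "$LIB_DIR"
touch "$ILM_HOME/optional/ojdbc8.jar" "$LIB_DIR/ojdbc7.jar"  # placeholder jars

# 1. List the alternative drivers shipped in <install dir>/optional.
ls "$ILM_HOME/optional"

# 2. Back up the driver in use, then copy an alternative into place.
cp "$LIB_DIR/ojdbc7.jar" "$LIB_DIR/ojdbc7.jar.bak"
cp "$ILM_HOME/optional/ojdbc8.jar" "$LIB_DIR/"

# 3. Restart the ILM application server so the new driver is loaded.
```

Keeping the backup copy makes it easy to roll back if the alternative driver version introduces a different failure.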
Extraction fails with out of memory or heap errors.
You may receive an out of memory error if several tables include LOB datatypes.
To resolve the errors, perform the following tasks:
  • Lower the JDBC fetch size in the source connection properties. Use a value between 100 and 500.
  • Reduce the number of Java threads that the informia.maxActiveAMThreads property in the conf.properties file specifies. A general guideline is to use 2-3 threads per core.
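Of these two settings, the fetch size is changed on the source connection screen, while the thread count lives in conf.properties. The following sketch applies the 2-threads-per-core guideline to that property; the one-line sample file is invented for illustration, and on a real system you would edit your actual conf.properties:

```shell
#!/bin/sh
set -e
# Sketch: cap informia.maxActiveAMThreads at roughly 2 threads per core.
# The one-line conf.properties below is invented for illustration.
CONF=$(mktemp)
printf 'informia.maxActiveAMThreads=20\n' > "$CONF"

CORES=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 4)
THREADS=$((CORES * 2))                         # 2-3 per core guideline
sed -i.bak \
  "s/^informia.maxActiveAMThreads=.*/informia.maxActiveAMThreads=$THREADS/" \
  "$CONF"
grep maxActiveAMThreads "$CONF"
```

Restart the web tier after changing conf.properties so the new thread limit takes effect.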
Archive job runs for days.
To improve the archive job performance, perform the following tasks:
  • Check for timestamp updates in host:port/jsp/tqm.jsp or the BCP directory.
  • Check for updates in table AM_JOB_STEP_THREADS in the ILM repository (AMHOME).
  • For UNIX operating systems, run the following UNIX commands to see input, output, and CPU activity:
    • sar -u 1 100
    • iostat -m 1
    • top
    • ps -fu `whoami` -H
  • Use multiple web tiers to distribute the extraction.
    To use multiple web tiers, perform the following steps:
    1. Create a copy of the web tier.
    2. Change the port in the conf.properties file.
    3. Start the web tier.
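The three steps above can be sketched as a shell session. The directory layout is a throwaway stand-in, and the property name informia.applicationPort is an assumption; use the port key that your conf.properties actually contains:

```shell
#!/bin/sh
set -e
# Sketch: clone a web tier and give the copy its own port.
# The layout and the informia.applicationPort key are assumptions.
WEBTIER="$(mktemp -d)/webtier1"
mkdir -p "$WEBTIER"
printf 'informia.applicationPort=8080\n' > "$WEBTIER/conf.properties"  # sample

# 1. Create a copy of the web tier.
cp -r "$WEBTIER" "${WEBTIER%1}2"

# 2. Change the port in the copy's conf.properties file.
sed -i.bak 's/^informia.applicationPort=.*/informia.applicationPort=8081/' \
  "${WEBTIER%1}2/conf.properties"
grep applicationPort "${WEBTIER%1}2/conf.properties"

# 3. Start the copied web tier with its usual start script.
```

Each web tier must listen on a distinct port; otherwise the second instance fails to bind and never joins the extraction.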
When I use multiple web tiers, I receive an error that the archive job did not find any archive folders.
Run the Data Vault Loader and Update Retention jobs on the same web tier where you ran the Create Archive Folder job.
The archive job fails because the process did not complete within the maximum amount of time.
You may receive an error that the process did not complete if the archive job uses IBM DB2 Connect utilities for data movement. When the archive job uses IBM DB2 Connect utilities to load data, the job waits for a response from the utility for each table that the utility processes. The archive job fails if the job does not receive a response within 3 hours.
Use the IBM DB2 trace files to locate the error.
The archive job fails at the Copy to Staging step with the following error: Abnormal end unit of work condition occurred.
The archive job fails if the source database is IBM DB2 and the data contains the Decfloat data type.
To resolve this issue, perform the following tasks:
  1. Go to Workbench > Manage Archive Projects.
  2. Click the edit icon for the archive project.
  3. Set the Insert Commit Interval value to 1.
  4. Complete the remaining steps in the archive project to archive the data.
The archive job fails at the Build Staging or Validate Destination step.
If the source database is Oracle and a table contains more than 998 columns, the archive job fails.
This issue occurs because of an Oracle database limit on the number of columns in a table. Oracle supports a maximum of 1000 columns per table, and the archive job adds two extra columns during execution. If a source table already contains more than 998 columns, the job cannot add its columns and fails.
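You can check for tables that would hit this limit before scheduling the job. The sketch below only prints an Oracle dictionary query for you to paste into SQL*Plus; the APPS schema name is a placeholder, so pass your actual source schema:

```shell
#!/bin/sh
# Sketch: print an Oracle dictionary query that flags tables too wide to
# archive. The 998 threshold leaves headroom for the two columns that the
# archive job adds during execution.
SCHEMA="${1:-APPS}"   # placeholder schema; pass your source schema instead
SQL=$(cat <<EOF
SELECT owner, table_name, COUNT(*) AS column_count
FROM   all_tab_columns
WHERE  owner = '$SCHEMA'
GROUP  BY owner, table_name
HAVING COUNT(*) > 998
ORDER  BY column_count DESC;
EOF
)
printf '%s\n' "$SQL"
```

Any table the query returns must be restructured or excluded from the entity before the archive job can succeed.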
