Table of Contents

  1. Preface
  2. Data Replication Overview
  3. Understanding Data Replication
  4. Sources - Preparation and Replication Considerations
  5. Targets - Preparation and Replication Considerations
  6. Starting the Server Manager
  7. Getting Started with the Data Replication Console
  8. Defining and Managing Server Manager Main Servers and Subservers
  9. Creating and Managing User Accounts
  10. Creating and Managing Connections
  11. Creating Replication Configurations
  12. Materializing Targets with InitialSync
  13. Scheduling and Running Replication Tasks
  14. Implementing Advanced Replication Topologies
  15. Monitoring Data Replication
  16. Managing Replication Configurations
  17. Handling Replication Environment Changes and Failures
  18. Troubleshooting
  19. Data Replication Files and Subdirectories
  20. Data Replication Runtime Parameters
  21. Command Line Parameters for Data Replication Components
  22. Updating Configurations in the Replication Configuration CLI
  23. DDL Statements for Manually Creating Recovery Tables
  24. Sample Scripts for Enabling or Disabling SQL Server Change Data Capture
  25. Glossary

Recovery

Exceptions or failures can occur at any stage in replication processing.
For example, the Applier might unexpectedly disconnect from the target database while applying changes, or the Server Manager might fail to transmit intermediate files over the network.
To help you recover from these failures, Data Replication stores recovery information in the following ways so that the Applier can resume change processing without data loss or duplication:
  • For relational target types, the Applier generates a recovery table on the target. A recovery table stores information that prevents the Applier from reapplying previously applied data when the Applier task restarts. Data Replication supports recovery tables for the following target types: Amazon Redshift, DB2, Greenplum, MemSQL, Microsoft SQL Server, MySQL, Netezza, Oracle, PostgreSQL, Teradata, and Vertica. The first sketch after this list illustrates the approach.
  • For Apache Kafka targets, the Applier uses a checkpoint file instead of a recovery table to recover from the last checkpoint if an outage occurs. A checkpoint is recorded in the checkpoint file after each message is successfully sent to the target or after each Commit operation, depending on the delivery mode. By default, the Applier uses guaranteed delivery and writes a checkpoint after each message is successfully acknowledged as received by Kafka. This mode avoids message loss or duplication but slows apply processing. If guaranteed delivery is not used, a checkpoint is written after each Commit operation. In this case, duplicate or missing messages might occur on the target. The second sketch after this list illustrates checkpointing under guaranteed delivery.
For Cloudera, Flat File, and Hortonworks targets, Data Replication does not provide a recovery mechanism.
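
To make the recovery-table mechanism concrete, the following minimal sketch implements the same idea in Python against SQLite so that it runs anywhere. The idr_recovery table name, its columns, and the orders target table are hypothetical, invented for illustration; the actual recovery table schema is product-specific and is covered in the appendix DDL Statements for Manually Creating Recovery Tables. The essential point is that each change and the recovery-table update commit in one transaction, so a restarted task skips anything already recorded.

  import sqlite3

  conn = sqlite3.connect("target.db")
  conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
  conn.execute("CREATE TABLE IF NOT EXISTS idr_recovery ("
               "task_name TEXT PRIMARY KEY, last_applied_seq INTEGER)")
  conn.commit()

  def apply_changes(task, changes):
      # Read the last sequence number recorded for this task, if any.
      row = conn.execute("SELECT last_applied_seq FROM idr_recovery WHERE task_name = ?",
                         (task,)).fetchone()
      last_seq = row[0] if row else -1
      for seq, (order_id, amount) in changes:
          if seq <= last_seq:
              continue  # applied before the restart; skip to avoid duplicates
          # The change and the recovery-table update commit atomically, so a
          # crash can never apply a change without also recording it.
          conn.execute("INSERT OR REPLACE INTO orders VALUES (?, ?)", (order_id, amount))
          conn.execute("INSERT OR REPLACE INTO idr_recovery VALUES (?, ?)", (task, seq))
          conn.commit()

  # Running this twice applies each change exactly once; the second run
  # finds last_applied_seq = 1 and skips both changes.
  apply_changes("task1", [(0, (101, 9.99)), (1, (102, 24.50))])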
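
The Kafka checkpointing behavior can be sketched in the same spirit. The example below assumes the kafka-python client; the checkpoint file name, topic, and change_stream() helper are hypothetical, and the sketch illustrates the guaranteed-delivery pattern rather than the Applier's actual implementation. Each send blocks until Kafka acknowledges the message, and only then does the checkpoint advance, which is why this mode prevents loss and duplication at the cost of apply throughput.

  import os
  from kafka import KafkaProducer  # assumes the kafka-python package

  CHECKPOINT_FILE = "applier.checkpoint"  # hypothetical checkpoint file name

  def read_checkpoint():
      # Return the sequence number of the last acknowledged message, or -1.
      try:
          with open(CHECKPOINT_FILE) as f:
              return int(f.read())
      except FileNotFoundError:
          return -1

  def write_checkpoint(seq):
      # Write to a temp file and rename so a crash cannot leave a torn checkpoint.
      tmp = CHECKPOINT_FILE + ".tmp"
      with open(tmp, "w") as f:
          f.write(str(seq))
      os.replace(tmp, CHECKPOINT_FILE)

  def change_stream():
      # Hypothetical stand-in for reading change records from intermediate files.
      yield from [b"INSERT orders 101", b"UPDATE orders 102"]

  producer = KafkaProducer(bootstrap_servers="localhost:9092", acks="all")
  last_acked = read_checkpoint()
  for seq, payload in enumerate(change_stream()):
      if seq <= last_acked:
          continue  # delivered before the restart; skip
      producer.send("replication.topic", payload).get(timeout=30)  # block for the ack
      write_checkpoint(seq)  # one checkpoint per acknowledged message
  producer.flush()

With guaranteed delivery disabled, write_checkpoint would instead run once per Commit operation, so messages sent between the last checkpoint and a crash could be duplicated or lost after a restart.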
