Replication Considerations for Oracle Sources

Review the following information about Data Replication support for Oracle sources:
  • Oracle must run in ARCHIVELOG mode and have minimal global supplemental logging enabled.
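    For example, you can check both prerequisites and, if necessary, enable minimal global supplemental logging with standard Oracle statements such as the following:
    -- Verify the log mode and the minimal supplemental logging status.
    SELECT log_mode, supplemental_log_data_min FROM v$database;
    -- Enable minimal global supplemental logging if it is not already enabled.
    ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;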
  • Data Replication can extract change data from both Oracle online redo logs and archived logs in a Real Application Cluster (RAC) or non-RAC environment.
  • Data Replication can extract change data from Oracle logs that are managed by Automatic Storage Management (ASM).
  • Data Replication can extract change data from Oracle archived logs over the network.
  • Data Replication can extract change data from Recovery Manager (RMAN) backup sets regardless of the endianness of the system where the RMAN utility created these backup sets. The Extractor does not support compressed or encrypted backup sets.
    To process redo logs from backup sets, set the extract.oracle.bsc.enable runtime parameter to 1. The Extractor processes backup sets in the locations that are specified on the Extract Range tab.
    For faster change data capture from backup sets, ensure that the extract.oracle.use_log_file_header_cache runtime parameter is set to 1. Also, you can use the extract.oracle.bsc.number_of_threads_to_build_indexes runtime parameter to adjust the number of threads that the Extractor uses to index redo logs in the backup sets.
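    For example, a backup set configuration that uses all three runtime parameters might look like the following name=value sketch. The parameter names come from this section; the thread count of 4 is only an illustration, and you set these values wherever you normally manage Extractor runtime parameters:
    extract.oracle.bsc.enable=1
    extract.oracle.use_log_file_header_cache=1
    extract.oracle.bsc.number_of_threads_to_build_indexes=4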
  • For Oracle sources that use ASM, Data Replication can read online redo logs over the network. The Extractor makes requests to ASM for data in the online redo logs. The Extractor can run on a system other than the Oracle source system.
  • For Oracle sources without ASM, Data Replication can read online redo logs that are on a remote Network File System (NFS) in the following cases:
    • The Oracle database files reside on the NFS file system. Oracle accesses these files by means of the Oracle Direct NFS Client.
    • The Oracle database files reside on Network Attached Storage (NAS) devices that are certified by the Oracle Storage Compatibility Program (OSCP).
    • For NFS Version 3 and Version 4 file systems, the directory that contains the online redo logs is mounted with the following options: bg, hard, nointr, noac, and forcedirectio. If the forcedirectio option is not available, use the no_wdelay option or another mount option that provides direct I/O.
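      For example, on an operating system that supports the forcedirectio option, a mount command for the exported redo log directory might look like the following. The host name and paths are hypothetical, and the exact command syntax depends on your operating system:
      mount -o bg,hard,nointr,noac,forcedirectio nfs_server:/export/orcl/redo /u01/oradata/orcl/redo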
  • Data Replication can extract change data from Amazon Relational Database Service (RDS) for Oracle deployments of Oracle databases in the cloud.
    To process redo logs from an Amazon RDS for Oracle instance, set the extract.oracle.rds_read_enable runtime parameter to 1.
    For Amazon RDS for Oracle instances, the Extractor processes only the redo logs that are located in the following directories:
    • ARCHIVELOG_DIR for archived redo log files
    • ONLINELOG_DIR for online redo log files
  • If multiple Oracle Extractors extract data from an Amazon RDS for Oracle instance at the same time, the instance might return the following error:
    ORA-27477: "SYS"."RDS_REFRESH_FILE_LIST" already exists.
    This issue does not cause data loss. To continue data replication, restart the Extractor.
  • To extract change data from online redo logs remotely, export the directory that contains the online redo logs and mount the directory to the system where you run the Extractor. Use one of the following strategies to specify the location of the online redo logs for the Extractor:
    • On the system where you run the Extractor, mount the directory with the online redo logs to the same location that Oracle uses to access online redo logs. To get the location of the online redo logs for Oracle, execute the following statement:
      select * from v$logfile;
    • Use the ONLINE_FILES_PREFIX command line parameter for the Oracle Extractor to map the location of the online redo logs that Oracle uses to the location of online redo logs that the Extractor uses. Use this parameter if Oracle stores all of the online redo logs in one directory.
  • Data Replication can extract changes from Oracle physical standby databases that are open in read-only mode and from Oracle logical standby databases.
    If you use InitialSync to materialize an Oracle target based on a configuration that has an Oracle physical standby source, ensure that InitialSync does not use database links (dblinks) to unload the source data. Use the OCI library instead.
  • Data Replication cannot extract change data from Oracle snapshot standby databases.
  • Data Replication can extract change data from Oracle materialized views.
  • Data Replication can extract change data from Oracle index-organized tables (IOTs), except partitioned IOTs defined with both the OVERFLOW and INCLUDING clauses.
  • The Extractor cannot process Quick Multi Insert (QMI) operations for IOTs. Oracle produces QMI bulk-insert operations internally when you issue statements such as INSERT ... SELECT and CREATE TABLE ... AS SELECT. Therefore, the Extractor cannot extract INSERT ... SELECT and CREATE TABLE ... AS SELECT operations that use QMIs for IOTs.
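    For example, assuming a hypothetical index-organized table named orders_iot, the following bulk insert uses QMI operations internally and therefore cannot be extracted:
    INSERT INTO orders_iot SELECT * FROM orders_staging;
    A CREATE TABLE ... AS SELECT statement that creates and loads an index-organized table behaves the same way.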
  • Data Replication can extract change data from Oracle compressed tables with the following limitations:
    • In Oracle 11g Release 1, Data Replication can only capture inserts, with or without the append hint, and direct path loads for tables that use FOR DIRECT_LOAD compression. Data Replication can only capture insert append operations and direct path loads for tables with OLTP compression in Oracle 11g Release 1. In Oracle 11g Release 2 and later, Data Replication can capture inserts, updates, and deletes, as normal, for tables with OLTP and FOR DIRECT_LOAD compression.
    • Data Replication does not support Oracle Exadata Hybrid Columnar Compression (EHCC).
  • Data Replication supports Oracle Transparent Data Encryption (TDE) of source columns and tablespaces. Data Replication supports TDE with all of the Oracle encryption algorithms, including 3DES and AES with 128-bit, 192-bit, and 256-bit key lengths. Data Replication also supports TDE with MAC integrity checking and with SALT enhanced security.
    Before replicating change data from encrypted columns and tablespaces, you must import the required master keys from the Oracle wallet into the replication configuration. For more information, see Enabling Replication from Oracle Tables and Tablespaces That Use Transparent Data Encryption.
    When the Extractor loads the replication configuration, it decrypts the Oracle master keys that were imported into the configuration by using the internal master key. The Extractor uses the decrypted master keys to decrypt all of the TDE table keys that were also imported into the configuration. The Extractor then uses the TDE table keys to decrypt change data from encrypted columns.
    If the Extractor encounters an encrypted tablespace key in the redo log header, the Extractor decrypts this key by using the Oracle master key. The Extractor uses the TDE tablespace keys to decrypt redo records from encrypted tablespaces.
    The Extractor writes decrypted data to intermediate files. The Applier then reads the decrypted values and writes them to the target.
    The decryption of data from encrypted Oracle columns and tablespaces can significantly degrade Extractor performance.
    If you change table encryption keys, Data Replication handles the changes automatically. You do not need to save the configuration file again. However, if you add a new encryption master key to the wallet, you must open the Oracle wallet and save the configuration again. Otherwise, the Extractor ends with an error because it cannot decrypt the encrypted column data.
  • Data Replication can extract change data from Oracle online redo logs and archived logs on a raw device on Linux.
  • If an Oracle RESETLOGS event occurs on a source database, Data Replication can continue extracting change data from the redo logs across the RESETLOGS boundary. A RESETLOGS event occurs in situations that require you to open the database with the RESETLOGS option, such as after a flashback database operation, incomplete point-in-time recovery, or point-in-time recovery with a backup control file. A RESETLOGS event can also occur transparently in a Data Guard environment with a physical standby database after a failover or after a switchover that is preceded by an incomplete recovery and followed by an ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE operation. A RESETLOGS operation archives the current online redo logs, resets the log sequence number to 1, creates a new database incarnation, creates a new timestamp and SCN for the online redo logs, and updates all the current data files with the new RESETLOGS SCN.
    When creating a processing queue for redo logs, the Extractor checks whether the database has undergone a RESETLOGS event. If a RESETLOGS event occurred, the Extractor determines whether it has processed redo records past the RESETLOGS boundary. If the Extractor has not passed the RESETLOGS boundary yet, the Extractor will process the remaining redo records for the old database incarnation and then, starting from the RESETLOGS boundary, will automatically start processing redo logs for the new database incarnation.
    If the Extractor has already passed the RESETLOGS boundary, the Extractor ends with an error. The error occurs because the Extractor already extracted change data that the Oracle source database rolled back after the RESETLOGS event. Consequently, data inconsistencies might occur.
    If the Extractor has already passed the RESETLOGS boundary, but the Applier has not applied the source data past this boundary yet, you can clean the replication configuration to resolve the problem. If the Applier has also passed the RESETLOGS boundary, you must run InitialSync to resynchronize the source and target.
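    To determine whether a RESETLOGS event occurred and at which SCN, you can query the database incarnation history, for example:
    SELECT incarnation#, resetlogs_change#, resetlogs_time, status FROM v$database_incarnation ORDER BY incarnation#;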
  • For configurations that have an Oracle source and target, Data Replication supports piecewise operations on LOB columns in SQL Apply mode.
    If the Oracle source uses inline LOBs, Data Replication supports piecewise operations in all apply modes for any supported target type. For targets other than Oracle, the Applier skips the following PL/SQL operations on inline and out-of-line LOB columns:
    • APPEND, if the redo record does not contain the complete LOB data
    • ERASE
    • TRIM
    If the Oracle source uses piecewise operations that the Applier cannot process, set the apply.skip_oracle_piecewise_operations parameter to 1 or unmap the LOB columns that might be manipulated in pieces.
  • Data Replication supports SecureFiles and BasicFiles storage of LOB columns.
    Data Replication does not support SECUREFILE LOB columns with the following options:
    • DEDUPLICATE
    • FILESYSTEM_LIKE_LOGGING, if the size of LOB data exceeds 4 KB
    • NOLOGGING, if the size of LOB data exceeds 4 KB
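    To identify SECUREFILE LOB columns that use these options, you can run a dictionary query along the following lines. This is a sketch that assumes the DEDUPLICATION and LOGGING columns of DBA_LOBS reflect the deduplication and logging settings of each LOB column:
    SELECT owner, table_name, column_name, deduplication, logging FROM dba_lobs WHERE securefile = 'YES' AND (deduplication != 'NO' OR logging != 'YES');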
  • Data Replication does not support tables that were created with the ROWDEPENDENCIES setting.
  • Data Replication supports virtual columns. The Data Replication Console shows virtual columns on the Map Columns tab.
  • Data Replication does not support cluster, object, and XML tables.
  • Data Replication does not support the replication of triggers and stored procedures.
  • Oracle 11g Enterprise Edition introduced the Oracle Advanced Compression Option. For Oracle 11g Release 1 sources, Data Replication does not support the Advanced Compression Option. However, for later Oracle releases, Data Replication can extract change data from source tables that use the Advanced Compression Option.
  • Data Replication cannot use a unique function-based index to uniquely identify rows when applying Updates and Deletes. If a source table does not have a primary key and includes only a unique function-based index, add a virtual index with the Primary key option that the Applier uses to uniquely identify target rows.
  • AIX 6.1 introduced the open flag O_CIOR for file open system calls. Oracle 11g Release 2 (11.2.0.2) uses this flag to open redo log files. However, the O_CIOR flag cannot be used to open files on file systems that are mounted with the Concurrent IO (CIO) option. For Oracle 11g Release 2 sources on AIX 6.1 and later, ensure that the JFS2 file system that contains Oracle redo logs is mounted without the CIO option.
  • Data Replication can extract change data from partitioned tables. However, if an ALTER TABLE partition operation occurs in the source, Data Replication might incorrectly replicate data to the target. The following DDL partition operations can affect data integrity on the target:
    • If a DROP PARTITION operation occurs on a source table, Data Replication does not delete the data that belonged to the dropped partition from the target.
    • If a SPLIT PARTITION operation occurs on a source table partition that contains data, Data Replication might load duplicate records to the target.
    • If a TRUNCATE PARTITION operation occurs on a source table, Data Replication does not delete the data that belonged to the truncated partition from the target.
    Perform partition-related DDL operations on mapped source tables that are defined with the NOLOGGING option. The Extractor then skips these operations.
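    For example (the table and partition names are hypothetical), you can set the mapped table to NOLOGGING for the duration of the partition maintenance:
    ALTER TABLE sales NOLOGGING;
    ALTER TABLE sales DROP PARTITION sales_q1_2020;
    ALTER TABLE sales LOGGING;
    Returning the table to LOGGING mode afterward is optional and depends on your logging and recovery requirements.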
  • If you are replicating a partitioned source table and you perform an Oracle partition exchange operation between one of the partitions and a separate table, the following change data extraction limitations apply:
    • Data Replication does not replicate any DML changes that were written to the separate table before the partition exchange operation.
    • After the partition exchange, Data Replication replicates any new DML changes on a partition in the source table to the target.
    • If any rows are added to the partitioned table as a result of the partition exchange, Data Replication does not replicate these rows to the target because they are not logged in the redo log during the partition exchange. If you change or delete these rows and then use SQL Apply mode for data replication, apply errors occur because Data Replication detects a mismatch in the rows. However, you can configure Data Replication to ignore these errors on the Runtime Settings tab > Error Handling view. For more information, see Configuring Applier Handling of Error Codes.
    • If any rows are removed from the partitioned table as a result of the partition exchange, Data Replication does not remove these rows from the target because they are not logged in the redo log during the partition exchange.
    • If the partition exchange operation changes the content of the table and is not a simple reorganization or rebuild operation, Informatica recommends that you run InitialSync to resynchronize the source and target in the mapping.
  • If an ALTER TABLE MOVE operation occurs on the source, Data Replication loads duplicate records to the target.
    Perform ALTER TABLE MOVE operations on mapped source tables that are defined with the NOLOGGING option. The Extractor then skips these operations.
  • Data Replication supports the following Oracle 12c features:
    • Columns with extended datatypes of up to 32,767 bytes in size, including NVARCHAR2, RAW, and VARCHAR2 columns.
    • Tables for which Oracle In-Database Archiving is enabled. However, Data Replication does not replicate the ROW ARCHIVAL clause and any data or DDL operations for the hidden system column ORA_ARCHIVE_STATE.
    • Columns that are defined as invisible.
    • Numeric columns that are defined with the IDENTITY clause.
      If a source DDL statement contains the IDENTITY clause, the Applier executes the equivalent DDL statement on the target but without the IDENTITY clause.
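      For example (the table and column names are hypothetical), the Applier replays a source statement such as:
      CREATE TABLE customers (customer_id NUMBER GENERATED ALWAYS AS IDENTITY, name VARCHAR2(100));
      on the target as:
      CREATE TABLE customers (customer_id NUMBER, name VARCHAR2(100));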
    • Databases that use multitenant architecture. The source tables must reside in a single pluggable database (PDB) within a multitenant container database (CDB). In the Data Replication Console, you enter the name of the PDB on the Source Database tab in the Instance field.
  • To replicate CREATE TABLE and CREATE INDEX operations from Oracle 12c sources, ensure that the _ORACLE_SCRIPT parameter is set to false for the Data Replication session that includes the Oracle source. To set this parameter to false, execute the following SQL statement:
    ALTER SESSION SET "_ORACLE_SCRIPT"=false;
