Data Ingestion and Replication
| Property | Description |
|---|---|
| Use Table Name as Topic Name | Indicates whether Database Ingestion and Replication writes messages that contain source data to separate topics, one for each source table, or writes all messages to a single topic. Select this check box to write messages to separate table-specific topics. The topic names match the source table names, unless you add the source schema name, a prefix, or a suffix in the Include Schema Name, Table Prefix, or Table Suffix properties (see the naming sketch that follows this table). By default, this check box is cleared. With the default setting, you must specify the name of the single topic to which all messages are written in the Topic Name property. |
| Include Schema Name | When Use Table Name as Topic Name is selected, this check box appears and is selected by default. This setting adds the source schema name to the table-specific topic names, which then have the format schemaname_tablename. If you do not want to include the schema name, clear this check box. |
| Table Prefix | When Use Table Name as Topic Name is selected, this property appears so that you can optionally enter a prefix to add to the table-specific topic names. For example, if you specify myprefix_, the topic names have the format myprefix_tablename. If you omit the underscore (_) after the prefix, the prefix runs directly into the table name with no separator. |
| Table Suffix | When Use Table Name as Topic Name is selected, this property appears so that you can optionally enter a suffix to add to the table-specific topic names. For example, if you specify _mysuffix, the topic names have the format tablename_mysuffix. If you omit the underscore (_) before the suffix, the suffix runs directly onto the table name with no separator. |
| Topic Name | If you do not select Use Table Name as Topic Name, you must enter the name of the single Kafka topic to which all messages that contain source data are written. |
| Output Format | Select the format of the output. Options are CSV, AVRO, and JSON. The default value is CSV. Output in CSV format uses double-quotation marks ("") as the delimiter for each field. If your Kafka target uses Confluent Schema Registry to store schemas for incremental load jobs, you must select AVRO as the format. |
| JSON Format | If JSON is selected as the output format, select the level of detail of the output. |
| Avro Format | If you selected AVRO as the output format, select the format of the Avro schema that is created for each source table. The default value is Avro-Flat. |
| Avro Serialization Format | If AVRO is selected as the output format, select the serialization format of the Avro output. The default value is Binary. If you have a Confluent Kafka target that uses Confluent Schema Registry to store schemas, select None; otherwise, Confluent Schema Registry does not register the schema. Do not select None if you are not using Confluent Schema Registry. |
| Avro Schema Directory | If AVRO is selected as the output format, specify the local directory where Database Ingestion and Replication stores an Avro schema definition file for each source table. If this directory is not specified, no Avro schema definition file is produced. If a source schema change is expected to alter the target, the Avro schema definition file is regenerated under a unique name that includes a timestamp, so that older schema definition files are preserved for audit purposes. |
| Avro Compression Type | If AVRO is selected as the output format, select an Avro compression type. Options include None and Deflate. The default value is None, which means no compression is used. |
| Deflate Compression Level | If Deflate is selected in the Avro Compression Type field, specify a compression level from 0 to 9. The default value is 0. See the deflate example that follows this table. |
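To make the naming rules above concrete, here is a minimal sketch in Java. It is illustrative only, not product code: the helper name is made up, and the ordering of prefix, then schema name, then table name, then suffix is an assumption for the case where several options are combined, since the properties above define each piece separately.

```java
public class TopicNameSketch {
    // Hypothetical helper mirroring the naming rules in the table above.
    // The combined ordering prefix + schema + table + suffix is an assumption;
    // the documentation describes each setting on its own.
    static String topicName(String schema, String table,
                            boolean includeSchema, String prefix, String suffix) {
        StringBuilder name = new StringBuilder();
        if (prefix != null) name.append(prefix);            // e.g. "myprefix_"
        if (includeSchema) name.append(schema).append('_'); // schemaname_tablename
        name.append(table);
        if (suffix != null) name.append(suffix);            // e.g. "_mysuffix"
        return name.toString();
    }

    public static void main(String[] args) {
        // Prints: myprefix_sales_orders_mysuffix
        System.out.println(topicName("sales", "orders", true, "myprefix_", "_mysuffix"));
        // Prints: orders  (schema name excluded, no prefix or suffix)
        System.out.println(topicName("sales", "orders", false, null, null));
    }
}
```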
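The Deflate Compression Level property corresponds to the standard Avro deflate codec. The following standalone sketch uses the Apache Avro Java library, not the product itself, to write a small Avro container file at a chosen level; the schema and file name are placeholders.

```java
import java.io.File;
import org.apache.avro.Schema;
import org.apache.avro.file.CodecFactory;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;

public class DeflateLevelSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder schema standing in for a generated per-table schema.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Row\",\"fields\":"
          + "[{\"name\":\"id\",\"type\":\"int\"}]}");
        GenericRecord row = new GenericData.Record(schema);
        row.put("id", 1);

        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<>(schema))) {
            // Deflate level 0-9, matching the property's range: higher levels
            // compress harder at the cost of more CPU time.
            writer.setCodec(CodecFactory.deflateCodec(6));
            writer.create(schema, new File("sample.avro"));
            writer.append(row);
        }
    }
}
```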
The following additional properties control the content of the output and how messages are delivered to the target:

| Property | Description |
|---|---|
| Add Operation Type | Select this check box to add a metadata column that includes the source SQL operation type in the output that the job propagates to the target. The job writes "I" for insert, "U" for update, or "D" for delete. By default, this check box is selected. |
| Add Operation Time | Select this check box to add a metadata column that records the source SQL operation timestamp in the output that the job propagates to the target. By default, this check box is not selected. |
| Add Operation Owner | Select this check box to add a metadata column that records the owner of the source SQL operation in the output that the job propagates to the target. By default, this check box is not selected. This property is not available for jobs that have a MongoDB or PostgreSQL source, and is not supported for jobs that have a SQL Server source and use the CDC Tables capture method. |
| Add Operation Transaction Id | Select this check box to add a metadata column that includes the source transaction ID in the output that the job propagates to the target for SQL operations. By default, this check box is not selected. |
| Add Before Images | Select this check box to include UNDO data in the output that a job writes to the target. By default, this check box is not selected. |
| Async Write | Controls whether messages are delivered to Kafka asynchronously. By default, this check box is selected. |
| Producer Configuration Properties | Specify a comma-separated list of key=value pairs to enter Kafka producer properties for Apache Kafka, Confluent Kafka, Amazon Managed Streaming for Apache Kafka (MSK), or Kafka-enabled Azure Event Hubs targets. If you have a Confluent target that uses Confluent Schema Registry to store schemas, you must also specify the properties that Schema Registry requires. You can specify Kafka producer properties either in this field or in the Additional Connection Properties field in the Kafka connection. If you enter the producer properties in this field, the properties pertain only to the database ingestion jobs associated with this task. If you enter the producer properties for the connection, the properties pertain to jobs for all tasks that use the connection definition, unless you override the connection-level properties for specific tasks by also specifying properties in the Producer Configuration Properties field. For information about Kafka producer properties, see the Apache Kafka, Confluent Kafka, Amazon MSK, or Azure Event Hubs documentation. See the examples that follow this table. |
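As an illustration of the comma-separated key=value format that the Producer Configuration Properties field expects, the sketch below splits such a list into individual producer settings. The names shown (linger.ms, batch.size, compression.type) are standard Apache Kafka producer properties, but the values are placeholders, not recommendations, and the parsing code is a stand-in for what the product does internally.

```java
import java.util.Properties;

public class ProducerPropsSketch {
    public static void main(String[] args) {
        // A value of the kind you might type into the field. The names are
        // standard Apache Kafka producer properties; values are placeholders.
        String field = "linger.ms=100,batch.size=65536,compression.type=snappy";

        Properties props = new Properties();
        for (String pair : field.split(",")) {
            String[] kv = pair.split("=", 2);   // split on the first '=' only
            props.setProperty(kv[0].trim(), kv[1].trim());
        }
        props.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```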
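The Confluent Schema Registry notes above (select AVRO output, the None serialization format, and registry-related producer properties) fit together because a registry-aware serializer performs the Avro encoding and schema registration itself. The sketch below shows that wiring for a plain Java Kafka producer; the broker and registry URLs are placeholders, and these are typical Confluent client settings, not the product's required property list.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RegistryProducerSketch {
    public static void main(String[] args) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "broker.example.com:9092");   // placeholder
        p.put("key.serializer",
              "org.apache.kafka.common.serialization.StringSerializer");
        // Confluent's registry-aware serializer: it registers the Avro schema
        // with Schema Registry and serializes the record itself, which is why
        // a separate Avro serialization step would get in the way.
        p.put("value.serializer",
              "io.confluent.kafka.serializers.KafkaAvroSerializer");
        p.put("schema.registry.url", "http://registry.example.com:8081"); // placeholder

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(p)) {
            // Null value = tombstone-style record, kept minimal for the sketch.
            producer.send(new ProducerRecord<>("mytopic", "key1", null));
        }
    }
}
```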