Data Ingestion and Replication
      | Property  | Description  | 
|---|---|
| Use Table Name as Topic Name | Indicates whether Database Ingestion and Replication writes messages that contain source data to separate topics, one for each source table, or writes all messages to a single topic. Select this check box to write messages to separate table-specific topics. The topic names match the source table names, unless you add the source schema name, a prefix, or a suffix in the Include Schema Name, Table Prefix, or Table Suffix properties. By default, this check box is cleared. With the default setting, you must specify the name of the single topic to which all messages are written in the Topic Name property. | 
| Include Schema Name | When Use Table Name as Topic Name is selected, this check box appears and is selected by default. This setting adds the source schema name to the table-specific topic names. The topic names then have the format schemaname_tablename. If you do not want to include the schema name, clear this check box. | 
| Table Prefix | When Use Table Name as Topic Name is selected, this property appears so that you can optionally enter a prefix to add to the table-specific topic names. For example, if you specify myprefix_, the topic names have the format myprefix_tablename. If you omit the underscore (_) after the prefix, the prefix is prepended directly to the table name. | 
| Table Suffix | When Use Table Name as Topic Name is selected, this property appears so that you can optionally enter a suffix to add to the table-specific topic names. For example, if you specify _mysuffix, the topic names have the format tablename_mysuffix. If you omit the underscore (_) before the suffix, the suffix is appended directly to the table name. | 
| Topic Name | If you do not select Use Table Name as Topic Name, you must enter the name of the single Kafka topic to which all messages that contain source data are written. | 
| Output Format | Select the format of the output. Options are CSV, AVRO, and JSON. The default value is CSV. Output in CSV format uses double-quotation marks ("") as the delimiter for each field. If your Kafka target uses Confluent Schema Registry to store schemas for incremental load jobs, you must select AVRO as the format. | 
| JSON Format | If JSON is selected as the output format, select the level of detail of the output. | 
| Avro Format | If AVRO is selected as the output format, select the format of the Avro schema that is created for each source table. The default value is Avro-Flat. | 
| Avro Serialization Format | If AVRO is selected as the output format, select the serialization format of the Avro output. The default value is Binary. If you have a Confluent Kafka target that uses Confluent Schema Registry to store schemas, select None. Otherwise, Confluent Schema Registry does not register the schema. Do not select None if you are not using Confluent Schema Registry. | 
| Avro Schema Directory | If AVRO is selected as the output format, specify the local directory where Database Ingestion and Replication stores Avro schema definitions for each source table. If this directory is not specified, no Avro schema definition file is produced. If a source schema change is expected to alter the target, the Avro schema definition file is regenerated with a unique name that includes a timestamp, ensuring that older schema definition files are preserved for audit purposes. | 
| Avro Compression Type | If AVRO is selected as the output format, select an Avro compression type. The default value is None, which means no compression is used. | 
| Deflate Compression Level | If Deflate is selected in the Avro Compression Type field, specify a compression level from 0 to 9. The default value is 0. | 
| Property  | Description  | 
|---|---|
| Add Operation Type | Select this check box to add a metadata column that includes the source SQL operation type in the output that the job propagates to the target. The job writes "I" for insert, "U" for update, or "D" for delete. By default, this check box is selected. | 
| Add Operation Time | Select this check box to add a metadata column that records the source SQL operation timestamp in the output that the job propagates to the target. By default, this check box is not selected. | 
| Add Operation Owner | Select this check box to add a metadata column that records the owner of the source SQL operation in the output that the job propagates to the target. By default, this check box is not selected. This property is not available for jobs that have a MongoDB or PostgreSQL source. This property is not supported for jobs that have a SQL Server source and use the CDC Tables capture method. | 
| Add Operation Transaction Id | Select this check box to add a metadata column that includes the source transaction ID in the output that the job propagates to the target for SQL operations. By default, this check box is not selected. | 
| Add Orderable Sequence | Select this check box to add a metadata column that records a combined epoch value and an incremental numeric value for each change operation that the job inserts into the target tables. The sequence value is always ascending but is not guaranteed to be sequential; gaps may exist. The sequence value identifies the order of activity in the target records. By default, this check box is not selected. | 
| Add Before Images | Select this check box to include UNDO data in the output that a job writes to the target. By default, this check box is not selected. | 
| Async Write | Controls whether messages are delivered to Kafka asynchronously or synchronously. By default, this check box is selected. | 
| Producer Configuration Properties | Specify a comma-separated list of key=value pairs to enter Kafka producer properties for Apache Kafka, Confluent Kafka, Amazon Managed Streaming for Apache Kafka (MSK), or Kafka-enabled Azure Event Hubs targets. If you have a Confluent target that uses Confluent Schema Registry to store schemas, you must also specify the producer properties that Confluent Schema Registry requires. You can specify Kafka producer properties in either this field or in the Additional Connection Properties field in the Kafka connection. If you enter the producer properties in this field, the properties pertain only to the database ingestion jobs associated with this task. If you enter the producer properties for the connection, the properties pertain to jobs for all tasks that use the connection definition, unless you override the connection-level properties for specific tasks by also specifying properties in the Producer Configuration Properties field. For information about Kafka producer properties, see the Apache Kafka, Confluent Kafka, Amazon MSK, or Azure Event Hubs documentation. | 
| Property | Description | 
|---|---|
| Writer Helper Thread Count | The number of writer helper threads that are used to convert incoming change data rows or initial unload rows to the output format configured for the target, such as Avro, CSV, or Parquet. Default value is 2. If two threads cannot keep up with the incoming volume of data, you can increase the number of threads. Consider increasing the number of threads in the following situations: 1) the incoming volume is high, 2) multiple writer distributors are in use, or 3) some rows are very large or wide, which increases conversion time. | 
| Custom | Select this option to manually enter the name of a property and its value. Use this option to enter properties that Informatica Global Customer Support or a technical staff member has provided to you for a special case. Available for any supported load type. |