Data Ingestion and Replication
Property | Description |
---|---|
Load Type | Type of load operation that you want the application ingestion and replication task to perform. You can select one of the following load types for the task: initial load, incremental load, or combined initial and incremental load. |
Oracle Fusion Replication Approach | Select one of the following replication approaches: BICC or REST. |
Oracle Fusion Application | Select the application from which you want to replicate data. |
Property | Description |
---|---|
List Objects by Rule Type | Generate and download a list of the source objects that match the object selection criteria. If you used rule-based object selection, you can select the type of selection rules to use. Select the Include Fields check box to include fields in the list, regardless of which object selection method you used. Click the Download icon to download the list. |
Include Child Objects | Retrieve the child object data of an object from an Oracle Fusion Cloud source. This option applies only to the REST replication approach, and for all load types only when the target is Google BigQuery. |
Initial Start Point for Incremental Load | For incremental load jobs, customize the position in the source logs from which the application ingestion and replication job starts reading change records the first time it runs. You must specify the date and time in the time zone configured for the Oracle Fusion Cloud instance. |
CDC Interval | For incremental load and combined initial and incremental load jobs, specify the time interval in which the application ingestion and replication job runs to retrieve the change records for incremental load. The default interval is 5 minutes. |
Fetch Size | Enter the number of records that the application ingestion and replication job associated with the task reads at a time from the source. The default value is 50000. |
Enable chunking | Select this check box to chunk data during the extraction process in an application ingestion and replication task. Chunking applies to initial load tasks, and to the initial load portion of combined initial and incremental load tasks that use the BICC replication approach. Chunking is not applicable to CDC tasks. Options are: By Primary Key and By Creation Date. |
Number of Rows | If you select the By Primary Key chunking option, specify the number of rows in each chunk. Enter a positive integer. Make sure that the combination of total rows and chunk size results in no more than 250 chunks. For example, 2500 rows with a chunk size of 10 results in 250 chunks, which is the maximum allowed. |
Number of Days | If you select the By Creation Date chunking option, enter the number of days to set the interval for data extraction. For example, if you specify 365 days, the data is divided into segments that each cover a 365-day period, starting from the initial date. To prevent performance issues, consider specifying a larger number of days for the interval. |
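As a rough illustration of how the two chunking options interact with the 250-chunk limit, the sketch below computes the resulting chunk counts. This is plain Python for reasoning about the settings; the function names are hypothetical and not part of the product.

```python
import math
from datetime import date

MAX_CHUNKS = 250  # maximum number of chunks allowed per extract

def chunks_by_primary_key(total_rows: int, chunk_size: int) -> int:
    """Chunk count when chunking by primary key (Number of Rows option)."""
    chunks = math.ceil(total_rows / chunk_size)
    if chunks > MAX_CHUNKS:
        raise ValueError(
            f"{chunks} chunks exceeds the {MAX_CHUNKS}-chunk limit; "
            "increase the Number of Rows value"
        )
    return chunks

def chunks_by_creation_date(start: date, end: date, interval_days: int) -> int:
    """Segment count when chunking by creation date (Number of Days option)."""
    total_days = (end - start).days + 1  # inclusive date range
    return math.ceil(total_days / interval_days)

# The example from the table: 2500 rows with a chunk size of 10
print(chunks_by_primary_key(2500, 10))  # 250, exactly at the limit

# Ten years of data split into 365-day segments
print(chunks_by_creation_date(date(2015, 1, 1), date(2024, 12, 31), 365))
```

Increasing the Number of Rows or Number of Days value reduces the chunk count, which is why a larger interval is recommended when the row count or date range is large.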
Property | Description |
---|---|
Read Event Batch Size | The number of payload events written in a batch to the internal event queue during CDC processing. When the event queue is implemented as an internal ring buffer, this value is the number of payload events that the reader writes to a single internal buffer slot. A batch size that's too small might increase contention between threads. A larger batch size can provide more parallelism but consumes more memory. |
Reader Helper Thread Count | The number of reader helper threads used during CDC processing to convert change data into a canonical format that can be passed to the target. The default value is 3. You can enter a larger value to make more threads available for performing conversion processing in parallel. |
Custom | Select this option to manually enter the name of a property and its value. Use this option to enter properties that Informatica Global Customer Support or a technical staff member has provided to you for a special case. Available for any supported load type. |
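To make the Read Event Batch Size trade-off concrete, here is a minimal sketch of an event queue in which each ring-buffer slot holds one batch of payload events. This is a simplified stand-in written in plain Python, not the product's internal implementation; the class and field names are hypothetical.

```python
from collections import deque

class BatchedRingBuffer:
    """Simplified event queue: each slot holds one batch of payload events.

    A larger batch size means fewer slot writes (less thread contention),
    while each slot holds more events in memory, mirroring the trade-off
    described for Read Event Batch Size.
    """

    def __init__(self, slots: int, batch_size: int):
        self.batch_size = batch_size
        self.queue = deque(maxlen=slots)  # ring buffer of batch slots
        self.slot_writes = 0

    def write_events(self, events):
        # The reader groups events into batches of `batch_size` and
        # writes each batch to a single buffer slot.
        for i in range(0, len(events), self.batch_size):
            self.queue.append(events[i:i + self.batch_size])
            self.slot_writes += 1

events = [f"change-{n}" for n in range(1000)]

small = BatchedRingBuffer(slots=64, batch_size=10)
small.write_events(events)

large = BatchedRingBuffer(slots=64, batch_size=100)
large.write_events(events)

# Same 1000 events: 100 slot writes with batch size 10
# versus only 10 slot writes with batch size 100.
print(small.slot_writes, large.slot_writes)
```

In a multi-threaded reader, each slot write is a synchronization point, which is why a batch size that is too small can increase contention between threads while a larger one consumes more memory per slot.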