Table of Contents

  1. Preface
  2. Connectors and Connections
  3. Data Ingestion and Replication connectors
  4. Data Ingestion and Replication connection properties

Connectors and Connections

Database Ingestion and Replication connectors

Before you begin defining connections for database ingestion and replication tasks, verify that the connectors for your source and target types are available in Informatica Intelligent Cloud Services.

The following table lists the connectors that Database Ingestion and Replication requires to connect to each source or target type that can be configured in a database ingestion and replication task:
| Source or target type | Connector | Use for |
| --- | --- | --- |
| Amazon Redshift | Amazon Redshift V2 | Targets in initial load, incremental load, and initial and incremental load jobs |
| Amazon S3 | Amazon S3 V2 | Targets in initial load and incremental load jobs |
| Databricks | Databricks | Targets in initial load, incremental load, and initial and incremental load jobs |
| Db2 for i | Db2 for i Database Ingestion | Sources in initial load, incremental load, and initial and incremental load jobs |
| Db2 for Linux, UNIX, and Windows | Db2 for LUW Database Ingestion | Sources in initial load jobs |
| Db2 for z/OS | Db2 for zOS Database Ingestion | Sources in initial load and incremental load jobs |
| Flat file | No connector required | Targets in initial load jobs |
| Google BigQuery | Google BigQuery V2 | Targets in initial load, incremental load, and initial and incremental load jobs |
| Google Cloud Storage | Google Cloud Storage V2 | Targets in initial load and incremental load jobs |
| Kafka, including Apache Kafka, Confluent Kafka, Amazon Managed Streaming for Apache Kafka, and Kafka-enabled Azure Event Hubs | Kafka | Targets in incremental load jobs |
| Microsoft Azure Data Lake Storage Gen2 | Microsoft Azure Data Lake Storage Gen2 | Targets in initial load and incremental load jobs |
| Microsoft SQL Server, including on-premises SQL Server, RDS for SQL Server, Azure SQL Database, and Azure SQL Managed Instance | SQL Server | Sources in initial load, incremental load, and combined initial and incremental load jobs. For Azure SQL Database sources, you must use the Query-based or CDC Tables capture method for incremental load and combined load jobs. Targets in initial load, incremental load, and initial and incremental load jobs. |
| Microsoft Azure Synapse Analytics ¹ | Microsoft Azure Synapse Analytics Database Ingestion | Targets in initial load, incremental load, and initial and incremental load jobs |
| Microsoft Fabric OneLake | Microsoft Fabric OneLake | Targets in initial load, incremental load, and initial and incremental load jobs |
| MongoDB | MongoDB Mass Ingestion | Sources in initial load and incremental load jobs |
| MySQL, including RDS for MySQL | MySQL | Sources in initial load and incremental load jobs. RDS for MySQL in initial load jobs only. |
| Netezza | Netezza | Sources in initial load jobs |
| Oracle, including RDS for Oracle | Oracle Database Ingestion | Sources in initial load, incremental load, and initial and incremental load jobs. Targets in initial load, incremental load, and initial and incremental load jobs. |
| Oracle Cloud Infrastructure (OCI) Object Storage | Oracle Cloud Object Storage | Targets in initial load, incremental load, and initial and incremental load jobs |
| PostgreSQL, including on-premises PostgreSQL, Amazon Aurora PostgreSQL, Azure Database for PostgreSQL - Flexible Server, RDS for PostgreSQL, and Cloud SQL for PostgreSQL | PostgreSQL | Sources in initial load, incremental load, and initial and incremental load jobs. Targets in initial load, incremental load, and initial and incremental load jobs (Amazon Aurora PostgreSQL only). |
| SAP HANA, including on-premises SAP HANA and SAP HANA Cloud | SAP HANA Database Ingestion | Sources in initial load and incremental load jobs |
| Snowflake | Snowflake Data Cloud | Targets in initial load, incremental load, and initial and incremental load jobs |
| Teradata Data Warehouse Appliance | Teradata | Sources in initial load jobs |
1. For the Microsoft Azure Synapse Analytics target type, Database Ingestion and Replication uses Microsoft Azure Data Lake Storage Gen2 to store staging files. Ensure that you have Microsoft Azure Data Lake Storage Gen2 available.
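If you automate environment checks before defining connections, the support matrix above can be encoded as data and validated in a script. The following is a minimal, hypothetical Python sketch; the names CONNECTOR_MATRIX and check_support are illustrative only and are not part of any Informatica Intelligent Cloud Services API, and the dictionary includes only a few rows from the table (the remaining rows follow the same pattern):

```python
# Hypothetical helper that encodes part of the connector support matrix above.
# Each entry maps a source or target type to the required connector, the roles
# it can play, and the load types it supports.
CONNECTOR_MATRIX = {
    "Amazon Redshift": ("Amazon Redshift V2", {"target"},
                        {"initial", "incremental", "initial and incremental"}),
    "Amazon S3": ("Amazon S3 V2", {"target"}, {"initial", "incremental"}),
    "Db2 for i": ("Db2 for i Database Ingestion", {"source"},
                  {"initial", "incremental", "initial and incremental"}),
    "Kafka": ("Kafka", {"target"}, {"incremental"}),
    "Oracle": ("Oracle Database Ingestion", {"source", "target"},
               {"initial", "incremental", "initial and incremental"}),
}


def check_support(type_name: str, role: str, load_type: str) -> str:
    """Return the required connector for the pairing, or raise if unsupported."""
    try:
        connector, roles, load_types = CONNECTOR_MATRIX[type_name]
    except KeyError:
        raise ValueError(f"{type_name} is not in the connector matrix")
    if role not in roles or load_type not in load_types:
        raise ValueError(
            f"{type_name} is not supported as a {role} in {load_type} load jobs")
    return connector


if __name__ == "__main__":
    # Kafka is a target type for incremental load jobs only.
    print(check_support("Kafka", "target", "incremental"))  # -> Kafka
    print(check_support("Oracle", "source", "initial"))     # -> Oracle Database Ingestion
```

A check like this only mirrors the documentation; the authoritative list of available connectors is the one shown in your Informatica Intelligent Cloud Services organization.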
