consists of the libraries, plug-ins, and sample code to assist you in developing a connector for Cloud Data Integration.
To build a connector for Cloud Data Integration, perform the following tasks:
Define the connection attributes to connect to the data source. Implement the methods to open and close connections to the data source, validate the connection, and specify attribute dependencies. Before you define the type system for the connector, you can test and debug the connection components of the connector.
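The following sketch illustrates the open, validate, and close responsibilities that a connection component carries. It is illustrative only and assumes a JDBC-style data source; the class name, method names, attribute keys, and the jdbc:sample URL are placeholders, not the Connector Toolkit API.

```java
// Illustrative sketch of a connection component: open, validate, and close.
// All names and the JDBC URL format are assumptions for this example.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Map;

public class SampleConnectionHandler {

    private Connection connection;

    // Open a connection using the attributes defined for the connector,
    // for example host, port, database, user, and password.
    public void open(Map<String, String> attributes) throws SQLException {
        String url = String.format("jdbc:sample://%s:%s/%s",
                attributes.get("host"),
                attributes.get("port"),
                attributes.get("database"));
        connection = DriverManager.getConnection(
                url, attributes.get("user"), attributes.get("password"));
    }

    // Validate the connection, for example by checking that it is still usable.
    public boolean validate() throws SQLException {
        return connection != null && connection.isValid(5);
    }

    // Close the connection and release resources held by the driver.
    public void close() throws SQLException {
        if (connection != null) {
            connection.close();
            connection = null;
        }
    }
}
```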
You can also use connection pooling to reuse connections instead of creating a new connection for each request, which improves performance.
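As a minimal sketch of that idea, the pool below hands out previously opened connections and takes them back for reuse. The pool size, the borrow and release method names, and the SampleConnectionHandler type from the previous sketch are assumptions for illustration.

```java
// Illustrative sketch of connection pooling: reuse opened connections
// instead of opening a new one for every request.
import java.sql.SQLException;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SampleConnectionPool {

    private final BlockingQueue<SampleConnectionHandler> idle;

    public SampleConnectionPool(int size, Map<String, String> attributes) throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            SampleConnectionHandler handler = new SampleConnectionHandler();
            handler.open(attributes);
            idle.add(handler);
        }
    }

    // Borrow a connection; blocks until one is returned if the pool is empty.
    public SampleConnectionHandler borrow() throws InterruptedException {
        return idle.take();
    }

    // Return the connection to the pool so it can be reused.
    public void release(SampleConnectionHandler handler) {
        idle.offer(handler);
    }
}
```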
Define the type system for the connector. Specify the data types supported by the data source and the corresponding data types supported by Informatica.
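One way to picture the type system is as a lookup from native data source types to the platform types they map to. The sketch below is illustrative only; the type names are placeholders and do not reflect the toolkit's actual type registry.

```java
// Illustrative sketch of a type system mapping: native data source types
// resolved to platform-side types. Type names are placeholders.
import java.util.HashMap;
import java.util.Map;

public class SampleTypeMapping {

    private static final Map<String, String> NATIVE_TO_PLATFORM = new HashMap<>();

    static {
        NATIVE_TO_PLATFORM.put("VARCHAR", "string");
        NATIVE_TO_PLATFORM.put("INTEGER", "integer");
        NATIVE_TO_PLATFORM.put("DECIMAL", "decimal");
        NATIVE_TO_PLATFORM.put("TIMESTAMP", "date/time");
    }

    // Resolve the platform type for a native type, defaulting to string.
    public static String toPlatformType(String nativeType) {
        return NATIVE_TO_PLATFORM.getOrDefault(nativeType.toUpperCase(), "string");
    }
}
```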
Define the connector metadata: create the endpoint object metadata, the operations for the endpoint metadata object, and the partition methods for the operations. Implement the methods to fetch metadata from the data source. You can also test and debug the metadata components of the connector.
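The sketch below shows one way metadata fetching can look for a JDBC-style source, listing endpoint objects and their fields at design time. The JDBC assumption and the class and method names are illustrative; a REST or file-based endpoint would fetch metadata differently.

```java
// Illustrative sketch of fetching endpoint object metadata (tables and
// columns) through JDBC so the connector can expose it at design time.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class SampleMetadataFetcher {

    // List the objects (tables) that the endpoint exposes.
    public List<String> listObjects(Connection connection) throws SQLException {
        List<String> objects = new ArrayList<>();
        DatabaseMetaData metaData = connection.getMetaData();
        try (ResultSet tables = metaData.getTables(null, null, "%", new String[] {"TABLE"})) {
            while (tables.next()) {
                objects.add(tables.getString("TABLE_NAME"));
            }
        }
        return objects;
    }

    // List the fields (columns) of a single endpoint object.
    public List<String> listFields(Connection connection, String objectName) throws SQLException {
        List<String> fields = new ArrayList<>();
        DatabaseMetaData metaData = connection.getMetaData();
        try (ResultSet columns = metaData.getColumns(null, null, objectName, "%")) {
            while (columns.next()) {
                fields.add(columns.getString("COLUMN_NAME") + " : " + columns.getString("TYPE_NAME"));
            }
        }
        return fields;
    }
}
```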
Enable read and write capabilities for mappings on the advanced cluster
To enable read and write capabilities for a mapping that runs on the advanced cluster, complete the following prerequisites:
Click the following link to download and install the Scala binaries:
Add the SCALA_HOME environment variable under system variables on the agent machine.
Set the value of the variable to the directory where you installed the Scala binaries.
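As an optional sanity check, you can confirm from a JVM on the agent machine that the variable is visible and points to an existing directory. This check is an illustrative addition, not part of the documented procedure.

```java
// Illustrative check that SCALA_HOME is set and points to an existing directory.
import java.io.File;

public class ScalaHomeCheck {
    public static void main(String[] args) {
        String scalaHome = System.getenv("SCALA_HOME");
        if (scalaHome == null || scalaHome.isEmpty()) {
            System.out.println("SCALA_HOME is not set.");
        } else if (!new File(scalaHome).isDirectory()) {
            System.out.println("SCALA_HOME is set but the directory does not exist: " + scalaHome);
        } else {
            System.out.println("SCALA_HOME points to: " + scalaHome);
        }
    }
}
```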
Define the connector runtime behavior, which determines how the connector reads from and writes to the data source. Before you deploy the connector, you can test and debug the read capability and write capability of the connector.
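The following sketch illustrates read and write behavior against a JDBC-style source: rows are fetched from a source object and inserted into a target in a batch. The row representation (an Object array per record), the SQL statements, and the class name are assumptions for illustration, not the toolkit's runtime interfaces.

```java
// Illustrative sketch of runtime read and write capability for a
// JDBC-style data source. Names and SQL are assumptions for this example.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class SampleRuntime {

    // Read capability: fetch rows from the source object, one Object[] per record.
    public List<Object[]> read(Connection connection, String objectName, int fieldCount)
            throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        try (Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT * FROM " + objectName)) {
            while (resultSet.next()) {
                Object[] row = new Object[fieldCount];
                for (int i = 0; i < fieldCount; i++) {
                    row[i] = resultSet.getObject(i + 1);
                }
                rows.add(row);
            }
        }
        return rows;
    }

    // Write capability: insert the buffered rows into the target object as a batch.
    public void write(Connection connection, String insertSql, List<Object[]> rows)
            throws SQLException {
        try (PreparedStatement statement = connection.prepareStatement(insertSql)) {
            for (Object[] row : rows) {
                for (int i = 0; i < row.length; i++) {
                    statement.setObject(i + 1, row[i]);
                }
                statement.addBatch();
            }
            statement.executeBatch();
        }
    }
}
```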
Deploy the connector to the Cloud Data Integration service.