Table of Contents

  1. Preface
  2. Introduction to Informatica Connector Toolkit
  3. Before you begin
  4. Develop a connector for Cloud Data Integration
  5. Develop a connector for Data Loader
  6. Import a connector
  7. Connection attributes
  8. Type system
  9. Connector metadata
  10. Partitioning capability
  11. Pushdown capability
  12. Mappings in advanced mode
  13. Manual changes to Informatica Connector Toolkit source code
  14. Runtime behavior
  15. Connector example: MySQL_Cloud
  16. Version control integration
  17. Appendix A: Metadata models
  18. Appendix B: ASO model
  19. Appendix C: Connector project migration
  20. Appendix D: Frequently used generic APIs in Informatica Connector Toolkit
  21. Appendix E: Frequently asked questions

Cloud Data Integration Connector Toolkit Developer Guide

Develop a connector for Cloud Data Integration

The Informatica Connector Toolkit consists of libraries, plug-ins, and sample code that assist you in developing a connector for Cloud Data Integration.
To build a connector for Cloud Data Integration, perform the following tasks:
  1. Define the connection attributes to connect to the data source. Implement the methods to open and close a connection to the data source, validate the connection, and specify attribute dependencies. Before you define the type system for the connector, you can test and debug the connection components of the connector.
    You can also use connection pooling to reuse connections instead of creating a new connection each time, which optimizes performance.
  2. Define the type system for the connector. Specify the data types supported by the data source and the corresponding data types supported by Informatica.
  3. Define the connector metadata: create endpoint object metadata, operations for the endpoint metadata objects, and partition methods for the operations. Implement the methods to fetch metadata from the data source. You can also test and debug the metadata components of the connector.
    Enable read and write capabilities for mappings on the advanced cluster
    To enable read and write capabilities for a mapping that runs on the advanced cluster, complete the following prerequisites:
    1. Download and install the Scala binaries.
    2. Add the SCALA_HOME environment variable under system variables on the agent machine. Set the value of the variable to the directory where you installed the Scala binaries.
  4. Define the connector runtime behavior that defines how the connector reads from and writes to the data source. Before you deploy the connector, you can test and debug the read capability and write capability of the connector.
  5. Deploy the connector to the Cloud Data Integration service.
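The connection lifecycle in step 1 can be sketched as follows. The class and method names here are purely illustrative stand-ins, not the actual Informatica Connector Toolkit API; the sketch only shows the open/validate/close flow and how a simple pool reuses connections instead of creating a new one each time.

```java
// Hypothetical sketch of the connection lifecycle described in step 1.
// All names here are illustrative; the real toolkit generates
// adapter-specific connection classes.
import java.util.ArrayDeque;
import java.util.Deque;

public class ConnectionSketch {

    /** Minimal stand-in for a data source connection. */
    static class DataSourceConnection {
        private boolean open;

        void open(String host, int port, String user, String password) {
            // In a real connector, establish the physical connection here.
            this.open = true;
        }

        boolean validate() {
            // In a real connector, run a lightweight check (e.g. a ping query).
            return open;
        }

        void close() {
            open = false;
        }
    }

    /** Naive pool that reuses connections instead of re-creating them. */
    static class ConnectionPool {
        private final Deque<DataSourceConnection> idle = new ArrayDeque<>();

        DataSourceConnection acquire() {
            DataSourceConnection conn = idle.poll();
            if (conn == null || !conn.validate()) {
                conn = new DataSourceConnection();
                conn.open("localhost", 3306, "user", "secret"); // illustrative values
            }
            return conn;
        }

        void release(DataSourceConnection conn) {
            idle.push(conn); // return to the pool for reuse
        }
    }

    public static void main(String[] args) {
        ConnectionPool pool = new ConnectionPool();
        DataSourceConnection c1 = pool.acquire();
        pool.release(c1);
        DataSourceConnection c2 = pool.acquire(); // reused, not re-created
        System.out.println(c1 == c2);
    }
}
```

Pooling pays off because acquiring an existing, validated connection is much cheaper than opening a new physical connection for every operation.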
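The type system in step 2 is essentially a mapping from each data source type to a corresponding Informatica type. A minimal sketch, with illustrative type names that are not taken from any specific adapter:

```java
// Hypothetical sketch of a type system mapping (step 2): each data source
// type is paired with a corresponding Informatica type. The pairs shown
// are illustrative examples, not a definitive mapping.
import java.util.LinkedHashMap;
import java.util.Map;

public class TypeSystemSketch {
    public static void main(String[] args) {
        // Data source type -> Informatica transformation type
        Map<String, String> typeMap = new LinkedHashMap<>();
        typeMap.put("VARCHAR", "string");
        typeMap.put("INT", "integer");
        typeMap.put("DECIMAL", "decimal");
        typeMap.put("DATETIME", "date/time");

        System.out.println(typeMap.get("VARCHAR"));
        System.out.println(typeMap.get("DECIMAL"));
    }
}
```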
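The partition methods mentioned in step 3 typically divide the data into independent slices that can be read in parallel. The following sketch shows one common strategy, key-range partitioning; the `KeyRange` type and `partitionByKeyRange` method are hypothetical, not part of the toolkit.

```java
// Hypothetical sketch of a partition method (step 3): splitting a key range
// into n slices so each can be read independently. Illustrative only.
import java.util.ArrayList;
import java.util.List;

public class PartitionSketch {

    /** A partition defined by an inclusive lower and exclusive upper bound. */
    record KeyRange(long lower, long upper) { }

    /** Divide [min, max) into n roughly equal key ranges. */
    static List<KeyRange> partitionByKeyRange(long min, long max, int n) {
        List<KeyRange> parts = new ArrayList<>();
        long span = max - min;
        for (int i = 0; i < n; i++) {
            long lo = min + span * i / n;
            long hi = min + span * (i + 1) / n;
            parts.add(new KeyRange(lo, hi));
        }
        return parts;
    }

    public static void main(String[] args) {
        for (KeyRange r : partitionByKeyRange(0, 100, 4)) {
            System.out.println(r.lower() + ".." + r.upper());
        }
    }
}
```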
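The runtime behavior in step 4 boils down to a read loop: pull rows from the source until it is exhausted and hand each one to the writer. The interfaces below are invented for illustration and do not reflect the toolkit's generated runtime API.

```java
// Hypothetical sketch of connector runtime behavior (step 4): a reader that
// fetches rows from the data source and a writer that consumes them.
// Interface and class names are illustrative only.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RuntimeSketch {

    interface RecordReader {
        /** Returns the next row, or null when the source is exhausted. */
        Object[] read();
    }

    interface RecordWriter {
        void write(Object[] row);
    }

    /** Reader backed by an in-memory table standing in for the data source. */
    static class ListReader implements RecordReader {
        private final Iterator<Object[]> rows;
        ListReader(List<Object[]> table) { this.rows = table.iterator(); }
        public Object[] read() { return rows.hasNext() ? rows.next() : null; }
    }

    /** Writer that collects rows, standing in for the target system. */
    static class ListWriter implements RecordWriter {
        final List<Object[]> written = new ArrayList<>();
        public void write(Object[] row) { written.add(row); }
    }

    public static void main(String[] args) {
        List<Object[]> source = Arrays.asList(
                new Object[]{1, "alpha"},
                new Object[]{2, "beta"});

        RecordReader reader = new ListReader(source);
        ListWriter writer = new ListWriter();

        // Core read/write loop: pull rows until the reader signals end-of-data.
        for (Object[] row; (row = reader.read()) != null; ) {
            writer.write(row);
        }
        System.out.println(writer.written.size());
    }
}
```

Testing and debugging this loop against a small in-memory source, as suggested in step 4, isolates the read and write capabilities before deployment.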
