JDBC V2 Connector

Use the serverless runtime environment

You can use a serverless runtime environment hosted on AWS or Azure to connect to JDBC-compliant databases.
Before you configure a JDBC V2 connection using the serverless runtime environment, perform the following tasks:
  • Add the JDBC driver JAR files in the Amazon S3 bucket or Azure container in your AWS or Azure account.
  • Configure the .yml serverless configuration file.
Add the JDBC driver JAR files in the Amazon S3 bucket or Azure container in your AWS or Azure account
Perform the following steps to add the JDBC driver JAR files for a JDBC V2 connection in a serverless runtime environment:
  1. Create the following structure for the serverless agent configuration in AWS or Azure:
    <Supplementary file location>/serverless_agent_config
  2. Add the JDBC driver files in the Amazon S3 bucket or Azure container in the following location in your AWS or Azure account:
    <Supplementary file location>/serverless_agent_config/common
  3. For mappings in advanced mode, additionally add the JDBC driver files in the following location in the Amazon S3 bucket or Azure container:
    <Supplementary file location>/serverless_agent_config/spark
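The steps above can be sketched as the following shell commands, which stage the driver JARs locally in the layout the serverless agent expects and then sync the tree to the supplementary file location. The driver file name and the `aws s3 sync` target are placeholders for illustration, not Informatica defaults:

```shell
# Create the serverless agent configuration structure locally.
mkdir -p serverless_agent_config/common serverless_agent_config/spark

# Stand-in for the real JDBC driver JAR downloaded from your database vendor;
# replace with the actual driver file, for example postgresql-42.7.3.jar.
touch postgresql-42.7.3.jar

# The driver goes under common/ for all mappings.
cp postgresql-42.7.3.jar serverless_agent_config/common/

# For mappings in advanced mode, the driver additionally goes under spark/.
cp postgresql-42.7.3.jar serverless_agent_config/spark/

# Then upload the whole tree to the supplementary file location, for example on AWS:
# aws s3 sync serverless_agent_config s3://<bucket>/<supplementary_file_location>/serverless_agent_config
```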
Configure the .yml serverless configuration file
Perform the following steps to configure the .yml serverless configuration file in the serverless runtime environment:
  1. Copy the following code snippet to a text editor based on the mappings that you want to run in a serverless environment:
    • For mappings that do not apply in advanced mode, add the following code snippet:
      version: 1
      agent:
        dataIntegrationServer:
          autoDeploy:
            jdbcv2:
              common:
                - fileCopy:
                    sourcePath: common/<Driver_filename>
                - fileCopy:
                    sourcePath: common/<Driver_filename>
    • For mappings in advanced mode, add the following code snippet:
      version: 1
      agent:
        elasticServer:
          autoApply:
            jdbcv2:
              common:
                - fileCopy:
                    sourcePath: common/<Driver_filename>
                - fileCopy:
                    sourcePath: common/<Driver_filename>
              spark:
                - fileCopy:
                    sourcePath: spark/<Driver_filename>
                - fileCopy:
                    sourcePath: spark/<Driver_filename>
    where sourcePath is the directory path of the driver files in the Amazon S3 bucket or Azure container.
  2. Ensure that the syntax and indentations are valid, and then save the file as
    serverlessUserAgentConfig.yml
    in the following AWS or Azure location:
    <Supplementary file location>/serverless_agent_config
    When the .yml file runs, the JDBC driver files are copied from the AWS or Azure location to the serverless agent directory.
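As an illustration, a completed serverlessUserAgentConfig.yml for mappings in advanced mode might look like the following. The driver file name is a placeholder; substitute the names of the JAR files you uploaded:

```
version: 1
agent:
  elasticServer:
    autoApply:
      jdbcv2:
        common:
          - fileCopy:
              sourcePath: common/postgresql-42.7.3.jar
        spark:
          - fileCopy:
              sourcePath: spark/postgresql-42.7.3.jar
```

Because YAML is indentation-sensitive, keep the nesting levels consistent and use spaces rather than tabs when you edit the file.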
For more information about how to configure and use the serverless environment, see "Serverless runtime environment setup" in
Runtime Environments
in the Administrator help.