PowerExchange for JDBC V2 User Guide

JDBC V2 Mapping Example

You work in the Human Resources department and manage employee information. You want to filter the details of employees in the Engineering department and write the results to a flat file.
You can use the following objects in the mapping:
Aurora PostgreSQL Input
The input is an Aurora PostgreSQL table that contains the employee details for all the departments in the organization.
Create an Aurora PostgreSQL data object read operation. Configure the Aurora PostgreSQL connection and specify the table that contains the employee data as the source for the data object. Drag the source data object into the mapping and specify the data object access as read.
Alternatively, the input can be two Aurora PostgreSQL tables, one that contains the employee details and one that contains the department details for the entire organization. In that case, use a join query that joins the employee and department tables and passes the combined data to the next transformation, as in the sketch below.
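If you want to check the join logic outside the Developer tool, you can run an equivalent query directly against Aurora PostgreSQL. The following minimal Java sketch assumes the PostgreSQL JDBC driver is on the classpath and uses hypothetical table and column names (employee, department, emp_id, emp_name, dept_id, dept_name) and placeholder connection details; substitute your own schema and endpoint.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class JoinQueryCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; replace with your Aurora PostgreSQL endpoint.
            String url = "jdbc:postgresql://<aurora-endpoint>:5432/hr";

            // Hypothetical schema: employee(emp_id, emp_name, dept_id) and
            // department(dept_id, dept_name). The join combines both tables,
            // which is the data that the join query in the mapping passes downstream.
            String sql = "SELECT e.emp_id, e.emp_name, d.dept_name "
                       + "FROM employee e "
                       + "JOIN department d ON e.dept_id = d.dept_id";

            try (Connection con = DriverManager.getConnection(url, "hr_user", "secret");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getInt("emp_id") + ","
                            + rs.getString("emp_name") + ","
                            + rs.getString("dept_name"));
                }
            }
        }
    }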
Transformations
Add a Filter transformation to filter the details of employees in the Engineering department.
Specify the filter condition dept=Engineering to pass only the Engineering employee details.
The Filter transformation filters the source data based on the value that you specify for the Dept column. The Data Integration Service returns the rows that meet the filter condition.
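Conceptually, the Filter transformation passes downstream only the rows that satisfy the condition. The following self-contained Java sketch mimics that behavior on a small in-memory row set; the column names and sample values are illustrative only and are not part of the guide.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class FilterConditionSketch {
        public static void main(String[] args) {
            // Illustrative rows; in the mapping, these come from the Aurora PostgreSQL source.
            List<Map<String, String>> rows = List.of(
                    Map.of("emp_name", "Asha", "dept", "Engineering"),
                    Map.of("emp_name", "Ben", "dept", "Finance"),
                    Map.of("emp_name", "Carla", "dept", "Engineering"));

            // Equivalent of the filter condition dept=Engineering: only rows that
            // meet the condition are passed to the next transformation.
            List<Map<String, String>> engineeringOnly = rows.stream()
                    .filter(row -> "Engineering".equals(row.get("dept")))
                    .collect(Collectors.toList());

            engineeringOnly.forEach(System.out::println);
        }
    }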
Flat File Output
The output is a flat file that contains the details of the employees who work in the Engineering department.
Create a flat file data object write operation. Configure the flat file connection and specify the flat file object as the target data object. Drag the target data object into the mapping and specify the data object access as write.
The following image shows the mapping example: a mapping that uses a Filter transformation to filter the employee details for the Engineering department and write them to a flat file target.
When you run the mapping, the Data Integration Service reads the employee records from the Aurora PostgreSQL table, filters the records for the Engineering department, and writes the filtered data to the flat file.
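As a rough mental model of this run-time flow, the following Java sketch reads the employee rows over JDBC, applies the Engineering filter, and writes the result as a comma-separated flat file. The endpoint, credentials, table, and column names are assumptions for illustration; this is not how the Data Integration Service is implemented internally.

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EngineeringEmployeeExport {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the Aurora PostgreSQL source.
            String url = "jdbc:postgresql://<aurora-endpoint>:5432/hr";

            // Hypothetical employee table with a dept column; the WHERE clause
            // plays the role of the Filter transformation condition.
            String sql = "SELECT emp_id, emp_name, dept FROM employee "
                       + "WHERE dept = 'Engineering'";

            try (Connection con = DriverManager.getConnection(url, "hr_user", "secret");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(sql);
                 PrintWriter out = new PrintWriter("engineering_employees.csv")) {
                out.println("emp_id,emp_name,dept");            // flat file header row
                while (rs.next()) {
                    out.println(rs.getInt("emp_id") + ","
                            + rs.getString("emp_name") + ","
                            + rs.getString("dept"));
                }
            }
        }
    }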
