Table of Contents


  1. Preface
  2. Introduction to Big Data Streaming
  3. Big Data Streaming Administration
  4. Sources in a Streaming Mapping
  5. Targets in a Streaming Mapping
  6. Streaming Mappings
  7. Window Transformation
  8. Connections
  9. Sample Files

Relational Data Objects

Create a relational data object to write to Hive tables or to a JDBC-compliant database. To write to Hive tables, create a relational data object with a Hive connection. To write to a JDBC-compliant database, create a relational data object with a JDBC connection.
To use the relational data object as a target in streaming mappings, configure the relational data object write operation properties. Select the Hadoop mapping environment and run the mappings on the Spark engine.

Hive targets

When you write to a Hive target in a streaming mapping, you write to a Hive table. You can write to the following types of tables:
  • Managed or internal tables. When you write to a managed table or an internal table, Hive writes data to the Hive warehouse directory. If you enable the Truncate target table property in the Advanced properties while writing to the Hive table, the data in the table is overwritten. If you do not select this property, data is appended.
  • External tables. When you write to an external table, you must truncate the target table to overwrite data. You can write to external partitioned tables, but you cannot truncate them.
Truncation occurs only once, at the start of the write operation.
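The truncate-versus-append behavior described above can be sketched in code. This is an illustrative stand-in only: the real target is a Hive table configured through the write operation properties, not code. Here sqlite3 models the target database, and the `clicks` table, its columns, and the `write_batches` helper are all hypothetical names chosen for the example.

```python
import sqlite3

def write_batches(conn, batches, truncate_target=False):
    """Write streaming batches to the hypothetical 'clicks' target table.

    If truncate_target is set, the table is truncated exactly once, before
    the first batch; every batch is then appended. This mirrors the
    'Truncate target table' property: truncation happens only at the start
    of the write, not once per batch.
    """
    cur = conn.cursor()
    if truncate_target:
        cur.execute("DELETE FROM clicks")  # truncation happens only once, up front
    for batch in batches:
        cur.executemany("INSERT INTO clicks (user_id, url) VALUES (?, ?)", batch)
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (user_id INTEGER, url TEXT)")
conn.execute("INSERT INTO clicks VALUES (1, '/old')")  # pre-existing row

# With truncation enabled, existing data is overwritten once;
# subsequent batches are still appended.
write_batches(conn, [[(2, '/a'), (3, '/b')], [(4, '/c')]], truncate_target=True)
rows = conn.execute("SELECT COUNT(*) FROM clicks").fetchone()[0]
print(rows)  # 3: the pre-existing row was truncated, both batches appended
```

With `truncate_target=False`, the pre-existing row would survive and the count would be 4, which is the append behavior you get when the property is not selected.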

JDBC targets

You can include a JDBC-compliant database as a target in an Informatica mapping. Use the JDBC driver in the JDBC connection to configure the JDBC parameters. JDBC drivers use the value that you configure in the User Name field of the JDBC connection. Before you write to a JDBC target, you must create the target table.
