Create a relational data object to write to Hive tables or to a JDBC-compliant database. To write to Hive tables, create a relational data object with a Hive connection. To write to a JDBC-compliant database, create a relational data object with a JDBC connection.
To use the relational data object as a target in streaming mappings, configure the relational data object write operation properties. You can select the mapping environment and run the mappings on the Spark engine of the Hadoop environment.
Hive targets
When you write to a Hive target in a streaming mapping, you write to a Hive table. You can write to the following types of tables:
Managed or internal tables. When you write to a managed table or an internal table, Hive writes data to the Hive warehouse directory. If you enable the Truncate target table property in the Advanced properties while writing to the Hive table, the data in the table is overwritten. If you do not select this property, data is appended.
External tables. When you write to an external table, you must truncate the target table to overwrite data. You can write to external partitioned tables, but you cannot truncate them.
Truncation of the target table happens only once, at the beginning of the write. The sketch that follows illustrates both table types.
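For orientation, the following sketch shows what the two table types look like at the DDL level, issued through the standard Hive JDBC driver. The host, database, credentials, table names, and HDFS location are placeholder assumptions, not values from this guide; the streaming mapping itself is still configured in the Developer tool, not in code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveTableKinds {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 URL; substitute your host, port, and database.
        // Requires the Hive JDBC driver on the classpath.
        String url = "jdbc:hive2://hiveserver2-host:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {

            // Managed (internal) table: Hive owns the data in the warehouse
            // directory. With the Truncate target table property enabled,
            // a streaming mapping overwrites this data; otherwise it appends.
            stmt.execute("CREATE TABLE IF NOT EXISTS clicks_managed ("
                    + "event_id STRING, event_time TIMESTAMP) STORED AS ORC");

            // External table: the data lives at a location you control.
            // To overwrite an external target, the table must be truncated;
            // external partitioned tables cannot be truncated.
            stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS clicks_external ("
                    + "event_id STRING, event_time TIMESTAMP) "
                    + "STORED AS ORC LOCATION '/data/clicks_external'");
        }
    }
}
```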
Data Engineering Streaming does not support complex data types for Hive targets.
For Hadoop HDP version 3.1, HDInsight version 4.0, and Cloudera CDP version 7.1, you can write only to Hive tables that have the following properties (a DDL sketch follows the list):
ACID
Full Transactional
Optimized Row Columnar (ORC) file format
Managed Table
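As a hedged illustration of a table that satisfies all four properties, the following sketch declares a managed, full ACID transactional, ORC-backed table over the Hive JDBC driver. The URL, credentials, table, and column names are invented for the example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AcidHiveTarget {
    public static void main(String[] args) throws Exception {
        // Placeholder HiveServer2 URL; substitute your cluster's values.
        String url = "jdbc:hive2://hiveserver2-host:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Managed ORC table with full ACID transactions enabled.
            stmt.execute("CREATE TABLE IF NOT EXISTS events_acid ("
                    + "event_id STRING, payload STRING) "
                    + "STORED AS ORC "
                    + "TBLPROPERTIES ('transactional'='true')");
        }
    }
}
```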
JDBC targets
You can include a JDBC-compliant database as a target in an Informatica mapping. Use the JDBC drivers in the JDBC connection to configure the JDBC parameters. JDBC drivers use the values that you configure in the User Name and Password fields of the JDBC connection. If you want to write to a JDBC target, you must first create the target table.
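The mapping does not create the table for you. As a minimal sketch, assuming a PostgreSQL target (any JDBC-compliant database works the same way with its own driver and URL), the following creates a target table over plain JDBC. The URL, credentials, table, and columns are placeholders; reuse the same values that you configure in the User Name and Password fields of the JDBC connection.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateJdbcTarget {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and credentials; match the values configured
        // in the JDBC connection that the mapping uses.
        String url = "jdbc:postgresql://db-host:5432/analytics";
        try (Connection conn = DriverManager.getConnection(url, "stream_user", "secret");
             Statement stmt = conn.createStatement()) {
            // Target table that the streaming mapping writes into;
            // it must exist before the mapping runs.
            stmt.execute("CREATE TABLE IF NOT EXISTS stream_events ("
                    + "event_id VARCHAR(64), "
                    + "event_time TIMESTAMP, "
                    + "payload VARCHAR(4000))");
        }
    }
}
```

Match the column names and data types of this table to the fields that the mapping writes, so the relational data object write operation can map ports to columns directly.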