Create a relational data object to write to Hive tables or to a JDBC-compliant database. To write to Hive tables, create a relational data object with a Hive connection. To write to a JDBC-compliant database, create a relational data object with a JDBC connection.
To use the relational data object as a target in streaming mappings, configure the relational data object write operation properties. You can select the mapping environment and run the mappings on the Spark engine of the Hadoop environment.
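The Developer tool generates and runs the Spark job for you, so the following PySpark snippet is only a rough mental model of what a streaming mapping with a Hive target does on the Spark engine. The rate source, the checkpoint path, and the `default.events` table are hypothetical stand-ins for the mapping's source and target.

```python
from pyspark.sql import SparkSession

# Conceptual sketch only; the Developer tool builds the actual Spark job.
spark = (SparkSession.builder
         .appName("streaming-hive-target-sketch")
         .enableHiveSupport()
         .getOrCreate())

# Stand-in streaming source (the mapping's source transformation).
events = (spark.readStream
          .format("rate")               # emits timestamp/value rows
          .option("rowsPerSecond", 10)
          .load())

# Write each micro-batch to the Hive table that backs the target
# relational data object.
query = (events.writeStream
         .option("checkpointLocation", "/tmp/hive_target_ckpt")  # hypothetical path
         .foreachBatch(lambda batch_df, _: batch_df.write
                       .mode("append")
                       .saveAsTable("default.events"))
         .start())
query.awaitTermination()
```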
When you write to a Hive target in a streaming mapping, you write to a Hive table. You can write to the following types of tables:
Managed or internal tables. When you write to a managed table or an internal table, Hive writes data to the Hive warehouse directory. If you enable the Truncate target table property in the Advanced properties when you write to the Hive table, the data in the table is overwritten. If you do not select this property, data is appended.
External tables. When you write to an external table, you must truncate the target table to overwrite data. You can write to external partitioned tables, but you cannot truncate them.
Truncation of the target table occurs only once, at the beginning, before data is written.
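In Spark terms, the Truncate target table property roughly corresponds to the difference between an overwrite and an append write mode, as in the sketch below. The table name `default.demo_target` and the sample rows are made up for illustration; the mapping performs the equivalent work internally.

```python
from pyspark.sql import SparkSession

# Conceptual analogue of the Truncate target table property.
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

rows = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

# Truncate target table enabled: existing rows are replaced once,
# before streamed data starts arriving.
rows.write.mode("overwrite").saveAsTable("default.demo_target")

# Truncate target table disabled: new rows are appended.
rows.write.mode("append").saveAsTable("default.demo_target")
```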
You can include a JDBC-compliant database as a target in an Informatica mapping. Use the Sqoop arguments in the JDBC connection to configure the JDBC parameters. Sqoop uses the values that you configure in the User Name and Password fields of the JDBC connection. If you configure the --username or --password argument in a JDBC connection or mapping, Sqoop ignores the arguments. If you create a password file to access a database, Sqoop ignores the password file.
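The precedence rule can be pictured with the hypothetical sketch below: the credentials always come from the connection's User Name and Password fields, and any --username, --password, or --password-file entries among the configured Sqoop arguments are dropped. This is not Informatica's actual code, only an illustration of which values Sqoop ends up using; the connection dictionary, table, and directory names are invented.

```python
# Hypothetical illustration of the credential precedence described above.
IGNORED = {"--username", "--password", "--password-file"}

def build_sqoop_export(connection, table, export_dir, extra_args):
    """Assemble a sqoop export command, dropping ignored credential arguments."""
    cmd = ["sqoop", "export",
           "--connect", connection["url"],
           "--username", connection["user"],      # from the connection's User Name field
           "--password", connection["password"],  # from the connection's Password field
           "--table", table,
           "--export-dir", export_dir]
    args = iter(extra_args)
    for arg in args:
        if arg in IGNORED:
            next(args, None)   # skip the ignored argument's value as well
            continue
        cmd.append(arg)
    return cmd

print(build_sqoop_export(
    {"url": "jdbc:mysql://db:3306/sales", "user": "etl", "password": "secret"},
    table="orders",
    export_dir="/staging/orders",
    extra_args=["--num-mappers", "4", "--username", "ignored_user"]))
```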