Table of Contents

  1. Preface
  2. Part 1: Version 10.4.1
  3. Part 2: Version 10.4.0
  4. Part 3: Version 10.2.2
  5. Part 4: Version 10.2.1
  6. Part 5: Version 10.2
  7. Part 6: Version 10.1.1
  8. Part 7: Version 10.1

Release Guide (10.4.1.1)

Hive Warehouse Connector and Hive LLAP
Effective in version 10.4.1, you can enable the Hive Warehouse Connector and Hive LLAP when you run mappings on the Spark engine. Previously, these features were available for technical preview.
Use the Hive Warehouse Connector and Hive LLAP to execute Hive queries faster when you read from and write to Hive tables with the Spark engine. You can use them with Hortonworks HDP 3.x and Microsoft Azure HDInsight 4.x clusters.
To enable the Hive Warehouse Connector and Hive LLAP, configure the following properties in the Spark advanced properties for the Hadoop connection. A filled-in example follows the list:
  • infaspark.useHiveWarehouseAPI=true
  • spark.datasource.hive.warehouse.load.staging.dir=<HDFS temporary directory for writing to Hive tables>
  • spark.datasource.hive.warehouse.metastoreUri=<URI for the Hive metastore>
    Use the value of hive.metastore.uris from the hive-site.xml cluster configuration properties.
  • spark.hadoop.hive.llap.daemon.service.hosts=<Application name for the LLAP service>
    Use the value of hive.llap.daemon.service.hosts from the hive-site.xml cluster configuration properties.
  • spark.hadoop.hive.zookeeper.quorum=<Zookeeper hosts used by Hive LLAP>
    Use the value of hive.zookeeper.quorum from the hive-site.xml cluster configuration properties.
  • spark.sql.hive.hiveserver2.jdbc.url=<URL for HiveServer2 Interactive>
    Use the value in Ambari for the HiveServer2 JDBC URL.
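For example, on a hypothetical HDP 3.x cluster, the completed Spark advanced properties might look like the following sketch. The staging directory, host names, and LLAP application name are placeholder values; take the actual values from the hive-site.xml cluster configuration properties and from Ambari:

  # Example values only; substitute the values from your cluster's hive-site.xml and Ambari.
  infaspark.useHiveWarehouseAPI=true
  spark.datasource.hive.warehouse.load.staging.dir=/tmp/hwc_staging
  spark.datasource.hive.warehouse.metastoreUri=thrift://master1.example.com:9083
  spark.hadoop.hive.llap.daemon.service.hosts=@llap0
  spark.hadoop.hive.zookeeper.quorum=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
  spark.sql.hive.hiveserver2.jdbc.url=jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive

In this sketch, @llap0 is the default LLAP application name on HDP, and the JDBC URL uses ZooKeeper service discovery, which is how Ambari typically publishes the HiveServer2 Interactive URL.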
For more information, refer to the Data Engineering Integration 10.4.1 User Guide.
