Table of Contents

  1. About the Enterprise Data Preparation Administrator Guide
  2. Introduction to Enterprise Data Preparation Administration
  3. Getting Started
  4. Administration Process
  5. User Account Setup
  6. Search Configuration
  7. Roles, Privileges, and Profiles
  8. Data Asset Access and Publication Management
  9. Masking Sensitive Data
  10. Monitoring Enterprise Data Preparation
  11. Backing Up and Restoring Enterprise Data Preparation
  12. Managing the Data Lakehouse
  13. Schedule Export, Import and Publish Activities
  14. Interactive Data Preparation Service
  15. Enterprise Data Preparation Service

Enterprise Data Preparation Administrator Guide

Provide Access to Data in the Data Lake

Enterprise Data Preparation user accounts must be authorized to access the Hive tables in the Hadoop cluster designated as the data lake. Users access these Hive tables when they preview data, upload data, and publish prepared data.
As an administrator, you grant access to the data assets that an Enterprise Data Preparation user requires.
When an Enterprise Data Preparation user requests access to data assets, follow your organization's established processes and best practices to grant the permissions. You must grant analysts the appropriate permissions on the data lake Hadoop cluster. You can set up user impersonation to run mappings in the Hadoop cluster when an analyst publishes prepared data.
You use the tools provided with the security framework of your Hadoop distribution to provide access to data in the data lake. For example, if you use Hortonworks with security managed by Apache Ranger, use the Ranger web UI to set up policies that grant access to Hive schemas and HDFS locations. If you use Cloudera with security provided through Apache Sentry, use the Hue web application to create and manage roles and privileges.
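For example, on a Sentry-enabled cluster you might grant analysts read access to a data lake database by running statements such as the following in Hue or Beeline. This is a sketch only; the role name edp_analyst, the group analysts, and the database datalake are hypothetical:

    -- Create a role, grant it read access to the data lake database,
    -- and assign the role to the analysts group
    CREATE ROLE edp_analyst;
    GRANT SELECT ON DATABASE datalake TO ROLE edp_analyst;
    GRANT ROLE edp_analyst TO GROUP analysts;

Apache Ranger provides the equivalent through resource-based policies in the Ranger web UI, where you select the Hive database, table, and columns and assign SELECT access to the appropriate users or groups.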

Tools to complete this step:

  • Third-party Hadoop tools