Table of Contents

  1. About the Enterprise Data Lake Administrator Guide
  2. Introduction to Enterprise Data Lake Administration
  3. Administration Process
  4. User Account Setup
  5. Application Configuration
  6. Roles, Privileges, and Profiles
  7. Data Asset Access and Publication Management
  8. Monitoring Enterprise Data Lake
  9. Backing Up and Restoring Enterprise Data Lake
  10. Managing the Data Lake
  11. Schedule Export, Import and Publish Activities
  12. Data Preparation Service
  13. Enterprise Data Lake Service

Enterprise Data Lake Administrator Guide

Providing and Managing Access to Data

Enterprise Data Lake user accounts must be authorized to access the Hive tables on the Hadoop cluster designated as the data lake. Enterprise Data Lake user accounts access Hive tables on the Hadoop cluster when they preview data, upload data, and publish prepared data.
HDFS permissions
Grant each user account the appropriate HDFS permissions on the Hadoop cluster. HDFS permissions determine what a user can do to files and directories stored in HDFS. To access a file or directory, a user must have permission or belong to a group that has permission to the file or directory.
A Hive database corresponds to a directory in HDFS. Each Hive table created in the database corresponds to a subdirectory. You grant Enterprise Data Lake user accounts permission on the appropriate directory, based on whether you want to provide permission on a Hive database or on a specific Hive table in the database.
As a best practice, set up private, shared, and public databases (or schemas) in a single Hive resource and grant users the appropriate permissions on the corresponding HDFS directories.
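
The Python sketch below shows one way to script that setup on the cluster by shelling out to the hdfs dfs client. The warehouse path, the database naming convention, the user name analyst1, and the group edl_analysts are assumptions made for the example, not values from this guide, and the per-table ACL grant requires HDFS ACLs to be enabled on the cluster.

```python
"""Sketch: secure the HDFS directories behind private, shared, and public Hive databases.

Assumptions (not from the guide): the Hive warehouse lives under /user/hive/warehouse,
databases follow a <name>.db directory convention, and the script runs as a user with
HDFS superuser or directory-owner rights.
"""
import subprocess

WAREHOUSE = "/user/hive/warehouse"  # assumed warehouse location

def hdfs(*args):
    """Run an 'hdfs dfs' subcommand and fail loudly if it errors."""
    subprocess.run(["hdfs", "dfs", *args], check=True)

def grant_private_db(user):
    """Private database: only the owning user can read or write."""
    path = f"{WAREHOUSE}/{user}_private.db"  # assumed naming convention
    hdfs("-mkdir", "-p", path)
    hdfs("-chown", f"{user}:{user}", path)
    hdfs("-chmod", "700", path)

def grant_shared_db(group, path=f"{WAREHOUSE}/shared.db"):
    """Shared database: members of one group can read and write."""
    hdfs("-mkdir", "-p", path)
    hdfs("-chgrp", group, path)
    hdfs("-chmod", "770", path)

def grant_public_db(path=f"{WAREHOUSE}/public.db"):
    """Public database: every cluster user can read, only the owner writes."""
    hdfs("-mkdir", "-p", path)
    hdfs("-chmod", "755", path)

def grant_table(user, db, table):
    """Grant one user read access to a single table subdirectory through an ACL."""
    hdfs("-setfacl", "-R", "-m", f"user:{user}:r-x", f"{WAREHOUSE}/{db}.db/{table}")

if __name__ == "__main__":
    grant_private_db("analyst1")      # hypothetical user name
    grant_shared_db("edl_analysts")   # hypothetical group name
    grant_public_db()
```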
Data asset preview, import, and export
To allow users to access and preview data assets that are not in the data lake, grant them permission on the corresponding connections, such as Oracle and Teradata Import connections.
For more information about Sqoop connectivity through JDBC for external databases, see the chapters on Sqoop sources and targets in the Informatica Big Data Management User Guide.
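
As a rough illustration of what such an import looks like outside the product, the sketch below builds a Sqoop command line that pulls an external Oracle table into a Hive database over JDBC. Every connection detail in it (host, service name, schema, table, Hive database, password file) is a placeholder; in practice, Enterprise Data Lake issues these imports through the connections you granted permission on.

```python
"""Sketch: import an external Oracle table into the data lake with Sqoop over JDBC."""
import subprocess

sqoop_cmd = [
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@//oracle-host:1521/ORCLPDB",  # placeholder JDBC URL
    "--username", "edl_reader",                                   # placeholder database account
    "--password-file", "/user/edl/.oracle_pwd",                   # password kept in HDFS, not on the command line
    "--table", "SALES.ORDERS",                                    # placeholder source table
    "--hive-import",                                              # land the data in a Hive table
    "--hive-database", "shared",                                  # placeholder lake database
    "--hive-table", "orders",
    "--num-mappers", "4",
]
subprocess.run(sqoop_cmd, check=True)
```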
Hadoop user accounts
Create a user account on the Hadoop cluster for each Enterprise Data Lake user account.
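
A minimal provisioning sketch, assuming local OS accounts and a /user/<name> HDFS home directory convention, might look like this. On most clusters the OS account comes from LDAP or Active Directory rather than useradd, so treat the first step as illustrative; the user and group names are hypothetical.

```python
"""Sketch: provision a Hadoop-side account for one Enterprise Data Lake user.

Run as root on the cluster node for the OS steps and as the HDFS superuser
for the HDFS steps.
"""
import subprocess

def provision_hadoop_user(name, group="edl_users"):
    # Create the group and OS account; on a real cluster these usually come
    # from a central directory service instead of local useradd.
    subprocess.run(["groupadd", "-f", group], check=True)
    subprocess.run(["useradd", "-m", "-g", group, name], check=True)

    # Create and hand over the user's HDFS home directory.
    home = f"/user/{name}"
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", home], check=True)
    subprocess.run(["hdfs", "dfs", "-chown", f"{name}:{group}", home], check=True)
    subprocess.run(["hdfs", "dfs", "-chmod", "750", home], check=True)

provision_hadoop_user("analyst1")  # hypothetical user name
```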
User impersonation
User impersonation allows different user accounts to run mappings in a Hadoop cluster that uses Kerberos authentication. When users upload and publish prepared data in Enterprise Data Lake, the Data Integration Service runs mappings in the Hadoop environment and pushes the processing to nodes on the Hadoop cluster. The Data Integration Service uses the credentials that you specify to impersonate the user accounts that upload and publish the data.
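
On the cluster side, Hadoop impersonation generally also requires proxyuser entries in core-site.xml for the principal that submits the mappings. The sketch below appends those properties for an assumed service principal named infa_dis; the principal name, group, and file path are assumptions, and on managed clusters you would make this change through Ambari or Cloudera Manager rather than by editing the file directly.

```python
"""Sketch: add Hadoop proxyuser entries that allow a service principal to impersonate other users.

'infa_dis' is an assumed short name for the principal the Data Integration Service
authenticates with; narrow the hosts and groups values to your own environment
instead of using '*'.
"""
import xml.etree.ElementTree as ET

CORE_SITE = "/etc/hadoop/conf/core-site.xml"  # typical client configuration path
PROXY_USER = "infa_dis"                        # assumed service principal short name

tree = ET.parse(CORE_SITE)
conf = tree.getroot()

def add_property(name, value):
    prop = ET.SubElement(conf, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

# Hosts the principal may proxy from, and groups whose members it may impersonate.
add_property(f"hadoop.proxyuser.{PROXY_USER}.hosts", "*")
add_property(f"hadoop.proxyuser.{PROXY_USER}.groups", "edl_users")  # assumed group

tree.write(CORE_SITE, xml_declaration=True, encoding="utf-8")
```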
When the Data Integration Service impersonates a user account to submit a mapping, the mapping can access only the Hadoop resources that the impersonated user has permissions on. Without user impersonation, the Data Integration Service submits mappings with its own credentials, which might give users access to restricted Hadoop resources.
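
To spot-check the effect, you can list a restricted directory while impersonating a user and confirm that access is denied. The sketch below relies on the Hadoop client's HADOOP_PROXY_USER environment variable, which works only when the proxyuser rules permit the impersonation; the user name and path are placeholders.

```python
"""Sketch: spot-check what an impersonated account can actually reach in HDFS."""
import os
import subprocess

env = dict(os.environ, HADOOP_PROXY_USER="analyst1")  # placeholder user name
result = subprocess.run(
    ["hdfs", "dfs", "-ls", "/user/hive/warehouse/restricted.db"],  # placeholder path
    env=env, capture_output=True, text=True,
)
# A 'Permission denied' here confirms that the impersonated user cannot reach
# resources it was not granted; an unexpected listing means the directory
# permissions are looser than intended.
print(result.returncode, result.stderr.strip() or result.stdout.strip())
```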