Table of Contents

  1. About the Intelligent Data Lake Administrator Guide
  2. Introduction to Intelligent Data Lake Administration
  3. Administration Process
  4. User Account Setup
  5. Data Preparation Service
  6. Intelligent Data Lake Service
  7. Application Configuration
  8. Roles, Privileges, and Profiles
  9. Data Asset Access and Publication Management
  10. Monitoring Intelligent Data Lake
  11. Backing Up and Restoring Intelligent Data Lake
  12. Data Type Reference

Intelligent Data Lake Administrator Guide

Providing and Managing Access to Data

Intelligent Data Lake user accounts must be authorized to access the Hive tables in the Hadoop cluster designated as the data lake. Intelligent Data Lake user accounts access Hive tables in the Hadoop cluster when they preview data, upload data, and publish prepared data.
HDFS permissions
Grant each user account the appropriate HDFS permissions in the Hadoop cluster. HDFS permissions determine what a user can do to files and directories stored in HDFS. To access a file or directory, a user must have permission, or belong to a group that has permission, to the file or directory.

A Hive database corresponds to a directory in HDFS, and each Hive table created in the database corresponds to a subdirectory. Grant Intelligent Data Lake user accounts permission on the appropriate directory, based on whether you want to provide access to an entire Hive database or to a specific Hive table in the database.

As a best practice, set up private, shared, and public databases (or schemas) in a single Hive resource and grant users the appropriate permissions on the corresponding HDFS directories.
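The private/shared/public layout above can be sketched with HDFS commands. This is an illustration only: the warehouse path, database names, user names, and group names below are assumptions, so adjust them to match your cluster.

```shell
# Sketch only: warehouse path, database, user, and group names are
# assumptions; substitute the values used on your cluster.

# Each Hive database is a directory under the warehouse path.
WAREHOUSE=/user/hive/warehouse

# Private database: only the owning user can read and write.
hdfs dfs -mkdir -p "$WAREHOUSE/private_alice.db"
hdfs dfs -chown alice:alice "$WAREHOUSE/private_alice.db"
hdfs dfs -chmod 700 "$WAREHOUSE/private_alice.db"

# Shared database: members of the analysts group can read and write.
hdfs dfs -mkdir -p "$WAREHOUSE/shared_analytics.db"
hdfs dfs -chown hive:analysts "$WAREHOUSE/shared_analytics.db"
hdfs dfs -chmod 770 "$WAREHOUSE/shared_analytics.db"

# Public database: all users can read, only the owner can write.
hdfs dfs -mkdir -p "$WAREHOUSE/public.db"
hdfs dfs -chmod 755 "$WAREHOUSE/public.db"
```

Because a table is a subdirectory of its database directory, you can apply the same pattern one level down to grant access to a single table rather than a whole database.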
Data asset preview, import, and export
To let users access and preview data assets that reside outside the data lake, grant them permission on the relevant connections, such as Oracle and Teradata connections used for import.
For more information about Sqoop connectivity through JDBC for external databases, see the chapters on Sqoop sources and targets in the Informatica Big Data Management User Guide.

Set the custom flag DoNotUseOwnerNameForSqoop to true if the database does not require the owner name when connecting through Sqoop. For more information, see the Informatica Big Data Management User Guide.
User impersonation
User impersonation allows different user accounts to run mappings in a Hadoop cluster that uses Kerberos authentication. When users upload and publish prepared data in the Intelligent Data Lake application, the Data Integration Service runs mappings in the Hadoop environment and pushes the processing to nodes in the Hadoop cluster. The Data Integration Service uses the credentials that you specify to impersonate the user accounts that upload and publish the data. Create a user account in the Hadoop cluster for each Intelligent Data Lake user account.

When the Data Integration Service impersonates a user account to submit a mapping, the mapping can access only the Hadoop resources that the impersonated user has permissions on. Without user impersonation, the Data Integration Service submits mappings with its own credentials, which might make restricted Hadoop resources accessible.
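On the Hadoop side, impersonation rights are typically granted through proxy-user properties in core-site.xml. As a hedged sketch, assuming the Data Integration Service runs on the cluster as a hypothetical user named infadis (substitute your actual service user), the cluster would allow that user to impersonate others with settings like:

```xml
<!-- core-site.xml on the cluster. Example only: the service user name
     "infadis" is an assumption; use the user that the Data Integration
     Service connects to the cluster as. Replace the wildcards with
     specific groups and hosts to limit impersonation scope. -->
<property>
  <name>hadoop.proxyuser.infadis.groups</name>
  <!-- groups whose members the service user may impersonate -->
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.infadis.hosts</name>
  <!-- hosts from which impersonation requests are accepted -->
  <value>*</value>
</property>
```

Narrowing the groups and hosts values to specific entries, rather than wildcards, limits which Intelligent Data Lake users can be impersonated and from where.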