Configure the Files for Hive Tables on S3

To run mappings with Hive sources or targets on S3, you must copy files from the master node of the Hadoop cluster to the Data Integration Service machine.
Perform this task in the following situations:
  • You are integrating for the first time.
  • You upgraded Big Data Management and changed the distribution version.
You can perform one of the following steps to configure the files:
Copy the .jar files
Get the .jar files from the Hadoop administrator. The following files are on the master node of the Hadoop cluster:
  • For integration with EMR 5.10, copy emrfs-hadoop-assembly-2.20.0.jar.
  • For integration with EMR 5.14, copy emrfs-hadoop-assembly-2.23.0.jar.
Copy the .jar files to the following directory on each Data Integration Service machine:
/<Informatica installation directory>/services/shared/hadoop/EMR_<version number>/lib
For integration with EMR 5.14, also copy emrfs-hadoop-assembly-2.23.0.jar to the following path:
/<Informatica installation directory>/services/shared/hadoop/EMR_<version number>/extras/hive-auxjars
If you upgraded from EMR 5.10 to EMR 5.14, the part of the file path that includes EMR_<version number> remains EMR_5.10.
Create a file
Create a ~/.aws/config file on the Data Integration Service machine. The file must contain the AWS region. For example:
[default]
region=us-west-2
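As a sketch, the file can be written with a heredoc. The scratch AWS_DIR below stands in for ~/.aws on the Data Integration Service machine, and us-west-2 is only the example region from above.

```shell
# Sketch: write the AWS config file. AWS_DIR is a scratch stand-in for ~/.aws
# on the Data Integration Service machine; use your own region.
AWS_DIR=$(mktemp -d)
cat > "$AWS_DIR/config" <<'EOF'
[default]
region=us-west-2
EOF
```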
Create an environment variable
Create an AWS_CONFIG_FILE environment variable on the Data Integration Service machine. Set the value to:
<EMR_5.10>/conf/aws.default
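A minimal sketch of setting the variable for the current shell session; to make it persistent, add the line to the profile of the user that runs the Data Integration Service. The value mirrors the documented placeholder path, so substitute your real EMR_5.10 directory.

```shell
# Sketch: point AWS_CONFIG_FILE at the aws.default file. The placeholder path
# mirrors the documentation; replace <EMR_5.10> with the actual directory.
export AWS_CONFIG_FILE="<EMR_5.10>/conf/aws.default"
echo "$AWS_CONFIG_FILE"
```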
