Table of Contents

  1. Preface
  2. Introduction to Informatica Big Data Management
  3. Connections
  4. Mappings in the Hadoop Environment
  5. Mapping Objects in the Hadoop Environment
  6. Processing Hierarchical Data on the Spark Engine
  7. Stateful Computing on the Spark Engine
  8. Monitoring Mappings in the Hadoop Environment
  9. Mappings in the Native Environment
  10. Profiles
  11. Native Environment Optimization
  12. Data Type Reference
  13. Complex File Data Object Properties
  14. Function Reference
  15. Parameter Reference

Creating a Connection to Access Sources or Targets

Create an HBase, HDFS, Hive, or JDBC connection before you import data objects, preview data, and profile data.
  1. Click Window > Preferences.
  2. Select Informatica > Connections.
  3. Expand the domain in the Available Connections list.
  4. Select the type of connection that you want to create:
    • To select an HBase connection, select NoSQL > HBase.
    • To select an HDFS connection, select File Systems > Hadoop File System.
    • To select a Hive connection, select Database > Hive.
    • To select a JDBC connection, select Database > JDBC.
  5. Click Add.
  6. Enter a connection name and optional description.
  7. Click Next.
  8. Configure the connection properties. For a Hive connection, you must choose the Access Hive as a source or target option to use Hive as a source or a target. The Access Hive to run mappings in Hadoop cluster option is no longer applicable. To use the Hive driver to run mappings in the Hadoop cluster, use a Hadoop connection. For an illustration of typical JDBC connection property values, see the sketch after this procedure.
  9. Click Test Connection to verify the connection.
  10. Click Finish.

