Table of Contents

  1. Preface
  2. Introduction to Intelligent Streaming
  3. Intelligent Streaming Configuration
  4. Connections
  5. Sources and Targets in a Streaming Mapping
  6. Intelligent Streaming Mappings
  7. Window Transformation
  8. Data Type Reference

Connections Overview


Define the connections that you want to use to access data in Kafka brokers, JMS servers, HDFS files, Hive tables, Amazon Kinesis streams, MapR Streams, or HBase resources. You can create the connections in the Developer tool or with infacmd.
You can create the following types of connections:
Hadoop
Create a Hadoop connection to run mappings on the Hadoop cluster. Select the Hadoop connection if you select the Hadoop run-time environment. You must also select the Hadoop connection to validate a mapping to run on the Hadoop cluster.
For more information about the Hadoop connection properties, see the Informatica Big Data Management User Guide.
HBase
Create an HBase connection to write data to an HBase resource.
HDFS
Create an HDFS connection to write data to an HDFS binary or sequence file.
Hive
Create a Hive connection to write data to Hive tables.
For more information, see the Informatica Big Data Management Administrator Guide.
JDBC
Create a JDBC connection when you perform a lookup on a relational database using Sqoop.
For more information about the JDBC connection properties, see the Informatica Big Data Management User Guide.
Messaging
Create a Messaging connection to access data as it becomes available and to run a streaming mapping on the Spark engine. You can create the following types of messaging connections:
  • Kafka. Create a Kafka connection to read from or write to a Kafka broker.
  • JMS. Create a JMS connection to read from or write to a JMS server.
  • AmazonKinesis. Create an AmazonKinesis connection to read from Amazon Kinesis Streams or write to Amazon Kinesis Firehose Delivery Streams.
  • MapRStreams. Create a MapRStreams connection to read from or write to MapR Streams.
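
As noted in the overview, connections can also be created from the command line with infacmd instead of the Developer tool. The following is a minimal sketch of creating a Kafka messaging connection with the infacmd isp CreateConnection command. The domain, user, and connection names are placeholders, and the connection type value and the property keys passed to -o are illustrative assumptions; verify the exact type name and option keys in the infacmd Command Reference for your Informatica release.

```shell
# Sketch only: -ct value and -o property keys vary by release;
# confirm them in the infacmd Command Reference before use.
infacmd.sh isp CreateConnection \
  -dn MyDomain \
  -un Administrator \
  -pd MyPassword \
  -cn kafka_streaming_conn \
  -cid kafka_streaming_conn \
  -ct Kafka \
  -o "kfkBrkList='kafkahost1:9092,kafkahost2:9092'"
```

After the connection is created, it appears in the Developer tool and can be selected as the source or target connection in a streaming mapping.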