The Informatica Big Data Streaming User Guide provides information about how to configure and run streaming mappings on a Spark engine in a Hadoop environment.
The Big Data Management™ Administrator Guide is written for Informatica administrators. The guide contains information that you need to administer the integration of the Informatica domain with the Hadoop cluster. It includes information about security, connections, and cluster configurations. This guide assumes that you are familiar with the …
The Informatica Big Data Management™ Integration Guide is written for the system administrator who is responsible for integrating the native environment of the Informatica domain with a non-native environment, such as Hadoop or Databricks. This guide contains instructions to integrate the Informatica and non-native environments. …
The Informatica Big Data Management® User Guide provides information about configuring and running mappings in the native and Hadoop run-time environments.
The Informatica PowerExchange® for HBase User Guide provides information about extracting data from and loading data to HBase. The guide is written for database administrators and developers who are responsible for developing mappings that read data from HBase tables and write data to HBase tables. This guide assumes that you have …
Additional Content
Basic information about Informatica 10.2.1 Big Data products: Big Data Management, Big Data Quality, Enterprise Data Lake, Big Data Streaming, and Enterprise Data Catalog.
Click through this primer to get basic information about each Big Data product, along with the services, tools, documentation, and resources associated with the product.
You can use a streaming mapping to read from streaming sources, such as Apache Kafka. This article describes how to configure disaster recovery for streaming sources so that processing can resume after a mapping failure.
When you install Intelligent Streaming on MapR, you can run streaming mappings to read from any streaming source or write data to any streaming target without any additional configuration except for Kafka. This article provides the configuration steps that you need to perform to run streaming mappings to read from or write to Kafka on MapR.
You can use Kerberos-enabled Kafka as a source or target in a streaming mapping. This article describes how to configure Kerberos-enabled Kafka for a streaming mapping in Big Data Streaming 10.2.2.
Create and deploy an application that contains mappings, workflows, and other application objects to make the objects accessible to users that want to leverage the data outside of the Developer tool. You can deploy the application to a Data Integration Service to run the objects, or to an application archive file to save a copy of the …
Apache Kafka clients and brokers can communicate over SSL using a dedicated port. However, to use SSL authentication in Apache Kafka you must enable it. This article describes the steps to enable SSL authentication in Apache Kafka using Portecle.
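As context for the article above, enabling SSL on a Kafka broker comes down to a handful of broker properties that point at the keystore and truststore (which a tool such as Portecle can create). The fragment below is an illustrative sketch, not from the article; paths, passwords, and the port are placeholders.

```properties
# Serve plaintext on 9092 and SSL on a dedicated port (9093).
listeners=PLAINTEXT://:9092,SSL://:9093
# Broker identity: keystore holding the broker's private key and certificate.
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
# Truststore holding the CA certificates used to verify clients.
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
# Require clients to present a certificate (mutual SSL authentication).
ssl.client.auth=required
```

Clients then connect to the SSL port with matching keystore and truststore settings of their own.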
You can read from Kafka sources in streaming mappings. When you configure a Kafka data object to read from a Kafka topic, you can specify the topic name or use a regular expression for the topic name pattern. This article describes how to specify a topic pattern and the guidelines to follow when you create the data objects.
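To illustrate the idea of a topic name pattern, the sketch below matches hypothetical topic names against a regular expression in Python. The topic names and the pattern are made up for illustration; Kafka consumers that subscribe by pattern evaluate the expression against every topic name in the cluster, as simulated here.

```python
import re

# Hypothetical topic names in a cluster.
topics = ["orders_us", "orders_eu", "audit_log"]

# A topic name pattern that matches every topic starting with "orders_".
pattern = re.compile(r"orders_.*")

# Subscribe-by-pattern behaves like filtering the topic list by full match.
matched = [t for t in topics if pattern.fullmatch(t)]
print(matched)  # ['orders_us', 'orders_eu']
```

A pattern subscription picks up new topics that match the expression, so a mapping can absorb newly created `orders_*` topics without reconfiguration.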
You can read from MapR Streams in streaming mappings. When you configure the MapRStreams data object to read from a stream, you can specify the stream name or use a regular expression for the stream name pattern. This article describes how to specify a stream pattern and the guidelines to follow while creating …
An application patch can inherit direct, indirect, and remote dependencies. You can identify direct dependencies based on design-time objects, but you must use both the design-time and run-time objects to identify indirect and remote dependencies. This article presents scenarios that demonstrate how you can use the application object …
You can use stateful variables to perform stateful computing in a streaming mapping. This article presents a use case about using stateful variables to version data. Walk through the use case to understand how stateful variables achieve stateful computing and to consider how you can apply them to your own requirements.
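The core idea behind versioning data with stateful variables can be sketched in a few lines of Python: state that persists across incoming records lets each record for a key receive an incremented version number. The function and field names below are illustrative, not Informatica APIs.

```python
from collections import defaultdict

# State that survives across records, keyed by record id.
version = defaultdict(int)

def tag_version(record):
    """Attach an incrementing version number per key, mimicking a stateful variable."""
    key = record["id"]
    version[key] += 1
    return {**record, "version": version[key]}

# A toy stream: two updates for "a", one for "b".
stream = [{"id": "a", "v": 1}, {"id": "a", "v": 2}, {"id": "b", "v": 9}]
tagged = [tag_version(r) for r in stream]
print(tagged)
```

The second record for `"a"` gets version 2 while the first record for `"b"` gets version 1, which is exactly the behavior a per-key stateful variable provides inside a streaming mapping.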
You can use Data Engineering Streaming mappings to capture data from multiple data sources and ingest it into a target. This article describes how you can ingest CDC data from multiple Kafka topics into Data Engineering systems in a single mapping and publish the data to either HDFS or Hive partitioned tables.
You can tune Intelligent Streaming for better performance. This article provides recommendations that you can use to tune hardware, memory, Spark configuration, and mapping configuration.
When you use Lookup transformations in Streaming mappings, multiple factors impact mapping performance. You can optimize performance when you configure Lookup transformation properties and execution parameters in the Streaming mapping. This article provides recommendations that you can use to optimize mapping performance.
You can tune Big Data Streaming for better performance. This article provides recommendations that you can use to tune hardware, Spark configuration, mapping configuration, and the Kafka cluster.
You can tune Data Engineering Streaming for better performance. This article provides recommendations that you can use to tune hardware, Spark configuration, mapping configuration, and the Kafka cluster.
You can tune Intelligent Streaming for better performance. This article provides recommendations that you can use to tune hardware, Spark configuration, mapping configuration, and the Kafka cluster.
You can tune the hardware and the Hadoop cluster for better performance of Informatica big data products. This article provides tuning recommendations for Hadoop administrators and system administrators who set up the Hadoop cluster and hardware for Informatica big data products.
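The tuning articles above all touch Spark configuration. As a flavor of what such tuning involves, a few commonly adjusted Spark properties are sketched below; the values are placeholders for illustration, not recommendations from the articles.

```properties
# Executor sizing: how many executors, and cores/memory per executor.
spark.executor.instances=8
spark.executor.cores=4
spark.executor.memory=8g
# Let Spark Streaming slow ingestion when the pipeline falls behind.
spark.streaming.backpressure.enabled=true
# Cap records read per Kafka partition per batch.
spark.streaming.kafka.maxRatePerPartition=10000
```

Appropriate values depend on cluster capacity, batch interval, and message size, which is what the articles' recommendations address.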
Informatica 10.2.1 Service Pack 1 contains various improvements and enhancements to the Informatica domain. Informatica provides a list of supported upgrade paths for users who want to upgrade their product. This article describes the supported upgrade paths to upgrade to Informatica 10.2.1 Service Pack 1.
You can enable users to log into Informatica web applications using single sign-on. This article explains how to configure single sign-on in an Informatica 10.5 domain using Security Assertion Markup Language (SAML) v2.0 and the Azure Active Directory identity provider.
You can enable users to log into Informatica web applications using single sign-on. This article explains how to configure single sign-on in an Informatica domain using Security Assertion Markup Language (SAML) v2.0 and the Okta SSO identity provider.
You can enable users to log into Informatica web applications using single sign-on. This article explains how to configure single sign-on in an Informatica 10.5 domain using Security Assertion Markup Language (SAML) v2.0 and the Oracle Access Manager version 12.2.1 identity provider.
When you upgrade from a previous version, follow the supported upgrade paths to ensure a smooth and successful upgrade. This article includes upgrade paths for all products supported in the 10.5.1 Informatica installer.
You can deploy Data Engineering Streaming on the Amazon Web Services (AWS) Marketplace. This deployment reference includes step-by-step instructions for deploying Data Engineering Streaming on the Amazon Web Services (AWS) Marketplace. It also includes information on prerequisites and how to troubleshoot common issues.
You can deploy Data Engineering Streaming on the Amazon Web Services (AWS) U.S. Intelligence Community Marketplace. This deployment reference includes step-by-step instructions for deploying Data Engineering Streaming on the AWS U.S. Intelligence Community Marketplace. It also includes information on prerequisites and troubleshooting.
This deployment reference provides step-by-step instructions for deploying Informatica® Data Engineering Streaming on the Amazon Web Services Cloud Marketplace. Automated reference deployments use AWS CloudFormation templates to launch, configure, and run the AWS compute, network, storage, and other services required to deploy a specific …
The automated marketplace solution uses Azure Resource Manager to launch, configure, and run the Azure virtual machine, virtual network, and other services required to deploy a specific workload on Azure. This deployment reference provides step-by-step instructions for deploying Informatica Data Engineering Streaming on the Microsoft …
Effective in version 10.2.2, Informatica dropped support for the Hive engine. You can run mappings on the Blaze and Spark engines in the Hadoop environment or on the Databricks Spark engine in the Databricks environment. This article describes how to change the validation and run-time environments for mappings, and it describes processing …
FAQ: Authentication on HDInsight with Enterprise Security Package