Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon Redshift Configuration
  4. Amazon Redshift Connections
  5. PowerExchange for Amazon Redshift Data Objects
  6. Amazon Redshift Mappings
  7. Pushdown Optimization
  8. Amazon Redshift Lookup
  9. Appendix A: Amazon Redshift Datatype Reference
  10. Appendix B: Troubleshooting

PowerExchange for Amazon Redshift User Guide

Prerequisites

Before you can use PowerExchange for Amazon Redshift, perform the following tasks:
  1. Verify that the domain has a Data Integration Service and a Model Repository Service.
  2. Verify that you can connect to Amazon Redshift with an SQL client that uses the PostgreSQL driver.
    For example, you can use SQL Workbench/J to connect to Amazon Redshift. A minimal command-line check is sketched after this list.
  3. To run mappings on Hortonworks, Amazon EMR, and MapR distributions that use non-Kerberos authentication, configure user impersonation.
    For information about configuring user impersonation, see the Data Engineering Integration Guide.
  4. To run mappings on MapR secure clusters, configure the MapR secure clusters on all the nodes.
    For information about configuring MapR secure clusters, see the Data Engineering Integration Guide.
  5. To run mappings on the Spark and Blaze engines in a secure domain, you must import the CA certificates available in the Redshift Certificate Authority bundle and the Baltimore CyberTrust Root certificate file. A sketch for verifying the imports appears after this list.
    To import the CA certificates available in the Redshift Certificate Authority bundle, perform the following steps:
    • Download the redshift-keytool.jar file.
    • Run the following command to import the CA certificates available in the Redshift Certificate Authority bundle into the Informatica TrustStore location:
      java -jar redshift-keytool.jar -k <infa_trust_store_location> -p <keystore_password>
    • Restart the Data Integration Service.
    To import the Baltimore CyberTrust Root certificate file, perform the following steps:
    • Download the Baltimore CyberTrust Root certificate file.
    • Provide read, write, and execute permissions to the certificate file.
    • Run the following command to import the certificate file into the Informatica TrustStore location:
      <INFA_HOME>/java/jre/bin/keytool -keystore <infa_trust_store_location> -importcert -alias <Alias_Name> -file <BaltimoreCyberTrustRoot certificate file path>/<certificate_filename> -storepass <Truststore_Password>
    • Restart the Data Integration Service.
  6. To enable the Metadata Access Service to import metadata from Amazon Redshift, see the following KB article:
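
As an example of the connectivity check in step 2, the following psql command performs an equivalent test from the command line. The cluster endpoint, port, database, and user shown here are illustrative placeholders; substitute the values for your own cluster:
    psql -h examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com -p 5439 -d dev -U awsuser
If the psql prompt appears and a query such as SELECT 1; returns a result, the cluster is reachable over the PostgreSQL protocol and you can proceed with the PowerExchange for Amazon Redshift configuration.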

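To confirm that the certificate imports in step 5 succeeded, you can list the contents of the Informatica TrustStore with the same keytool utility. This is a minimal sketch; for the Baltimore CyberTrust Root entry, use the alias that you supplied during the import, or omit -alias to list every entry, including the certificates added from the Redshift Certificate Authority bundle:
    <INFA_HOME>/java/jre/bin/keytool -list -keystore <infa_trust_store_location> -storepass <Truststore_Password> -alias <Alias_Name>
If the alias is present, keytool reports a trustedCertEntry along with the certificate fingerprint.
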
Configure Databricks Connection Advanced Properties

Verify that a Databricks connection is created in the domain. If you want to read NULL values from or write NULL values to an Amazon Redshift table, configure the following advanced properties in the Databricks connection:
  • infaspark.flatfile.reader.nullValue=True
  • infaspark.flatfile.writer.nullValue=True
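If you prefer to script this change instead of setting the properties in the connection dialog, the infacmd isp UpdateConnection command can update connection options. The following is a sketch only: the advancedProperties option name and the semicolon separator between the two properties are assumptions, so run infacmd isp ListConnectionOptions against your Databricks connection to confirm the exact option names in your release.
    infacmd.sh isp UpdateConnection -dn <domain_name> -un <user_name> -pd <password> -cn <databricks_connection_name> -o "advancedProperties='infaspark.flatfile.reader.nullValue=True;infaspark.flatfile.writer.nullValue=True'"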
For more information about product requirements and supported platforms, see the Product Availability Matrix.
