Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Amazon Redshift
  3. PowerExchange for Amazon Redshift Configuration
  4. Amazon Redshift Connections
  5. PowerExchange for Amazon Redshift Data Objects
  6. Amazon Redshift Mappings
  7. Pushdown Optimization
  8. Amazon Redshift Lookup
  9. Appendix A: Amazon Redshift Datatype Reference
  10. Appendix B: Troubleshooting

PowerExchange for Amazon Redshift User Guide

Troubleshooting for PowerExchange for Amazon Redshift

How to enable Metadata Access Service for PowerExchange for Amazon Redshift?
You can optionally enable Metadata Access Service to import metadata from Amazon Redshift. For information on how to enable Metadata Access Service, see https://knowledge.informatica.com/s/article/HOW-TO-Enable-Metadata-Access-Service-to-import-metadata-from-Amazon-S3-and-Amazon-Redshift?language=en_US
How to apply the performance tuning and sizing guidelines for PowerExchange for Amazon Redshift on the Spark engine?
How to solve the following error that occurs while running an Amazon Redshift mapping on the Spark engine to write a table that contains more than 500 columns: "java.lang.StackOverflowError"
How to solve the following error that occurs while running an Amazon Redshift mapping on the Spark engine to read from a table that contains more than 510 columns: "java.lang.StackOverflowError"
How to solve the following error that occurs while running an Amazon Redshift mapping on the Spark engine to read or write data: "No space available in any of the local directories"
How to solve the following error that occurs while running an Amazon Redshift mapping on the Spark engine to read or write data: "Container is running beyond physical memory limits in EMR cluster"
How to solve the following error that occurs while running an Amazon Redshift mapping on the Spark engine to read data: "com.amazonaws.AmazonClientException: Unable to execute HTTP request: Read timed out"
How to solve the out-of-disk-space error that occurs when you use PowerExchange for Amazon Redshift to read and preview data?
Mapping on the Spark engine fails with an error when the Amazon S3 staging bucket name contains a dot (.) on the CDP 7.1 distribution.
If you run a mapping on the Spark engine with such a bucket name, the mapping fails with the following error message:
Unable to execute HTTP request: Certificate for xxxx doesn't match any of the subject alternative names
Perform the following steps to run the mapping successfully:
  1. In the CDP cluster, go to HDFS.
  2. Click Configuration.
  3. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add the property fs.s3a.path.style.access and set the value to true.
  4. Restart the cluster.
  5. Refresh the cluster configuration object.
  6. Restart the Data Integration Service.
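
The safety-valve entry from step 3 ultimately lands in core-site.xml. A minimal sketch of the resulting property, assuming the standard Hadoop configuration format:

```xml
<!-- core-site.xml (Cluster-wide Advanced Configuration Snippet / Safety Valve) -->
<!-- Forces the S3A connector to use path-style requests, e.g.
     https://s3.amazonaws.com/my.bucket/key instead of
     https://my.bucket.s3.amazonaws.com/key, so the wildcard TLS
     certificate can match even when the bucket name contains a dot. -->
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```

Path-style addressing avoids the virtual-hosted URL form in which a dotted bucket name breaks matching against the `*.s3.amazonaws.com` wildcard certificate.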
