Table of Contents

  1. Preface
  2. Introduction to PowerExchange for Amazon S3
  3. PowerExchange for Amazon S3 Configuration Overview
  4. Amazon S3 Connections
  5. PowerExchange for Amazon S3 Data Objects
  6. PowerExchange for Amazon S3 Mappings
  7. PowerExchange for Amazon S3 Lookups
  8. Appendix A: Amazon S3 Data Type Reference
  9. Appendix B: Troubleshooting

PowerExchange for Amazon S3 User Guide

Troubleshooting for PowerExchange for Amazon S3

How to enable Metadata Access Service for PowerExchange for Amazon S3?
You can optionally enable the Metadata Access Service to import metadata from Amazon S3. For information on how to enable the Metadata Access Service, see https://knowledge.informatica.com/s/article/HOW-TO-Enable-Metadata-Access-Service-to-import-metadata-from-Amazon-S3-and-Amazon-Redshift?language=en_US.
How to solve the following error that occurs when you run an Amazon S3 mapping on the Spark engine to write a Parquet file and then run another Amazon S3 mapping, or preview data, in the native environment to read that Parquet file: "The requested schema is not compatible with the file schema."
What are the performance tuning guidelines to read data from or write data to Amazon S3?
How to solve the out of disk space error that occurs when you use PowerExchange for Amazon S3 to read and preview data?
How to solve the following error that occurs when you enable server-side encryption with KMS and run an Amazon S3 mapping on the Spark engine with the EMR 5.16 distribution: "[java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.amazon.ws.emr.hadoop.fs.EmrFileSystem not found]"
How to solve the following error that occurs when you run an Amazon S3 mapping on the Spark engine and then run another mapping in the native environment with Server-side Encryption with KMS enabled: "The encryption method specified is not supported"
Mapping on the Spark engine fails with an error when you use an Amazon S3 bucket without enabling the KMS policy and server-side encryption with KMS is enabled on an EMR 5.20 or later distribution.
If you run a mapping on the Spark engine with this configuration, the mapping fails with the following error message:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:Invalid arn us-west-1 (Service: Amazon S3; Status Code: 400; ErrorCode: KMS.NotFoundException;
If you use server-side encryption with KMS enabled on EMR 5.20 or later distributions, you must enable the KMS policy for the Amazon S3 bucket to run the mapping successfully.
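As a diagnostic, the following is a minimal boto3 sketch (the bucket name and region are hypothetical) that checks whether the bucket's default encryption uses SSE-KMS and whether the referenced KMS key exists and is enabled in the bucket's region; a key ARN that does not resolve in that region typically surfaces as the KMS.NotFoundException shown above.

    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "my-emr-output-bucket"   # hypothetical bucket name
    REGION = "us-west-1"              # hypothetical region

    s3 = boto3.client("s3", region_name=REGION)
    kms = boto3.client("kms", region_name=REGION)

    try:
        enc = s3.get_bucket_encryption(Bucket=BUCKET)
        for rule in enc["ServerSideEncryptionConfiguration"]["Rules"]:
            sse = rule["ApplyServerSideEncryptionByDefault"]
            if sse["SSEAlgorithm"] == "aws:kms":
                key_id = sse.get("KMSMasterKeyID", "aws/s3 default key")
                print("Bucket default encryption uses SSE-KMS key:", key_id)
                if key_id.startswith("arn:"):
                    # A key that is missing, disabled, or in another region
                    # is what the KMS.NotFoundException error points at.
                    meta = kms.describe_key(KeyId=key_id)["KeyMetadata"]
                    print("Key state:", meta["KeyState"])
    except ClientError as err:
        print("Check failed:", err.response["Error"]["Code"])

If the key resolves and is enabled, also verify that its key policy grants the roles the cluster uses the KMS permissions that SSE-KMS requires, typically kms:Decrypt and kms:GenerateDataKey.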
Mapping on the Spark engine fails with an error when you use an Amazon S3 bucket with a dot (.) in the bucket name on the CDP 7.1 distribution.
If you run a mapping on the Spark engine with this configuration, the mapping fails with the following error message:
Unable to execute HTTP request: Certificate for xxxx doesn't match any of the subject alternative names
Perform the following steps to run the mapping successfully:
  1. In the CDP cluster, go to HDFS.
  2. Click Configuration.
  3. In Cluster-wide Advanced Configuration Snippet (Safety Valve) for core-site.xml, add the property fs.s3a.path.style.access and set the value to true (a job-level PySpark equivalent is sketched after these steps).
  4. Restart the cluster.
  5. Refresh the cluster configuration object.
  6. Restart the Data Integration Service.
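For testing outside the cluster-wide safety valve, the same S3A property can be set at the job level. The following is a minimal PySpark sketch under that assumption; the bucket path is hypothetical, and the spark.hadoop. prefix forwards the property to the Hadoop S3A connector.

    from pyspark.sql import SparkSession

    # Job-level equivalent of the core-site.xml safety-valve entry:
    # the "spark.hadoop." prefix hands the property to the S3A connector.
    spark = (
        SparkSession.builder
        .appName("s3a-path-style-access")
        .config("spark.hadoop.fs.s3a.path.style.access", "true")
        .getOrCreate()
    )

    # With path-style access, the bucket is addressed as
    # https://s3.<region>.amazonaws.com/<bucket>/... rather than as a
    # subdomain, so a dot in the bucket name no longer breaks TLS
    # hostname verification against the certificate's subject
    # alternative names.
    df = spark.read.parquet("s3a://my.dotted.bucket/data")  # hypothetical path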
The decimal point is shifted when you write decimal data to an Avro file.
If the precision and scale in the source file are different from the precision and scale in the schema, the decimal point is shifted when you write the source data to an Avro target.
The issue occurs on distributions that use Avro 1.8.x and 1.9.x versions.
Upgrade to Avro 1.10.0 to fix the issue. For more information, see https://issues.apache.org/jira/browse/AVRO-2837.
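The shift happens because Avro stores a decimal as an unscaled integer and takes the scale from the schema, not from the data. The following standalone Python sketch illustrates the effect with hypothetical values:

    from decimal import Decimal

    def encode_unscaled(value, scale):
        # Avro's decimal logical type stores only this unscaled integer;
        # the scale is read back from the schema at decode time.
        return int(value.scaleb(scale))

    def decode(unscaled, scale):
        return Decimal(unscaled).scaleb(-scale)

    source = Decimal("12.34")                    # source data: precision 4, scale 2
    unscaled = encode_unscaled(source, scale=2)  # 1234

    print(decode(unscaled, scale=2))  # 12.34 -> schema scale matches the data
    print(decode(unscaled, scale=3))  # 1.234 -> schema scale differs: point shifts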
