You can read data from or write data to the AWS GovCloud (US) region.
You can use Amazon S3 objects as dynamic sources and targets in a mapping.
When you run a mapping in the native environment or on the Spark engine to read data from an Avro, flat, JSON, ORC, or Parquet file, you can use wildcard characters to specify the source directory name or the source file name.
You can add one or more tags to the objects stored in an Amazon S3 bucket to categorize them. Each tag contains a key-value pair. You can either enter the key-value pairs directly or specify the absolute path of a file that contains the key-value pairs.
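Each tag is a key-value pair such as `department=finance`. As an illustration of the file-based option, a hypothetical tag file with one `key=value` pair per line could be parsed into the tag structure Amazon S3 uses like this (the `key=value` file layout is an assumption for this sketch, not the exact format PowerExchange requires):

```python
# Hypothetical tag file contents: one key=value pair per line.
tag_file_contents = """department=finance
project=quarterly-report
owner=data-team"""

# Parse each line into a key-value pair, the structure Amazon S3 tags use.
tags = dict(line.split("=", 1) for line in tag_file_contents.splitlines())
print(tags)
```

Keeping the pairs in a file is convenient when many objects share the same set of tags.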
You can specify the threshold size above which an Amazon S3 object is downloaded in multiple parts.
When you upload an object to Amazon S3, you can specify the minimum size at which the object is split into a set of independent parts, and the number of threads that upload the parts in parallel.
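Both the download and upload options follow the standard multipart transfer pattern: an object at or below the threshold is transferred in one piece, while a larger object is split into fixed-size parts that a pool of threads can transfer independently. A minimal sketch of the part calculation (the function name and the sizes are illustrative, not part of the product):

```python
def split_into_parts(object_size: int, threshold: int, part_size: int) -> list[tuple[int, int]]:
    """Return (offset, length) ranges for a multipart transfer.

    Objects at or below the threshold are transferred as a single part;
    larger objects are split into part_size chunks that independent
    threads can upload or download in parallel.
    """
    if object_size <= threshold:
        return [(0, object_size)]
    return [
        (offset, min(part_size, object_size - offset))
        for offset in range(0, object_size, part_size)
    ]

# A 25 MB object with a 10 MB threshold and 10 MB parts splits into three parts.
mb = 1024 * 1024
parts = split_into_parts(25 * mb, 10 * mb, 10 * mb)
print(parts)
```

Tuning the threshold, part size, and thread count trades memory and connection overhead against transfer parallelism.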
When you create a data object read or write operation, you can read the data in the FileName port, which contains the endpoint name and source path of the file.
You can add new columns or modify existing columns directly in the Port tab when you create a data object read or write operation.
When you create a mapping to read or write an Avro, JSON, ORC, or Parquet file, you can copy the columns of source transformations, target transformations, or any other transformations from the Port tab and paste them directly into the data object read or write operation.
After you configure the column projection properties for an Amazon S3 file format, you can change to another Amazon S3 file format without losing the column metadata in the Schema field of the column projection properties.
You can use Server-Side Encryption with AWS Key Management Service (AWS KMS) on Amazon EMR version 5.16 and Cloudera CDH versions 5.15 and 5.16.
PowerExchange for Amazon S3 supports AWS SDK for Java version 1.11.354.