After you create an Amazon S3 data object read or write operation, you can create a mapping.
To read data from Amazon S3, you can create an Informatica mapping with an Amazon S3 data object read operation as the input and a relational or flat file data object operation as the target. To load data to Amazon S3 buckets, you can create a mapping with a relational or flat file data object operation as the input, any transformations, and an Amazon S3 data object write operation as the output.
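The sketch below is not Informatica code; it is a minimal boto3 and csv illustration of the data flow a read mapping represents: read a source object from Amazon S3, apply a transformation, and write the result to a flat file target. The bucket name, object key, and column names are hypothetical.

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

# Read the source object from Amazon S3 (stand-in for the read operation).
response = s3.get_object(Bucket="example-source-bucket", Key="orders/orders.csv")
body = response["Body"].read().decode("utf-8")
reader = csv.DictReader(io.StringIO(body))

# Apply a simple filter (stand-in for mapping transformations).
shipped = [row for row in reader if row.get("status") == "SHIPPED"]

# Write the result to a flat file target.
with open("orders_shipped.csv", "w", newline="") as target:
    writer = csv.DictWriter(target, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(shipped)
```

A write mapping reverses this flow: the flat file or relational source is read, transformed, and uploaded to the Amazon S3 bucket.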
Validate and run the mapping. You can deploy the mapping and run it, or add it to a Mapping task in a workflow. If the Amazon S3 source object contains rows with errors, the mapping does not read the valid rows either.
To run a mapping successfully on the Spark or Databricks Spark engine when you select multiple objects from different Amazon S3 buckets, ensure that all the buckets reside in the same region and that you use the same credentials to access them.
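As an illustration of this requirement, the following boto3 sketch checks that several buckets referenced by a mapping belong to the same region while using a single session, and therefore a single set of credentials. The bucket names are hypothetical, and this check is not part of the Informatica product.

```python
import boto3

# One session means one set of credentials for every bucket the mapping reads.
session = boto3.Session()
s3 = session.client("s3")

buckets = ["example-sales-bucket", "example-inventory-bucket"]

# get_bucket_location returns None for us-east-1, so normalize that case.
regions = {
    bucket: s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"
    for bucket in buckets
}

if len(set(regions.values())) != 1:
    raise ValueError(f"Buckets span multiple regions: {regions}")
```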