Amazon S3 Data Object Write Operation Properties

Amazon S3 data object write operation properties include run-time properties that apply to the Amazon S3 data object.
The Developer tool displays advanced properties for the Amazon S3 data object operation in the Advanced view.
By default, the Data Integration Service uploads the Amazon S3 file in multiple parts.
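The multipart behavior can be pictured as splitting the payload into fixed-size parts that are uploaded separately. The following sketch is illustrative only, not the service's actual implementation; the 5 MB default reflects the Amazon S3 multipart upload requirement that every part except the last be at least 5 MB:

```python
def split_into_parts(data: bytes, part_size: int = 5 * 1024 * 1024) -> list:
    """Illustrative only: split a payload into fixed-size parts.

    Amazon S3 multipart uploads require every part except the last
    to be at least 5 MB, hence the 5 MB default part size here.
    """
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]


# A 12 MB payload splits into two 5 MB parts plus a 2 MB final part.
parts = split_into_parts(b"\0" * (12 * 1024 * 1024))
```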
The following table describes the Advanced properties for an Amazon S3 data object write operation:
Folder Path
Bucket name that contains the Amazon S3 target file. If applicable, include the folder name that contains the target file in the <bucket_name>/<folder_name> format.
If you do not provide the bucket name and specify the folder path starting with a slash (/) in the /<folder_name> format, the Data Integration Service appends this folder path to the folder path that you specified in the connection properties.
For example, if you specify the <my_bucket1>/<dir1> folder path in the connection property and the /<dir2> folder path in this property, the Data Integration Service writes the file to the <my_bucket1>/<dir1>/<dir2> folder path.
If you specify the <my_bucket1>/<dir1> folder path in the connection property and the <my_bucket2>/<dir2> folder path in this property, the Data Integration Service writes the file to the <my_bucket2>/<dir2> folder path that you specify in this property.
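The two cases described above amount to an append-versus-override rule. This helper is hypothetical, written only to mirror the documented behavior:

```python
def resolve_folder_path(connection_path: str, property_path: str) -> str:
    # Hypothetical helper illustrating the documented behavior:
    # a property path that starts with "/" is appended to the
    # connection folder path; any other value overrides it entirely.
    if property_path.startswith("/"):
        return connection_path.rstrip("/") + property_path
    return property_path


resolve_folder_path("my_bucket1/dir1", "/dir2")            # my_bucket1/dir1/dir2
resolve_folder_path("my_bucket1/dir1", "my_bucket2/dir2")  # my_bucket2/dir2
```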
File Name
Name of the Amazon S3 file to which you want to write the source data.
Encryption Type
Method you want to use to encrypt data. Select one of the following values:
  • None. The data is not encrypted.
  • Client Side Encryption. The Data Integration Service uses the master symmetric key you specify in the Amazon S3 connection properties to encrypt data.
  • Server Side Encryption. Amazon S3 encrypts the data when the files are uploaded to Amazon S3 buckets.
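As an illustration of how these choices map onto S3 request parameters, the sketch below builds the extra upload arguments for each value. The helper name is hypothetical; `ServerSideEncryption` and `AES256` follow the public Amazon S3 API convention for SSE with S3-managed keys:

```python
def encryption_upload_args(encryption_type: str) -> dict:
    # Hypothetical helper: translate the property value into extra
    # S3 upload arguments. Client-side encryption is applied before
    # the request is sent, so it adds no server-side parameters here.
    if encryption_type == "Server Side Encryption":
        return {"ServerSideEncryption": "AES256"}  # SSE with S3-managed keys
    return {}  # "None" and "Client Side Encryption"
```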
Staging Directory
Amazon S3 staging directory. Applicable to the native environment. Ensure that the user has write permissions on the directory. In addition, ensure that there is sufficient space to enable staging of the entire file.
The default staging directory is the /temp directory on the machine that hosts the Data Integration Service.
File Merge
Enable File Merge to merge the target files into a single file. Applicable when you run a mapping on the Blaze engine.
Hadoop Performance Tuning Options
Provide semicolon-separated name-value attribute pairs to optimize performance when you copy large volumes of data between Amazon S3 and HDFS. Applicable to the Amazon EMR cluster.
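The expected shape of this value is a semicolon-separated list of name-value pairs. A minimal parser for that shape (the attribute names in the example are placeholders, not documented tuning options) could look like:

```python
def parse_tuning_options(options: str) -> dict:
    # Parse "name1=value1;name2=value2" into a dictionary,
    # skipping empty segments and trimming surrounding whitespace.
    result = {}
    for item in options.split(";"):
        if "=" in item:
            name, value = item.split("=", 1)
            result[name.strip()] = value.strip()
    return result


# Placeholder attribute names, for illustration only.
parse_tuning_options("attr1=value1; attr2=value2")
```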
Compression Format
Compresses data when you write data to Amazon S3. You can compress the data in the following formats:
  • None
  • Deflate
  • Gzip
  • Bzip2
  • Lzo
  • Snappy
Default is None. Applicable when you run a mapping in the native environment or on the Spark engine. The gzip compression format is applicable when you run a mapping in the native environment.
When you write an Avro file, you can compress the file using the none, deflate, and snappy compression formats. When you write a Parquet file, you can compress the file using the none, gzip, lzo, and snappy compression formats.
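For the native environment, compressing the payload before upload can be sketched with Python's standard library. Only the Gzip and Bzip2 codecs are shown because they ship with the standard library; Lzo and Snappy would need third-party packages. The helper is illustrative, not the service's implementation:

```python
import bz2
import gzip


def compress_payload(data: bytes, codec: str) -> bytes:
    # Illustrative: apply the selected compression format before upload.
    if codec == "Gzip":
        return gzip.compress(data)
    if codec == "Bzip2":
        return bz2.compress(data)
    return data  # "None" passes the payload through unchanged


raw = b"example data " * 100
packed = compress_payload(raw, "Gzip")
```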
Overwrite File(s) If Exists
Select this check box to overwrite the existing files at the target location. Default is true.
For more information, see Overwriting Existing Files.


Updated July 30, 2020