You can use the data in the FileName port when you create a data object read or write operation.
When you run a mapping to read or write an Amazon S3 file using the FileName port, the result varies based on the type of the Amazon S3 file that you use and the engine where you run the mapping. For example, when you run a mapping in the native environment to read or write an Avro, Binary, JSON, ORC, or Parquet file, the Data Integration Service appends a period (.) to the directory name. When you run the same mapping on the Spark engine, the Data Integration Service appends an underscore (_) to the directory name.
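The directory-naming difference above can be sketched as follows. This is an illustration only, not product code: the function name and the engine labels "native" and "spark" are hypothetical, chosen just to show the suffix behavior described in this section.

```python
def directory_name(base_name: str, engine: str) -> str:
    """Illustrative sketch: derive the directory name created for an
    Avro, Binary, JSON, ORC, or Parquet file read or written through
    the FileName port, based on the run-time engine."""
    if engine == "native":
        # Native environment: a period is appended to the directory name.
        return base_name + "."
    # Spark engine: an underscore is appended to the directory name.
    return base_name + "_"

print(directory_name("customers", "native"))  # customers.
print(directory_name("customers", "spark"))   # customers_
```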
When you run a mapping in the native environment to read or write a flat file using the FileName port, the Data Integration Service creates separate files for each entry in the FileName port in the following format:
When you run a mapping in the native environment or on the Spark or Databricks Spark engine to read or write an Avro, Binary, JSON, ORC, or Parquet file using the FileName port, the Data Integration Service creates a separate directory for each entry in the FileName port and adds the files to those directories in the following format:
On the Spark and Databricks Spark engine, the Data Integration Service creates the directory in the