Effective in version 10.5, PowerExchange for HDFS includes the following features:
- When you run a mapping on the Spark engine, you can read data from and write data to Avro, ORC, and Parquet files that are partitioned based on directories.
- You can perform audits for read operations on complex files, such as Avro, JSON, and Parquet files, in HDFS mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for HDFS User Guide.