You can run mappings in the Azure Databricks environment.
You can use complex data types, such as array, struct, and map, in mappings that run in the Hadoop environment. With complex data types, the run-time engine reads, processes, and writes hierarchical data directly in Avro, JSON, and Parquet complex files. For an intelligent structure source, you can configure only the read operation.
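As an illustrative sketch (not taken from the source), the three complex types map naturally onto an Avro schema: array, record (the struct equivalent), and map. The `Customer` schema and field names below are hypothetical:

```python
import json

# Hypothetical Avro schema combining the three complex types:
# an array of strings, a nested record (struct), and a string map.
avro_schema = json.loads("""
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "id",      "type": "long"},
    {"name": "tags",    "type": {"type": "array", "items": "string"}},
    {"name": "address", "type": {"type": "record", "name": "Address",
                                 "fields": [
                                   {"name": "city", "type": "string"},
                                   {"name": "zip",  "type": "string"}
                                 ]}},
    {"name": "attrs",   "type": {"type": "map", "values": "string"}}
  ]
}
""")

# Fields whose type is a complex (non-primitive) definition.
complex_fields = [f["name"] for f in avro_schema["fields"]
                  if isinstance(f["type"], dict)]
```

A file written against a schema like this is what the run-time engine reads and writes as hierarchical data; only `id` here would be valid in a primitive-only mapping.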
You can create mappings to read and write Avro and Parquet files that contain only primitive data types in the native environment.
You can select a directory as the source in a mapping to read multiple files from that directory.
You can use Microsoft Azure Data Lake Store objects as dynamic sources and targets in a mapping.
You can create a Microsoft Azure Data Lake Store target using the Create Target option.