Aggregator Transformation Support on the Blaze Engine
Some processing rules for the Blaze engine differ from the processing rules for the Data Integration Service.
Mapping Validation
Mapping validation fails in the following situations:
- The transformation contains stateful variable ports (see the sketch after this list).
- The transformation contains unsupported functions in an expression.
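The following minimal sketch, in plain Python rather than the Informatica expression language, illustrates what makes a variable port stateful: each output value depends on a value carried over from the previous row. The names are hypothetical. State that must be carried from row to row requires strictly ordered, single-stream processing, which a distributed engine such as Blaze cannot guarantee, so validation fails.

```python
# Hypothetical illustration of a stateful variable port: a running total
# whose value for each row depends on the value produced for the previous row.
def running_total(amounts):
    v_total = 0                 # "variable port" that keeps state across rows
    for amount in amounts:
        v_total += amount       # new value depends on the previous row's value
        yield v_total

print(list(running_total([10, 20, 5])))   # [10, 30, 35]
```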
Aggregate Functions
If you use a port in an expression in the Aggregator transformation but you do not use the port within an aggregate function, the Blaze engine might use any row in the port to process the expression.
The row that the Blaze engine uses might not be the last row in the port. Because Hadoop execution is distributed, the Blaze engine might not be able to determine which row in the port is last.
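The following sketch uses pandas rather than the Blaze engine to show why a port that is not wrapped in an aggregate function can return an arbitrary row for each group. The column names and data are hypothetical.

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2, 2],
    "amount":      [10, 20, 5, 15],
    "city":        ["Austin", "Boston", "Chicago", "Denver"],
})

# "amount" is aggregated; "city" is not. pandas' first() takes the first row
# it sees, but on a distributed engine the row that arrives first in each
# group depends on how the data is partitioned, so "city" is effectively
# arbitrary.
result = df.groupby("customer_id").agg(
    total_amount=("amount", "sum"),
    some_city=("city", "first"),   # arbitrary row when input order is not guaranteed
)
print(result)
```

On a single-node run the "first" row is predictable, but when the input is partitioned across nodes the row that arrives first in each group is not, which is why the result for the non-aggregated port can vary between runs.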
Data Cache Optimization
The data cache for the Aggregator transformation is optimized to use variable length to store binary and string data that passes through the Aggregator transformation. The optimization is enabled for record sizes up to 8 MB. If the record size is greater than 8 MB, variable-length optimization is disabled.
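A minimal sketch of the 8 MB rule, assuming a simple record-size check (the constant and function names are hypothetical, not part of the Blaze engine):

```python
# Hypothetical check for the 8 MB threshold: variable-length storage of
# string and binary data in the data cache is used only when the record
# size does not exceed the limit.
VARIABLE_LENGTH_LIMIT_BYTES = 8 * 1024 * 1024   # 8 MB

def use_variable_length(record_size_bytes):
    return record_size_bytes <= VARIABLE_LENGTH_LIMIT_BYTES

print(use_variable_length(2 * 1024 * 1024))    # True  -> optimization enabled
print(use_variable_length(10 * 1024 * 1024))   # False -> optimization disabled
```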
When the data cache uses variable length to store data that passes through the Aggregator transformation, the Aggregator transformation is optimized to use sorted input, and a pass-through Sorter transformation is inserted before the Aggregator transformation in the run-time mapping.
To see the Sorter transformation, view the optimized mapping or the execution plan in the Blaze validation environment.
During data cache optimization, the data cache and the index cache for the Aggregator transformation are set to Auto. The sorter cache for the Sorter transformation is set to the same size as the data cache for the Aggregator transformation. To configure the sorter cache, you must configure the size of the data cache for the Aggregator transformation.
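As a rough illustration of how these settings relate, the following sketch (with hypothetical names; the Blaze engine does not expose such a function) shows that the Aggregator data cache size is the only value you set and that the sorter cache simply mirrors it:

```python
# Hypothetical illustration of the run-time cache sizes during data cache
# optimization: index and data caches default to Auto, and the Sorter
# transformation's cache follows the Aggregator data cache size.
def runtime_cache_sizes(aggregator_data_cache="Auto"):
    return {
        "aggregator_index_cache": "Auto",                # set to Auto by the optimization
        "aggregator_data_cache": aggregator_data_cache,  # the value you configure
        "sorter_cache": aggregator_data_cache,           # mirrors the Aggregator data cache
    }

print(runtime_cache_sizes())          # everything defaults to Auto
print(runtime_cache_sizes("512MB"))   # the sorter cache follows the configured size
```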