Precision and Scale in Multiplication User-Defined Functions
When the Hive engine processes a multiplication user-defined function in high-precision mode, it calculates the precision and scale of the result from the precision and scale of the input ports.
The Hive engine uses the following rules to calculate the precision and scale of the result of the user-defined function:
- If the difference between the precision and scale is greater than or equal to 32, the maximum scale of the result is 6.
- If the difference between the precision and scale is less than 32, the maximum scale of the result can be greater than 6.
- If the scale is greater than 6, the maximum difference between the precision and scale is 32.
- If the scale is less than 6, the difference between the precision and scale can be greater than 32 but less than 38.
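The following sketch illustrates how these rules interact. It assumes the common starting derivation for decimal multiplication (result precision = p1 + p2 + 1, result scale = s1 + s2) before the cap at 38 digits is applied; the function name and the starting formula are illustrative assumptions, not Informatica or Hive API calls.

```python
MAX_PRECISION = 38
MIN_ADJUSTED_SCALE = 6   # scale floor described by the rules above

def decimal_multiply_result_type(p1, s1, p2, s2):
    """Estimate the result precision and scale of a decimal multiplication.

    Assumes the common derivation precision = p1 + p2 + 1 and
    scale = s1 + s2, then caps the precision at 38 while keeping
    at least min(scale, 6) fractional digits, which matches the
    rules listed above.
    """
    precision = p1 + p2 + 1
    scale = s1 + s2
    if precision > MAX_PRECISION:
        integer_digits = precision - scale
        min_scale = min(scale, MIN_ADJUSTED_SCALE)
        # If the integer part needs 32 or more digits, the scale is
        # reduced, but never below 6.
        scale = max(MAX_PRECISION - integer_digits, min_scale)
        precision = MAX_PRECISION
    return precision, scale

# dec(38,10) * dec(38,6) -> dec(38,6), as in the example below.
print(decimal_multiply_result_type(38, 10, 38, 6))
```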
If the Hive engine cannot represent the result, data overflow occurs and the Hive engine writes NULL to the target.
For example, you might use a user-defined function to multiply two decimal inputs, dec(38,10) and dec(38,6).
The precision and scale of the result are (38,6), but the multiplication result is a decimal with more than 38 digits of precision. Because the Hive engine cannot represent the result as the decimal data type dec(38,6), it writes NULL to the target.
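As a rough illustration with Python's decimal module (the specific input values are hypothetical maximum-magnitude examples, not taken from the original), the exact product of two such inputs needs roughly twice as many digits as dec(38,6) can hold:

```python
from decimal import Decimal, getcontext

getcontext().prec = 100  # enough precision to hold the exact product

# Hypothetical maximum-magnitude values for dec(38,10) and dec(38,6).
a = Decimal("9" * 28 + "." + "9" * 10)   # 38 digits, scale 10
b = Decimal("9" * 32 + "." + "9" * 6)    # 38 digits, scale 6

product = a * b
# The exact product has 76 significant digits, far more than the 38
# that dec(38,6) can represent, so the Hive engine writes NULL instead.
print(len(product.as_tuple().digits))    # 76
```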