You can choose the Blaze engine or the Spark engine to run profiles in the Hadoop run-time environment. After you choose the Blaze or Spark engine, you can select a Hadoop connection. The Data Integration Service pushes the profile logic to the Blaze or Spark engine on the Hadoop cluster to run the profiles.
When you run a profile in the Hadoop environment, the Analyst tool submits the profile jobs to the Profiling Service Module. The Profiling Service Module breaks down the profile jobs into a set of mappings. The Data Integration Service pushes the mappings to the Hadoop environment through the Hadoop connection. The Blaze engine or the Spark engine processes the mappings, and the Data Integration Service writes the profile results to the profiling warehouse.
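The end-to-end flow above can be sketched in code. This is a minimal illustrative model only; every class, function, and name below is hypothetical and does not correspond to any actual Informatica API.

```python
# Illustrative sketch of the profile-run workflow described above.
# All names are hypothetical; none belong to a real Informatica API.

def run_profile(profile, engine, hadoop_connection):
    """Model the flow: Analyst tool -> Profiling Service Module ->
    Data Integration Service -> Hadoop cluster -> profiling warehouse."""
    if engine not in ("Blaze", "Spark"):
        raise ValueError("engine must be 'Blaze' or 'Spark'")

    # The Profiling Service Module breaks the profile job into a set
    # of mappings (three is an arbitrary number for illustration).
    mappings = [f"{profile}_mapping_{i}" for i in range(3)]

    # The Data Integration Service pushes each mapping to the cluster
    # through the Hadoop connection; the chosen engine processes it.
    results = [f"{m} processed by {engine} via {hadoop_connection}"
               for m in mappings]

    # The Data Integration Service writes the results to the profiling
    # warehouse (modeled here as the returned list).
    return results

warehouse = run_profile("customer_profile", "Spark", "hadoop_conn_1")
```

The sketch makes the division of labor explicit: the profile is decomposed into mappings before anything reaches the cluster, and only the Data Integration Service touches the profiling warehouse.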