Effective in version 10.2.2, you can run an incremental load to ingest only the data that changed since the previous run. When you run an incremental load, the Spark engine fetches the incremental data based on a timestamp or an ID column and loads it to the Hive or HDFS target. If you ingest the data to a Hive target, the Spark engine can also propagate schema changes made on the source tables.
If you ingest incremental data, the Mass Ingestion Service leverages Sqoop's incremental import mode.
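As a rough illustration of what Sqoop's incremental import mode does under the hood, the sketch below shows the two incremental strategies Sqoop supports: append (driven by an ID column) and lastmodified (driven by a timestamp column). The connection string, table, column names, and paths are hypothetical placeholders, and the Mass Ingestion Service manages these parameters for you rather than requiring manual Sqoop commands.

```shell
# Hypothetical append-mode incremental import: fetch only rows whose
# ID column exceeds the last recorded value.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --table orders \
  --target-dir /data/orders \
  --incremental append \
  --check-column order_id \
  --last-value 1000

# Hypothetical lastmodified-mode incremental import: fetch rows whose
# timestamp column is newer than the saved value, merging on a key.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --table orders \
  --target-dir /data/orders \
  --incremental lastmodified \
  --check-column last_updated \
  --last-value "2019-01-01 00:00:00" \
  --merge-key order_id
```

After each run, Sqoop reports the new high-water mark (the largest ID or timestamp seen), which is supplied as `--last-value` on the next run.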
For more information, see the Informatica Big Data Management 10.2.2 Mass Ingestion Guide.