Hot-Swapping Reference Data for Address Verification (On-Premises)

Bulk Jobs and Hot-Swapping

Address Verification supports bulk processing, where a single process call for a job instance can contain up to 1000 inputs.
A file set hot swap can replace the data files in the middle of a bulk job, so that some inputs are processed with FileSetA while others are processed with FileSetB. In many cases this poses no issue. However, some scenarios, such as certified runs of address verification, require you to process the complete set of inputs with exactly the same reference data.
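For illustration, the following C sketch batches a large input set into process calls of at most 1000 inputs each. Only the 1000-input limit comes from this documentation; post_batch() is a hypothetical stand-in for your IDVE_Post-based submission call.

/* Sketch: split inputs into process calls of at most 1000 inputs.
 * post_batch() is a hypothetical placeholder, not part of the
 * documented API. */
#include <stddef.h>

#define MAX_INPUTS_PER_CALL 1000  /* documented per-call limit */

extern void post_batch(const char** inputs, size_t count); /* hypothetical */

void process_all(const char** inputs, size_t total)
{
    for (size_t offset = 0; offset < total; offset += MAX_INPUTS_PER_CALL) {
        size_t n = total - offset;
        if (n > MAX_INPUTS_PER_CALL)
            n = MAX_INPUTS_PER_CALL;
        /* A hot swap between iterations can mix FileSetA and FileSetB
         * results across the batches of one logical run. */
        post_batch(inputs + offset, n);
    }
}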
To guarantee a consistent file set for a job, you can create the job with the following POST command:
IDVE_Post(" ... ", "AV/v1/Jobs/createLockedFileSet", ...)
Any job created with this command rather than the regular /create command will be bound to the file set that was active when the command ran. A hot swap can still take place, and regular jobs will use the new data, but any createLockedFileSet job will use the old data files until the job is deleted.
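As a minimal sketch of the difference, assuming a hypothetical idve_post() wrapper around IDVE_Post (whose other arguments the documentation elides) and inferring the regular creation path AV/v1/Jobs/create from the locked variant shown above:

/* Sketch: regular vs. locked job creation. Only the REST paths are
 * taken from or inferred from the documentation; idve_post() is a
 * hypothetical wrapper. */
extern void idve_post(const char* psRestPath);  /* hypothetical wrapper */

void create_jobs(void)
{
    /* Regular job: follows a hot swap onto the new file set. */
    idve_post("AV/v1/Jobs/create");

    /* Locked job: bound to the file set that is active now, and keeps
     * using it until the job is deleted, even after a hot swap. */
    idve_post("AV/v1/Jobs/createLockedFileSet");
}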
Multiple jobs can exist at the same time. Address Verification releases the previously used reference data files after you delete the createLockedFileSet jobs. The call that deletes the last of these jobs causes the previous batch of function servers to shut down and the data files to unlock.
Both batches of function servers (those using FileSetA and those using FileSetB) must stay active while a createLockedFileSet job runs, and there is a corresponding impact on resources. Delete any createLockedFileSet job as soon as possible.
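One way to keep that resource window short is sketched below in C. collect_results() and delete_locked_job() are hypothetical placeholders, since this section does not show the deletion call itself.

/* Sketch: delete locked jobs as soon as their results are collected,
 * so the superseded file set can unlock. Both helpers below are
 * hypothetical placeholders for your own IDVE-based calls. */
#include <stddef.h>

extern void collect_results(const char* jobId);    /* hypothetical */
extern void delete_locked_job(const char* jobId);  /* hypothetical */

void finish_certified_run(const char** jobIds, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        collect_results(jobIds[i]);
        /* Deleting the last locked job shuts down the function servers
         * still using the old file set and unlocks its data files. */
        delete_locked_job(jobIds[i]);
    }
}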
While a job remains locked to a previous file set, the contents of FileSetsInfo.json reflect that state:
{ "FileSetAState": "StillInUse", "FileSetBState": "InUse" }
The StillInUse value switches to Unused after the last locked job is deleted.
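If you need to confirm the unlock programmatically, a minimal C sketch that polls FileSetsInfo.json follows. The file name and the StillInUse/InUse/Unused values come from this documentation; the file's location is installation-specific, so the relative path below is a placeholder.

/* Sketch: check whether a superseded file set is still held by
 * locked jobs. The path is a placeholder; adjust it to your
 * installation. */
#include <stdio.h>
#include <string.h>

int old_file_set_still_in_use(void)
{
    char buf[4096];
    FILE* f = fopen("FileSetsInfo.json", "rb");  /* placeholder path */
    if (!f)
        return -1;  /* status file not readable */
    size_t n = fread(buf, 1, sizeof buf - 1, f);
    buf[n] = '\0';
    fclose(f);
    /* "StillInUse" appears while any locked job holds the old set;
     * it switches to "Unused" after the last locked job is deleted. */
    return strstr(buf, "\"StillInUse\"") != NULL;
}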
