REST API Reference

Generating predictions

Use the deployment resource to generate predictions from a quick start model or a user-defined model.
Before you generate predictions, make sure that the deployment is available. To make a deployment available, start the deployment and wait until its status is Available.
To generate predictions from a quick start model, use the predictUrl value returned by the request for information about a single quick start model.
To generate predictions from a user-defined machine learning model, use the predictUrl value returned by the request to monitor a model deployment.

POST request

To generate a prediction, include the deployment ID in the URI. Use the following URI:
/mlops/api/v1/deployment/request/<deployment ID>
Get the ID for a quick start model from the response to the request for information about a quick start model. For more information, see Getting information about a quick start model.
Get the deployment ID for a model deployment from the response to the request to monitor a model deployment. For more information, see Monitoring model deployments.
Include the following fields in the request body:

deployment_id (String)
ID of the quick start model or model deployment.

request (String)
Input fields needed to generate a prediction. The request must consist of key-value pairs in a serialized JSON string.
To generate predictions from a quick start model, use the input fields defined for that model. To generate predictions from a user-defined model, use the input fields that you specified when you registered the machine learning model.
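As a sketch, the request body described above can be assembled as follows in Python. The deployment ID and the input field names used here are hypothetical; substitute the values for your own model. Note that the request field is a serialized JSON string, not a nested JSON object.

```python
import json

# Hypothetical deployment ID and input fields; replace with your own values.
deployment_id = "abc123"
input_fields = {
    "sepal_length": 5.1,
    "sepal_width": 3.5,
    "petal_length": 1.4,
    "petal_width": 0.2,
}

# The "request" field must be a JSON string, so serialize the inner
# key-value pairs separately before building the outer body.
payload = {
    "deployment_id": deployment_id,
    "request": json.dumps(input_fields),
}

body = json.dumps(payload)
print(body)

# The body would then be sent as a POST to
# /mlops/api/v1/deployment/request/<deployment ID>
# with whatever session header your organization's authentication flow requires.
```

Serializing the inner fields separately is the step that is easiest to miss: sending a nested object instead of a string would not match the request field's declared String type.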

POST response

Returns the prediction from the machine learning model.
For a user-defined model, the output fields are the ones that you specified when you registered the machine learning model. The model returns each output field as an attribute of the response.
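A minimal sketch of reading such a response, assuming a hypothetical user-defined model that was registered with two output fields named prediction and confidence (your models' attribute names will differ):

```python
import json

# Hypothetical response body; the attribute names correspond to the
# output fields defined when the model was registered.
response_body = '{"prediction": "setosa", "confidence": 0.97}'

result = json.loads(response_body)

# Each registered output field appears as an attribute of the response.
print(result["prediction"])
print(result["confidence"])
```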
