
Simple RAG Consumption with Azure OpenAI and AI Search

Step 5. Invoke the process

When you invoke the Query LLM with Context using Embeddings Model process, the user receives an answer based on the context that was included in the LLM request.
You can run the process using one of the following options:
  • The Run Using option, passing the input parameters
  • The REST or SOAP API endpoints, from any API client such as cURL, Postman, or SoapUI, or from any programming language
  • A web browser, passing the input parameters
For example, you can pass the input parameters with the Run Using option as shown in the following image:
The image shows the sample input parameters that you can pass using the Run Using option.
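As a sketch of the REST option, the following Python snippet builds a POST request that passes the input parameters as JSON. The base URL, the process API name, and the `Question` parameter name are placeholders for illustration only; substitute the values for your organization, and note that the actual call requires authentication:

```python
import json
import urllib.request

# Hypothetical values: replace with your Process Server URL and the
# API name of the published process.
BASE_URL = "https://example.informaticacloud.com/active-bpel/rt"
PROCESS_NAME = "Query_LLM_with_Context_using_Embeddings_Model"

def build_request(question: str) -> urllib.request.Request:
    """Builds a POST request that passes the input parameters as a JSON body."""
    # "Question" is an assumed input parameter name for this sketch.
    payload = json.dumps({"Question": question}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/{PROCESS_NAME}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("What is our refund policy?")
# urllib.request.urlopen(req)  # sends the request; add credentials first
```

The same parameters can instead be appended to the URL as a query string when you invoke the process from a web browser.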
