Azure OpenAI Prompt Engineering

Step 3. Configure and publish the process

Configure and publish the process so that it can submit prompts to Azure OpenAI.
  1. Open the Prompt Engineering AZURE Open AI process.
  2. In the Start step, use the Temp Fields tab to enter values for the following fields:
    • In the api_version field, optionally enter the API version of the LLM model. The default is 2024-06-01.
    • In the deployment_id field, enter the user-specific deployment ID.
    For information about changing the API version, see the Azure OpenAI documentation.
  3. Optionally, in the Configure Request Parameters step, use the Assignments tab to edit the top_p, temperature, and max_tokens values for each request, as illustrated in the sketch after this procedure.
    You can edit the code in the Expression Editor. For example:
    <GenerationConfig_AzureAI>
      <top_p>1</top_p>
      <temperature>0.5</temperature>
      <max_tokens>500</max_tokens>
    </GenerationConfig_AzureAI>
    The following properties are available:
    • top_p: Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds top_p. For example, if top_p is set to 0.1, the model considers only the top 10% most probable tokens at each step.
    • temperature: Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.5 balances between deterministic and creative outputs.
    • max_tokens: Defines the maximum number of tokens that the model can generate in its response. Setting a limit ensures that the response is concise and fits within the desired length constraints.
    For more information about these properties, see the Azure OpenAI documentation.
  4. Save and publish the process.
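
For reference, the following is a minimal sketch of the kind of chat-completions request that these settings describe, written in Python against the public Azure OpenAI REST API. The resource name, API key, and prompt text are placeholders rather than values supplied by the process; deployment_id, api_version, top_p, temperature, and max_tokens correspond to the values entered in steps 2 and 3. It illustrates the API call only and is not part of the process configuration.

  # Illustrative only: an equivalent chat-completions request sent directly to Azure OpenAI.
  import requests

  resource = "my-resource"          # placeholder Azure OpenAI resource name
  deployment_id = "my-deployment"   # value entered in the deployment_id field (step 2)
  api_version = "2024-06-01"        # value entered in the api_version field (step 2)

  url = (
      f"https://{resource}.openai.azure.com/openai/deployments/"
      f"{deployment_id}/chat/completions?api-version={api_version}"
  )

  payload = {
      "messages": [{"role": "user", "content": "Summarize the attached case notes."}],
      "top_p": 1,          # cumulative probability threshold for token selection
      "temperature": 0.5,  # randomness of the model's output
      "max_tokens": 500,   # upper bound on the length of the generated response
  }

  response = requests.post(url, headers={"api-key": "<your-api-key>"}, json=payload)
  print(response.json()["choices"][0]["message"]["content"])

A lower temperature or top_p makes responses more repeatable, which is usually preferable when later steps in the process act on the generated text automatically.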
