Azure OpenAI Chat with File using Guide

Step 3: Configure and publish the processes

Configure the deployment details of the LLM model and publish the processes.
  1. To publish the Get Content from File process, click Actions in the row that contains the process and select Publish.
  2. Open the Chat with File process.
  3. On the Temp Fields tab of the Start step, enter values for the following fields:
    • In the deployment_id field, enter the deployment ID of the LLM model.
    • In the api_version field, enter the API version of the LLM model.
  4. Optionally, in the Configure Request Parameters step, enter the prompt instructions in the Assignments field by updating the Prompt_Configuration field using the Expression Editor, as shown in the following sample code:

    For Prompt_Configuration:
    <GenerationConfig_AzureAI>
      <temperature>0.5</temperature>
      <topP>1</topP>
      <max_tokens>1000</max_tokens>
    </GenerationConfig_AzureAI>
    For the Prompt_Configuration field, enter values for the following properties:

    Property: temperature
    Description: Controls the randomness of the model's output. A value close to 0 makes the output more deterministic, while a value close to 1 increases randomness and creativity. For example, if temperature is set to 0.5, the model balances between deterministic and creative outputs.

    Property: topP
    Description: Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model considers only the tokens that make up the top 10% of the probability mass at each step.

    Property: max_tokens
    Description: Sets the maximum number of tokens the model can generate in its response. The prompt's token count plus max_tokens can't exceed the model's context length. Many models have a context length of 2048 tokens.
  5. Optionally, in the Prepare Request step, enter the prompt instructions in the Assignments field by updating the Request field using the Expression Editor, as shown in the following sample code:

    For Request:
    <CreateChatCompletionRequest>
      <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
      <top_p>{$temp.Prompt_Configuration[1]/top_p}</top_p>
      <max_tokens>{$temp.Prompt_Configuration[1]/max_tokens}</max_tokens>
      <messages>
        <role>user</role>
        <content>You are a helpful assistant.{$temp.Content_From_File}</content>
      </messages>
      <messages>
        <role>user</role>
        <content>{$input.User_Prompt}</content>
      </messages>
    </CreateChatCompletionRequest>
  6. Save and publish the process.
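For context, the deployment ID and API version entered in step 3 are the two values that vary in the Azure OpenAI chat-completions REST endpoint that such a process ultimately calls. A minimal sketch of that URL shape; the resource name and both values below are placeholders, not values from this guide:

```python
# Sketch: where deployment_id and api_version appear in the Azure OpenAI
# chat-completions endpoint. All three values are example placeholders.
deployment_id = "gpt-35-turbo"   # example deployment ID (assumption)
api_version = "2024-02-01"       # example API version (assumption)
resource = "my-resource"         # example Azure OpenAI resource (assumption)

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment_id}/chat/completions?api-version={api_version}"
)
print(url)
```

If a request fails with a "deployment not found" or "invalid api-version" error, these two fields are the first values to re-check.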
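The relationship between max_tokens and the model's context length described in the property table can be checked with simple arithmetic. A small sketch, where the prompt size is hypothetical:

```python
# Sketch: max_tokens caps the completion, and prompt tokens plus max_tokens
# must fit within the model's context length (2048 here, per the table).
context_length = 2048
max_tokens = 1000                # from the sample Prompt_Configuration
prompt_tokens = 800              # hypothetical size of the assembled prompt
fits_in_context = prompt_tokens + max_tokens <= context_length
print(fits_in_context)
```

With a long file attached, the prompt can grow large enough that this bound fails, so lower max_tokens or shorten the file content in that case.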
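For reference, the Prepare Request XML in step 5 corresponds to a chat-completions request body like the following sketch, using the sample values from the Prompt_Configuration step. The two message contents are placeholders standing in for the file content and the user prompt:

```python
import json

# Sketch: the JSON request body that the Prepare Request XML maps to.
# Parameter values come from the sample Prompt_Configuration; the message
# contents are placeholders for Content_From_File and User_Prompt.
prompt_configuration = {"temperature": 0.5, "top_p": 1, "max_tokens": 1000}

request = {
    **prompt_configuration,
    "messages": [
        {"role": "user",
         "content": "You are a helpful assistant. <Content_From_File>"},
        {"role": "user", "content": "<User_Prompt>"},
    ],
}
print(json.dumps(request, indent=2))
```

Note that, as in the XML sample, both messages use the user role; the file content travels in the first message and the question in the second.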
