GeminiAI Chat with History

Configuring and publishing the processes

  1. Open the Read History from File process.
  2. On the Start tab of the Start step, select the same Secure Agent that you selected in the Run On field for the FileConnectionChatHistorydefault connection.
  3. Save and publish the process.
  4. Open the Write Chat History in File process.
  5. Optionally, in the Prepare History to save in File step, click the Assignments tab. Open the Expression Editor for the File_Name field and enter the format in which to save the file, as illustrated in the sketch after this step.
    (Image: assignments of the File_Name input field.)
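    Because the User_ID is used as the file name (see step 7), the expression typically builds the name from the history folder path and the user ID. The following Python-style sketch only illustrates a plausible naming format; the variable names and the file extension are assumptions, and the actual value is entered in the Expression Editor, not in Python:

    # Hypothetical illustration of a File_Name format; the actual value is
    # set through the process Expression Editor.
    path_to_file_with_history = "/data/chat-history"  # from the process input
    user_id = "user-42"                               # from the process input
    file_name = f"{path_to_file_with_history}/{user_id}.txt"  # ".txt" is an assumption
    print(file_name)  # /data/chat-history/user-42.txt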
  6. Save and publish the process.
  7. Open the Chat with History main process.
    The process is called by an HTTP request that uses the following three input parameters, which are defined on the Input Fields tab of the Start step (see the sample request after this list):
    • User_Prompt
    • Path_To_File_With_History
    • User_ID
      The User_ID is used as the file name.
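    The following is a minimal sketch of such a request, assuming a hypothetical service URL for the published process and basic authentication; the exact URL, credentials, and payload shape depend on your environment:

    import requests

    # Hypothetical service URL of the published "Chat with History" process.
    # Replace the host, path, and credentials with values from your environment.
    PROCESS_URL = "https://<your-pod>.informaticacloud.com/active-bpel/rt/Chat_with_History"

    payload = {
        "User_Prompt": "What is the capital of France?",
        "Path_To_File_With_History": "/data/chat-history",
        "User_ID": "user-42",  # also used as the history file name
    }

    response = requests.post(PROCESS_URL, json=payload, auth=("username", "password"))
    response.raise_for_status()
    print(response.json())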
  8. On the Temp Field tab of the Start step, the Model_LLM field is set to gemini-1.5-pro. You can optionally edit the model version. For information about changing the model version, see the Gemini documentation.
  9. In the Configure Request Parameters step, click the Assignments tab. Open the Expression Editor for the Prompt_Configuration field and enter the prompt instructions, as shown in the following sample code:
    <generationConfig>
        <stopSequences>.</stopSequences>
        <candidateCount>1</candidateCount>
        <maxOutputTokens>200</maxOutputTokens>
        <temperature>0.5</temperature>
        <topP>0.1</topP>
        <topK>2</topK>
    </generationConfig>
    For the Prompt_Configuration field, enter values for the following properties:
    • stopSequences: Contains sequences of characters or strings that stop the model's output. This property controls where the model must end its response.
    • candidateCount: Specifies the number of response candidates that the model must generate. For example, if the value is set to 1, the model generates one response. If set to a higher number, the model generates that many alternative responses for the same input.
    • maxOutputTokens: Defines the maximum number of tokens that the model can generate in its response. Setting a limit ensures that the response is concise and fits within the desired length constraints.
    • temperature: Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.5 balances deterministic and creative outputs.
    • topP: Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model considers only the top 10% most probable tokens at each step.
    • topK: Limits the number of highest-probability tokens that the model considers during response generation. For example, if topK is set to 2, the model considers only the top 2 tokens at each step, controlling output diversity and quality.
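    These properties map onto the generationConfig object of the Gemini API. As a reference only, the following sketch shows a direct generateContent call with the same values, assuming the public Generative Language API endpoint and an API key; the process itself sends the request through the configured connection:

    import requests

    API_KEY = "YOUR_API_KEY"  # assumption: a Generative Language API key
    URL = ("https://generativelanguage.googleapis.com/v1beta/models/"
           "gemini-1.5-pro:generateContent?key=" + API_KEY)

    body = {
        "contents": [{"parts": [{"text": "What is the capital of France?"}]}],
        # Same values as the Prompt_Configuration sample above.
        "generationConfig": {
            "stopSequences": ["."],
            "candidateCount": 1,
            "maxOutputTokens": 200,
            "temperature": 0.5,
            "topP": 0.1,
            "topK": 2,
        },
    }

    response = requests.post(URL, json=body)
    response.raise_for_status()
    print(response.json()["candidates"][0]["content"]["parts"][0]["text"])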
  10. After you configure the prompt instructions, the process reads the history file. If the file contains history, the process prepares a request with history. The process reads the entire history from the file but uses only the last question and its response.
    (Image: assignments of the Previous_Question and Previous_Answer input fields.)
    If no file exists or the file is empty, the process prepares a request without history, and then creates or updates the file. It writes the last question and its LLM response in the following format:
    Question_to_LLM: <user question> Answer_LLM: <Gemini answer>
    If the file contains history, the process prepares the request with history, using only the last question and response. The process then updates the file and appends the new question and answer. You can add your own parsing logic at this point and modify the request as needed, as in the sketch after this step.
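    The following is a minimal sketch of that read-and-append logic, assuming the single-line entry format shown above; the actual process implements this with file connection steps and expressions rather than Python:

    import os
    import re

    def load_last_exchange(path):
        """Return the last (question, answer) pair from the history file, or None."""
        if not os.path.exists(path) or os.path.getsize(path) == 0:
            return None  # no history: prepare a request without history
        with open(path, encoding="utf-8") as f:
            text = f.read()
        # Assumption: entries follow the "Question_to_LLM: ... Answer_LLM: ..." format.
        pairs = re.findall(
            r"Question_to_LLM:\s*(.*?)\s*Answer_LLM:\s*(.*?)(?=Question_to_LLM:|$)",
            text, flags=re.DOTALL)
        return pairs[-1] if pairs else None

    def append_exchange(path, question, answer):
        """Create or update the history file with the newest question and answer."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(f"Question_to_LLM: {question} Answer_LLM: {answer}\n")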
  11. Save and publish the process.
