GeminiAI Prompt Chaining

Configuring and publishing the process

  1. Open the Prompt Chaining Gemini process.
  2. On the Temp Fields tab of the Start step, the default Model_LLM is gemini-1.5-pro. You can optionally edit the model version. For information about changing the model version, see the Gemini documentation.
  3. In the Create Prompt step, enter the prompt instructions in the Assignments field by updating the Prompt_Configuration and Prompt_Request fields using the Expression Editor, as shown in the following sample code. For an illustrative REST equivalent of this request, see the sketches that follow this procedure.

    For Prompt_Configuration:

    <generationConfig>
      <stopSequences>.</stopSequences>
      <candidateCount>1</candidateCount>
      <maxOutputTokens>200</maxOutputTokens>
      <temperature>0.5</temperature>
      <topP>0.5</topP>
      <topK>2</topK>
    </generationConfig>

    For Prompt_Request:

    <Generate_Content_Request>
      <contents>
        <parts>
          <text>Your behaviour is {$input.First_System_Prompt}, Topic is {$input.First_User_Prompt}</text>
        </parts>
        <role>user</role>
      </contents>
      <generationConfig>
        <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
        <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
        <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
        <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
        <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
        <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
      </generationConfig>
    </Generate_Content_Request>
    For the Prompt_Configuration field, enter values for the following properties:

    stopSequences: Contains sequences of characters or strings that stop the model's output. It controls where the model must end its response.
    candidateCount: Specifies the number of response candidates that the model must generate. For example, if the value is set to 1, the model generates one response. If set to a higher number, the model generates that many alternative responses for the same input.
    maxOutputTokens: Defines the maximum number of tokens that the model can generate in its response. Setting a limit ensures that the response is concise and fits within the desired length constraints.
    temperature: Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.5 strikes a balance between deterministic and creative outputs.
    topP: Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model considers only the top 10% most probable tokens at each step.
    topK: Limits the number of highest-probability tokens that the model considers during response generation. For example, if topK is set to 2, the model considers only the top 2 tokens at each step, controlling output diversity and quality.
    After you configure the prompt instructions, the process sends the details to the LLM to fetch the required response, and then stores the first response.
  4. In the Create Second Prompt step, in the Assignments field, update the Prompt_Request field using the Expression Editor, as shown in the following sample code. For an illustrative REST equivalent of this chained request, see the sketches that follow this procedure.

    <Generate_Content_Request>
      <contents>
        <parts>
          <text>Your behaviour is : {$input.First_System_Prompt}, Topic is : {$input.First_User_Prompt}</text>
        </parts>
        <role>user</role>
      </contents>
      <contents>
        <parts>
          <text>{$temp.Prompt_Response}</text>
        </parts>
        <role>model</role>
      </contents>
      <contents>
        <parts>
          <text>Your additional instructions are : {$input.Second_System_Prompt}. Question is {$input.Second_User_Prompt}</text>
        </parts>
        <role>user</role>
      </contents>
      <generationConfig>
        <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
        <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
        <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
        <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
        <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
        <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
      </generationConfig>
    </Generate_Content_Request>
    The LLM uses both requests, together with the first stored response (passed back with the role set to model), as instructions to prepare the final response.
  5. Save and publish the process.
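
To visualize what the process sends at runtime, the following sketch shows an approximate REST equivalent of the first request from the Create Prompt step. This is an illustration only: the public generateContent endpoint, the GEMINI_API_KEY environment variable, and the sample prompt strings are assumptions rather than part of the process configuration, and the process connector may call the API differently.

    # Illustrative sketch only: an approximate equivalent of the first
    # Prompt_Request, sent to the public Generative Language REST endpoint
    # (assumed). Requires the third-party "requests" package.
    import os
    import requests

    API_KEY = os.environ["GEMINI_API_KEY"]   # assumed environment variable
    MODEL = "gemini-1.5-pro"                 # matches the default Model_LLM
    URL = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{MODEL}:generateContent?key={API_KEY}"
    )

    # Values mirror the sample Prompt_Configuration in step 3.
    generation_config = {
        "stopSequences": ["."],
        "candidateCount": 1,
        "maxOutputTokens": 200,
        "temperature": 0.5,
        "topP": 0.5,
        "topK": 2,
    }

    # Sample values standing in for First_System_Prompt and First_User_Prompt.
    first_system_prompt = "a concise technical writer"
    first_user_prompt = "prompt chaining"

    payload = {
        "contents": [
            {
                "role": "user",
                "parts": [
                    {
                        "text": (
                            f"Your behaviour is {first_system_prompt}, "
                            f"Topic is {first_user_prompt}"
                        )
                    }
                ],
            }
        ],
        "generationConfig": generation_config,
    }

    response = requests.post(URL, json=payload, timeout=60)
    response.raise_for_status()
    # The first candidate's text corresponds to the stored Prompt_Response.
    first_response_text = response.json()["candidates"][0]["content"]["parts"][0]["text"]
    print(first_response_text)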
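
A similar sketch illustrates the chained request from the Create Second Prompt step: the first reply is passed back with the role set to model, followed by the second system and user prompts. The endpoint, variable names, and sample strings are again assumptions for illustration only.

    # Illustrative sketch only: the chained second request, assuming the same
    # public generateContent endpoint and that first_response_text holds the
    # Prompt_Response returned by the first call.
    import os
    import requests

    API_KEY = os.environ["GEMINI_API_KEY"]   # assumed environment variable
    URL = (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/gemini-1.5-pro:generateContent?key={API_KEY}"
    )

    # Sample values standing in for the process inputs and the stored response.
    first_system_prompt = "a concise technical writer"
    first_user_prompt = "prompt chaining"
    second_system_prompt = "answer in two sentences"
    second_user_prompt = "Why chain prompts instead of sending one long prompt?"
    first_response_text = "Prompt chaining splits a task into sequential model calls."

    payload = {
        # Three turns, mirroring the three <contents> blocks in step 4:
        # the original user prompt, the model's first reply, and the follow-up.
        "contents": [
            {
                "role": "user",
                "parts": [{"text": f"Your behaviour is : {first_system_prompt}, "
                                   f"Topic is : {first_user_prompt}"}],
            },
            {
                "role": "model",
                "parts": [{"text": first_response_text}],
            },
            {
                "role": "user",
                "parts": [{"text": f"Your additional instructions are : {second_system_prompt}. "
                                   f"Question is {second_user_prompt}"}],
            },
        ],
        # Reuses the same generationConfig values as the first request.
        "generationConfig": {
            "stopSequences": ["."],
            "candidateCount": 1,
            "maxOutputTokens": 200,
            "temperature": 0.5,
            "topP": 0.5,
            "topK": 2,
        },
    }

    response = requests.post(URL, json=payload, timeout=60)
    response.raise_for_status()
    final_text = response.json()["candidates"][0]["content"]["parts"][0]["text"]
    print(final_text)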
