Simple React AI Agent with Search Tools

Configuring and publishing the processes

To configure and publish the processes, perform the following steps:
  1. To publish the Search Tool process, click Actions in the row that contains the process and select Publish.
  2. To publish the Action Detail Extractor process, click Actions in the row that contains the process and select Publish.
  3. Open the Prompt Gemini process.
  4. On the Temp Fields tab of the Start step, the Model_LLM field is set to gemini-1.5-pro by default. You can optionally edit the model version. For information about changing the model version, see the Gemini documentation.
  5. Save and publish the process.
  6. Open the Search Tool Workflow process.
  7. On the Temp Fields tab of the Start step, the Loop_Limit field is set to 5 by default. This field defines the maximum number of times the Search Tool subprocess can be called. You can optionally edit the limit: raising it lets the Search Tool repeat queries to gather new information and improve results, but it also increases the number of tokens used.
  8. On the Assignment tab of the Create Prompt to LLM step, the Prompt field is configured to contain the request template for the prompt to the LLM. Optionally, you can configure the LLM behavior in the Prompt_Request field using the Expression Editor, as shown in the following sample code:
    For Prompt_Request:
    <GenerateContentRequestBody>
      <contents>
        <parts>
          <text>{$temp.Prompt}</text>
        </parts>
        <role>user</role>
      </contents>
      <generationConfig>
        <maxOutputTokens>2048</maxOutputTokens>
        <temperature>0</temperature>
        <topP>1</topP>
        <topK>250</topK>
      </generationConfig>
    </GenerateContentRequestBody>
    For the Prompt_Request field, enter values for the following properties:
    maxOutputTokens
      Defines the maximum number of tokens that the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints.
    temperature
      Controls the randomness of the model's output. A value close to 0 makes the output more deterministic, while a value close to 1 increases randomness and creativity. For example, if temperature is set to 0.5, the model balances deterministic and creative output.
    topP
      Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model samples only from the most probable tokens whose combined probability reaches 10% at each step.
    topK
      Limits the number of highest-probability tokens considered during response generation. For example, if topK is set to 2, the model considers only the top 2 tokens at each step, controlling output diversity and quality.
  9. Save and publish the process.
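The generationConfig parameters above interact during sampling: temperature reshapes the token distribution, topK keeps only the k most probable tokens, and topP then keeps the smallest set whose combined probability reaches the threshold. The following sketch illustrates that filtering logic; it is an illustration of the general technique, not Gemini's internal implementation, and the function name is invented for this example.

```python
import math

def filter_token_probs(logits, temperature=1.0, top_k=250, top_p=1.0):
    """Illustrative sampling filter: apply temperature scaling, keep the
    top_k most probable tokens, then keep the smallest prefix whose
    cumulative probability reaches top_p. Returns renormalized
    probabilities keyed by token index."""
    # Temperature scaling: values near 0 sharpen the distribution
    # toward the single most probable token (near-greedy decoding).
    t = max(temperature, 1e-6)
    scaled = [l / t for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]

    # top-k: keep only the k highest-probability tokens.
    probs.sort(key=lambda pair: pair[1], reverse=True)
    probs = probs[:top_k]

    # top-p (nucleus): keep the smallest prefix whose cumulative
    # probability meets or exceeds the threshold.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break

    norm = sum(p for _, p in kept)
    return {i: p / norm for i, p in kept}
```

For example, with top_k=2 only the two most probable tokens can ever be sampled, and with a small top_p a single dominant token can exclude all others regardless of top_k.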
Alternatively, you can publish the Search Tool Workflow process from the assets page: click Actions in the row that contains the process and select Publish. The subprocesses are published automatically. If needed, you can modify the process as described in steps 7 and 8 before publishing it.
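Conceptually, the Search Tool Workflow runs a bounded ReAct-style loop: the model is prompted, may request a call to the Search Tool subprocess, and the loop ends when the model returns a final answer or the Loop_Limit from step 7 is reached. The sketch below shows that control flow under assumed interfaces; call_llm and search_tool are hypothetical placeholders standing in for the Prompt Gemini process and the Search Tool subprocess, not product APIs.

```python
def run_agent(question, call_llm, search_tool, loop_limit=5):
    """Bounded agent loop: call_llm returns a dict with either
    {"action": "search", "query": ...} or a final {"answer": ...}.
    Stops when the model answers or loop_limit tool calls are used."""
    context = question
    for _ in range(loop_limit):
        reply = call_llm(context)  # placeholder for the Prompt Gemini process
        if reply.get("action") == "search":
            # Placeholder for the Search Tool subprocess; its result is
            # appended to the context for the next model call.
            result = search_tool(reply["query"])
            context += f"\nObservation: {result}"
        else:
            return reply["answer"]  # model produced a final answer
    return None  # loop limit reached without a final answer
```

Raising loop_limit allows more repeat queries and potentially better answers, at the cost of more tokens, which mirrors the trade-off noted for the Loop_Limit field.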
