Introduction to AI Agent for Salesforce using Google Gemini recipe
The AI Agent for Salesforce using Google Gemini recipe is based on REST and SOAP APIs.
The recipe shows you how to use the Gemini AI Agent framework to interact with Salesforce and
address user queries autonomously.
Based on the user's query, the LLM generates a list of Salesforce Object Query Language
(SOQL) queries that are needed to retrieve all the relevant details from Salesforce. These
queries are executed in a sequence, and the results are used by the Gemini Large Language
Model (LLM) as context to answer the user's query.
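The sketch below illustrates, in Python, one way this query-planning step could look. It assumes the google-generativeai Python SDK; the model name, prompt wording, and function name are hypothetical illustrations, not part of the recipe itself.

import json
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical system instructions asking the LLM to plan SOQL queries.
SYSTEM_INSTRUCTIONS = (
    "You are a Salesforce assistant. Return a JSON array of the SOQL queries "
    "needed to answer the user's question. Return JSON only."
)

def generate_soql_queries(user_query: str) -> list[str]:
    """Ask Gemini to plan the list of SOQL queries for the user's question."""
    response = model.generate_content(
        f"{SYSTEM_INSTRUCTIONS}\n\nUser question: {user_query}"
    )
    return json.loads(response.text)

queries = generate_soql_queries("Which open opportunities belong to Acme Corp?")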
The process receives a request from the user that includes system instructions for an LLM and
an additional system instruction for executing SOQL queries. The LLM uses the instructions to
generate a list of SOQL queries that need to be executed against Salesforce. The process
sequentially executes each generated SOQL query against the Salesforce database. After each
query execution, the result is used as context for the next query to the LLM.
The process continues this cycle using the response from each SOQL query as context data for
the next LLM query, along with the initial user instructions. This loop continues until all
queries generated from the user input have been processed. The maximum number of requests
made to Salesforce is set to 5 by default. You can change this limit when invoking the
process.
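A minimal sketch of this execution loop follows, assuming the simple_salesforce Python library for query execution. The credential placeholders, the DEFAULT_REQUEST_LIMIT name, and the function name are assumptions for illustration; only the default value of 5 comes from the recipe.

from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

# Default cap on Salesforce requests; the recipe lets you raise it when invoking the process.
DEFAULT_REQUEST_LIMIT = 5

def execute_queries(queries: list[str], limit: int = DEFAULT_REQUEST_LIMIT) -> list[dict]:
    """Run each SOQL query in order, accumulating results as context.

    In the recipe, each result is also fed back to the LLM (together with the
    original user instructions) before the next query is processed.
    """
    context: list[dict] = []
    for soql in queries[:limit]:                     # never exceed the request limit
        records = sf.query(soql).get("records", [])  # execute the SOQL query against Salesforce
        context.append({"query": soql, "records": records})
    return context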
After all the SOQL queries have been executed and their results gathered, a final query is
made to the LLM. The context for this final query consists of the results of all the executed
SOQL queries.
The LLM uses the aggregated context to provide a response to the user's original
question.
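A sketch of this final aggregation step is shown below, again assuming the google-generativeai SDK; the prompt text is a hypothetical stand-in for the recipe's final query.

import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")

def answer_from_context(user_query: str, context: list[dict]) -> str:
    """Pass all gathered SOQL results to the LLM so it can answer the original question."""
    prompt = (
        f"User question: {user_query}\n\n"
        f"Salesforce query results: {context}\n\n"
        "Answer the question using only the results above."
    )
    return model.generate_content(prompt).text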
Watch an interactive demo to learn more about how to use this recipe.