Application Integration
For Prompt_Configuration:

```xml
<generationConfig>
  <candidateCount>1</candidateCount>
  <maxOutputTokens>500</maxOutputTokens>
  <temperature>0.5</temperature>
  <topP>0.5</topP>
  <topK>2</topK>
</generationConfig>
```
For Request:

```xml
<Generate_Content_Request>
  <contents>
    <parts>
      <text>Answer using only the context provided: {$temp.Content_From_File}. Question: {$input.User_Prompt}</text>
    </parts>
    <role>user</role>
  </contents>
  <generationConfig>
    <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
    <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
    <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
    <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
    <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
    <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
  </generationConfig>
</Generate_Content_Request>
```
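Once the variable references resolve, this is a standard `generateContent` request. For orientation, the sketch below shows an equivalent call made directly against the Gemini REST API in Python; the model name, API version, placeholder prompt text, and the `GEMINI_API_KEY` environment variable are assumptions for illustration, not part of the integration above.

```python
import os
import requests

# Equivalent generateContent call against the Gemini REST API.
# Model name, API version, and env var name are assumptions.
URL = ("https://generativelanguage.googleapis.com/v1beta/"
       "models/gemini-1.5-flash:generateContent")

body = {
    "contents": [{
        "role": "user",
        "parts": [{"text": "Answer using only the context provided: "
                           "<file content>. Question: <user prompt>"}],
    }],
    # Same generationConfig values as the Prompt_Configuration above.
    "generationConfig": {
        "candidateCount": 1,
        "maxOutputTokens": 500,
        "temperature": 0.5,
        "topP": 0.5,
        "topK": 2,
    },
}

resp = requests.post(URL, params={"key": os.environ["GEMINI_API_KEY"]},
                     json=body, timeout=30)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```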
| Property | Description |
|---|---|
| `candidateCount` | Specifies the number of response candidates the model must generate. For example, a value of 1 produces a single response; a higher value produces that many alternative responses for the same input. |
| `maxOutputTokens` | Defines the maximum number of tokens the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints. |
| `temperature` | Controls the randomness of the model's output. Values close to 0 make the output more deterministic, while values close to 1 increase randomness and creativity. For example, a `temperature` of 0.5 balances deterministic and creative output. |
| `topP` | Sets the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds `topP`. For example, a `topP` of 0.1 restricts sampling at each step to the most probable tokens whose probabilities sum to at least 0.1. |
| `topK` | Limits how many of the highest-probability tokens are considered during response generation. For example, a `topK` of 2 restricts the model to the top 2 tokens at each step, controlling output diversity and quality. |
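To make the interaction between the three sampling parameters concrete, here is a toy sketch of a single decoding step. The ordering shown (filter by `topK`, then `topP`, then sample with `temperature`) follows the general description in Google's documentation, but the code itself is an illustrative assumption, and `sample_next_token` is a hypothetical helper, not any model's actual internals.

```python
import numpy as np

def sample_next_token(probs, top_k=2, top_p=0.5, temperature=0.5):
    """Toy illustration of one decoding step: filter by topK, then topP,
    then sample with temperature. Mirrors the table above, not any
    particular model's implementation."""
    probs = np.asarray(probs, dtype=float)
    # topK: keep only the k most probable tokens.
    order = np.argsort(probs)[::-1][:top_k]
    # topP: within those, keep the smallest prefix whose cumulative
    # probability meets or exceeds top_p.
    kept, cum = [], 0.0
    for idx in order:
        kept.append(idx)
        cum += probs[idx]
        if cum >= top_p:
            break
    kept = np.array(kept)
    # Temperature: rescale the surviving probabilities; lower values
    # sharpen the distribution toward the most likely token.
    logits = np.log(probs[kept]) / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return int(np.random.choice(kept, p=weights))

# Example: a 5-token vocabulary. With topK=2 and topP=0.5, only the two
# most probable tokens can ever be chosen.
print(sample_next_token([0.40, 0.30, 0.15, 0.10, 0.05]))
```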