Application Integration
For Prompt_Configuration:

```xml
<generationConfig>
  <stopSequences>.</stopSequences>
  <candidateCount>1</candidateCount>
  <maxOutputTokens>200</maxOutputTokens>
  <temperature>0.5</temperature>
  <topP>0.5</topP>
  <topK>2</topK>
</generationConfig>
```
For Prompt_Request:

```xml
<Generate_Content_Request>
  <contents>
    <parts>
      <text>Your behaviour is {$input.First_System_Prompt}, Topic is {$input.First_User_Prompt}</text>
    </parts>
    <role>user</role>
  </contents>
  <generationConfig>
    <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
    <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
    <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
    <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
    <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
    <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
  </generationConfig>
</Generate_Content_Request>
```
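To sanity-check these values outside the integration, a request with the same shape can be sent directly to the Gemini generateContent REST endpoint. The sketch below is a minimal, assumption-laden example: the model name (`gemini-1.5-flash`), the `v1beta` API version, the `GEMINI_API_KEY` environment variable, and the literal prompt strings are placeholders, not part of the integration above.

```python
# Minimal sketch: a single-turn generateContent call with the same
# generationConfig values as Prompt_Configuration above.
# Model name, API version, and GEMINI_API_KEY are assumptions.
import os
import requests

API_KEY = os.environ["GEMINI_API_KEY"]  # assumed environment variable
URL = (
    "https://generativelanguage.googleapis.com/v1beta/"
    "models/gemini-1.5-flash:generateContent"
)

payload = {
    "contents": [
        {
            "role": "user",
            # Placeholder text standing in for the system and user prompt inputs.
            "parts": [{"text": "Your behaviour is a helpful tutor, Topic is recursion"}],
        }
    ],
    # Mirrors the Prompt_Configuration values shown above.
    "generationConfig": {
        "stopSequences": ["."],
        "candidateCount": 1,
        "maxOutputTokens": 200,
        "temperature": 0.5,
        "topP": 0.5,
        "topK": 2,
    },
}

response = requests.post(URL, params={"key": API_KEY}, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["candidates"][0]["content"]["parts"][0]["text"])
```

Because the stop sequence is `.`, a call shaped like this typically returns no more than a single sentence.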
| Property | Description |
|---|---|
| stopSequences | Contains sequences of characters or strings that stop the model's output; it controls where the model must end its response. |
| candidateCount | Specifies the number of response candidates the model must generate. For example, if the value is set to 1, the model generates one response; if set to a higher number, it generates that many alternative responses for the same input. |
| maxOutputTokens | Defines the maximum number of tokens the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints. |
| temperature | Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.5 balances deterministic and creative outputs. |
| topP | Sets the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model keeps only the smallest set of most probable tokens whose combined probability reaches 10% at each step. |
| topK | Limits the number of highest-probability tokens considered during response generation. For example, if topK is set to 2, the model considers only the 2 most probable tokens at each step, controlling output diversity and quality (see the sketch after this table for how topK and topP interact). |
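The interplay between topK and topP is easiest to see on a toy distribution. The following sketch is purely illustrative and is not the model's actual sampling code: it assumes a made-up four-token vocabulary and shows how topK first truncates the candidate list and topP then keeps the smallest prefix whose cumulative probability reaches the threshold, using the values 2 and 0.5 from the configuration above.

```python
# Illustrative only: a made-up token distribution, not real model internals.
probs = {"cat": 0.50, "dog": 0.30, "fish": 0.15, "bird": 0.05}

def filter_candidates(probs, top_k=2, top_p=0.5):
    # Step 1 (topK): keep only the top_k most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Step 2 (topP): keep the smallest prefix whose cumulative probability
    # meets or exceeds top_p; sampling (scaled by temperature) then happens
    # over whatever survives this filter.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

print(filter_candidates(probs))  # ['cat'] -- with topK=2 and topP=0.5 only one token survives
```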
For the second Prompt_Request (the follow-up turn, which replays the first response as model context):

```xml
<Generate_Content_Request>
  <contents>
    <parts>
      <text>Your behaviour is : {$input.First_System_Prompt}, Topic is : {$input.First_User_Prompt}</text>
    </parts>
    <role>user</role>
  </contents>
  <contents>
    <parts>
      <text>{$temp.Prompt_Response}</text>
    </parts>
    <role>model</role>
  </contents>
  <contents>
    <parts>
      <text>Your additional instructions are : {$input.Second_System_Prompt}. Question is {$input.Second_User_Prompt}</text>
    </parts>
    <role>user</role>
  </contents>
  <generationConfig>
    <stopSequences>{$temp.Prompt_Configuration[1]/stopSequences}</stopSequences>
    <candidateCount>{$temp.Prompt_Configuration[1]/candidateCount}</candidateCount>
    <maxOutputTokens>{$temp.Prompt_Configuration[1]/maxOutputTokens}</maxOutputTokens>
    <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
    <topP>{$temp.Prompt_Configuration[1]/topP}</topP>
    <topK>{$temp.Prompt_Configuration[1]/topK}</topK>
  </generationConfig>
</Generate_Content_Request>
```
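As a hedged sketch only, this multi-turn request maps onto the same REST payload with alternating roles. Everything except the `contents` list matches the single-turn example shown earlier; the prompt strings and the prior answer below are placeholders, not values produced by the integration.

```python
# Sketch of the multi-turn payload: the prior model answer is replayed with
# role "model" so the follow-up question carries conversational context.
# The request is otherwise sent exactly as in the single-turn sketch above.
previous_answer = "Recursion is when a function calls itself."  # e.g. Prompt_Response

payload = {
    "contents": [
        {"role": "user",
         "parts": [{"text": "Your behaviour is : a helpful tutor, Topic is : recursion"}]},
        {"role": "model",
         "parts": [{"text": previous_answer}]},
        {"role": "user",
         "parts": [{"text": "Your additional instructions are : answer briefly. "
                            "Question is what is a base case?"}]},
    ],
    # Same configuration values as before.
    "generationConfig": {
        "stopSequences": ["."],
        "candidateCount": 1,
        "maxOutputTokens": 200,
        "temperature": 0.5,
        "topP": 0.5,
        "topK": 2,
    },
}
```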