Application Integration
For `Prompt_Request`:

```xml
<GenerateContentRequestBody>
  <contents>
    <parts>
      <text>{$temp.Prompt}</text>
    </parts>
    <role>user</role>
  </contents>
  <generationConfig>
    <maxOutputTokens>2048</maxOutputTokens>
    <temperature>0</temperature>
    <topP>1</topP>
    <topK>250</topK>
  </generationConfig>
</GenerateContentRequestBody>
```
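For testing outside an integration, an equivalent request can be sent directly to the Vertex AI Gemini API. The sketch below is a minimal Python example using the `vertexai` SDK; the project ID, region, and model name are placeholder assumptions, and the literal prompt stands in for the `{$temp.Prompt}` integration variable.

```python
import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Placeholder project ID, region, and model name -- substitute your own values.
vertexai.init(project="my-project-id", location="us-central1")
model = GenerativeModel("gemini-1.0-pro")

# Mirrors the <generationConfig> values from the request body above.
config = GenerationConfig(
    max_output_tokens=2048,  # <maxOutputTokens>
    temperature=0,           # <temperature>
    top_p=1,                 # <topP>
    top_k=250,               # <topK>
)

# The hard-coded prompt replaces the {$temp.Prompt} variable for this test.
response = model.generate_content(
    "Summarize the customer's order status.",
    generation_config=config,
)
print(response.text)
```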
| Property | Description |
|---|---|
| `maxOutputTokens` | Defines the maximum number of tokens the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints. |
| `temperature` | Controls the randomness of the model's output. A value close to 0 makes the output more deterministic, while a value close to 1 increases randomness and creativity. For example, if `temperature` is set to 0.5, the model balances between deterministic and creative outputs. |
| `topP` | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds `topP`. For example, if `topP` is set to 0.1, the model considers only the smallest set of most probable tokens whose probabilities sum to at least 10% at each step. |
| `topK` | Limits the number of highest-probability tokens considered during response generation. For example, if `topK` is set to 2, the model samples from only the 2 most probable tokens at each step, controlling output diversity and quality. |
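To make the interaction between these parameters concrete, the following sketch applies temperature scaling, then `topK` and `topP` filtering, to a toy probability distribution. This illustrates the standard sampling technique the parameters describe; it is not the model's internal implementation, and the tokens and values are made up for the example.

```python
import math
import random

def sample_token(logits, temperature=0.5, top_k=2, top_p=0.9):
    """Illustrative sketch of temperature + top-k + top-p (nucleus) sampling."""
    # Temperature scaling: lower values sharpen the distribution, making
    # high-probability tokens even more likely (0 would be near-greedy).
    scaled = {tok: logit / max(temperature, 1e-6) for tok, logit in logits.items()}

    # Softmax converts the scaled logits into probabilities.
    max_logit = max(scaled.values())
    exps = {tok: math.exp(v - max_logit) for tok, v in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: kv[1], reverse=True)

    # topK: keep only the k highest-probability tokens.
    probs = probs[:top_k]

    # topP: keep the smallest prefix whose cumulative probability >= top_p.
    kept, cumulative = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize over the surviving tokens and sample one.
    norm = sum(p for _, p in kept)
    return random.choices([t for t, _ in kept],
                          weights=[p / norm for _, p in kept])[0]

print(sample_token({"yes": 2.0, "no": 1.0, "maybe": 0.2}))
```

Note how the filters compound: `topK` first truncates the candidate list, then `topP` may truncate it further, so a strict setting on either parameter can dominate the other.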