Application Integration
```xml
<generationConfig_AzureAI>
  <max_tokens>200</max_tokens>
  <temperature>1</temperature>
  <topP>1</topP>
</generationConfig_AzureAI>
```
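As a minimal sketch of reading a configuration block like the one above into usable values, the following uses Python's standard-library XML parser. The tag names come from the document; the parsing code itself is an assumption, not part of any Azure AI SDK.

```python
import xml.etree.ElementTree as ET

# Hypothetical snippet mirroring the generationConfig_AzureAI block above.
config_xml = """
<generationConfig_AzureAI>
  <max_tokens>200</max_tokens>
  <temperature>1</temperature>
  <topP>1</topP>
</generationConfig_AzureAI>
"""

root = ET.fromstring(config_xml)
config = {
    "max_tokens": int(root.findtext("max_tokens")),
    "temperature": float(root.findtext("temperature")),
    "topP": float(root.findtext("topP")),
}
print(config)  # {'max_tokens': 200, 'temperature': 1.0, 'topP': 1.0}
```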
| Property | Description |
|---|---|
| maxTokens | Defines the maximum number of tokens the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints. |
| temperature | Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.9 strikes a balance between deterministic and creative outputs. |
| topP | An alternative to sampling with temperature, where the model considers only the smallest set of tokens whose cumulative probability mass reaches topP. For example, if topP is set to 0.1, the model considers only the tokens making up the top 10% of probability mass at each step. |
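The interaction between temperature and topP described above can be sketched on a toy next-token distribution. This is an illustrative implementation of temperature scaling followed by nucleus (top-p) filtering, not the model's actual sampling code; the token names and probabilities are invented for the example.

```python
import math

def sample_filter(probs, temperature=1.0, top_p=1.0):
    """Apply temperature scaling, then keep the smallest set of tokens
    whose cumulative probability mass reaches top_p."""
    # Temperature rescales the log-probabilities: logit / temperature.
    logits = [math.log(p) / temperature for p in probs.values()]
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    scaled = {tok: e / total for tok, e in zip(probs, exps)}
    # Keep the most probable tokens until their mass reaches top_p.
    kept, mass = {}, 0.0
    for tok, p in sorted(scaled.items(), key=lambda kv: -kv[1]):
        kept[tok] = p
        mass += p
        if mass >= top_p:
            break
    # Renormalize the surviving tokens.
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}

# Toy distribution: with top_p=0.8, only "the" and "a" survive
# (0.5 + 0.3 = 0.8), then their probabilities are renormalized.
dist = {"the": 0.5, "a": 0.3, "dog": 0.15, "zebra": 0.05}
print(sample_filter(dist, temperature=1.0, top_p=0.8))
# {'the': 0.625, 'a': 0.375}
```

Lowering the temperature sharpens the distribution before the nucleus cut, so fewer tokens tend to survive; raising top_p toward 1 admits more of the tail.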