Application Integration
For `Prompt_Configuration`:

```xml
<GenerationConfig_AzureAI>
  <temperature>0.5</temperature>
  <topP>1</topP>
  <max_tokens>1000</max_tokens>
</GenerationConfig_AzureAI>
```
| Property | Description |
|---|---|
| `temperature` | Controls the randomness of the model's output. A lower value (close to 0) makes the output more deterministic, while a higher value (close to 1) increases randomness and creativity. For example, if `temperature` is set to 0.5, the model balances between deterministic and creative outputs. |
| `topP` | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds `topP`. For example, if `topP` is set to 0.1, the model considers only the top 10% most probable tokens at each step. |
| `max_tokens` | Sets the maximum number of tokens the model can generate in the response. The token count of the prompt plus `max_tokens` can't exceed the model's context length; most models have a context length of 2048 tokens. |
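
For comparison, the following is a minimal sketch of how the same three parameters are passed in a direct chat-completion call, assuming the `openai` Python SDK; the endpoint, API key, and deployment name are hypothetical placeholders:

```python
from openai import AzureOpenAI

# Hypothetical endpoint, key, and deployment name, for illustration only.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="YOUR_API_KEY",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="example-gpt-deployment",  # Azure expects the deployment name here
    temperature=0.5,  # balances deterministic and creative output
    top_p=1,          # keep the full token distribution in play
    max_tokens=1000,  # upper bound on tokens generated in the reply
    messages=[
        {"role": "user", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain topP in one sentence."},
    ],
)
print(response.choices[0].message.content)
```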
For `Request`:

```xml
<CreateChatCompletionRequest>
  <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
  <top_p>{$temp.Prompt_Configuration[1]/topP}</top_p>
  <max_tokens>{$temp.Prompt_Configuration[1]/max_tokens}</max_tokens>
  <messages>
    <role>user</role>
    <content>You are a helpful assistant.{$temp.Content_From_File}</content>
  </messages>
  <messages>
    <role>user</role>
    <content>{$input.User_Prompt}</content>
  </messages>
</CreateChatCompletionRequest>
```
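
At run time, the mapping above resolves to a standard chat-completion request body. The sketch below shows the equivalent raw REST call, assuming a hypothetical Azure OpenAI resource, deployment name, and API key, with the variable references replaced by sample values:

```python
import requests

# Hypothetical resource, deployment, and key, for illustration only.
endpoint = "https://example-resource.openai.azure.com"
deployment = "example-gpt-deployment"
api_key = "YOUR_API_KEY"

url = (
    f"{endpoint}/openai/deployments/{deployment}"
    "/chat/completions?api-version=2024-02-01"
)

# JSON equivalent of the mapped CreateChatCompletionRequest above,
# with the variable references resolved to sample values.
payload = {
    "temperature": 0.5,
    "top_p": 1,
    "max_tokens": 1000,
    "messages": [
        {"role": "user", "content": "You are a helpful assistant. <file contents>"},
        {"role": "user", "content": "<user prompt>"},
    ],
}

resp = requests.post(url, headers={"api-key": api_key}, json=payload, timeout=30)
print(resp.json()["choices"][0]["message"]["content"])
```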