Application Integration
```xml
<GenerationConfig_AzureAI>
  <topP>1</topP>
  <max_tokens>500</max_tokens>
  <temperature>0.5</temperature>
</GenerationConfig_AzureAI>
```
Property | Description |
---|---|
max_tokens | Defines the maximum number of tokens the model can generate in its response. Setting a limit keeps the response concise and within the desired length constraints. |
temperature | Controls the randomness of the model's output. A lower value makes the output more deterministic, while a higher value increases randomness and creativity. For example, a temperature of 0.5 balances deterministic and creative outputs. |
topP | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model samples only from the smallest set of tokens whose combined probability reaches 10% at each step. |
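To make the effect of these settings concrete, here is a minimal sketch (plain Python, not the product's mapping language) of how the generation configuration above translates into an Azure OpenAI chat-completions request body. The prompt strings and the `build_request` helper are illustrative placeholders, not part of the product:

```python
import json

def build_request(config: dict, system_prompt: str, user_prompt: str) -> dict:
    """Assemble a chat-completions request body from a generation config."""
    return {
        "temperature": config["temperature"],  # 0.5: balanced determinism
        "top_p": config["topP"],               # 1: consider the full token distribution
        "max_tokens": config["max_tokens"],    # cap the length of the generated reply
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

# Values taken from the GenerationConfig_AzureAI snippet above.
config = {"topP": 1, "max_tokens": 500, "temperature": 0.5}
request = build_request(config, "You are a helpful assistant.",
                        "Summarize the order status.")
print(json.dumps(request, indent=2))
```

The same three fields appear, with the same meanings, in the `CreateChatCompletionRequest` mappings below.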
```xml
<CreateChatCompletionRequest>
  <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
  <top_p>{$temp.Prompt_Configuration[1]/top_p}</top_p>
  <max_tokens>{$temp.Prompt_Configuration[1]/max_tokens}</max_tokens>
  <messages>
    <role>system</role>
    <content>{$input.First_System_Prompt}</content>
  </messages>
  <messages>
    <role>user</role>
    <content>{$input.First_User_Prompt}</content>
  </messages>
</CreateChatCompletionRequest>
```
```xml
<CreateChatCompletionRequest>
  <temperature>{$temp.Prompt_Configuration[1]/temperature}</temperature>
  <top_p>{$temp.Prompt_Configuration[1]/top_p}</top_p>
  <max_tokens>{$temp.Prompt_Configuration[1]/max_tokens}</max_tokens>
  <messages>
    <role>system</role>
    <content>{$input.First_System_Prompt}</content>
  </messages>
  <messages>
    <role>user</role>
    <content>{$input.First_User_Prompt}</content>
  </messages>
  <messages>
    <role>assistant</role>
    <content>{$temp.Prompt_Response[1]/choices[1]/message[1]/content}</content>
  </messages>
  <messages>
    <role>user</role>
    <content>{$input.Second_User_Prompt}</content>
  </messages>
</CreateChatCompletionRequest>
```
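The second mapping chains the conversation: the assistant message from the first response (`choices[1]/message[1]/content`) is replayed before the follow-up user prompt, so the model sees the full history. A minimal Python sketch of that pattern, with a hypothetical `continue_conversation` helper and placeholder prompts (not part of the product):

```python
def continue_conversation(first_request: dict, first_response: dict,
                          second_user_prompt: str) -> dict:
    """Build the follow-up request by extending the first request's messages."""
    # Pull the assistant's reply out of the first response payload.
    assistant_reply = first_response["choices"][0]["message"]["content"]
    follow_up = dict(first_request)  # reuse temperature / top_p / max_tokens
    follow_up["messages"] = first_request["messages"] + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": second_user_prompt},
    ]
    return follow_up

first_request = {
    "temperature": 0.5, "top_p": 1, "max_tokens": 500,
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List the open invoices."},
    ],
}
first_response = {
    "choices": [{"message": {"role": "assistant",
                             "content": "There are two open invoices."}}],
}
second_request = continue_conversation(first_request, first_response,
                                       "Summarize them in one sentence.")
```

Note that the mapping language indexes from 1 (`choices[1]`), whereas Python lists index from 0 (`choices[0]`); both refer to the first choice in the response.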