Application Integration
For Prompt_Configuration:

```xml
<GenerationConfig_AwsBedrock_PO-1>
    <Temperature>0.5</Temperature>
    <TopP>0.6</TopP>
    <MaxTokens>500</MaxTokens>
</GenerationConfig_AwsBedrock_PO-1>
```
| Property | Description |
|---|---|
| temperature | Controls the randomness of the model's output. A lower value close to 0 makes the output more deterministic, while a higher value close to 1 increases randomness and creativity. For example, if temperature is set to 0.5, the model balances between deterministic and creative outputs. |
| topP | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds topP. For example, if topP is set to 0.1, the model samples only from the smallest set of tokens whose cumulative probability reaches 10% at each step. |
| max_tokens | Defines the maximum number of tokens that the model can generate in its response. The value can't exceed the model's context length. Most models have a context length of 2048 tokens. |
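These three properties map directly onto the inference parameters that Amazon Bedrock accepts. For reference, here is a minimal sketch of an equivalent direct call using the AWS SDK for Python (boto3) and its Converse API; the model ID and prompt text are hypothetical placeholders, not values from this process:

```python
# Sketch: the same generation settings sent directly to Amazon Bedrock
# via boto3's Converse API. The model ID below is an assumed example;
# substitute the model that your process targets.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model ID
    system=[{"text": "You are a helpful assistant."}],
    messages=[{"role": "user", "content": [{"text": "Summarize our return policy."}]}],
    inferenceConfig={
        "maxTokens": 500,    # upper bound on generated tokens
        "temperature": 0.5,  # balance determinism and creativity
        "topP": 0.6,         # cumulative-probability sampling cutoff
    },
)

print(response["output"]["message"]["content"][0]["text"])
```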
For Request:

```xml
<root xmlns:m="urn:informatica:ae:xquery:json2xml:meta-data">
    <system m:isArray="true">
        <type>text</type>
        <text>You are a helpful assistant. {$temp.Content_From_File }</text>
    </system>
    <messages m:isArray="true">
        <role>user</role>
        <content m:isArray="true">
            <type>text</type>
            <text>{$input.User_Prompt }</text>
        </content>
    </messages>
    <inferenceConfig>
        <max_tokens m:type="xs:int">{$temp.Prompt_Configuration[1]/MaxTokens }</max_tokens>
        <temperature m:type="xs:double">{$temp.Prompt_Configuration[1]/Temperature }</temperature>
        <topP m:type="xs:double">{$temp.Prompt_Configuration[1]/TopP }</topP>
    </inferenceConfig>
</root>
```
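The m:isArray and m:type annotations in the urn:informatica:ae:xquery:json2xml:meta-data namespace control how this XML is serialized to JSON. Assuming the usual reading of those annotations (isArray elements become JSON arrays, typed elements become numbers), the request body sent to the model should take roughly the following shape; the angle-bracket placeholders stand in for the resolved XQuery expressions:

```python
# Approximate JSON request body produced by the XML-to-JSON mapping
# above (an inferred sketch, not verified output). Elements marked
# m:isArray="true" become JSON arrays; m:type annotations yield
# numeric values rather than strings.
request_body = {
    "system": [
        {"type": "text", "text": "You are a helpful assistant. <Content_From_File>"}
    ],
    "messages": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "<User_Prompt>"}],
        }
    ],
    "inferenceConfig": {
        "max_tokens": 500,   # from Prompt_Configuration[1]/MaxTokens
        "temperature": 0.5,  # from Prompt_Configuration[1]/Temperature
        "topP": 0.6,         # from Prompt_Configuration[1]/TopP
    },
}
```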