Application Integration
```xml
<GenerationConfig>
  <temperature>0.3</temperature>
  <top_p>0.8</top_p>
  <max_tokens>800</max_tokens>
</GenerationConfig>
```
Property | Description |
---|---|
`temperature` | Controls the randomness of the model's output. A lower value close to 0 makes the output more deterministic, while a higher value close to 1 increases randomness and creativity. For example, if `temperature` is set to 0.5, the model balances between deterministic and creative outputs. |
`top_p` | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds `top_p`. For example, if `top_p` is set to 0.1, the model samples only from the most probable tokens that together account for 10% of the probability mass at each step. |
`max_tokens` | Defines the maximum number of tokens the model can generate in its response. The value can't exceed the model's context length; most models have a context length of 2048 tokens. |
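If you assemble this payload programmatically, a minimal Python sketch using only the standard library could look like the following. It is illustrative rather than part of the product documentation; the function name `build_generation_config` is hypothetical, and only the element names and values come from the snippet above.

```python
import xml.etree.ElementTree as ET

def build_generation_config(temperature=0.3, top_p=0.8, max_tokens=800):
    """Serialize the snake_case generation config shown above as XML."""
    root = ET.Element("GenerationConfig")
    ET.SubElement(root, "temperature").text = str(temperature)
    ET.SubElement(root, "top_p").text = str(top_p)
    ET.SubElement(root, "max_tokens").text = str(max_tokens)
    return ET.tostring(root, encoding="unicode")

print(build_generation_config())
# <GenerationConfig><temperature>0.3</temperature><top_p>0.8</top_p><max_tokens>800</max_tokens></GenerationConfig>
```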
```xml
<generationConfig>
  <temperature>0.3</temperature>
  <topP>0.8</topP>
  <maxOutputTokens>800</maxOutputTokens>
</generationConfig>
```
Property | Description |
---|---|
`temperature` | Controls the randomness of the model's output. A lower value close to 0 makes the output more deterministic, while a higher value close to 1 increases randomness and creativity. For example, if `temperature` is set to 0.5, the model balances between deterministic and creative outputs. |
`topP` | Determines the cumulative probability threshold for token selection. The model considers the smallest set of tokens whose cumulative probability meets or exceeds `topP`. For example, if `topP` is set to 0.1, the model samples only from the most probable tokens that together account for 10% of the probability mass at each step. |
`maxOutputTokens` | Defines the maximum number of tokens the model can generate in its response. Setting a limit ensures that the response is concise and fits within the desired length constraints. |
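The `topP` selection rule described above can be made concrete with a short, self-contained sketch. This is not the model's actual implementation; the token names and probabilities are invented for illustration only.

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability meets or exceeds top_p, then renormalize."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Hypothetical next-token distribution, for illustration only.
probs = {"the": 0.45, "a": 0.25, "an": 0.15, "this": 0.10, "that": 0.05}
print(nucleus_filter(probs, top_p=0.8))
# {'the': 0.529..., 'a': 0.294..., 'an': 0.176...}
```

With `top_p=0.8`, the three highest-ranked tokens (cumulative probability 0.85) pass the threshold, so the two least probable tokens are excluded before sampling.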