Mistral Cloud Chat Model node
Use the Mistral Cloud Chat Model node to combine Mistral Cloud's chat models with conversational agents.
On this page, you'll find the node parameters for the Mistral Cloud Chat Model node, and links to more resources.
Credentials: Refer to the Mistral Cloud credentials documentation for guidance on setting up authentication for this node.
Node parameters
- Model: Select the model to use to generate the completion. n8n dynamically loads models from Mistral Cloud and you'll only see the models available to your account.
Node options
- Maximum Number of Tokens: Enter the maximum number of tokens the model may generate, which caps the length of the completion.
- Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature produces more varied output, but increases the risk of hallucinations.
- Timeout: Enter the maximum request time in milliseconds.
- Max Retries: Enter the maximum number of times to retry a request.
- Top P: Use this option to set the cumulative probability threshold for sampling: the model only considers the most likely tokens whose combined probability reaches this value. Use a lower value to ignore less probable options.
- Enable Safe Mode: Enable safe mode by injecting a safety prompt at the beginning of the request. This helps prevent the model from generating offensive content.
- Random Seed: Enter a seed to use for random sampling. When set, repeated calls with the same parameters produce deterministic results.
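The options above correspond to parameters of Mistral's chat completions API. As a minimal sketch (parameter names follow Mistral's public API; how n8n assembles the request internally is an assumption), the mapping from node options to a request body might look like this:

```python
# Sketch: build a Mistral chat completions request body from node-option
# values. Parameter names (max_tokens, temperature, top_p, random_seed,
# safe_prompt) follow Mistral's public API; the exact payload n8n sends
# is an assumption for illustration.

def build_request_body(model, prompt, *, max_tokens=None, temperature=None,
                       top_p=None, random_seed=None, safe_prompt=False):
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "safe_prompt": safe_prompt,  # Enable Safe Mode
    }
    # Only include optional sampling controls when explicitly set,
    # so the API falls back to its defaults otherwise.
    if max_tokens is not None:
        body["max_tokens"] = max_tokens      # Maximum Number of Tokens
    if temperature is not None:
        body["temperature"] = temperature    # Sampling Temperature
    if top_p is not None:
        body["top_p"] = top_p                # Top P
    if random_seed is not None:
        body["random_seed"] = random_seed    # Random Seed
    return body

body = build_request_body("mistral-small-latest", "Hello!",
                          max_tokens=256, temperature=0.7, random_seed=42)
```

Note that Timeout and Max Retries are client-side settings governing the HTTP request itself rather than parameters sent to the model.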
Related resources
Refer to LangChain's Mistral documentation for more information about the service.