Azure OpenAI Chat Model node

Use the Azure OpenAI Chat Model node to connect conversational agents to OpenAI's chat models hosted on Azure OpenAI.

On this page, you'll find the node parameters for the Azure OpenAI Chat Model node, and links to more resources.

Credentials: Refer to the Azure OpenAI credentials documentation for authentication information for this node.

Node parameters

  • Model: Select the model to use to generate the completion.

Node options

  • Frequency Penalty: Use this option to control how likely the model is to repeat itself. Higher values reduce the chance of repetition.
  • Maximum Number of Tokens: Enter the maximum number of tokens to generate, which sets the maximum length of the completion.
  • Response Format: Choose Text or JSON. JSON ensures the model returns valid JSON.
  • Presence Penalty: Use this option to control how likely the model is to introduce new topics. Higher values increase the chance of the model moving on to new topics.
  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Timeout: Enter the maximum request time in milliseconds.
  • Max Retries: Enter the maximum number of times to retry a request.
  • Top P: Use this option to set the cumulative probability mass the model samples from (nucleus sampling). Use a lower value to ignore less probable options.
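
The options above map onto standard chat-completion request settings. The following sketch is a rough illustration only, not n8n's internal implementation: it assumes the @langchain/openai package, and the resource name, deployment name, and API version shown are placeholders you would replace with your own Azure OpenAI values.

```typescript
import { AzureChatOpenAI } from "@langchain/openai";

// Rough mapping of the node's options onto a LangChain Azure OpenAI client.
// All Azure identifiers below are placeholders.
const model = new AzureChatOpenAI({
  azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY, // from your credentials
  azureOpenAIApiInstanceName: "my-resource",            // placeholder resource name
  azureOpenAIApiDeploymentName: "my-gpt-deployment",    // Model
  azureOpenAIApiVersion: "2024-02-01",                  // placeholder API version
  temperature: 0.7,     // Sampling Temperature
  maxTokens: 1024,      // Maximum Number of Tokens
  topP: 1,              // Top P
  frequencyPenalty: 0,  // Frequency Penalty
  presencePenalty: 0,   // Presence Penalty
  timeout: 60_000,      // Timeout (milliseconds)
  maxRetries: 2,        // Max Retries
});

// Example call: send a single prompt and print the model's reply.
const reply = await model.invoke("Summarize this conversation in one sentence.");
console.log(reply.content);
```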

Proxy limitations

This node doesn't support the NO_PROXY environment variable.

Related resources

Refer to LangChain's Azure OpenAI documentation for more information about the service.