Ollama Chat Model node
The Ollama Chat Model node allows you to use local Llama 2 models with conversational agents.
On this page, you'll find the node parameters for the Ollama Chat Model node, and links to more resources.
Credentials: You can find authentication information for this node in the Ollama credentials documentation.
Node parameters
- Model: Select the model that generates the completion. Choose from:
    - Llama2
    - Llama2 13B
    - Llama2 70B
    - Llama2 Uncensored
Refer to the Ollama Models Library documentation for more information about available models. For a sketch of what a direct request to one of these models looks like, see the example below.
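For context, the node ultimately talks to a local Ollama server. The following Python sketch shows what a direct chat request to one of the models above could look like, assuming Ollama is running on its default port (11434) and the `llama2` model has already been pulled; the `requests` library is used purely for illustration, and this is not the node's internal implementation.

```python
import requests

# Assumes a local Ollama server on the default port and that
# the "llama2" model has been pulled (`ollama pull llama2`).
response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",
        "messages": [
            {"role": "user", "content": "Hello, what can you do?"},
        ],
        "stream": False,  # return a single JSON response instead of a stream
    },
)
response.raise_for_status()
print(response.json()["message"]["content"])
```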
Node options
- Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature produces more diverse output, but increases the risk of hallucinations.
- Top K: Enter the number of token choices the model considers when generating the next token.
- Top P: Use this option to set the cumulative probability threshold for token selection (nucleus sampling). Use a lower value to ignore less probable options.

The example below shows how these options map onto a direct request to a local Ollama server.
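This Python sketch passes the three options in the `options` object of an Ollama chat request. The parameter names (`temperature`, `top_k`, `top_p`) follow Ollama's API; the values are arbitrary illustrations, not recommendations, and this is not how the node is implemented internally.

```python
import requests

response = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama2",
        "messages": [
            {"role": "user", "content": "Explain nucleus sampling in one sentence."},
        ],
        "stream": False,
        "options": {
            "temperature": 0.7,  # higher values sample more diversely
            "top_k": 40,         # consider only the 40 most likely next tokens
            "top_p": 0.9,        # keep tokens within 90% cumulative probability
        },
    },
)
response.raise_for_status()
print(response.json()["message"]["content"])
```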
Related resources
Refer to LangChain's Ollama Chat Model documentation for more information about the service.
Common issues
For common questions or issues and suggested solutions, refer to Common issues.