ai_provider_ollama
Looking for a powerful server to run your local AI models? Ollama is a perfect choice, designed to operate in a headless setup. Integrated with the AI module, Ollama enables efficient management and execution of AI models on your own infrastructure, giving you full control without the need for a graphical interface.
Features
Ollama acts as a robust server solution for running local models, perfect for developers who need headless operations. Integrated with the AI module, it allows you to run and scale AI models without a GUI, making it ideal for production environments where efficiency and control are key.
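Because Ollama runs entirely headless, everything the AI module needs is exposed over the server's HTTP API. As a rough illustration, the sketch below sends a single prompt to that API directly; it assumes the defaults of a local install (host localhost, port 11434) and a model name such as llama3 that you have already pulled, so adjust those values to your own setup.

    # Minimal sketch: query a headless Ollama server over its HTTP API.
    # Assumptions: the server listens on localhost:11434 (Ollama's default)
    # and a model named "llama3" has already been pulled on that server.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"  # adjust to your server's address

    payload = json.dumps({
        "model": "llama3",
        "prompt": "Say hello in one short sentence.",
        "stream": False,  # ask for one JSON response instead of a stream
    }).encode("utf-8")

    request = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    print(result["response"])  # the generated text

If this call succeeds from the machine hosting your Drupal site, the AI module will be able to reach the same server once the provider is configured.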
Post-Installation
- Set up your API key using the Key module, filled in with the details of your local Ollama server.
- Visit /admin/config/ai/providers/ollama and set up your server IP.
- Once configured, you can start running models on your headless Ollama server; a quick connectivity check is sketched below.
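Before relying on the provider, it can help to confirm that the address you entered at /admin/config/ai/providers/ollama actually reaches the Ollama server. A small sketch, assuming the same default host and port as above, lists the models the server has available locally.

    # Minimal sketch: verify the Ollama server is reachable and list its models.
    # Assumption: the address matches what you entered in the provider settings
    # (default http://localhost:11434).
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"

    with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as response:
        tags = json.loads(response.read())

    # Each entry describes a locally pulled model the provider can run.
    for model in tags.get("models", []):
        print(model["name"])

If no models are printed, pull one on the server first (for example with ollama pull) before selecting it in the AI module.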
Additional Requirements
The AI module.
Supporting this Module
Support the AI module on OpenCollective.