
Integrate local AI models with Drupal's AI module using Docker Model Runner. Test and build AI solutions without API costs, perfect for local development and experimentation.

The Docker Model Runner Provider module is your gateway to leveraging AI models directly within your Drupal projects, all from your local machine. Forget costly external APIs during development; this module connects Drupal's AI capabilities to Docker Model Runner, creating a seamless, cost-free environment for experimentation and rapid prototyping.

It provides a local, efficient, and budget-friendly way to integrate AI chat functionalities into Drupal. This empowers developers to build and test AI features rapidly, turning your local setup into a powerful AI sandbox.

Features

  • Operation Types:
    • Chat
    • Embeddings
  • Local AI for Free: Develop AI features without external API costs.
  • Rapid Experimentation: Test different models and prompts quickly.
  • Developer Friendly: Aligns with Docker's "Models" feature, bringing AI control to your machine.

 

1. Preparing Docker Model Runner Locally

Docker Model Runner (DMR) is a new feature within Docker Desktop that allows you to run AI models directly on your machine, leveraging your local hardware.

Availability: Docker Model Runner is currently in Beta.

Requires: Docker Engine, Docker Desktop 4.41+ (Windows), or Docker Desktop 4.40+ (macOS)

For: Docker Desktop for Mac with Apple Silicon or Windows with NVIDIA GPUs.

Verifying Docker Model Runner Installation

Once Docker Desktop is installed and running, you can verify if the Model Runner feature is active and ready to use via your terminal.

  1. Open your terminal.
  2. Run the command: 

    docker model status
  3. Expected Output: You should see a message indicating that Docker Model Runner is running, along with details about the inference engine (e.g., llama.cpp) and its status.



 

2. Managing AI Models with Docker Model Runner

DMR uses a simple command-line interface to manage your local AI models. Models are pulled from the Docker Hub Generative AI Catalog.

Listing Downloaded Models

To see which AI models you currently have downloaded and available locally:

  1. Run the command: 

    docker model list
  2. Expected Output: This will display a table of your downloaded models.



Pulling a New AI Model

To download a new AI model from Docker Hub:

  1. Visit the Docker Hub Generative AI Catalog to discover available models and their exact names (e.g., ai/qwen2.5). 
  2. Use the docker model pull command with the model's full name, including the variant tag:

    docker model pull ai/qwen2.5:0.5B-F16

    This command will download the model to your local machine. The download time will vary based on the model's size and your internet speed.
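After the pull finishes, you can give the model a quick one-shot prompt straight from the terminal. A minimal sketch, assuming the ai/qwen2.5:0.5B-F16 variant pulled above and a POSIX shell; the fallback branches only keep the script from failing on machines where Docker or Model Runner is unavailable:

```shell
#!/bin/sh
# Smoke-test a freshly pulled model with a one-shot prompt.
# MODEL matches the tag pulled above; change it if you pulled another variant.
MODEL="ai/qwen2.5:0.5B-F16"

if command -v docker >/dev/null 2>&1; then
  # `docker model run` sends the prompt and prints the model's reply.
  docker model run "$MODEL" "Say hello in one short sentence." \
    || echo "Model Runner is not available; check 'docker model status'."
else
  echo "docker CLI not found; install Docker Desktop 4.40+ first."
fi
```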

Optional: Interacting with Models via Docker Desktop UI (Chat Interface)

Docker Desktop provides a built-in chat interface for direct interaction with your downloaded models, which is excellent for quick testing and experimentation.

  1. Open your Docker Desktop application.
  2. In the left-hand navigation menu, click on "Models". This opens the "Models" dashboard, where you will see a list of all your downloaded and available AI models.
  3. Select any model from the list by clicking on it.

    Docker Desktop will open a chat interface directly within the application, allowing you to send prompts and receive responses from that specific model.
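Beyond the Desktop UI, Model Runner also exposes an OpenAI-compatible REST API, which is what provider modules like this one talk to. The sketch below assumes you have enabled host TCP access with `docker desktop enable model-runner --tcp 12434`; the base URL and model tag are assumptions to adapt to your setup:

```shell
#!/bin/sh
# Call Model Runner's OpenAI-compatible chat endpoint from the host.
# Assumes host TCP access: docker desktop enable model-runner --tcp 12434
BASE_URL="http://localhost:12434/engines/v1"
MODEL="ai/qwen2.5:0.5B-F16"

# Build an OpenAI-style chat completion request body.
PAYLOAD=$(cat <<EOF
{
  "model": "$MODEL",
  "messages": [{"role": "user", "content": "What is Drupal?"}]
}
EOF
)

# Only fire the request if the endpoint is actually reachable.
if curl -fsS --max-time 2 "$BASE_URL/models" >/dev/null 2>&1; then
  curl -fsS "$BASE_URL/chat/completions" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "Model Runner endpoint not reachable at $BASE_URL"
fi
```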

 

3. Integrating with Drupal

Now that Docker Model Runner is ready, let's connect it to your Drupal site using the Drupal AI module and the ai_provider_docker module.

Drupal AI Module Requirements

Your Drupal site needs the Drupal AI module installed. It is available for Drupal 10.1+ and acts as the central hub for all AI integrations in Drupal.

Installing the Docker Model Runner Provider Module

The ai_provider_docker module is your bridge to Docker Model Runner.

composer require drupal/ai_provider_docker
drush en ai_provider_docker
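To confirm the module landed, you can query it via Drush. A small sketch, assuming Drush is on your PATH and you run it from the Drupal project root:

```shell
#!/bin/sh
# Verify that the provider module is present and enabled.
MODULE="ai_provider_docker"

if command -v drush >/dev/null 2>&1; then
  # pm:list prints the module's package, display name, status, and version.
  drush pm:list --filter="$MODULE" \
    || echo "Drush could not query $MODULE; is this a Drupal root?"
else
  echo "drush not found on PATH; try 'vendor/bin/drush' instead."
fi
```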

If you have not installed the AI module yet, the previous step installs it automatically, since it is a dependency of ai_provider_docker.

Configuring the Docker Model Runner Provider

After enabling the module, you'll configure it within the Drupal AI module's settings.

  1. Log in to your Drupal site as an administrator.

  2. Visit /admin/config/ai/providers.

  3. Select "Docker Model Runner Configuration".



  4. Fill out the configuration form:

     
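The exact form fields depend on the module version, but the key setting is the Model Runner endpoint URL. As an assumption based on Docker's documented defaults: a containerized Drupal (DDEV, Lando, plain Compose) reaches Model Runner via an internal hostname, while a host-based Drupal uses the TCP port enabled earlier. A quick reachability check for both candidates:

```shell
#!/bin/sh
# Candidate Model Runner base URLs (assumptions; adjust to your setup):
#  - from inside a container: http://model-runner.docker.internal/engines/v1
#  - from the host (TCP enabled): http://localhost:12434/engines/v1
for BASE_URL in \
  "http://model-runner.docker.internal/engines/v1" \
  "http://localhost:12434/engines/v1"
do
  if curl -fsS --max-time 2 "$BASE_URL/models" >/dev/null 2>&1; then
    echo "reachable: $BASE_URL"
  else
    echo "unreachable: $BASE_URL"
  fi
done
```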

Creating a New AI Assistant

An AI Assistant in Drupal uses a configured provider to perform specific tasks.

  1. After saving your provider, go to the "Assistants" section of the AI configuration page (/admin/config/ai/ai-assistant).

  2. Click the "Add assistant" button.

  3. Fill out the assistant configuration:

    • Label: Give your assistant a name (e.g., "Local Chat Assistant").
    • AI Provider: Select "Docker Model Runner".
    • Model: From the dropdown, select the specific model you pulled earlier using docker model pull (e.g., ai/qwen2.5). This dropdown dynamically lists models available via your configured Docker Model Runner endpoint.
    • System Instructions: Provide initial instructions for your AI (e.g., "You are a helpful Drupal expert.").
    • Configure other parameters as needed.
  4. Click "Save".



 

 

Community Documentation

For Docker Model Runner setup, refer to the official Docker AI documentation.

For the Drupal AI (Artificial Intelligence) module, see https://project.pages.drupalcode.org/ai/.


Releases

Version     Type   Release date
1.0.x-dev   Dev    Jun 19, 2025