# Serve flows in a long-lived Docker container
The `.serve` method allows you to easily elevate a flow to a deployment, listening for scheduled work to execute as a local process.

However, this “local” process does not need to be on your local machine. In this example, we show how to run a flow in a Docker container on your local machine, but you could use a Docker container on any machine that has Docker installed.
## Overview
In this example, you will set up:

- a simple flow that retrieves the number of stars for some GitHub repositories
- a `Dockerfile` that packages up your flow code and dependencies into a container image
## Writing the flow
Say we have a flow that retrieves the number of stars for a GitHub repository:
```python
import httpx
from prefect import flow, task


@task(log_prints=True)
def get_stars_for_repo(repo: str) -> int:
    """Fetch the stargazer count for a single repository from the GitHub API."""
    response = httpx.Client().get(f"https://api.github.com/repos/{repo}")
    stargazer_count = response.json()["stargazers_count"]
    print(f"{repo} has {stargazer_count} stars")
    return stargazer_count


@flow
def retrieve_github_stars(repos: list[str]) -> list[int]:
    # Map the task over each repository and collect the results
    return get_stars_for_repo.map(repos).result()


if __name__ == "__main__":
    # Serve the flow as a long-lived deployment with default parameters
    retrieve_github_stars.serve(
        parameters={
            "repos": ["python/cpython", "prefectHQ/prefect"],
        }
    )
```
We can serve this flow on our local machine using:
```bash
python serve_retrieve_github_stars.py
```
… but how can we package this up so we can run it on other machines?
## Writing the Dockerfile
Assuming we have our Python requirements defined in a `requirements.txt` file:

```
prefect
```

(`httpx` is installed as a dependency of `prefect`, so it doesn't need to be listed separately.)
and this directory structure:

```
├── Dockerfile
├── requirements.txt
└── serve_retrieve_github_stars.py
```
We can package up our flow into a Docker image using a `Dockerfile`. There are two common options: installing dependencies with `pip`, or with `uv`.
With `pip`:

```dockerfile
# Use an official Python runtime as the base image
FROM python:3.12-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt .

# Install the required packages
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY serve_retrieve_github_stars.py .

# Set the command to run your application
CMD ["python", "serve_retrieve_github_stars.py"]
```
With `uv`:

```dockerfile
# Use the official Python image with uv pre-installed
FROM ghcr.io/astral-sh/uv:python3.12-bookworm-slim

# Set the working directory
WORKDIR /app

# Set environment variables
ENV UV_SYSTEM_PYTHON=1
ENV PATH="/root/.local/bin:$PATH"

# Copy only the requirements file first to leverage Docker cache
COPY requirements.txt .

# Install dependencies
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install -r requirements.txt

# Copy the rest of the application code
COPY serve_retrieve_github_stars.py .

# Set the entrypoint
ENTRYPOINT ["python", "serve_retrieve_github_stars.py"]
```
You can learn more about using `uv` in the Astral documentation.
## Build and run the container
Now that we have a flow and a Dockerfile, we can build the image from the Dockerfile and run a container from this image.
### Build (and push) the image
We can build the image with the `docker build` command and the `-t` flag to specify a name for the image:

```bash
docker build -t my-flow-image .
```
At this point, you may also want to push the image to a container registry such as Docker Hub or GitHub Container Registry. Please refer to each registry’s respective documentation for details on authentication and registry naming conventions.
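For example, pushing to Docker Hub might look like the following sketch (the `your-username` account and `latest` tag are placeholders, not values from this guide):

```bash
# Tag the local image with a registry-qualified name (your-username is a placeholder)
docker tag my-flow-image your-username/my-flow-image:latest

# Authenticate with the registry, then push the tagged image
docker login
docker push your-username/my-flow-image:latest
```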
### Run the container
You’ll likely want to inject some environment variables into your container, so let’s define a `.env` file:

```
PREFECT_API_URL=<YOUR-API-URL>
PREFECT_API_KEY=<YOUR-API-KEY-IF-USING-PREFECT-CLOUD>
```

`PREFECT_API_URL` tells the served process which Prefect API to report to, and `PREFECT_API_KEY` is only needed if you’re connecting to Prefect Cloud.
Then, run the container in detached mode (in other words, in the background):
```bash
docker run -d --env-file .env my-flow-image
```
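If you’d rather not create a `.env` file, the same variables can be passed inline with `-e` flags; adding `--name` (the name `my-flow-container` below is just an example) makes the container easier to reference later:

```bash
# Equivalent run with inline environment variables and an explicit container name
docker run -d --name my-flow-container \
  -e PREFECT_API_URL=<YOUR-API-URL> \
  -e PREFECT_API_KEY=<YOUR-API-KEY-IF-USING-PREFECT-CLOUD> \
  my-flow-image
```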
### Verify the container is running
```bash
docker ps | grep my-flow-image
```
You should see your container in the list of running containers. Note the `CONTAINER ID`, as we’ll need it to view logs.
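Alternatively, `docker ps` can do the filtering itself; the `ancestor` filter matches containers created from a given image:

```bash
# List only running containers created from the my-flow-image image
docker ps --filter ancestor=my-flow-image
```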
### View logs
```bash
docker logs <CONTAINER-ID>
```
You should see logs from your newly served process, with the link to your deployment in the UI.
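To stream new log lines as the flow runs, follow the logs instead:

```bash
# -f follows the log output; interrupt with Ctrl-C to stop watching
docker logs -f <CONTAINER-ID>
```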
### Stop the container
```bash
docker stop <CONTAINER-ID>
```
## Next steps
Congratulations! You have packaged and served a flow in a long-lived Docker container.
You may now easily deploy this container to other infrastructure, such as:

- managed Kubernetes (for example: GKE, EKS, or AKS; see the sketch below)
- anywhere else you can run a Docker container!
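As a minimal sketch of the Kubernetes route, assuming you’ve pushed the image to a registry your cluster can pull from (the `your-username/my-flow-image` reference and `my-served-flow` name below are placeholders):

```bash
# Create a single-replica Deployment that runs the served flow
kubectl create deployment my-served-flow --image=your-username/my-flow-image

# Inject the Prefect connection settings as environment variables
kubectl set env deployment/my-served-flow \
  PREFECT_API_URL=<YOUR-API-URL> \
  PREFECT_API_KEY=<YOUR-API-KEY-IF-USING-PREFECT-CLOUD>
```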