Deploy Apps Using Docker Compose on Koyeb
Introduction
Docker Compose is a popular tool for defining and managing multi-container applications. It coordinates shared resources and interfaces between discrete components so that they can be deployed, updated, and supervised as a unit.
In this guide, we'll walk through how to deploy a Docker Compose application on Koyeb. We'll deploy a GitHub repository that contains a Docker Compose configuration file by building an image with a special Dockerfile based on a Docker Compose image. The build process will copy the repository contents to an image with Docker Compose installed and then initialize the applications defined within using the conventional Docker Compose mechanisms.
You can deploy and preview the example application from this guide by clicking the Deploy to Koyeb button below:
You can consult the repository on GitHub to find out more about the example application that this guide uses.
Requirements
To successfully follow and complete this guide, you need:
- A GitHub account to host your Docker Compose-based application.
- A Koyeb account to deploy and run the preview environments for each pull request.
Steps
To complete this guide and deploy a Docker Compose application, you'll need to follow these steps:
Fork the example Docker Compose repository
To demonstrate how to deploy projects that use Docker Compose on Koyeb, we will work with an example repository containing a simple Flask application. The project defines a Dockerfile to build the main application and includes a docker-compose.yaml file that defines how to run the application alongside a Redis instance to cache results.
To get started, log into GitHub and fork the example Docker Compose repository into your own account. If you'd like to deploy the application without any changes, you can skip this step and deploy from Koyeb's public repository in the next step instead.
The main application code is found in the app.py file, but for our purposes, the Docker-related files are more interesting. These three files define how to build and run the necessary images and containers: Dockerfile, docker-compose.yaml, and Dockerfile.koyeb.
The Dockerfile is entirely standard and doesn't require any special design for Koyeb:
```dockerfile
FROM python:3.7-alpine
WORKDIR /code
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
RUN apk add --no-cache gcc musl-dev linux-headers
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
EXPOSE 5000
COPY . .
CMD ["flask", "run"]
```
It packages up the application code into a container image and configures it to run on port 5000.
Likewise, the docker-compose.yaml file doesn't require any special consideration:
```yaml
services:
  web:
    build: .
    ports:
      - '${PORT}:5000'
  redis:
    image: 'redis:alpine'
```
The compose file builds the above Dockerfile automatically as needed. It defines a two-container setup: the web service, built from the Dockerfile above, and a Redis container. The web container's internal port 5000 is published on the port defined by the PORT environment variable.
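Compose substitutes `${PORT}` from the environment it runs in before creating the containers. The substitution can be approximated with Python's string.Template — an analogy for illustration, not Compose's actual implementation:

```python
import os
from string import Template

# Compose-style variable interpolation, approximated with string.Template:
# '${PORT}:5000' becomes '8000:5000' when PORT=8000 is set in the environment.
os.environ["PORT"] = "8000"  # hypothetical value; the platform supplies PORT at runtime
ports_entry = Template("${PORT}:5000").substitute(os.environ)
print(ports_entry)  # → 8000:5000
```

If PORT is unset, Compose (like Template.substitute) raises an error rather than falling back to a default, which is why the variable must be present in the runtime environment.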
The only file required to bridge the gap between a standard Docker Compose configuration and a Koyeb deployment is the Dockerfile.koyeb file:
```dockerfile
FROM koyeb/docker-compose
COPY . /app
```
This file is the primary driver of the Koyeb build process. It creates a new image based on the Koyeb Docker Compose image (check out the repository for the image for additional details). The only other instruction in this Dockerfile copies the entire repository to the /app directory within the image.
This works because the Koyeb Docker Compose image runs a Docker daemon that can be used to spawn containers. The image uses Docker Compose to read and apply the docker-compose.yaml file, which will build the main application image and then run a container from it in coordination with the Redis container, as outlined above.
Deploy the Docker Compose example application
Now that you understand how the Docker Compose example works, you can deploy it to Koyeb to validate that the process works as expected.
To get started, open the Koyeb control panel and complete the following steps:
- On the Overview tab, click Create Web Service.
- Select GitHub as the deployment method.
- In the GitHub repository list, either select your fork of the Docker Compose repository from your account's repository list or enter the public example repository in the Public GitHub repository field: https://github.com/koyeb/example-docker-compose.
- In the Builder section, select Dockerfile. Click the Override toggle to enable the Dockerfile location field. In the field, type Dockerfile.koyeb.
- Enable the Privileged toggle.
- Choose a name for your App and Service, for example koyeb-docker-compose, and click Deploy.
Koyeb will clone the GitHub repository and use the Dockerfile.koyeb file to build a new container image for the project. Once the build is complete, a container will be started from the image to run Docker Compose. The Docker Compose process works with a Docker daemon to build or pull the necessary images and start the associated containers.
Once the deployment is healthy, visit your Koyeb Service's subdomain (you can find this on your Service's detail page) to connect. It will have the following format:
https://<YOUR_APP_NAME>-<KOYEB_ORG_NAME>.koyeb.app
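When scripting checks against a deployment, the subdomain can be assembled programmatically from the format above. A small sketch — the app and org names below are placeholders, not values from this guide:

```python
def koyeb_service_url(app_name: str, org_name: str) -> str:
    """Build a Koyeb Service's public URL following the documented format."""
    return f"https://{app_name}-{org_name}.koyeb.app"

# Placeholder names for illustration:
print(koyeb_service_url("koyeb-docker-compose", "acme"))
# → https://koyeb-docker-compose-acme.koyeb.app
```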
You should see the example application's response:
Hello World! I have been seen 1 times.
The number in the counter will increment as you refresh or visit the page from other devices. The persistence is provided by the Redis container, indicating that both components defined in the docker-compose.yaml file are operating as expected.
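The visit-counting behavior can be sketched without Flask or Redis using only the standard library. This is a stand-alone approximation of the example's logic, not the actual app.py (which uses Flask and stores the counter in Redis); here a module-level variable plays the role of the cache:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = 0
lock = threading.Lock()

class CounterHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        global hits
        # Increment the shared counter under a lock, standing in for Redis INCR.
        with lock:
            hits += 1
            count = hits
        body = f"Hello World! I have been seen {count} times.".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Start the counter server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), CounterHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the real example keeps the counter in Redis rather than in process memory, it survives restarts of the web container — which is what the incrementing count across visits demonstrates.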
Conclusion
In this tutorial, we demonstrated how to use Koyeb's Docker Compose image to bootstrap projects that rely on Docker Compose to orchestrate containers. With a two-line Dockerfile.koyeb file, the Koyeb build process can build a container that runs Docker Compose for your applications, building the images and spawning the containers defined by your configuration.
This method of running Docker Compose-oriented workloads makes it straightforward to get existing projects up-and-running quickly without much modification to your existing repositories. You can use your existing orchestration tool to get started and then migrate to running independent containers on the platform when you want additional control and performance.