Introduction
Modern software development and deployment rely heavily on containerization technologies to streamline workflows and enhance scalability. Docker Compose takes containerization a step further by enabling developers to define and manage multi-container applications. If you’re looking for a comprehensive guide on how to install and use Docker Compose on CentOS 7, you’re in the right place. This article will walk you through the step-by-step process of setting up Docker Compose and using it to simplify application deployment.
Table of Contents
- Introduction
- Docker and Docker Compose Concepts
- Docker Images
- Communication Between Docker Images
- Prerequisites
- Step 1: Update Packages
- Step 2: Install Docker
- Step 3: Download Docker Compose
- Step 4: Set Executable Permissions
- Step 5: Verify Docker Compose Installation
- Step 6: Running a Container with Docker Compose
- Step 7: Learning Docker Compose Commands
- Step 8: Updating, Rebuilding and Accessing the Docker Container
- Step 9: Scaling Services
- Conclusion
In this tutorial, you will install the latest version of Docker Compose to help you manage multi-container applications, and will explore the basic commands of the software.
Docker and Docker Compose Concepts
Using Docker Compose brings several different Docker concepts together, so before we get started let’s take a minute to review them. If you’re already familiar with Docker concepts like volumes, links, and port forwarding, you may want to skip ahead to the next section.
Docker Images
Each Docker container is a local instance of a Docker image. You can think of a Docker image as a complete Linux installation. Usually a minimal installation contains only the bare minimum of packages needed to run the image. These images use the kernel of the host system, but since they are running inside a Docker container and only see their own file system, it’s perfectly possible to run a distribution like CentOS on an Ubuntu host (or vice-versa).
Most Docker images are distributed via the Docker Hub, which is maintained by the Docker team. Most popular open source projects have a corresponding image uploaded to the Docker Registry, which you can use to deploy the software. When possible, it’s best to grab “official” images, since they are guaranteed by the Docker team to follow Docker best practices.
Communication Between Docker Images
Docker containers are isolated from the host machine, meaning that by default the host machine has no access to the file system inside the Docker container, nor any means of communicating with it via the network. This can make configuring and working with the image running inside a Docker container difficult.
Docker has three primary ways to work around this. The first and most common is to set environment variables that will be available inside the Docker container. The code running inside the container then checks the values of these environment variables on startup and uses them to configure itself properly.
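As a rough illustration (this service is not part of this tutorial’s example application, and the variable value is a placeholder), a Compose file entry that sets an environment variable might look like this:
db:
  image: mariadb
  environment:
    # MYSQL_ROOT_PASSWORD is read by the official mariadb image on first startup
    MYSQL_ROOT_PASSWORD: example-password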
Another commonly used method is a Docker data volume. Docker volumes come in two flavors — internal and shared.
Declaring an internal volume means that the data in the folder you specify for a particular Docker container will persist even after the container is removed. For example, if you wanted to make sure your log files persisted, you might declare an internal /var/log volume.
A shared volume maps a folder inside a Docker container onto a folder on the host machine. This allows you to easily share files between the Docker container and the host machine.
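To make this concrete, here is a hedged sketch of a service that declares both kinds of volumes (the service name and host folder are illustrative only):
app:
  image: nginx
  volumes:
    # internal volume: keeps the container's log folder when the container is removed
    - /var/log
    # shared volume: maps ./html on the host into the container's default web root
    - ./html:/usr/share/nginx/html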
The third way to communicate with a Docker container is via the network. Docker allows communication between different Docker containers via links, as well as port forwarding, allowing you to forward ports from inside the Docker container to ports on the host server. For example, you can create a link to allow your WordPress and MariaDB Docker containers to talk to each other and use port-forwarding to expose WordPress to the outside world so that users can connect to it.
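A minimal sketch of that WordPress and MariaDB setup might look like the following (the password is a placeholder, and you would adapt the published port to your needs):
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: example-password
wordpress:
  image: wordpress
  links:
    # makes the db container reachable from the wordpress container under the alias "mysql"
    - db:mysql
  ports:
    # forwards port 8080 on the host to port 80 inside the container
    - "8080:80"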
Prerequisites
Before diving into the installation and usage of Docker Compose, ensure you have:
- A CentOS 7 system with root or sudo privileges.
- Docker already installed on your system.
Step 1: Update Packages
Before any installation, ensure your system packages are up to date:
[samm@docker-ce ~]$ sudo yum update
[samm@docker-ce ~]$ sudo yum upgrade
Step 2: Install Docker
If Docker is not already installed, follow our guide How To Install Docker CE on CentOS 7 (linked at the end of this article).
Step 3: Download Docker Compose
Download the Docker Compose binary to your system:
Check the current release on the Docker Compose GitHub releases page and, if necessary, update the version number in the command below:
[samm@docker-ce ~]$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
Step 4: Set Executable Permissions
Make the Docker Compose binary executable:
[samm@docker-ce ~]$ sudo chmod +x /usr/local/bin/docker-compose
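Optionally, if your shell cannot find the docker-compose command after installation, you can create a symbolic link from the install location used above into /usr/bin:
[samm@docker-ce ~]$ sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose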
Step 5: Verify Docker Compose Installation
Then, verify that the installation was successful by checking the version:
[samm@docker-ce ~]$ docker-compose --version
This will print out the version you installed:
Output
docker-compose version 1.29.2, build 5becea4c
Now that you have Docker Compose installed, you’re ready to run a “Hello World” example.
Step 6: Running a Container with Docker Compose
Create a docker-compose.yml file in your project directory. This file defines how your multi-container application will be structured and configured. Refer to Docker Compose’s official documentation for syntax guidelines and available options.
First, create a directory for our YAML file:
[samm@docker-ce ~]$ sudo mkdir -p /opt/hello-world
Then change into the directory:
[samm@docker-ce ~]$ cd /opt/hello-world
[samm@docker-ce hello-world]$ vi docker-compose.yml
Enter insert mode, by pressing i, then put the following contents into the file:
my-test:
  image: hello-world
The first line will be part of the container name. The second line specifies which image to use to create the container. When you run the command docker-compose up it will look for a local image by the name specified, hello-world. Save and exit the file.
To look manually at images on your system, use the docker images command:
[samm@docker-ce hello-world]$ docker images
When there are no local images at all, only the column headings display:
Output
REPOSITORY TAG IMAGE ID CREATED SIZE
Now, while still in the /opt/hello-world directory, execute the following command to create the container:
[samm@docker-ce hello-world]$ docker-compose up
The first time we run the command, if there’s no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:
Output
Pulling my-test (hello-world:)...
latest: Pulling from library/hello-world
1b930d010525: Pull complete
. . .
After pulling the image, docker-compose creates a container, attaches, and runs the hello program, which in turn confirms that the installation appears to be working:
Output
. . .
Creating hello-world_my-test_1...
Attaching to hello-world_my-test_1
my-test_1 |
my-test_1 | Hello from Docker.
my-test_1 | This message shows that your installation appears to be working correctly.
my-test_1 |
. . .
It will then print an explanation of what it did:
Output
. . .
my-test_1 | To generate this message, Docker took the following steps:
my-test_1 | 1. The Docker client contacted the Docker daemon.
my-test_1 | 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
my-test_1 | (amd64)
my-test_1 | 3. The Docker daemon created a new container from that image which runs the
my-test_1 | executable that produces the output you are currently reading.
my-test_1 | 4. The Docker daemon streamed that output to the Docker client, which sent it
my-test_1 | to your terminal.
. . .
Docker containers only run as long as the command is active, so once hello finished running, the container stops. Consequently, when you look at active processes, the column headers will appear, but the hello-world container won’t be listed because it’s not running:
[samm@docker-ce hello-world]$ docker ps
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Use the -a flag to show all containers, not just the active ones:
[samm@docker-ce hello-world]$ docker ps -a
Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50a99a0beebd hello-world "/hello" 3 minutes ago Exited (0) 3 minutes ago hello-world_my-test_1
Now that you have tested out running a container, you can move on to exploring some of the basic Docker Compose commands.
Step 7: Learning Docker Compose Commands
To get you started with Docker Compose, this section will go over the general commands that the docker-compose tool supports.
The docker-compose command works on a per-directory basis. You can have multiple groups of Docker containers running on one machine — just make one directory for each group of containers and put one docker-compose.yml file in each directory.
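If you do need to run Compose from a different directory, the -f flag lets you point at a specific Compose file, for example:
docker-compose -f /opt/hello-world/docker-compose.yml ps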
So far you’ve been running docker-compose up in the foreground, where you can use CTRL-C to shut the container down. This allows debug messages to be displayed in the terminal window, but it isn’t ideal for production; there it is more robust to have docker-compose act more like a service. One simple way to do this is to add the -d option when you up your session:
docker-compose up -d
docker-compose will now fork to the background.
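Because the output is no longer attached to your terminal, you can inspect it with the logs subcommand (add -f to keep following new output):
docker-compose logs -f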
To show your group of Docker containers (both stopped and currently running), use the following command:
docker-compose ps -a
If a container is stopped, the State will be listed as Exited, as shown in the following example:
Output
Name Command State Ports
------------------------------------------------
hello-world_my-test_1 /hello Exit 0
A running container will show Up:
Output
Name Command State Ports
---------------------------------------------------------------
nginx_nginx_1 nginx -g daemon off; Up 443/tcp, 80/tcp
To stop all running Docker containers for an application group, issue the following command in the same directory as the docker-compose.yml file that you used to start the Docker group:
docker-compose stop
In some cases, Docker containers will store their old information in an internal volume. If you want to start from scratch you can use the rm command to fully delete all the containers that make up your container group:
docker-compose rm
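Note that docker-compose rm only removes stopped containers and asks for confirmation before doing so; if you also want the anonymous internal volumes attached to those containers removed, you can pass the -v flag:
docker-compose rm -v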
If you try any of these commands from a directory other than the directory that contains a Docker container and .yml file, it will return an error:
Output
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
This section has covered the basics of how to manipulate containers with Docker Compose. If you needed to gain greater control over your containers, you could access the filesystem of the Docker container and work from a command prompt inside your container, a process that is described in the next section.
Step 8: Updating, Rebuilding and Accessing the Docker Container
In order to work on the command prompt inside a container and access its filesystem, you can use the docker exec command.
The “Hello World” example exits after it runs, so to test out docker exec, start a container that will keep running. For the purposes of this tutorial, use the Nginx image from Docker Hub.
Create a new directory named nginx and move into it:
mkdir ~/nginx
cd ~/nginx
Next, make a docker-compose.yml file in your new directory and open it in a text editor:
vi docker-compose.yml
Next, add the following lines to the file:
nginx:
image: nginx
Save the file and exit. Start the Nginx container as a background process with the following command:
docker-compose up -d
Docker Compose will download the Nginx image and the container will start in the background.
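Note that this minimal Compose file does not publish any ports, so Nginx is only reachable from other containers. If you wanted to reach it from the host as well, one option (not required for the rest of this step) would be to add a ports mapping:
nginx:
  image: nginx
  ports:
    # forwards port 8080 on the host to port 80, where the nginx image listens
    - "8080:80"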
If you need to rebuild images, add the --build flag:
docker-compose up -d --build
Now you will need the CONTAINER ID for the container. List all of the containers that are running with the following command:
docker ps
You will see something similar to the following:
Output of `docker ps`
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b86b6699714c nginx "nginx -g 'daemon of…" 20 seconds ago Up 19 seconds 80/tcp nginx_nginx_1
If you wanted to make a change to the filesystem inside this container, you’d take its ID (in this example b86b6699714c) and use docker exec to start a shell inside the container:
docker exec -it b86b6699714c /bin/bash
The -t option allocates a terminal, and the -i option makes the session interactive. /bin/bash opens a bash shell inside the running container.
You will then see a bash prompt for the container similar to:
root@b86b6699714c:/#
Step 9: Scaling Services
Docker Compose allows you to scale your services easily. For instance, if you need multiple instances of a service, you can scale it with the --scale flag:
docker-compose up -d --scale service_name=desired_instances
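For example, using the nginx service from the previous step (which does not publish a fixed host port, so multiple instances will not conflict), you could start three containers for it:
docker-compose up -d --scale nginx=3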
Conclusion
You’ve successfully learned how to install and harness the power of Docker Compose on CentOS 7. With Docker Compose, you can define, manage, and scale multi-container applications with ease. This tool simplifies the deployment process, allowing you to focus on crafting robust applications. By following this comprehensive guide, you’ve gained the ability to create Docker Compose files, deploy services, scale instances, and update containers seamlessly. Incorporating Docker Compose into your workflow empowers you to achieve consistency, efficiency, and flexibility in managing complex applications. Start streamlining your application deployment process today with Docker Compose on CentOS 7.
Also Read Our Other Guides :
- Install and Configure Docker Swarm Mode on Centos 7
- How To Install Docker CE on Centos 7
- How To Install Docker CE on Rocky Linux 9
- How To Install and Use Docker CE on Ubuntu 22.04
Finally, you have now learned how to install and use Docker Compose on CentOS 7.