Docker is a container technology developed for Linux. However, if you are new to the world of software development or server computing, it can be difficult to wrap your head around what purpose Docker actually serves.
A good starting point is hypervisors like VirtualBox or VMware, which most of us have used at some point to run a different operating system (called the guest OS) on top of the operating system we are actually running (the host OS).
The job of the hypervisor is to imitate all the hardware resources like RAM, CPU, I/O, the network card and so on. The guest OS can make use of them without ever knowing that it is actually running inside another operating system.
While very useful, hypervisors have a number of shortcomings, some of which are easy to understand:
- Once you’ve allocated a guest OS some resources (e.g., 2 GB of RAM), the guest holds on to the entire allocation even when most of it is not actually being used.
- There’s a performance penalty for imitating hardware interfaces the way a hypervisor does. Things don’t run as fast as they would directly on the actual hardware.
- The hypervisor, or the host OS, has no idea about the processes that are running inside of a guest OS.
This is where Docker comes into the picture. Instead of trying to fool an entire operating system into thinking that it is running on actual hardware, we go a little higher up the software stack and fool the applications instead, making it appear to each of them as if it is running on its own operating system, pretty much all alone.
You can have multiple instances of Apache or Nginx running with various versions of PHP, in every combination possible, by providing a separate Docker container for each combination.
- Docker Image: A file which tells Docker what the contents of the container are going to be and how they are to be configured.
- Docker Container: A running instance created from a Docker image. You can spin up multiple containers from a given Docker Image.
To get started with the tutorial, you will want a VPS running Ubuntu, with root privileges and a public IP. Once you have that, installing Docker is as simple as typing the following commands:
$sudo apt update
$sudo apt install docker.io
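Once the installation finishes, a quick sanity check never hurts. Both commands below are standard parts of the Docker CLI; `docker info` talks to the daemon, so it will only succeed once the Docker service is actually running:

```shell
# Confirm the Docker client is installed and print its version
docker --version

# Confirm the Docker daemon is up and reachable
# (prepend sudo if your user is not in the "docker" group)
docker info
```

If you get tired of typing `sudo` before every Docker command, you can add your user to the `docker` group with `sudo usermod -aG docker $USER` and log in again.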
You may want to sign up at Docker Hub before going any further. It is not essential if you just want to pull Docker images from there to play around with. However, you will soon want to create your own Docker images, and Docker Hub is a good place to save your progress.
To log in to your Docker Hub account, just type in:
$docker login
It will then ask you for your username and password. Once that is done, you can start pulling container images from Docker Hub and spinning up containers with them.
First, let’s start a Hello World container, using the command:
$docker run hello-world
The message itself explains what goes on under the hood. To see the list of running containers, use the command:
$docker ps
If you are following along with this tutorial, you are not going to see anything interesting.
That is because our container did nothing interesting: it ran once and then exited. To see the list of all the containers on your system, stopped or running, run the following:
$docker ps -a
This will show you all the containers that reside on your system.
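The output looks something like this (the container IDs here are made up for illustration; yours will differ, and Docker assigns each container a random two-word name unless you pick one yourself):

```
CONTAINER ID   IMAGE         COMMAND    CREATED         STATUS                     PORTS   NAMES
a1b2c3d4e5f6   hello-world   "/hello"   2 minutes ago   Exited (0) 2 minutes ago           tiny_williams
f6e5d4c3b2a1   hello-world   "/hello"   5 minutes ago   Exited (0) 5 minutes ago           nostalgic_swanson
```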
If you run docker run hello-world multiple times, you will see a list of containers, each with a different ID and name.
This is important to understand, since you may accidentally end up creating so many containers from a Docker image that it might slow down your system or worse, make you lose track of your progress.
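If you do end up with a pile of leftover hello-world containers, you don't have to remove them one at a time. Both commands below are standard Docker CLI housekeeping tools for cleaning up in bulk:

```shell
# Remove all stopped containers in one go
docker container prune -f

# Or, more selectively: collect the IDs of exited containers
# and hand them to "docker rm" (xargs -r skips the call if the list is empty)
docker ps -aq -f status=exited | xargs -r docker rm
```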
To remove a container, you first need to stop it. For example, to stop the first two containers, I can use either their names or their IDs in the following command:
$docker stop tiny_williams nostalgic_swanson
If their status shows that the containers have already exited, stopping them was unnecessary; all you need to do is run:
$docker rm tiny_williams nostalgic_swanson
You can run docker ps -a to verify that the containers have been removed.
Note: The docker ps command is inspired by the Unix command ps, which shows the list of running processes on your system. That resemblance has led to the mistaken notion that each container can run only one process in it.
Similarly, to check the various Docker images on your system, we have the following command:
$docker images
To remove an image run:
$docker rmi image_id
Some useful containers
Let’s look at some interesting applications of Docker. To begin with, let’s spin up a web server:
$docker run -d -p 8080:80 httpd
Now there are a few things that need explaining. First is the -d flag, short for detached: it lets the newly spun-up container run quietly in the background. Second is the -p 8080:80 part, which tells your host operating system to forward all requests arriving on its port 8080 to the httpd container’s port 80.
You can then visit http://your-ip-address:8080 and see the message saying, “It works!”
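To serve your own pages instead of the default one, you can mount a local directory over the image's document root with the -v flag. The path /usr/local/apache2/htdocs/ is where the official httpd image keeps its pages; the local ./my-site directory and port 8090 (so we don't clash with the container already running on 8080) are just example choices:

```shell
# Create a tiny site to serve
mkdir -p my-site
echo '<h1>Hello from Docker</h1>' > my-site/index.html

# Mount it over the httpd image's document root and publish it on port 8090
docker run -d -p 8090:80 -v "$PWD/my-site":/usr/local/apache2/htdocs/ httpd
```

Visiting http://your-ip-address:8090 should now show your own page instead of “It works!”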
If you want to create more httpd containers, you can keep changing the incoming port number, e.g. 8081 and so on (just avoid port numbers below 1024, since those are privileged and reserved for well-known services). This entire rigmarole with port numbers is known as port forwarding and is a common practice even outside the world of containers.
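For instance, a quick shell loop can spin up several identical web servers side by side, each published on its own host port (the port numbers here are arbitrary examples):

```shell
# Start three httpd containers on host ports 8081-8083
for port in 8081 8082 8083; do
  docker run -d -p "$port":80 httpd
done

# All three should show up as running
docker ps
```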
If you have come this far, here’s a cool container that you may find extremely useful for testing or just playing around with new ideas.
$docker run -it ubuntu
This spins up an Ubuntu instance and attaches you to it, with root access and everything. It smells and feels like a complete, freshly installed operating system. You can install packages, test new software or upgrades, even run fully functioning websites in it. If someone tries to gain access to your system by exploiting some vulnerability in such a website, they will still find themselves trapped inside that container.
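A short session inside such a container might look like this (the hostname a1b2c3d4e5f6 in the prompt is the container ID, and is illustrative):

```
root@a1b2c3d4e5f6:/# apt update
root@a1b2c3d4e5f6:/# apt install -y curl
root@a1b2c3d4e5f6:/# exit
```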
If you have exited this new container and want to ‘log in’ again, just run:
$docker start container_name
$docker attach container_name
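Alternatively, the docker exec subcommand runs a command in an already-running container without tying up its main terminal. A sketch of the idea, assuming a long-lived container named scratchpad (the name, and keeping it alive with sleep infinity, are just example choices):

```shell
# Start a long-running Ubuntu container in the background
docker run -d --name scratchpad ubuntu sleep infinity

# Run a one-off command in it without attaching
docker exec scratchpad ls /

# Or open a full interactive shell (run this from a real terminal):
# docker exec -it scratchpad bash
```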
You can even explore other distributions like Fedora without actually installing the entire Fedora OS.
Run docker run --help to get a list of the different flags, including the ones we discussed here. Experiment with various combinations of them. For example, you can SSH directly into your container if you forward some available host port to port 22 on the guest.
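A sketch of that SSH idea follows. Note that the base ubuntu image ships without an SSH server, so you have to install, configure and start one inside the container yourself; port 2222 and the container name ssh-box are arbitrary example choices:

```shell
# Run an Ubuntu container with host port 2222 mapped to its port 22
docker run -d --name ssh-box -p 2222:22 ubuntu sleep infinity

# Inside the container you would then set up sshd, roughly:
#   docker exec -it ssh-box bash
#   apt update && apt install -y openssh-server
#   service ssh start
# After creating a user account to log in as, from any machine:
#   ssh -p 2222 someuser@your-ip-address
```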
The only limitation is the Linux kernel, which every container shares with the host no matter which distribution's image it runs. This is also the reason you can’t have Docker containers on Linux that behave like BSD operating systems, since those have a fundamentally different kernel.