Mimi Magusin

July 5, 2018

Three Days of Docker (3)

In February 2017 we taught an advanced session on Docker basics as part of our Acceleration Program of the Codaisseur Code Academy. Not much later, we restructured our advanced classes and this class was just hanging around, almost forgotten…

That’s why we decided to publish it here, for all to enjoy 👏.

🎯 Things you will learn about in this series:
✔️ Servers & Clusters of Servers
✔️ Services & other dependencies
✔️ Processes & Environments
✔️ Configuring applications
✔️ Building an app for production
✔️ Docker
✔️ Docker Compose

Yesterday, you learned about images and containers, and about how to set up PostgreSQL in a Docker container. Today, we’ll finish this series by interacting with daemonized containers and setting up a Dockerfile. Have fun!

Daemonize Containers

All fine and dandy that we can run PostgreSQL and Redis servers in containers, but it’s a bit annoying that they are running in our terminals.

Fortunately, we can daemonize containers with Docker’s -d flag. To run a postgres container in the background, we can run:

docker run --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres

Note that the above command gives an error again, because of the name conflict:

docker: Error response from daemon: Conflict. The name "/my-postgres" is already in use by container de8faab0b7f5253ccc19cf1b908d7e3af3d262e7e89d6e5c9df072d400d5f835.
You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.

The important part that mentions the solution is:

“You have to remove (or rename) that container to be able to reuse that name”
🎓 Let’s just pick a new name for the one we are starting now:
docker run --name daemonized-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
# 0f103fcae4e127601908285737d51b78e767a02a22323354f1b0140f06beb630

There you go. The docker ps command should now show that it is running:

docker ps
# CONTAINER ID   IMAGE      COMMAND                   CREATED          STATUS          PORTS      NAMES
# 0f103fcae4e1   postgres   "/docker-entrypoint.s"   28 seconds ago   Up 26 seconds   5432/tcp   daemonized-postgres

Interact with a daemon

Although the container is daemonized, you can still interact with it.

🎓 Attach to it, and ^C out of it to make it stop.
docker attach daemonized-postgres
# ^CLOG: received fast shutdown request
# LOG: aborting any active transactions
# LOG: autovacuum launcher shutting down
# LOG: shutting down
# LOG: database system is shut down

Stop a daemon

🎓 Start the daemonized-postgres container again, and find a way to stop it without attaching to it.
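
In case you get stuck: docker has start and stop subcommands that operate on a container by its name or ID, so one possible solution (a hint, not the only way) looks like this:

docker start daemonized-postgres
docker stop daemonized-postgres
# docker stop asks the container to shut down gracefully first,
# and kills it if it doesn't stop within a timeout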

Remove old containers

🎓 Run docker ps -a to see a list of all containers that docker is keeping around for you. Now clean up the containers by removing them by their NAME or ID to free up disk space.
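
If you want a starting point, a minimal sketch (assuming the container names used above) could look like this:

docker ps -a
# pick the NAMES or CONTAINER IDs from that list, then remove them:
docker rm my-postgres daemonized-postgres
# or, on recent Docker versions, remove all stopped containers in one go:
docker container prune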

Combine Containers

We are going to build a small stack that can run a Rails application, including:

  • A PostgreSQL server for our database
  • A Rails app for our Web App

The Rails App

For the Rails app, we are going to use Todo on Rails!

🎓 Get a fresh version of the Todo app:
1. Download the .zip archive.
2. Unpack the .zip and cd into the directory.
3. git init and make an initial commit with everything so far.
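
If you prefer a terminal sketch of those steps, it could look something like this (the archive URL and directory name below are placeholders, not the real ones; use the download link and folder you actually get):

curl -L -o todo.zip https://example.com/todo-on-rails.zip   # placeholder URL
unzip todo.zip
cd todo-on-rails                                            # placeholder directory name
git init
git add .
git commit -m "Initial commit"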

The Dockerfile

Now, to run our Todo app inside a Docker container, we need to create a special file with the name Dockerfile.

What makes Docker extra cool is that we can use other images as a starting point for our own image. In this case, we will use a Ruby image. This is basically an Alpine Linux image with the Ruby version we provide as a version tag:

FROM ruby:2.4.1-alpine

Apart from that, you will see different sections that:

  • RUN commands,
  • ADD files from the local filesystem to the image,
  • set a USER for the commands to run as,
  • provide ENV variables for the container, which can be overridden,
  • set the default CMD to run.

Usually, if you read the file and let your intuition guide you, it is fairly easy to understand. Otherwise, Google is your friend!

🎓 Create a Dockerfile inside the Todo app
# Dockerfile
FROM ruby:2.4.1-alpine
# Install system dependencies and remove the cache to free up space afterwards
RUN apk --update add --virtual build-dependencies build-base ruby-dev openssl-dev \
libxml2-dev libxslt-dev postgresql-dev postgresql-client libc-dev linux-headers \
nodejs tzdata bash && \
rm -rf /var/cache/apk/*
# Add the Gemfile and Gemfile.lock from our app
ADD Gemfile /app/
ADD Gemfile.lock /app/
# Install bundler and run bundle install to install the gems from
# the Gemfile
RUN gem install bundler && \
cd /app ; bundle install --without development test
# Add the rest of the app, change the owner to nobody instead of the default:
# root.
ADD . /app
RUN chown -R nobody:nogroup /app
# Switch to the nobody user from here on down
USER nobody
# Set some environment variables and their default values
# These can be overridden when we run the container
ENV PORT 3000
ENV RAILS_ENV production
ENV RAILS_LOG_TO_STDOUT true
ENV RAILS_SERVE_STATIC_FILES true
ENV SECRET_KEY_BASE=8ce4043f8c9a3434334544c5ec6d32eb73662c44a00a4963619e1c98b2e57b0cdb6c0973cc8a50b67b655aaa7054d9629e60614233d5f6a66697c29c41700948
# Set the working directory for the commands that we run inside containers
# from this image
WORKDIR /app
# Precompile assets
RUN cd /app ; rake assets:precompile
# Set the default command to run, if we don't provide a command when we run
# a container from this image
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

Now that you have a file with build instructions, build your image:

docker build -t codaisseur/todo .
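
Depending on your connection and machine, the first build can take a few minutes, because each instruction in the Dockerfile is built as a separate layer. Once it finishes, the image should show up in your local image list:

docker images codaisseur/todo
# the codaisseur/todo image should be listed here, tagged as latest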

Run your own container

Running the Rails container

Now that we have built it, we can run it, daemonized:

docker run --name todoapp -p 3000:3000 -d codaisseur/todo

Note that the -p switch binds the container's port 3000 to our local port 3000, so we can access it from our local browser on http://localhost:3000.
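
If you want a quick sanity check from the terminal first (optional), docker ps shows the port mapping, and curl can hit the app on that port:

docker ps
# the PORTS column for todoapp should read 0.0.0.0:3000->3000/tcp
curl -I http://localhost:3000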

When we visit the app, we will see something very disturbing: an error page.

Let’s have a look at the logs! Run:

docker logs todoapp

The important part being:

PG::ConnectionBad (could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
):

There we go. It can’t connect to a running PostgreSQL server.
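
A small tip before we fix this: docker logs prints the log lines once and exits; adding the -f flag keeps following the logs while you work on the containers, and ^C stops following without stopping the container:

docker logs -f todoapp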

Connect your container

🎓 Stop and remove the todo app container
🎓 Run a postgres container with the name todo-db (check the commands above to make sure it is daemonized!)
🎓 Connect your todo app to that container by setting an ENV variable called DATABASE_URL=postgres://postgres:todos@db:5432/todos and linking it to the todo-db container.
🎓 Run a new todo app in a container with --rm with the commands: rake db:create db:migrate db:seed (see the sketch below).
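
If you get stuck, here is one way those steps could look, a sketch rather than the definitive answer; it assumes the todo-db container is started with POSTGRES_PASSWORD=todos so that the password in the DATABASE_URL matches:

docker run --name todo-db -e POSTGRES_PASSWORD=todos -d postgres
docker run --name todoapp -p 3000:3000 \
  -e DATABASE_URL=postgres://postgres:todos@db:5432/todos \
  --link todo-db:db -d codaisseur/todo
docker run --rm \
  -e DATABASE_URL=postgres://postgres:todos@db:5432/todos \
  --link todo-db:db codaisseur/todo rake db:create db:migrate db:seed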

Orchestration

Now, when we visit our app on http://localhost:3000 it works!

To make things a little bit easier for us, we can combine the three commands we needed earlier to set this up into one file that we can pass to a management tool called docker-compose.

Create a file docker-compose.yml with the following content:

# docker-compose.yml
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://postgres:todos@db:5432/todos
    depends_on:
      - db

Now run this single command to build and run the app and postgres containers:

docker-compose up

And run this command to create, migrate, and seed the db:

docker-compose run web rake db:create db:migrate db:seed

That’s it! Now you have a working cluster of containers, without the price tag of two servers.
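
Just like with plain docker, you can run the whole compose stack in the background and tear it down again when you're done:

docker-compose up -d
# starts both services daemonized
docker-compose down
# stops and removes the containers (and the network docker-compose created for them)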

Compose your own

🎓 Use a repository (or your imagination) to link external services and your code together.

Security?

If you start a container, and leave it running in your terminal like this:

docker run --name my-postgres -e POSTGRES_PASSWORD=blah postgres

You can connect to it from another container if you link it:

docker run --rm -it --link my-postgres:db postgres bash
# NOTE: We're running these commands inside the docker container!
root@d975a6e757c4:/> ping -c1 db
PING db (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.190 ms
--- db ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.190/0.190/0.190/0.000 ms
root@d975a6e757c4:/> ping -c1 my-postgres
PING db (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.116 ms
--- db ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.116/0.116/0.116/0.000 ms

The thing is: By default in Docker, you can also connect to containers that you didn’t link specifically:

docker run --rm -it postgres bash
root@4b2cc0984581:/> ping -c1 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: icmp_seq=0 ttl=64 time=0.340 ms
--- 172.17.0.2 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.340/0.340/0.340/0.000 ms
root@4b2cc0984581:/>

How is that even possible? Well, the --link only makes the link explicit, but by default all containers are attached to the same bridge network and share the same IP address space, so they can just talk to each other. How can you tell?

> docker run --rm -it ubuntu:16.04 bash                                                                         
root@8505d2f6b911:/> apt-get update ; apt-get install nmap
#
# Output omitted for brevity
root@8505d2f6b911:/> nmap  172.17.0.0/32
Starting Nmap 7.01 ( https://nmap.org ) at 2017-02-23 11:03 UTC
Note: Host seems down. If it is really up, but blocking our ping probes, try -Pn
Nmap done: 1 IP address (0 hosts up) scanned in 0.56 seconds
root@8505d2f6b911:/> nmap 172.17.0.0/24 -p 5432
Starting Nmap 7.01 ( https://nmap.org ) at 2017-02-23 11:03 UTC
Nmap scan report for 172.17.0.1
Host is up (0.00011s latency).
PORT STATE SERVICE
5432/tcp closed postgresql
MAC Address: 02:42:37:21:66:9D (Unknown)
Nmap scan report for 172.17.0.2
Host is up (0.00010s latency).
PORT STATE SERVICE
5432/tcp open postgresql
MAC Address: 02:42:AC:11:00:02 (Unknown)
Nmap scan report for 8505d2f6b911 (172.17.0.3)
Host is up (0.000077s latency).
PORT STATE SERVICE
5432/tcp closed postgresql
Nmap done: 256 IP addresses (3 hosts up) scanned in 4.38 seconds

As you can see, you can scan the whole subnet (172.17.0.*) for running Postgres servers, or actually, for anything.
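
You don't even need nmap for this; Docker itself will happily tell you which containers are attached to the default bridge network and what their IP addresses are (the exact output depends on your Docker version):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-postgres
# 172.17.0.2
docker network inspect bridge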

Make sure that if you run containers for different clients with competing interests, you run them on different physical hosts. There are many ways in which Docker is not secure by default. For development that usually isn’t an issue; for some employers or bigger deployments it can be. If you’re responsible for security, or just want to read more about it, have a look at the security chapter of the book ‘Using Docker’ by A. Mouat.

Conclusion

I hope you enjoyed your three-day introduction to Docker! Did you get everything working? Do you have any questions left? Let us know! And if you didn’t do so yet, check out the first two parts here and here!


Three Days of Docker (3) was originally published in Codaisseur Academy on Medium.
