Using an ssh tunnel to connect to a MongoDB Atlas hosted replica set is not straightforward. There are multiple members in the replica set, access restrictions based on IP, and no SSH access to the Mongo instances themselves. Here’s an example project that uses a Docker container to connect to a remote, hosted MongoDB replica set through an ssh tunnel via a bounce box with an authorized IP. You’re kidding, right? Find the working project here.
Category Archives: Docker
list contents of all docker volumes
To list the contents of a docker named volume, run a temporary container and mount the volume into the container, then do a directory listing. Loop over all the volumes to see what each one holds.
~$ for i in `docker volume ls -q`; do echo "volume: ${i}"; docker run --rm -it -v ${i}:/vol alpine:latest ls /vol; echo; done
volume: 140f898b1c69b85585942aa7f25cf03eba6ac66125d4a122e2fe99455c4a1a3f

volume: 1fa7b49173076a3a1fdb07ea7ce65d7187ff80e8b1a56e2fa667ebbbc0543f3a
dump.rdb

volume: 5564a11a1945567ffcc231145c01c806afe13a02b3e0a548f1504a1cd36c9374
dump.rdb

volume: 6d0b313416430d2abc0c872b98fd4180bbda4d14560c0a5d98f534f33b792164
app              ib_buffer_pool      private_key.pem
auto.cnf         ib_logfile0         public_key.pem
ca-key.pem       ib_logfile1         server-cert.pem
ca.pem           ibdata1             server-key.pem
client-cert.pem  mysql               sys
client-key.pem   performance_schema

volume: 8b08da4a38ca8f5924b90220db8c84384d90fe331a953a5aaa1a1944d826fc68

volume: 976239a3528b8b0b074b6b7438552e1d22c4f069cf20582d2250fcc1c068dc4f
dump.rdb

volume: aac262a93155286f4c551271d8d2a70f81ed1ca4cd56925e94006248e458895e
dump.rdb

volume: b385ebee063b72350fbc1158788cbb43d7da9b37ec95196b74caa6b22b1c115b
dump.rdb

volume: c73f2001f829e5574bd4246b2ab7a261a3f4d9a7ef89997765d7bf43883e5c24
dump.rdb

volume: ce73f9f85c475b1fd9cf4fede20fd04250ee7702e83db67c29c7118055275c28
dump.rdb

volume: foovolume1

volume: efc8a8855ac2c13d83c23573aebfd53e15072ec68d23e2793262f662ea0ae308

volume: foovolume2
auto.cnf         ibdata1
ca-key.pem       mysql
ca.pem           performance_schema
client-cert.pem  private_key.pem
client-key.pem   public_key.pem
ib_buffer_pool   server-key.pem
ib_logfile0      sys
ib_logfile1

volume: foovolume3

volume: phpsockettest
php-fpm.sock

volume: foovolume4
auto.cnf         ib_logfile0         public_key.pem
ca-key.pem       ib_logfile1         server-cert.pem
ca.pem           ibdata1             server-key.pem
client-cert.pem  mysql               sys
client-key.pem   performance_schema
ib_buffer_pool   private_key.pem

volume: testvol
asdf   asdf1  asdf2
This installation has some test files, backing files from a few different MySQL databases, a Unix socket, Redis files, and empty volumes.
connect to ssh tunnel on Mac host from inside docker container
In some very edge use cases you may want to temporarily connect to a remote service on a different network from within a docker container. In this example I needed to provide a solution to connect from a docker container on a Mac laptop to a database hosted in a subnet far far away. Here’s a proof of concept.
set up the ssh tunnel from the Mac host:
ssh -L 56789:10.0.100.200:3306 fordodone@1.2.3.4
This starts an ssh tunnel that opens TCP port 56789 on localhost and forwards connections through the tunnel to port 3306 on the host 10.0.100.200 on the remote network.
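The -L argument packs three values together as local_port:remote_host:remote_port. A sketch spelling that out with the example's values (the variable names are just for illustration; the script only prints the assembled command):

```shell
# Anatomy of the -L forwarding spec in the command above.
LOCAL_PORT=56789            # TCP port ssh opens on the Mac host
REMOTE_HOST=10.0.100.200    # database host on the far subnet
REMOTE_PORT=3306            # database port on that host
BASTION=fordodone@1.2.3.4   # bounce box that can reach REMOTE_HOST

# Print the assembled command rather than running it:
echo "ssh -L ${LOCAL_PORT}:${REMOTE_HOST}:${REMOTE_PORT} ${BASTION}"
```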
start a container:
docker run --rm -it alpine:latest sh
This runs an interactive shell in a temporary docker container from the alpine:latest image.
inside the container add mysql client for testing:
/ # apk add --update --no-cache mysql-client
This fetches an up-to-date package index (without caching it) and installs mysql-client.
create a database connection:
/ # mysql -h docker.for.mac.localhost --port 56789 -udbuser -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 675073323
Server version: 5.6.27-log MySQL Community Server (GPL)
Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MySQL [(none)]>
MySQL [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| awesome_app |
| innodb |
| mysql |
| performance_schema |
+--------------------+
5 rows in set (0.08 sec)
MySQL [(none)]>
The magic here comes from the special docker.for.mac.localhost hostname. The internal Docker for Mac DNS resolver uses this entry to return the internal IP address of the Mac host. Tell the mysql client to use the forwarded TCP port (--port 56789), and the client inside the container connects through the ssh tunnel to the remote database.
using docker-compose to prototype against different databases
Greenfield projects come with the huge benefit of having no existing or legacy code or infrastructure to navigate when designing an application. In some ways, having a greenfield app land in your lap is the stuff a developer’s dreams are made of. But along with the amazing opportunity of a “start from scratch” project comes a higher creative burden: the goals of the final product dictate the software architecture, and in turn the systems infrastructure, both of which have yet to be conceived.
Many times this question (or one similarly themed) arises:
“What database is right for my application?”
Often there is a clear and straightforward answer to the question, but in some cases a savvy software architect might wish to prototype against various types of persistent data stores.
This docker-compose.yml has a Node.js container and four data store containers to play around with: MySQL, PostgreSQL, DynamoDB, and MongoDB. They can be run simultaneously, or one at a time, making it perfect for testing these technologies locally during the beginnings of the application software architecture. The final version of your application infrastructure is still a long way off, but at least it will be easy to test drive different solutions at the outset of the project.
version: '2'
services:
  my-api-node:
    container_name: my-api-node
    image: node:latest
    volumes:
      - ./:/app/
    ports:
      - '3000:3000'
  my-api-mysql:
    container_name: my-api-mysql
    image: mysql:5.7
    #image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: secretpassword
      MYSQL_USER: my-api-node-local
      MYSQL_PASSWORD: secretpassword
      MYSQL_DATABASE: my_api_local
    volumes:
      - my-api-mysql-data:/var/lib/mysql/
    ports:
      - '3306:3306'
  my-api-pgsql:
    container_name: my-api-pgsql
    image: postgres:9.6
    environment:
      POSTGRES_USER: my-api-node-local-pgsqltest
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_DB: my_api_local_pgsqltest
    volumes:
      - my-api-pgsql-data:/var/lib/postgresql/data/
    ports:
      - '5432:5432'
  my-api-dynamodb:
    container_name: my-api-dynamodb
    image: dwmkerr/dynamodb:latest
    volumes:
      - my-api-dynamodb-data:/data
    command: -sharedDb
    ports:
      - '8000:8000'
  my-api-mongo:
    container_name: my-api-mongo
    image: mongo:3.4
    volumes:
      - my-api-mongo-data:/data/db
    ports:
      - '27017:27017'
volumes:
  my-api-mysql-data:
  my-api-pgsql-data:
  my-api-dynamodb-data:
  my-api-mongo-data:
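Since Compose only starts the services you name, you can bring up the app with exactly one data store at a time. A small helper sketch using the service names from the compose file above (the up_with function and its DRY_RUN preview switch are my own convenience, not part of docker-compose):

```shell
# Bring up the app container plus a single data store for evaluation.
up_with() {
  store="${1:?usage: up_with <data-store-service>}"
  cmd="docker-compose up -d my-api-node ${store}"
  if [ -n "${DRY_RUN:-}" ]; then
    echo "${cmd}"   # preview the command instead of running it
  else
    ${cmd}
  fi
}

# Example: preview the command for the PostgreSQL variant
DRY_RUN=1 up_with my-api-pgsql
```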
I love Docker. I use Docker a lot. And like any tool, you can do really stupid things with it. A great piece of advice comes to mind when writing a docker-compose project like this one:
“Just because you can, doesn’t mean you should.”
This statement elicits strong emotions from both halves of a sysadmin brain. The first shudders at the painful thought of running multiple databases for an application (local or otherwise), and the other shouts “Hold my beer!” Which half will you listen to today?
make Makefile target for help or usage options
Using make and Makefiles with a Docker based application development strategy is a great way to track shortcuts and allow team members to easily run common docker or application tasks without having to remember the syntax specifics. Without a “default” target, make will attempt to run the first target in the file (the default goal). This may be desirable in some cases, but I find it useful to have make just print out a usage message and require the operator to specify the exact target they need.
#Makefile
DC=docker-compose
DE=docker-compose exec app

.PHONY: help
help:
	@sh -c "echo ; echo 'usage: make <target> ' ; cat Makefile | grep ^[a-z] | sed -e 's/^/ /' -e 's/://' -e 's/help/help (this message)/'; echo"

docker-up:
	$(DC) up -d

docker-down:
	$(DC) stop

docker-rm:
	$(DC) rm -v

docker-ps:
	$(DC) ps

docker-logs:
	$(DC) logs

test:
	$(DE) sh -c "vendor/bin/phpunit"
Now, without any arguments, make outputs a nice little usage message:
$ make
usage: make <target>
help (this message)
docker-up
docker-down
docker-rm
docker-ps
docker-logs
test
$
This assumes a few things (for instance, that you call make from the correct directory), but it is a good working proof of concept.
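A common variant of this pattern (my own suggestion, not from the Makefile above) is to annotate each target with a ## comment and let awk assemble the usage text, so descriptions live next to the targets. The target names below are hypothetical:

```shell
# Sketch: a Makefile written to the '##' convention.
cat > /tmp/Makefile.example <<'EOF'
help: ## show this message
	@awk -F':.*## ' '/^[a-zA-Z][a-zA-Z_-]*:.*## /{printf "  %-12s %s\n", $$1, $$2}' $(MAKEFILE_LIST)
docker-up: ## start the stack in the background
	docker-compose up -d
docker-logs: ## tail service logs
	docker-compose logs
EOF

# What `make help` would print (running the awk directly; outside make, $$ is $):
awk -F':.*## ' '/^[a-zA-Z][a-zA-Z_-]*:.*## /{printf "  %-12s %s\n", $1, $2}' /tmp/Makefile.example
```

The awk pattern skips recipe lines (they start with a tab), so only annotated targets show up in the usage output.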
use tmpfs for docker images
For I/O intensive Docker builds, you may want to configure Docker to use memory backed storage for images and containers. Ephemeral storage has several applications, but in this case our Docker engine is on a temporary EC2 spot instance participating in a continuous delivery pipeline. In other words, it’s ok to lose the instance and all of the Docker images on it. This is for a systemd based system, in this case Ubuntu 16.04.
Create the tmpfs, then reconfigure the Docker systemd unit to use it:
mkdir /mnt/docker-tmp
mount -t tmpfs -o size=25G tmpfs /mnt/docker-tmp
sed -i 's|/mnt/docker|/mnt/docker-tmp|' /etc/systemd/system/docker.service.d/docker-startup.conf
systemctl daemon-reload
systemctl restart docker
This could be part of a bootstrapping script for build instances, or more effectively translated into config management or rolled into an AMI.
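If you roll this into an AMI rather than a bootstrap script, the mount itself should be recreated at boot (the contents are ephemeral either way). A sketch of the /etc/fstab entry, assuming the same mount point and the example's 25G size (tune it to the instance's memory):

```
# /etc/fstab -- recreate the Docker tmpfs at boot
tmpfs  /mnt/docker-tmp  tmpfs  size=25G  0  0
```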
Docker Compose static IP address in docker-compose.yml
Usually, when launching Docker containers we don’t really know or care what IP address a specific container will be given. If proper service discovery and registration is configured, we just launch containers as needed and they make it into the application ecosystem seamlessly. Recently, I was working on a very edge-case multi-container application where every container needed to know (or be able to predict) every other container’s IP address at run time. This was not a cascaded need where successor containers learn predecessors’ IP addresses, but more like a full mesh.
In Docker Engine 1.10 the docker run command received a new flag, --ip, which allows you to define a static IP address for a container at run time. Unfortunately, Docker Compose (1.6.2) did not support this option. I guess we can think of Engine as being upstream of Compose, so some new Engine features take a while to make it into Compose. Luckily, this has already made it into mainline dev for Compose and is earmarked for release with the 1.7.0 milestone (which should coincide with Engine 1.11). Find the commit we care about here.
get the dev build for Compose 1.7.0:
# cd /usr/local/bin
# wget -q https://dl.bintray.com/docker-compose/master/docker-compose-Linux-x86_64
# chmod 755 docker-compose-Linux-x86_64
# mv docker-compose-Linux-x86_64 docker-compose$(./docker-compose-Linux-x86_64 --version | awk '{print "-"$3$5}' | sed -e 's/,/_/')
# mv docker-compose docker-compose$(./docker-compose --version | awk '{print "-"$3$5}' | sed -e 's/,/_/')
# ln -s docker-compose-1.7.0dev_85e2fb6 docker-compose
# ls
lrwxrwxrwx 1 root root 31 Mar 30 08:38 docker-compose -> docker-compose-1.7.0dev_85e2fb6
-rwxr-xr-x 1 root root 7929597 Mar 24 08:01 docker-compose-1.6.2_4d72027
-rwxr-xr-x 1 root root 7938771 Mar 29 09:14 docker-compose-1.7.0dev_85e2fb6
#
In this case I decided to keep the 1.6.2 docker-compose binary alongside the 1.7.0 binary, then create a symlink to the one I want active as docker-compose. Here’s a sample of how you might define a static IP address in docker-compose.yml that would work using docker-compose 1.7.0:
version: "2"
services:
  host1:
    networks:
      mynet:
        ipv4_address: 172.25.0.101
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
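For the full-mesh scenario described earlier, each service simply gets its own ipv4_address on the shared subnet. A sketch extending the example (host2 and the addresses are hypothetical; the compose v2 ipam config also accepts a gateway key):

```yaml
version: "2"
services:
  host1:
    networks:
      mynet:
        ipv4_address: 172.25.0.101
  host2:
    networks:
      mynet:
        ipv4_address: 172.25.0.102
networks:
  mynet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/24
          gateway: 172.25.0.1
```

Because the addresses are fixed in the file, every container can predict every other container's IP address at run time without any discovery step.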
docker get list of tags in repository
The native docker command has an excellent way to search the Docker Hub registry for an image. Just use docker search <search string> to look in their registry.
# docker search debian
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating s... 2338 [OK]
debian Debian is a Linux distribution that's comp... 763 [OK]
google/debian 47 [OK]
neurodebian NeuroDebian provides neuroscience research... 12 [OK]
jesselang/debian-vagrant Stock Debian Images made Vagrant-friendly ... 4 [OK]
eboraas/debian Debian base images, for all currently-avai... 3 [OK]
armbuild/debian ARMHF port of debian 3 [OK]
mschuerig/debian-subsonic Subsonic 5.1 on Debian/wheezy. 3 [OK]
fike/debian-postgresql PostgreSQL 9.4 until 9.0 version running D... 2 [OK]
maxexcloo/debian Docker base image built on Debian with Sup... 1 [OK]
kalabox/debian 1 [OK]
takeshi81/debian-wheezy-php Debian wheezy based PHP repo. 1 [OK]
webhippie/debian Docker images for debian 1 [OK]
eeacms/debian Docker image for Debian to be used with EE... 1 [OK]
reinblau/debian Debian with usefully default packages for ... 1 [OK]
mariorez/debian Debian Containers for PHP Projects 0 [OK]
opennsm/debian Lightly modified Debian images for OpenNSM 0 [OK]
konstruktoid/debian Debian base image 0 [OK]
visono/debian Docker base image of debian 7 with tools i... 0 [OK]
nimmis/debian This is different version of Debian with a... 0 [OK]
pl31/debian Basic debian image 0 [OK]
idcu/debian mini debian os 0 [OK]
sassmann/debian-chromium Chromium browser based on debian 0 [OK]
sassmann/debian-firefox Firefox browser based on debian 0 [OK]
cloudrunnerio/debian 0 [OK]
We can see the official debian repository right at the top. Unfortunately there’s no way to see what tags and images are available for us to pull down and deploy. However, there is a way to query the registry for all the tags in a repository, returned in JSON format. You can use a higher level programming language to get the list and parse the JSON for you. Or you can just use a simple one-liner:
# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'
latest
6
6.0
6.0.10
6.0.8
6.0.9
7
7.3
7.4
7.5
7.6
7.7
7.8
7.9
8
8.0
8.1
8.2
experimental
jessie
jessie-backports
oldstable
oldstable-backports
rc-buggy
sid
squeeze
stable
stable-backports
stretch
testing
unstable
wheezy
wheezy-backports
Wrap that in a little bash script and you have an easy way to list the tags of a repository. Since a tag is just a pointer to an image commit, multiple tags can point to the same image. Get fancy:
# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | sed -e 's/^,//' | sort -t: -k2 | awk -F[:,] 'BEGIN {i="image";j="tags"}{if(i!=$2){print i" : "j; i=$2;j=$4}else{j=$4" | "j} }END{print i" : "j}'
image : tags
06af7ad6 : 7.5
19de96c1 : wheezy | 7.9 | 7
1aa59f81 : experimental
20096d5a : rc-buggy
315baabd : stable
37cbf6c3 : testing
47921512 : 7.7
4a5e6db8 : 8.1
4fbc238a : oldstable-backports
52cb7765 : wheezy-backports
84bd6e50 : unstable
88dc7f13 : jessie-backports
8c00acfb : latest | jessie | 8.2 | 8
91238ddc : stretch
b2477d24 : stable-backports
b5fe16f2 : 7.3
bbe78c1a : 7.8
bd4b66c4 : oldstable
c952ddeb : squeeze | 6.0.10 | 6.0 | 6
d56191e1 : 6.0.8
df2a0347 : 8.0
e565fbbc : 7.4
e7d52d7d : sid
feb75584 : 7.6
fee2ea4e : 6.0.9
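The little bash script mentioned above might look like this. The JSON munging is split into its own function so it can be exercised without hitting the network (the function names are mine):

```shell
#!/bin/sh
# Extract the tag names from the v1 tags JSON: the same sed/tr/awk
# pipeline as above, with empty lines skipped (NF).
parse_tags() {
  sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: 'NF{print $3}'
}

# List all tags for a repository on the hub, e.g.: docker_tags debian
docker_tags() {
  repo="${1:?usage: docker_tags <repository>}"
  wget -q "https://registry.hub.docker.com/v1/repositories/${repo}/tags" -O - | parse_tags
}
```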