docker get list of tags in repository

The native docker command has an excellent way to search Docker Hub for an image. Just use docker search <search string> to query the registry.

# docker search debian
NAME                          DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
ubuntu                        Ubuntu is a Debian-based Linux operating s...   2338      [OK]       
debian                        Debian is a Linux distribution that's comp...   763       [OK]       
google/debian                                                                 47                   [OK]
neurodebian                   NeuroDebian provides neuroscience research...   12        [OK]       
jesselang/debian-vagrant      Stock Debian Images made Vagrant-friendly ...   4                    [OK]
eboraas/debian                Debian base images, for all currently-avai...   3                    [OK]
armbuild/debian               ARMHF port of debian                            3                    [OK]
mschuerig/debian-subsonic     Subsonic 5.1 on Debian/wheezy.                  3                    [OK]
fike/debian-postgresql        PostgreSQL 9.4 until 9.0 version running D...   2                    [OK]
maxexcloo/debian              Docker base image built on Debian with Sup...   1                    [OK]
kalabox/debian                                                                1                    [OK]
takeshi81/debian-wheezy-php   Debian wheezy based PHP repo.                   1                    [OK]
webhippie/debian              Docker images for debian                        1                    [OK]
eeacms/debian                 Docker image for Debian to be used with EE...   1                    [OK]
reinblau/debian               Debian with usefully default packages for ...   1                    [OK]
mariorez/debian               Debian Containers for PHP Projects              0                    [OK]
opennsm/debian                Lightly modified Debian images for OpenNSM      0                    [OK]
konstruktoid/debian           Debian base image                               0                    [OK]
visono/debian                 Docker base image of debian 7 with tools i...   0                    [OK]
nimmis/debian                 This is different version of Debian with a...   0                    [OK]
pl31/debian                   Basic debian image                              0                    [OK]
idcu/debian                   mini debian os                                  0                    [OK]
sassmann/debian-chromium      Chromium browser based on debian                0                    [OK]
sassmann/debian-firefox       Firefox browser based on debian                 0                    [OK]
cloudrunnerio/debian                                                          0                    [OK]

We can see the official debian repository near the top. Unfortunately, docker search gives no way to see which tags and images are available to pull down and deploy. There is, however, a way to query the registry for all the tags in a repository, returned in JSON format. You can use a higher-level programming language to fetch the list and parse the JSON for you, or you can just use a simple one-liner:

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'
latest
6
6.0
6.0.10
6.0.8
6.0.9
7
7.3
7.4
7.5
7.6
7.7
7.8
7.9
8
8.0
8.1
8.2
experimental
jessie
jessie-backports
oldstable
oldstable-backports
rc-buggy
sid
squeeze
stable
stable-backports
stretch
testing
unstable
wheezy
wheezy-backports
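
If jq is installed, the same JSON can be picked apart without the sed gymnastics (a sketch against the same v1 endpoint, which returns a list of objects with layer and name fields):

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O -  | jq -r '.[].name'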

Wrap that in a little bash script and you have an easy way to list the tags of a repository. Since a tag is just a pointer to an image commit, multiple tags can point to the same image.
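
A minimal wrapper sketch, taking the repository name as its only argument and reusing the pipeline above (list-tags.sh is a hypothetical name):

#!/bin/bash
# list-tags.sh -- print every tag in a Docker Hub repository
repo=${1:?usage: list-tags.sh REPOSITORY}
wget -q "https://registry.hub.docker.com/v1/repositories/${repo}/tags" -O - \
  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' \
  | tr '}' '\n' \
  | awk -F: '{print $3}'

Get fancy: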

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | sed -e 's/^,//' | sort -t: -k2 | awk -F[:,] 'BEGIN {i="image";j="tags"}{if(i!=$2){print i" : "j; i=$2;j=$4}else{j=$4" | "j} }END{print i" : "j}'
image : tags
06af7ad6 : 7.5
19de96c1 : wheezy | 7.9 | 7
1aa59f81 : experimental
20096d5a : rc-buggy
315baabd : stable
37cbf6c3 : testing
47921512 : 7.7
4a5e6db8 : 8.1
4fbc238a : oldstable-backports
52cb7765 : wheezy-backports
84bd6e50 : unstable
88dc7f13 : jessie-backports
8c00acfb : latest | jessie | 8.2 | 8
91238ddc : stretch
b2477d24 : stable-backports
b5fe16f2 : 7.3
bbe78c1a : 7.8
bd4b66c4 : oldstable
c952ddeb : squeeze | 6.0.10 | 6.0 | 6
d56191e1 : 6.0.8
df2a0347 : 8.0
e565fbbc : 7.4
e7d52d7d : sid
feb75584 : 7.6
fee2ea4e : 6.0.9

insert characters into string with sed

# echo 20150429 | sed -e 's/\(.\{4\}\)\(.\{2\}\)/\1\/\2\//'
2015/04/29
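
The regex captures the first four characters (the year) and the next two (the month), then re-inserts them with trailing slashes. The same expression reads a little more cleanly with extended regex syntax and a different delimiter (equivalent, assuming GNU or BSD sed's -E):

# echo 20150429 | sed -E 's|(.{4})(.{2})|\1/\2/|'
2015/04/29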

Start with a single flat directory with thousands of log files…

# ls | head -5
db301.20140216.log.gz
db301.20140217.log.gz
db301.20140218.log.gz
db301.20140219.log.gz
db301.20140220.log.gz

Now move the timestamped files into a sorted directory tree, one directory per day:

# for i in `ls`; do j=$(echo $i| cut -d . -f 2  | sed -e 's/\(.\{4\}\)\(.\{2\}\)/\1\/\2\//'); mkdir -p $j && mv $i $j; done;
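
A commented sketch of the same loop, using a glob instead of parsing ls (it assumes the db301.YYYYMMDD.log.gz naming shown above):

for f in *.log.gz; do
    # pull 20140216 out of db301.20140216.log.gz and turn it into 2014/02/16
    d=$(echo "$f" | cut -d . -f 2 | sed -e 's/\(.\{4\}\)\(.\{2\}\)/\1\/\2\//')
    mkdir -p "$d" && mv "$f" "$d"
done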


check your work

# find . -type f | head -5
./2014/02/16/db301.20140216.log.gz
./2014/02/17/db301.20140217.log.gz
./2014/02/18/db301.20140218.log.gz
./2014/02/19/db301.20140219.log.gz
./2014/02/20/db301.20140220.log.gz

forgot to rename files

# for i in `find . -mindepth 3 -type d `; do pushd $i; for j in `ls`; do k=$(echo $j | sed -e 's/\(\.[0-9]\{8\}\)//' ); mv $j $k;done; popd; done;
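
The same rename can also be done in one pass without the pushd/popd noise, a sketch assuming the .YYYYMMDD pattern above:

# find . -type f -name '*.log.gz' | while read -r f; do mv "$f" "$(echo "$f" | sed -e 's/\.[0-9]\{8\}//')"; done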

check your work

# find . -type f | head -5
./2014/02/16/db301.log.gz
./2014/02/17/db301.log.gz
./2014/02/18/db301.log.gz
./2014/02/19/db301.log.gz
./2014/02/20/db301.log.gz

wget monitor website download speed

# while true; do date | tr '\n' '-' | sed -e 's/-/ --- /'; wget http://testsite.com/fancy.pdf -O /dev/null 2>&1 | grep saved | awk -F"[()]" '{print $2}'; sleep 1s; done;
Thu Oct 30 15:18:26 PDT 2014 --- 1.25 MB/s
Thu Oct 30 15:18:28 PDT 2014 --- 1.20 MB/s
Thu Oct 30 15:18:29 PDT 2014 --- 958.95 KB/s
Thu Oct 30 15:18:31 PDT 2014 --- 1.36 MB/s
Thu Oct 30 15:18:32 PDT 2014 --- 873.98 KB/s
Thu Oct 30 15:18:33 PDT 2014 --- 1.38 MB/s
Thu Oct 30 15:18:35 PDT 2014 --- 261.90 KB/s
Thu Oct 30 15:18:37 PDT 2014 --- 1.38 MB/s
Thu Oct 30 15:18:38 PDT 2014 --- 360.14 KB/s
Thu Oct 30 15:18:40 PDT 2014 --- 1.37 MB/s
Thu Oct 30 15:18:42 PDT 2014 --- 427.06 KB/s
Thu Oct 30 15:18:44 PDT 2014 --- 1.37 MB/s
Thu Oct 30 15:18:45 PDT 2014 --- 397.54 KB/s
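
If curl is handy, its write-out variable reports the average transfer speed directly, in bytes per second (a sketch, not what was used above):

# while true; do date | tr '\n' '-' | sed -e 's/-/ --- /'; curl -s -o /dev/null -w '%{speed_download} B/s\n' http://testsite.com/fancy.pdf; sleep 1s; done;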

sort nested directories by last modified using find

Using ls -lt to sort a file listing by last-modified time is simple. But if you have a large directory tree with tens of thousands of directories, using find with some massaging might be the way to go. In this example there is a directory containing many nested directories, in a tree like this:

./1
./1/1
./1/1/1
./1/1/2
./1/2
./1/2/3
./2
./2/3
./2/3/4
./2/3/5
./2/3/7
./2/3/8

We are interested in the 3rd-level directories and want a list of which ones were most recently modified:

# find . -mindepth 3 -maxdepth 3 -ls | awk '$10 !~ /^20[01]/' | sed -e 's/:/ /' | sort -k8,8M -nk9,9n -nk10 -nk11 | awk '{print $12" "$8" "$9" "$10":"$11}'| column -t | tail -10

We start by finding only 3rd-level directories with extended listings (there are no files at this level, so -type d is unnecessary). The first awk keeps only directories modified recently enough that find -ls shows an hour:minute in column 10 rather than a year, i.e. it drops anything starting with 200* or 201*. The sed then replaces the colon in HH:MM with a space so that we can sort by minute after sorting by hour. Finally we rearrange the columns, put the hour:minute colon back, run it through column to get tidy columns, and take the last 10 results.

./586/1586/1311586  Sep  16  16:11
./980/6980/2326980  Sep  16  16:18
./616/3616/513616   Sep  16  16:20
./133/9133/2119133  Sep  16  16:21
./422/6422/2106422  Sep  16  16:24
./566/6566/2326566  Sep  16  16:46
./672/672/2310672   Sep  16  16:51
./680/680/2290680   Sep  16  17:42
./573/5573/2325573  Sep  16  17:47
./106/1106/2321106  Sep  16  17:49
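
On GNU find, a simpler route is to print an epoch timestamp for each directory and let sort handle the ordering (a sketch, not the pipeline used above):

# find . -mindepth 3 -maxdepth 3 -printf '%T@ %p %Tb %Td %TH:%TM\n' | sort -n | tail -10 | cut -d' ' -f2- | column -t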

NetApp decode acp domain option

How does this option define a network? The acp.domain option is a convoluted decimal representation of the network portion of the IP address used for ACP.

toaster*> options acp
acp.domain 65193
acp.enabled on
acp.netmask 65535
acp.port e0f

Take 65193 and convert it to binary: 1111111010101001. Then split it up into two (or more) octets: 11111110 10101001. Then convert each of the octets back to decimal: 254 169. Then reverse the order: 169 254. That is the ACP network. The netmask portion works the same way but is more straightforward: 65535 is sixteen ones, i.e. 255.255, so the mask is 255.255.0.0. In this case our ACP network is 169.254/16.

You could hack a quick little one-liner:

# for i in `echo "obase=2;65193" |bc | awk 'BEGIN{FS=""} {for(i=1;i<33;i++){printf $i; if(i==8)printf " ";}printf "\n"}'`; do echo "ibase=2;$i" |bc; done|tac | paste - - | sed 's/\t/./'
169.254
#
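
Since the option is just the two network octets packed low-byte-first into one number, plain shell arithmetic gets the same answer (a sketch):

# n=65193; printf "%d.%d\n" $((n & 255)) $((n >> 8))
169.254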

Linux rebuild software RAID1

Set the bash field separator (IFS) to a newline, so each generated command below is treated as a single loop item:

IFS="
"

See which disk and partitions are currently up, generate the mdadm commands to re-add the missing ones, then run them:

for i in `cat /proc/mdstat | grep md | cut -d [ -f1 | sed -e 's/\(md[0-9]\).*\(sd[a-z][0-9]\)/mdadm --add \/dev\/\1 \/dev\/\2/' | sed -e 's/sdb/sda/'`;  do eval $i; done;
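
For a hypothetical /proc/mdstat line such as md0 : active raid1 sdb1[1] (only /dev/sdb1 left in the mirror), the pipeline above would emit and eval:

mdadm --add /dev/md0 /dev/sda1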

TODO: make it determine which disk to add (/dev/sda or /dev/sdb)

change SSH listen address

If you have servers with internal and external interfaces, you may want to disable ssh on the external side. In this case we just get the internal IP address and tell sshd to only listen on that address:

sed -i "s/#ListenAddress 0.0.0.0/ListenAddress `grep address /etc/network/interfaces | grep 10.229 | awk '{print $2}'`/" /etc/ssh/sshd_config

Do it to many hosts:

for i in `seq 313 364`; do ssh ftp$i "sed -i \"s/#ListenAddress 0.0.0.0/ListenAddress \`grep address /etc/network/interfaces | grep 10.229 | awk '{print \$2}'\`/\" /etc/ssh/sshd_config"; done;

And restart SSH:

for i in `seq 313 364`; do ssh ftp$i "service ssh restart"; done;

From time to time you might encounter a failure message about SSH and host identification. SSH remembers the fingerprint of the keys of hosts it connects to, and stores them in a file so that you will hear about it if a fingerprint changes. This can happen with DHCP addresses. For example, HostA has a DHCP address of 192.168.1.102. You change HostA’s IP address to a static address, and the lease for 192.168.1.102 expires. Then you bring up HostB and it gets the DHCP address of 192.168.1.102. When you go to ssh to 192.168.1.102, you get an error, because SSH recognizes that it’s a different host altogether. This helps prevent man-in-the-middle attacks and IP spoofing. In this case we know what’s going on, so it’s safe to remove the old fingerprint for HostA, reconnect to HostB, and store its fingerprint.

# ssh 192.168.1.102
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
34:6e:16:28:90:21:bd:6a:80:e4:97:41:85:ef:4a:ad.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /root/.ssh/known_hosts:15
ECDSA host key for 192.168.1.102 has changed and you have requested strict checking.
Host key verification failed.
#

One way to fix this is to use vi to edit the known_hosts file. Use the :15 command to navigate to line 15, use the dd keystroke to remove the entry, then :wq to save and quit vi.

Alternatively, use sed to remove line 15 in place:

# sed -i '15d' /root/.ssh/known_hosts
#

Also, the ssh-keygen utility comes with this built in:

# ssh-keygen -f '/root/.ssh/known_hosts' -R 192.168.1.102
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
# 

find duplicate entry in sql dump

Recently, I tried to import a SQL dump created by mysqldump that somehow had a duplicate entry for a primary key. Here’s a sample of the contents:

INSERT INTO `table1` VALUES ('B97bKm',71029594,3,NULL,NULL,'2013-01-22 09:25:39'),('dZfUHQ',804776,1,NULL,NULL,'2012-09-05 16:15:23'),('hWkGsz',70198487,0,NULL,NULL,'2013-01-05 10:55:36'),('n6366s',69480146,1,NULL,NULL,'2012-
12-18 03:27:45'),('tBP6Ug',65100805,1,NULL,NULL,'2012-08-29 21:32:39'),('yfpewZ',18724906,0,NULL,NULL,'2013-03-31 17:12:58'),('UNz5qp',8392940,2,NULL,NULL,'2012-11-28 02:00:00'),('9WVpVV',71181566,0,NULL,NULL,'2013-01-25 06:15:03'),('kEP
Qu5',64972980,9,NULL,NULL,'2012-09-01 06:00:36')

It goes on for another 270,000 entries. I was able to find the duplicate value like this:

# cat /tmp/table1.sql | grep INSERT | sed -e 's/),/\n/g' | sed -e 's/VALUES /\n/' | grep -v INSERT | awk -F, '{print $2}' | sort | uniq -c | awk '{if($1>1) print;}'
    2 64590015
#

The primary key value 64590015 had 2 entries. I removed the spurious entry, and subsequently the SQL imported fine.
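
To see the offending rows themselves before deciding which one to drop, the tuples containing that key can be pulled straight out of the dump (a sketch):

# grep -o "([^)]*,64590015,[^)]*)" /tmp/table1.sql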