use wget to recursively download files via FTP

A command line FTP client is good for many things. You can turn off prompting and use mget with a wildcard to fetch many files. The problem is that mget doesn’t create directories locally, so when it tries to recurse into destination directories in order to place incoming files into them, it fails. We can use wget instead to traverse the directory structure, create the folders locally, and download the files.

# wget -r 'ftp://username:password@ftp.example.com'

Note: rsync would be ideal for this, but there are some cases where the source only offers ftp as a connection protocol.
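
If the files sit a few directories deep on the server, wget's mirroring and path-trimming flags keep the local tree tidy. A hedged example, assuming the data lives under /pub/data/ on the hypothetical server above:

# wget -m -np -nH --cut-dirs=1 'ftp://username:password@ftp.example.com/pub/data/'

Here -m mirrors (recursive with unlimited depth and timestamping), -np stops wget from wandering up to the parent directory, and -nH with --cut-dirs=1 drops the hostname and the leading pub/ component from the local paths.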

perl validate email address

There’s a great Perl module, Email::Valid, to check whether a string is a well-formed email address. It doesn’t actually check that the mailbox exists at the destination domain, but in its simplest form we can just make sure the string follows the proper email address format specification.

Install the module:

# apt-get install -y libemail-valid-perl

quick and dirty:

#!/usr/bin/perl
use strict;
use warnings;
use Email::Valid;

# do it once with an email address as an argument
#if ( ! Email::Valid->address($ARGV[0]) ) { print "$ARGV[0]\n"; }

my $file = "to_be_tested2_emails.txt";

# do it with a list of emails, printing any address that fails validation
open(my $fh, '<', $file) or die("Can't read $file.\n");
while (my $line = <$fh>) {
  chomp $line;
  if ( ! Email::Valid->address($line) ) { print "$line\n"; }
}
close($fh);

TODO: think of loading into array; removing malformed addresses; having option to run once vs. run list; test the rfc822 validation
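
For the run-once case in the TODO, the commented-out block isn’t even needed; a shell one-liner does the same check. The address here is just a placeholder:

# perl -MEmail::Valid -e 'print "malformed: $ARGV[0]\n" unless Email::Valid->address($ARGV[0])' user@example.com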

docker get list of tags in repository

The native docker command has an excellent way to search Docker Hub for an image. Just use docker search <search string> to look in the registry.

# docker search debian
NAME                          DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
ubuntu                        Ubuntu is a Debian-based Linux operating s...   2338      [OK]       
debian                        Debian is a Linux distribution that's comp...   763       [OK]       
google/debian                                                                 47                   [OK]
neurodebian                   NeuroDebian provides neuroscience research...   12        [OK]       
jesselang/debian-vagrant      Stock Debian Images made Vagrant-friendly ...   4                    [OK]
eboraas/debian                Debian base images, for all currently-avai...   3                    [OK]
armbuild/debian               ARMHF port of debian                            3                    [OK]
mschuerig/debian-subsonic     Subsonic 5.1 on Debian/wheezy.                  3                    [OK]
fike/debian-postgresql        PostgreSQL 9.4 until 9.0 version running D...   2                    [OK]
maxexcloo/debian              Docker base image built on Debian with Sup...   1                    [OK]
kalabox/debian                                                                1                    [OK]
takeshi81/debian-wheezy-php   Debian wheezy based PHP repo.                   1                    [OK]
webhippie/debian              Docker images for debian                        1                    [OK]
eeacms/debian                 Docker image for Debian to be used with EE...   1                    [OK]
reinblau/debian               Debian with usefully default packages for ...   1                    [OK]
mariorez/debian               Debian Containers for PHP Projects              0                    [OK]
opennsm/debian                Lightly modified Debian images for OpenNSM      0                    [OK]
konstruktoid/debian           Debian base image                               0                    [OK]
visono/debian                 Docker base image of debian 7 with tools i...   0                    [OK]
nimmis/debian                 This is different version of Debian with a...   0                    [OK]
pl31/debian                   Basic debian image                              0                    [OK]
idcu/debian                   mini debian os                                  0                    [OK]
sassmann/debian-chromium      Chromium browser based on debian                0                    [OK]
sassmann/debian-firefox       Firefox browser based on debian                 0                    [OK]
cloudrunnerio/debian                                                          0                    [OK]

We can see the official debian repository right near the top. Unfortunately, docker search gives no way to see which tags and images are available for us to pull down and deploy. There is, however, a way to query the registry for all the tags in a repository, returned in JSON format. You can use a higher-level programming language to fetch the list and parse the JSON for you, or you can just use a simple one-liner:

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n'  | awk -F: '{print $3}'
latest
6
6.0
6.0.10
6.0.8
6.0.9
7
7.3
7.4
7.5
7.6
7.7
7.8
7.9
8
8.0
8.1
8.2
experimental
jessie
jessie-backports
oldstable
oldstable-backports
rc-buggy
sid
squeeze
stable
stable-backports
stretch
testing
unstable
wheezy
wheezy-backports
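
If jq happens to be installed, it parses the same JSON more directly than the sed/tr/awk chain (this assumes the v1 endpoint keeps returning objects with a name field):

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O - | jq -r '.[].name'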

Wrap either approach in a little bash script and you have an easy way to list the tags of a repository (a sketch follows the output below). Since a tag is just a pointer to an image commit, multiple tags can point to the same image. Get fancy:

# wget -q https://registry.hub.docker.com/v1/repositories/debian/tags -O -  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | sed -e 's/^,//' | sort -t: -k2 | awk -F[:,] 'BEGIN {i="image";j="tags"}{if(i!=$2){print i" : "j; i=$2;j=$4}else{j=$4" | "j} }END{print i" : "j}'
image : tags
06af7ad6 : 7.5
19de96c1 : wheezy | 7.9 | 7
1aa59f81 : experimental
20096d5a : rc-buggy
315baabd : stable
37cbf6c3 : testing
47921512 : 7.7
4a5e6db8 : 8.1
4fbc238a : oldstable-backports
52cb7765 : wheezy-backports
84bd6e50 : unstable
88dc7f13 : jessie-backports
8c00acfb : latest | jessie | 8.2 | 8
91238ddc : stretch
b2477d24 : stable-backports
b5fe16f2 : 7.3
bbe78c1a : 7.8
bd4b66c4 : oldstable
c952ddeb : squeeze | 6.0.10 | 6.0 | 6
d56191e1 : 6.0.8
df2a0347 : 8.0
e565fbbc : 7.4
e7d52d7d : sid
feb75584 : 7.6
fee2ea4e : 6.0.9
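
A minimal sketch of that wrapper, built around the first one-liner and using a hypothetical script name:

#!/bin/bash
# docker-tags.sh - list the tags of a Docker Hub repository
# usage: ./docker-tags.sh debian
repo="$1"
wget -q "https://registry.hub.docker.com/v1/repositories/${repo}/tags" -O - \
  | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' \
  | tr '}' '\n' \
  | awk -F: '{print $3}'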

add auto statements for interface aliases to /etc/network/interfaces

Someone added a bunch of iface stanzas for the aliases but forgot the auto part, so the interfaces won’t come up automatically. A quick in-place sed adds the missing auto lines (the pattern only matches three-digit alias numbers; adjust the \{3\} quantifier as needed)…

# sed -i 's/iface eth0:\([0-9]\{3\}\)/auto eth0:\1\niface eth0:\1/' /etc/network/interfaces
<snip>
auto eth0:196
iface eth0:196 inet static
  address 1.2.3.4
  netmask 255.255.255.0
auto eth0:197
iface eth0:197 inet static
  address 1.2.3.5
  netmask 255.255.255.0
auto eth0:198
iface eth0:198 inet static
  address 1.2.3.6
  netmask 255.255.255.0
</snip>
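
Since -i edits the file in place, it may be worth keeping a backup and previewing the substitution before committing to it:

# cp /etc/network/interfaces /etc/network/interfaces.bak
# sed 's/iface eth0:\([0-9]\{3\}\)/auto eth0:\1\niface eth0:\1/' /etc/network/interfaces | less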

tcpdump mysql queries

If you have access to the MySQL server and query logging is turned on, then you can read the queries straight from the log. Many production databases do not have logging turned on, simply because there are too many queries to handle. There can also be hundreds of servers hitting the database at any given time, making it hard to see activity from a particular client. To take a look at MySQL queries as they leave a webserver, you can use tcpdump and massage the output to see which queries are being sent from that host.

# tcpdump -i eth0 -l -s 0 -w - dst port 3306 | stdbuf -o0 strings | stdbuf -o0 grep "SELECT\|INSERT\|UPDATE\|FROM\|WHERE\|ORDER\|AND\|LIMIT\|SET\|COMMIT\|ROLLBACK"

Sometimes the query gets broken up into pieces if WHERE or LIMIT is used, and those pieces wind up on separate lines, so we grep for those keywords separately. stdbuf forces each stage of the pipeline NOT to buffer its output, i.e. to print in pseudo real time.
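
If the webserver talks to more than one database, narrowing the capture filter to a single server cuts the noise. A sketch, with a made-up database host address:

# tcpdump -i eth0 -l -s 0 -w - 'dst port 3306 and dst host 10.0.0.5' | stdbuf -o0 strings | stdbuf -o0 grep "SELECT\|INSERT\|UPDATE\|FROM\|WHERE\|ORDER\|AND\|LIMIT\|SET\|COMMIT\|ROLLBACK"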

generate vol create commands for migrating volumes to new aggr

Anyone who has done a lot of migrations has some snippets jotted down to help streamline the process. On a filer with many volumes you can use this to generate the create commands for destination volumes on a new aggregate.

# for i in `ssh filer01 "vol status" | awk '/raid_dp/ {print $1}' | grep -v Volume | grep -v Warning | grep -v _new`; do j=`ssh filer01 "vol size $i" | grep 'has size' | sed -e "s/'//g" -e "s/\.//g"`; k=`echo $j | awk '{print "vol create "$5"_new newaggr "$NF}'`; echo $k ; done;
vol create vol12_new newaggr 10g
vol create vol13_new newaggr 70g
vol create vol14_new newaggr 1600g
<snip...>

If you trust your hackery enough, you might even send the commands over to actually create the vols…
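
A hedged sketch of that last step, assuming the loop above has been saved as a script with a made-up name:

# ./gen_vol_creates.sh > vol_creates.txt
# while read cmd; do ssh filer01 "$cmd"; done < vol_creates.txt

Reviewing vol_creates.txt before running the second command is the whole point of keeping generation and execution separate.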