awk print range of lines

print lines 31 through 34 inclusive:

# awk 'NR==31,NR==34' share/vyatta-cfg/templates/policy/prefix-list/node.tag/rule/node.def
        if [ $VAR(./le/@) -ne 32 ] && [ -n "$VAR(./ge/@)" ] && [ $VAR(./le/@) -le $VAR(./ge/@) ]; then 
          echo "le must be greater than or equal to ge"; 
          exit 1 ; 
        fi ; 
#
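
If you'd rather use sed, the same range prints with:

# sed -n '31,34p' share/vyatta-cfg/templates/policy/prefix-list/node.tag/rule/node.def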

sort nested directories by last modified using find

Using ls -lt to sort a file listing by last modified time is simple and easy for a single directory. If you have a large directory tree with tens of thousands of directories, using find with some massaging might be the way to go. In this example there is a tree with many nested directories like this:

./1
./1/1
./1/1/1
./1/1/2
./1/2
./1/2/3
./2
./2/3
./2/3/4
./2/3/5
./2/3/7
./2/3/8

We are interested in the 3rd level directories and want a list of which ones were most recently modified:

# find . -mindepth 3 -maxdepth 3 -ls | awk '$10 !~ /^20[01]/' | sed -e 's/:/ /' | sort -k8,8M -nk9,9n -nk10 -nk11 | awk '{print $12" "$8" "$9" "$10":"$11}'| column -t | tail -10

We start by finding only 3rd level directories with extended listings (there are no files at this level, so -type d is unnecessary). Then awk drops any entry whose 10th column is a year like 200* or 201* rather than an hour:minute; the listing shows a year for anything modified more than about six months ago, so this keeps only recently modified directories. The sed replaces the colon in HH:MM with a space so that we can sort by hour and then by minute, after sorting by month and day. Then rearrange the columns, put the hour:minute colon back, run it through column to get nice columns, and take the last 10 results.

./586/1586/1311586  Sep  16  16:11
./980/6980/2326980  Sep  16  16:18
./616/3616/513616   Sep  16  16:20
./133/9133/2119133  Sep  16  16:21
./422/6422/2106422  Sep  16  16:24
./566/6566/2326566  Sep  16  16:46
./672/672/2310672   Sep  16  16:51
./680/680/2290680   Sep  16  17:42
./573/5573/2325573  Sep  16  17:47
./106/1106/2321106  Sep  16  17:49
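
If your find is GNU find, -printf can emit the modification time as an epoch timestamp up front, which makes the sort much simpler. A rough sketch of the same report (assumes GNU find and paths without spaces):

# find . -mindepth 3 -maxdepth 3 -printf '%T@ %p %Tb %Td %TH:%TM\n' | sort -n | tail -10 | cut -d' ' -f2- | column -t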

connect to git host via ssh on non-standard port

Sometimes people run sshd on a non-standard port. It takes time to scan an IP block, and scanning each host for all 65,535 ports takes even longer, so most scanning scripts and utilities target well-known ports like telnet, smb, and ssh. For this reason someone might opt to run sshd on a port other than 22. This is a problem if you are using git over ssh to connect to one of these repositories.

Add the following to your ssh config:

cat >>/home/<yourusername>/.ssh/config << EOF
Host <git server IP address>
  Port <obscure sshd port number>
  IdentityFile /home/<yourusername>/.ssh/id_git
EOF
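
With that Host entry in place, ssh (and therefore git) picks up the port and identity file automatically, so an ordinary clone over ssh just works; the git@ user and repository path below are only placeholders:

git clone git@<git server IP address>:path/to/repo.git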

get list of set environment variables

Use printenv

# printenv
TERM=screen
SHELL=/bin/bash
SSH_CLIENT=10.171.0.141 60941 22
SSH_TTY=/dev/pts/0
USER=root
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/etc/pam.d
LANG=en_US.UTF-8
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=10.171.0.141 60941 10.122.0.33 22
_=/usr/bin/printenv
OLDPWD=/etc
#
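
printenv also accepts variable names if you only want to check one or two of them:

# printenv HOME LANG
/root
en_US.UTF-8
#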

quick memcache testing and troubleshooting

Here’s a quick way to test your new memcached setup. At first it’s a bit confusing that memcache won’t tell you what keys it stores, only whether or not it has a particular key. Unlike a relational database, you can’t query memcache and get back everything it holds. You have to know the key you want to fetch before you ask memcache about it. Let’s ask it for the value of the key foo:

telnet memcacheserver 11211
Trying 10.131.215.181...
Connected to 10.131.215.181
Escape character is '^]'.
get foo
END

It returns nothing, so it doesn’t have any value for that key. The key foo is unset. Let’s set it:

set foo 0 0 3
bar
STORED

When you set a key like this, the syntax is “set <key> <flags> <exptime> <bytes>”. In this case our key is foo, we set no flags (0), the key/value pair never expires (0), and the data we are about to store is 3 bytes (i.e. 3 characters). Let’s fetch it now:

get foo
VALUE foo 0 3
bar
END

It returns the value of key foo as bar. Now delete it:

delete foo
DELETED

Now another, this time the key is “foobar”, and the data is a 12 byte string “barbarbarbar”:

set foobar 0 0 12
barbarbarbar
STORED
get foobar
VALUE foobar 0 12
barbarbarbar
END
delete foobar
DELETED
^]
telnet> close
Connection closed.
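
If you want to script the same checks instead of typing into telnet, the text protocol can be driven with printf and nc (commands must end in \r\n); a minimal sketch against the same server:

printf 'set foo 0 0 3\r\nbar\r\nget foo\r\ndelete foo\r\nquit\r\n' | nc memcacheserver 11211

Depending on your nc variant you may need a flag like -q 1 so it waits for the replies before closing.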

read configuration files without comments and spaces

Long configuration files have a helpful/annoying amount of comments and explanations about directives. Put this function into your .bashrc to read config files without any comments or white space:

rconf(){ egrep -vh '(#|^\s*$)' "$@" ; }

Then reload your .bashrc and the next time you need to read a config file, just run:

# rconf <config file>
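
For example, reload and then read a stock sshd_config (path here is just an example) stripped down to its active directives:

# . ~/.bashrc
# rconf /etc/ssh/sshd_config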

MySQL change field separator for select dump

You can select from a table into an outfile and change the field delimiter. This is interactive, and because INTO OUTFILE writes the file on the database server host itself, you must be logged in to the local mysql server, have the FILE privilege, and mysqld needs write permission to wherever you want to write the file.

mysql> SELECT 'pid','email' UNION SELECT pid,email INTO OUTFILE '/tmp/test.csv' FIELDS TERMINATED BY ',' FROM table;
Query OK, 11837 rows affected (0.21 sec)

mysql> Bye
# head /tmp/test.csv
pid,email
1081603,user1@fordodone.com
888151,user2@fordodone.com
781,user3@fordodone.com
2307364,user4@fordodone.com
2286573,user5@fordodone.com
2212194,user6@fordodone.com
2137603,user7@fordodone.com
500492,user8@fordodone.com
1514582,user9@fordodone.com

This is non-interactive and can be done from a remote host:

# echo "select pid,email from table;" | mysql -h dbhost -udbuser -p -Ddbname | awk '{for (i=1;i<NF;i++) printf "%s,",$i; printf $i"\n"}' > /tmp/test.csv