use eval to run commands generated by awk

Here’s one way to generate a set of commands with awk, and then run them in a loop with eval.

# cat snippet
field1 /mnt/somedir/785/8785/948785 41 /mnt/somedir2/785/8785/948785 1 2
field1 /mnt/somedir/791/8791/948791 2 /mnt/somedir2/791/8791/948791 6 2
field1 /mnt/somedir/924/8924/948924 2 /mnt/somedir2/924/8924/948924 23 2
field1 /mnt/somedir/993/8993/948993 2 /mnt/somedir2/993/8993/948993 19876 2
field1 /mnt/somedir/3/9003/949003 8 /mnt/somedir2/3/9003/949003 273 2
field1 /mnt/somedir/70/9070/949070 341 /mnt/somedir2/70/9070/949070 6 2
field1 /mnt/somedir/517/4517/954517 2 /mnt/somedir2/517/4517/954517 14 2
field1 /mnt/somedir/699/4699/954699 210 /mnt/somedir2/699/4699/954699 1 2
field1 /mnt/somedir/726/4726/954726 1 /mnt/somedir2/726/4726/954726 6 2

Now use awk to get the output you want and generate the commands. Use a for loop and eval to run them.

# for i in `awk '{if($3>$5) print "rsync -a --ignore-existing "$2"/ "$4}' left.compare.sorted  `; do echo $i; eval $i; done;
rsync -a --ignore-existing /mnt/somedir/70/9070/949070/ /mnt/somedir2/70/9070/949070
rsync -a --ignore-existing /mnt/somedir/699/4699/954699/ /mnt/somedir2/699/4699/954699
#
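
Note that with the default IFS, the backtick substitution splits on every space, so each word of a generated command can become its own loop iteration. Piping the awk output into a while read loop keeps one command per line; a rough sketch against the same input file:

# awk '$3>$5 {print "rsync -a --ignore-existing "$2"/ "$4}' left.compare.sorted | while read -r cmd; do echo "$cmd"; eval "$cmd"; done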

find directories owned by root

Find the directories owned by root in a certain part of the tree:

# find . -depth -mindepth 1 -maxdepth 3 -type d -ls | awk '$5 ~ /root/ {print}'
  7930    0 drwxr-xr-x  12 root root      115 Oct 11 16:44 ./562
3805069    0 drwxr-xr-x   3 root root       20 Oct 11 16:44 ./562/8562
  7946    0 drwxr-xr-x   5 root root       46 Dec  8 23:52 ./563/6563
  7947    0 drwxr-xr-x   3 root root      21 Oct 21  2008 ./563/6563/456563
3464735    0 drwxr-xr-x   2 root root        6 Sep 26 17:29 ./563/6563/436563
4075144    4 drwxr-xr-x   2 root root     4096 Dec  9 00:39 ./563/6563/2366563

Change all the ownership to www-data:

# find . -depth -mindepth 1 -maxdepth 3 -type d -exec chown www-data: {} \;

You could do this:

# cd .. && chown -R www-data: dirname

But we only suspect the problem is at a certain level in the tree, and recursively chowning hundreds of millions of files would be way too slow.
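
If you only want to touch the root-owned directories rather than everything at those depths, find's -user test can do the filtering for you (a small variation on the command above):

# find . -mindepth 1 -maxdepth 3 -type d -user root -exec chown www-data: {} \;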

awk multiple input field separators

We want just the last directory in the tree from this list:

# tail -3 list
/mnt/fs92/vol3/users/520/520/1680520 -- 8631
/mnt/fs92/vol3/users/568/8568/1578568 -- 2
/mnt/fs92/vol3/users/429/7429/1757429 -- 2

You can chain cut and awk together, or run awk twice…
weak:

# tail -3 list | cut -d / -f 8 | awk '{print $1}'
1680520
1578568
1757429

Or you can just tell awk to use multiple field separators. Enclose them in square brackets (the separator is really a regular expression), and in this case we use both / and a space as input field separators, making the 8th column the one we are interested in.

strong:

# tail -3 list | awk -F "[/ ]" '{print $8}'
1680520
1578568
1757429
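
If the directory depth varies from line to line, hard-coding the 8th field breaks; instead you can split the first field on / inside awk and take the last element. A sketch:

# tail -3 list | awk '{n=split($1,a,"/"); print a[n]}'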

convert log time seconds to readable date

[1122633.028643] end_request: I/O error, dev sdc, sector 0

In logs like dmesg, error messages are preceded by a number that represents the server's uptime in seconds at the time of the error. So this I/O error happened 1122633 seconds after the machine booted, which means nothing to us by itself. To see when the error happened, you need to convert the seconds of uptime into a readable date.

First get the date/time at which the server booted using who -b and convert to seconds. Then add the seconds of uptime from the error message, and then convert back to a human readable date:

# date --date="@$(echo $(date --date="`who -b | awk '{print $3" "$4}'`" +%s)+1122633|bc)"
Tue Oct 22 00:03:33 PDT 2013

So this error happened shortly after midnight. Very interesting…
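
The same conversion works with shell arithmetic instead of bc; a sketch using the same 1122633-second offset:

# date --date="@$(( $(date --date="$(who -b | awk '{print $3" "$4}')" +%s) + 1122633 ))"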

awk print range of lines

Print lines 31 through 34, inclusive:

# awk 'NR==31,NR==34' share/vyatta-cfg/templates/policy/prefix-list/node.tag/rule/node.def
        if [ $VAR(./le/@) -ne 32 ] && [ -n "$VAR(./ge/@)" ] && [ $VAR(./le/@) -le $VAR(./ge/@) ]; then 
          echo "le must be greater than or equal to ge"; 
          exit 1 ; 
        fi ; 
#
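
sed can do the same thing; the extra 34q tells it to quit after the last line of interest instead of reading the rest of the file:

# sed -n '31,34p;34q' share/vyatta-cfg/templates/policy/prefix-list/node.tag/rule/node.def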

sort nested directories by last modified using find

Using ls -lt to sort a file listing by last modified time is simple and easy. If you have a large directory tree with tens of thousands of directories, using find with some massaging might be the way to go. In this example there is a directory with many directories in a tree like this:

./1
./1/1
./1/1/1
./1/1/2
./1/2
./1/2/3
./2
./2/3
./2/3/4
./2/3/5
./2/3/7
./2/3/8

We are interested in the 3rd-level directories, and want a list of which ones were most recently modified:

# find . -mindepth 3 -maxdepth 3 -ls | awk '$10 !~ /^20[01]/' | sed -e 's/:/ /' | sort -k8,8M -nk9,9n -nk10 -nk11 | awk '{print $12" "$8" "$9" "$10":"$11}'| column -t | tail -10

We start by finding only 3rd-level directories with extended listings (there are no files at this level, so -type d is unnecessary). Then awk keeps only the directories modified recently enough that column 10 shows an hour:minute instead of a year like 200* or 201*. Replace the colon in HH:MM with a space so that we can sort by minute after we sort by hour. Then rearrange the columns, put the hour:minute colon back, run it through column to align everything, and take the last 10 results.

./586/1586/1311586  Sep  16  16:11
./980/6980/2326980  Sep  16  16:18
./616/3616/513616   Sep  16  16:20
./133/9133/2119133  Sep  16  16:21
./422/6422/2106422  Sep  16  16:24
./566/6566/2326566  Sep  16  16:46
./672/672/2310672   Sep  16  16:51
./680/680/2290680   Sep  16  17:42
./573/5573/2325573  Sep  16  17:47
./106/1106/2321106  Sep  16  17:49
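
If your find supports -printf (GNU find does), a simpler variation is to print an epoch timestamp for each directory and sort on that, skipping the month/day/hour/minute juggling; a sketch that should give roughly the same last-10 listing:

# find . -mindepth 3 -maxdepth 3 -printf '%T@ %p %Tb %Td %TH:%TM\n' | sort -n | tail -10 | awk '{print $2, $3, $4, $5}' | column -t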

MySQL change field separator for select dump

You can select from a table into an outfile and change the field delimiter. This is interactive, and the file is written by the MySQL server on its own host, so the server needs write permission to the destination directory and you need access to that host to retrieve the file.

mysql> SELECT 'pid','email' UNION SELECT pid,email INTO OUTFILE '/tmp/test.csv' FIELDS TERMINATED BY ',' FROM table;
Query OK, 11837 rows affected (0.21 sec)

mysql> Bye
# head /tmp/test.csv
pid,email
1081603,user1@fordodone.com
888151,user2@fordodone.com
781,user3@fordodone.com
2307364,user4@fordodone.com
2286573,user5@fordodone.com
2212194,user6@fordodone.com
2137603,user7@fordodone.com
500492,user8@fordodone.com
1514582,user9@fordodone.com

This is non-interactive and can be done from a remote host:

# echo "select pid,email from table;" | mysql -h dbhost -udbuser -p -Ddbname | awk '{for (i=1;i<NF;i++) printf "%s,",$i; printf $i"\n"}' > /tmp/test.csv

get last occurrence of string in file

Here are just a few ways to skin this cat:

# tac /etc/fstab | grep -m 1 fs144
10.239.11.144:/vol/vol22         /mnt/fs144/vol22        nfs     auto,rw,soft,mountvers=3 0 0

# grep fs144 /etc/fstab | tail -1
10.239.11.144:/vol/vol22         /mnt/fs144/vol22        nfs     auto,rw,soft,mountvers=3 0 0

# awk '{ if ( /fs144/ ) j=$0;} END {print j}' /etc/fstab
10.239.11.144:/vol/vol22         /mnt/fs144/vol22        nfs     auto,rw,soft,mountvers=3 0 0