disable Linux/Unix user

You could remove a user and their home directory, but it is often better to preserve the account and just disable the login. The easiest way is to change the user's shell to one that doesn't exist, or to /bin/false. Using /bin/nologin also leaves a breadcrumb: anyone looking at the account later can tell that a human deliberately disabled it.

# chsh -s /bin/nologin firedemployee
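A minimal sketch of the whole disable-and-verify step (the username is from the example above; the usermod -L password lock is an optional extra, not part of the original tip):

```shell
# disable the shell, then confirm field 7 of the passwd entry changed
chsh -s /bin/nologin firedemployee
getent passwd firedemployee | cut -d: -f7
# optionally lock the password too, so password-based logins also fail
usermod -L firedemployee
```

Both chsh and usermod need root; getent works for any user and is a quick way to confirm the change took effect.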

merge directories with rsync

rsync -a --ignore-existing --remove-source-files src/ dest

Any existing files in the destination will not be overwritten. When it's done, whatever remains in src also exists in the destination: diff the leftovers to decide which to keep manually, or write a quick one-liner that compares timestamps and keeps the newer version of each file.

drop messages from mail queue

During massive outages (which thankfully happen rarely), I like to keep my Nagios monitoring machines online and working, so I have a view of which servers still have problems, or which processes didn't come back online correctly. However, I stop the MTA (postfix) on those servers, because I don't want to receive texts and emails complaining about all the servers that are still down. Once the problem is resolved, I could just start postfix back up, but let's take a look at the mail queue first:

# mailq 2>&1 | tail -1 | cut -d " " -f 5-
428 Requests.

Hmm… seems a bit high. If we start postfix again, guess how many text messages are going to wind up on my phone? Let’s drop all of the messages in the queue:

# postsuper -d ALL
postsuper: Deleted: 428 messages

Now we can start postfix without excessive messages being sent.

Alternatively, if the main MX relays go down for a period of time, you will see the mail queue fill up with undelivered mail. After you bring the MXs back online, the mail may not be sent to them immediately: your MTA probably uses an increasing retry interval, which could mean a delay of an hour or longer before the next attempt. Do this to attempt to relay all the mail in the queue right away:

# postqueue -f

It will try to reconnect to the MX relay immediately and deliver all the mail it can.

rsync migration with manifest of transfer

I once had a migration project to move 40TB of data from source to destination NFS volumes. Naturally, I went with rsync. This was basically the command for the initial transfers:

rsync -a --out-format="transfer:%t,%b,%f" --itemize-changes /mnt/srcvol /mnt/destvol >> /log/file.log

Pretty simple, right? The log is a simple CSV that looks like this:

transfer:2013/05/02 10:16:13,35291,mnt/srcvol/archive/foo/bar/barfoo/IMAGES/1256562131100.jpg

The customer asked for daily updates on progress. I said no problem, and this one-liner takes care of it:

# grep transfer /log/file.log | awk -F "," '{if ($2!=0) {i=i+$2; x++}} END {print "total Gbytes: "i/1073741824"\ntotal files: "x}'
total Gbytes: 1153.29
total files: 123686

From the rsync command above, %t means timestamp (2013/05/02 10:16:13), %b means bytes transferred (35291), and %f means the whole file path. By adding up the %b column of output and counting how many times you added it, you get both the total bytes transferred and the total number of files transferred. Directories show up as 0-byte transfers, so the awk condition skips them. Also, I threw in the divide by 1073741824 (1024*1024*1024), which converts bytes to gibibytes.

I ended up putting it in a shell script and adding options such as, just find transfers for a particular day/hour, better handling for the Gbytes number, rate calculation, and the ability to add logs from multiple data moving servers.
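The per-day filter from that script can be sketched by grepping on the %t date prefix before summing (the log path and date reuse the example above):

```shell
# summarize only one day's transfers by matching the %t date field
grep 'transfer:2013/05/02' /log/file.log \
  | awk -F "," '{if ($2!=0) {i=i+$2; x++}} END {print "total Gbytes: "i/1073741824"\ntotal files: "x}'
```

Narrowing to an hour works the same way: extend the grep pattern to 'transfer:2013/05/02 10:'.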

mount iso file in linux

I had to copy the contents of a .iso file. It’s easy to mount it and see what’s in the iso.


# mkdir /mnt/iso
# mount -o loop VMware-VMvisor-Installer-5.1.0-799733.x86_64.iso /mnt/iso
# cd /mnt/iso
# ls
a.b00         ata_pata.v05  boot.cfg      ima_qla4.v00  isolinux.bin  misc_dri.v00  net_e100.v01  net_r816.v00  ohci_usb.v00  sata_sat.v02  scsi_bnx.v00  scsi_meg.v01  scsi_qla.v01  upgrade                  xlibs.v00
ata_pata.v00  ata_pata.v06  chardevs.b00  imgdb.tgz     isolinux.cfg  net_be2n.v00  net_enic.v00  net_r816.v01  safeboot.c32  sata_sat.v03  scsi_fni.v00  scsi_meg.v02  scsi_rst.v00  user.b00                 xorg.v00
ata_pata.v01  ata_pata.v07  efi           imgpayld.tgz  k.b00         net_bnx2.v00  net_forc.v00  net_s2io.v00  sata_ahc.v00  sata_sat.v04  scsi_hps.v00  scsi_mpt.v00  s.v00         useropts.gz
ata_pata.v02  b.b00         efiboot.img   ipmi_ipm.v00  mboot.c32     net_bnx2.v01  net_igb.v00   net_sky2.v00  sata_ata.v00  scsi_aac.v00  scsi_ips.v00  scsi_mpt.v01  tboot.b00     vmware-esx-base-osl.txt
ata_pata.v03  block_cc.v00  ehci_ehc.v00  ipmi_ipm.v01  menu.c32      net_cnic.v00  net_ixgb.v00  net_tg3.v00   sata_sat.v00  scsi_adp.v00  scsi_lpf.v00  scsi_mpt.v02  tools.t00     vmware-esx-base-readme
ata_pata.v04  boot.cat      esx_dvfi.v00  ipmi_ipm.v02  misc_cni.v00  net_e100.v00  net_nx_n.v00  net_vmxn.v00  sata_sat.v01  scsi_aic.v00  scsi_meg.v00  scsi_qla.v00  uhci_usb.v00  weaselin.t00
#

Now the contents of /mnt/iso appear as if you had burned the .iso to a disc and put it in the CD drive.

rsync on different ssh port

By default sshd listens on TCP port 22. If yours is configured to listen on a non-standard port, you need to tell rsync which port to connect to. I ran into this while writing a quick backup script for this WordPress site; in my case the remote server's sshd was listening on 4590. You have to give rsync a special ssh option:

# rsync -a --rsh='ssh -p 4590' /srv/www/wp-uploads/ backupsite.com:/backups/wp/wp-uploads
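Alternatively, you can pin the port in ~/.ssh/config, so every ssh-based tool (rsync, scp, plain ssh) picks it up without extra flags. A sketch using the hostname from the example:

```
# ~/.ssh/config
Host backupsite.com
    Port 4590
```

With that in place the rsync command needs no --rsh option at all.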

find recently modified files

To find the 10 newest items in your home directory you can just use ls.

# cd 
# ls -lt | head
total 1197116
-rw-r--r--  1 fordodone fordodone  5353 2013-04-23 10:42 file1
-rw-r--r--  1 fordodone fordodone  2945 2013-04-23 10:21 file2
drwxr-xr-x  2 fordodone fordodone 12288 2013-04-12 08:53 bin
-rw-r--r--  1 fordodone fordodone     0 2013-03-27 08:45 file3
-rw-------  1 fordodone fordodone 90420 2013-03-23 09:03 file4
-rw-r--r--  1 fordodone fordodone    83 2013-03-19 10:35 file5
-rw-r--r--  1 fordodone fordodone  8683 2013-03-15 10:26 file6
-rw-r--r--  1 fordodone fordodone 28628 2013-03-15 09:15 file7
-rw-r--r--  1 fordodone fordodone 81303 2013-03-15 09:15 file8

You could even get more aggressive by throwing in the recursive flag. Simple, right? But what if you need to recurse over a huge tree on a storage system, say 123,000 directories and 37 million files? Then find is the way to go.

This will find files modified in the last 24 hours:

# find . -type f -mtime -1 -ls

find doesn’t really provide granular control when searching for files with a particular modification time. If you just want to find files that have been modified today (i.e. since 12am), you can use the -newer flag: first touch a temporary file with a reference timestamp, then find files newer than it. In this case we build a date string of 04240000, i.e. today (April 24th) at 00:00, touch a file with that timestamp, and use find to list files newer than it.


# touch -t `date +%m%d0000` /tmp/compare
# find . -type f -newer /tmp/compare
(long output)
# rm /tmp/compare
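If you have GNU find, the temp-file dance can be skipped entirely with -newermt, which compares mtimes against a date string directly (GNU-specific; not in POSIX find):

```shell
# files modified since midnight today
find . -type f -newermt "$(date +%F)"
# or against an explicit timestamp
find . -type f -newermt "2013-04-24 00:00"
```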