disable Linux/Unix user

You could remove a user and their home directory entirely, but if you want to preserve the account while disabling logins, the easiest way is to change their shell to one that doesn’t exist, or to /bin/false. Using /bin/nologin instead leaves a breadcrumb: anyone looking at the account later can tell that a human deliberately disabled it.

# chsh -s /bin/nologin firedemployee
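The nologin path varies by distro (/sbin/nologin on Red Hat-style systems, /usr/sbin/nologin on Debian/Ubuntu), so it's worth checking which one actually exists before pointing a shell at it. A quick sketch:

```shell
# Find a valid "deny login" shell on this system; the nologin
# path differs between distros, with /bin/false as the fallback.
for p in /sbin/nologin /usr/sbin/nologin /bin/false; do
    [ -x "$p" ] && { echo "$p"; break; }
done
```

Whatever path it prints is safe to hand to `chsh -s`.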

change Baytech RPC MAC address

Why any manufacturer would not burn a MAC address into an interface is beyond me. There are many of these RPC-3 units around that all have the same MAC address (00:C0:48:00:56:CE), because they weren’t set at the factory. To set them:

1) log in on the console
2) select 3 for Configuration
3) type an up caret ^
4) hit enter
5) enter the serial number of the unit (e.g. 02314798, not 02314798-00)
6) hit enter, enter
7) accept the changes with Y
8) the unit resets and uses the new MAC address

Because the serial numbers should be unique, the MACs will also be unique. The decimal serial number 02314798 translates to the hexadecimal number 23522E, which forms the last six digits of the MAC, making the final address 00:C0:48:23:52:2E. They must have set all of them at the factory using the placeholder serial number 00022222 (hex 56CE, hence 00:56:CE) for some reason.
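The derivation is easy to check from the shell. A sketch, using the serial number from the example above:

```shell
# Derive the MAC a Baytech RPC-3 will use from its decimal serial:
# the serial, converted to hex, forms the last six MAC digits.
serial=02314798
dec=$(echo "$serial" | sed 's/^0*//')   # strip leading zeros (octal trap)
hex=$(printf '%06X' "$dec")
echo "00:C0:48:$(echo "$hex" | sed 's/../&:/g; s/:$//')"
# prints 00:C0:48:23:52:2E
```

Running it with the factory placeholder 00022222 yields 00:C0:48:00:56:CE, the duplicated address these units ship with.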


dhcpd lease information sorted by date and time

When looking at a PXE boot install server, you can see which clients booted most recently. If you don’t have KVM access to the new servers being installed, just look at the newest lease info and make an educated guess about which new one to log in to (via ssh) in the auto-installer environment (preseed).

Here’s a snippet from the leases file:

lease 10.101.40.85 {
  starts 3 2013/05/15 19:54:36;
  ends 3 2013/05/15 20:54:36;
  cltt 3 2013/05/15 19:54:36;
  binding state active;
  next binding state free;
  hardware ethernet 00:30:48:5c:cf:34;
  uid "\001\0000H\\\3174";
}

And after some parsing:

# grep -e lease -e hardware -e start -e end /var/lib/dhcp/dhcpd.leases | grep -v format | grep -v written | sed -e '/start/s/\// /g' -e 's/;//g' -e '/starts/s/:/ /g' | paste - - - - | awk '{print $2" "$18" "$6" "$7" "$8" "$9" "$10" "$11" "$14" "$15}' | sort -k 3,3n -k 4,4n -k 5,5n -k 6,6n -k 7,7n -k 8,8n | awk '{print $1" "$2" "$3"/"$4"/"$5" "$6":"$7":"$8" "$9" "$10}' | column -t


10.101.40.127  00:1e:68:9a:e5:ac  2013/04/26  22:02:58  2013/04/26  23:02:58
10.101.40.129  00:1e:68:9a:e5:ac  2013/04/26  23:10:01  2013/04/27  00:10:01
10.101.40.122  00:1e:68:9a:e5:ac  2013/04/26  23:27:57  2013/04/26  23:30:42
10.101.40.118  00:1e:68:9a:ee:69  2013/05/14  16:21:28  2013/05/14  17:21:28
10.101.40.85   00:30:48:5c:cf:34  2013/05/14  16:54:43  2013/05/14  17:54:43
10.101.40.118  00:1e:68:9a:ee:69  2013/05/14  17:14:04  2013/05/14  18:14:04
10.101.40.85   00:30:48:5c:cf:34  2013/05/14  17:24:43  2013/05/14  18:24:43
10.101.40.85   00:30:48:5c:cf:34  2013/05/14  17:54:42  2013/05/14  18:54:42
#
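That pipeline works, but the same report can come from a single awk pass over the lease blocks, which is easier to maintain. A sketch, assuming every lease block has all four fields as in the ISC format shown above; since the dates are YYYY/MM/DD, a plain text sort already orders them chronologically:

```shell
# One awk pass: collect IP, MAC, start, and end per lease block,
# print on the closing brace, then sort by start date and time.
awk '
  /^lease /            { ip = $2 }
  /starts/             { s = $3 " " $4; sub(/;$/, "", s) }
  /ends/               { e = $3 " " $4; sub(/;$/, "", e) }
  /hardware ethernet/  { mac = $3; sub(/;$/, "", mac) }
  /^}/                 { print ip, mac, s, e }
' /var/lib/dhcp/dhcpd.leases | sort -k3,3 -k4,4 | column -t
```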

merge directories with rsync

rsync -a --ignore-existing --remove-source-files src/ dest

Any existing files in the destination will not be overwritten. After it’s done, look in src to see what remains (those files also exist in the destination), then diff to decide which ones to keep manually, or quickly write a one-liner to compare time stamps, keeping newer versions and overwriting older ones.
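That "keep newer ones" pass can be done with rsync itself: --update skips any file that is already newer on the destination, so a second run over the leftovers overwrites only stale copies. A sketch; try it with --dry-run first:

```shell
# Second pass over whatever remains in src/: overwrite dest copies
# only when the src version is newer; --update skips the rest.
rsync -a --update --remove-source-files src/ dest
```

Since --remove-source-files only deletes files that were actually transferred, anything skipped stays behind in src/ for manual review.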


drop messages from mail queue

During massive outages (which thankfully happen rarely), I like to keep my Nagios monitoring machines online and working, because I like having a view of the servers with remaining problems, or processes that didn’t come back online correctly. However, I stop our MTA (postfix) on those servers, because I don’t want to receive texts and emails complaining about all the servers that are still down. Once the problem is resolved, I could just start postfix again, but let’s take a look at the mail queue first:

# mailq 2>&1 | tail -1 | cut -d " " -f 5-
428 Requests.

Hmm… seems a bit high. If we start postfix again, guess how many text messages are going to wind up on my phone? Let’s drop all of the messages in the queue:

# postsuper -d ALL
postsuper: Deleted: 428 messages

Now we can start postfix without excessive messages being sent.
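Dropping the whole queue is a sledgehammer. The postsuper(1) man page shows a more surgical pattern: parse the mailq output and delete only messages for one recipient. A sketch adapted along those lines (pager@example.com is a made-up address):

```shell
# Delete only queued messages whose sole recipient is the pager
# address: "tail -n +2" drops the mailq header, grep drops the
# "(reason)" continuation lines, RS="" makes awk treat each queue
# entry as one record ($1 = queue ID, $8 = first recipient,
# $9 == "" means no second recipient), tr strips the */! status
# flags, and postsuper -d - reads queue IDs from stdin.
mailq | tail -n +2 | grep -v '^ *(' \
  | awk 'BEGIN { RS = "" } { if ($8 == "pager@example.com" && $9 == "") print $1 }' \
  | tr -d '*!' | postsuper -d -
```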

Alternatively, if the main MX relays go down for a period of time, you will see the mail queue fill up with undelivered mail. After you bring the MXs back online, the mail may not be retried immediately: your MTA probably uses an increasing retry interval, which could lead to a delay of an hour or longer. Do this to attempt to relay all the mail in the queue right away:

# postqueue -f

It will immediately attempt to reconnect to the MX relay and deliver all the mail it can.


get total host memory on VMWare ESXi

It is my opinion that Graphical User Interfaces (GUIs) can make administration both easier and more difficult. I strongly prefer the command line, and many appliances and non-desktop systems expose a plethora of advanced options when configured from it. I always void the warranty and get under the hood: all of my ESXi servers have SSH and console access enabled. If someone in the datacenter is replacing suspect memory in an ESXi system, you don’t want them to re-rack and cable it up just so you can log in through vSphere to see whether the host recognizes all the memory. Just do it from the command line. Oh yeah, and free is not a recognized command in the ESXi shell.

# free
-ash: free: not found
#
# vim-cmd hostsvc/hosthardware | grep memorySize | sed -e 's/,//' -e 's/^ *//' 
memorySize = 34182787072
#
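The raw byte count is easier on the eyes converted to GiB; a quick awk over the same vim-cmd output does it. A sketch, assuming the field layout matches the output above:

```shell
# Same query, but strip the non-digits from the value field
# and convert bytes to GiB (divide by 1024^3).
vim-cmd hostsvc/hosthardware \
  | awk '/memorySize/ { gsub(/[^0-9]/, "", $3); printf "%.1f GiB\n", $3 / 1073741824 }'
```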

This was done on ESXi 4.1.0, build 345043.


rsync migration with manifest of transfer

I once had a migration project in which 40TB of data needed to be moved from source to destination NFS volumes. Naturally, I went with rsync. This was basically the command for the initial transfers:

rsync -a --out-format="transfer:%t,%b,%f" --itemize-changes /mnt/srcvol /mnt/destvol >> /log/file.log

Pretty simple, right? The log is a simple CSV file that looks like this:

transfer:2013/05/02 10:16:13,35291,mnt/srcvol/archive/foo/bar/barfoo/IMAGES/1256562131100.jpg

The customer asked for daily updates on progress. I said no problem; this one-liner takes care of it:

# grep transfer /log/file.log | awk -F "," '{if ($2!=0) i=i+$2; x++} END {print "total Gbytes: "i/1073741824"\ntotal files: "x}'
total Gbytes: 1153.29
total files: 123686

From the rsync command above, %t means timestamp (2013/05/02 10:16:13), %b means bytes transferred (35291), and %f means the whole file path. By adding up the %b column of the output and counting the entries as you go, you get both the total bytes transferred and the total number of files. Directories show up as 0-byte transfers, so the awk skips them when summing bytes. Also, I threw in the division by 1073741824 (1024*1024*1024), which converts bytes to gibibytes.

I ended up putting it in a shell script and adding options such as finding transfers for a particular day or hour, better handling of the Gbytes number, rate calculation, and the ability to combine logs from multiple data-moving servers.
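For instance, the per-day breakdown falls out of the same log format with a slightly longer awk. A sketch:

```shell
# Group transferred bytes and file counts by the date in column 1
# ("transfer:YYYY/MM/DD HH:MM:SS"), skipping 0-byte directory entries.
grep transfer /log/file.log | awk -F "," '
  {
    split($1, a, " ")                    # a[1] = "transfer:YYYY/MM/DD"
    day = a[1]; sub(/^transfer:/, "", day)
    if ($2 != 0) { bytes[day] += $2; files[day]++ }
  }
  END {
    for (d in bytes)
      printf "%s  %.2f GiB  %d files\n", d, bytes[d] / 1073741824, files[d]
  }' | sort
```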
