use iozone to benchmark NFS

iozone -t 32 -s1g -r16k -i 0 -i 1

32 processes, each operating on its own 1 GB file in 16 KB records, running test 0 (write/re-write) and test 1 (read/re-read).

Environment: NFSv3 mount, AWS c4.xlarge, 500 GB gp2 EBS volume formatted ext4.
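
For reference, a minimal sketch of the client side (the server name, export path, and mount point here are placeholders, not details from the run below):

# mount -t nfs -o vers=3 nfs-server:/export/bench /mnt/bench
# cd /mnt/bench
# iozone -t 32 -s1g -r16k -i 0 -i 1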

        Iozone: Performance Test of File I/O
                Version $Revision: 3.397 $
                Compiled for 64 bit mode.
                Build: linux-AMD64 

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
                     Ben England.  

        Run began: Fri May 13 15:46:09 2016

        File size set to 1048576 KB
        Record Size 16 KB
        Command line used: iozone -t 32 -s1g -r16k -i 0 -i 1
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
        Throughput test with 32 processes
        Each process writes a 1048576 Kbyte file in 16 Kbyte records



        Children see throughput for 32 initial writers  =   82456.18 KB/sec
        Parent sees throughput for 32 initial writers   =   73910.24 KB/sec
        Min throughput per process                      =    2501.01 KB/sec
        Max throughput per process                      =    2746.40 KB/sec
        Avg throughput per process                      =    2576.76 KB/sec
        Min xfer                                        =  954736.00 KB

        Children see throughput for 32 rewriters        =   81779.61 KB/sec
        Parent sees throughput for 32 rewriters         =   78548.79 KB/sec
        Min throughput per process                      =    2509.20 KB/sec
        Max throughput per process                      =    2674.01 KB/sec
        Avg throughput per process                      =    2555.61 KB/sec
        Min xfer                                        =  983856.00 KB

        Children see throughput for 32 readers          =   91412.23 KB/sec
        Parent sees throughput for 32 readers           =   90791.54 KB/sec
        Min throughput per process                      =    2761.06 KB/sec
        Max throughput per process                      =    2910.95 KB/sec
        Avg throughput per process                      =    2856.63 KB/sec
        Min xfer                                        =  998720.00 KB

        Children see throughput for 32 re-readers       =   91781.74 KB/sec
        Parent sees throughput for 32 re-readers        =   91620.13 KB/sec
        Min throughput per process                      =    2832.08 KB/sec
        Max throughput per process                      =    2885.37 KB/sec
        Avg throughput per process                      =    2868.18 KB/sec
        Min xfer                                        = 1029440.00 KB



iozone test complete.

use netstat to monitor receive queue Recv-Q

# i=0; while true; do i=$(($i+1)); echo $i ==============================; netstat -natlp | grep ^tcp | sort -nk1 | awk '{ if($2 != 0) {print}}' ; sleep 1;  done;
1 ==============================
2 ==============================
3 ==============================
4 ==============================
5 ==============================
tcp      100      0 10.0.3.167:22           198.21.8.23:53477       ESTABLISHED 99304/sshd: fordodone
6 ==============================
7 ==============================
8 ==============================
9 ==============================
tcp    43520      0 10.0.3.167:53877        10.0.9.55:3306          ESTABLISHED 119789/mysqldump
10 ==============================
11 ==============================
12 ==============================
13 ==============================
14 ==============================
15 ==============================
16 ==============================
tcp6       1      0 10.0.3.167:80           198.21.8.23:65114       CLOSE_WAIT  3880/apache2    
17 ==============================
18 ==============================
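
The idea: once a second, print any TCP socket whose receive queue (the Recv-Q column) is non-zero. A Recv-Q that stays non-zero means the local application isn't reading data off the socket as fast as it arrives. On systems where netstat has been retired in favour of ss, a rough equivalent of the filter (a sketch; in ss -ntp output Recv-Q is also the second column) is:

# ss -ntp | awk 'NR > 1 && $2 != 0'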

start screen session with 4-way split screen

There are several terminal emulators that can split the screen into multiple regions in a single window. When logged into a server with no desktop environment, GNU screen is great for this: it supports splitting the screen horizontally and/or vertically, using ctrl + a + S for a horizontal split and ctrl + a + | for a vertical one. Here's an excerpt from a .screenrc file that splits the screen into 4 regions and starts an ssh session to a separate server in each of those regions.


split
split -v
focus down
split -v

screen -t bash /bin/bash
screen -t deploy1 /usr/bin/ssh deploy1
screen -t deploy2 /usr/bin/ssh deploy2
screen -t deploy3 /usr/bin/ssh deploy3
screen -t deploy4 /usr/bin/ssh deploy4

focus up
focus left
select 1
focus right
select 2
focus left
focus down
select 3
focus right
select 4

Now I can start the screen session…

# screen -c .screenrc-multiwindow

and automatically get this:

[screenshot: 4-way screen split]

use wget to recursively download files via FTP

A command line ftp client is good for many things. You can turn off prompting and use mget with a wildcard to get many files. The problem is that mget doesn't create directories locally, so when it tries to recurse into destination directories in order to place incoming files into them, it fails. We can use wget instead to traverse the directory structure, create the directories locally, and download the files.

# wget -r 'ftp://username:password@ftp.example.com'

Note: rsync would be ideal for this, but there are some cases where the source only offers ftp as a connection protocol.
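
One thing to watch: wget's recursive retrieval defaults to a depth of 5 directories. If the tree is deeper, -m (mirror mode) sets the recursion depth to infinite and enables timestamping; a sketch using the same placeholder credentials:

# wget -m 'ftp://username:password@ftp.example.com'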

bash random number generator using seq and sort

Generate the sequence 0 through 9, sort it in random order, and take the first element.

# seq 0 9 | sort -R | head -1
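
The shuf utility from coreutils can do the same thing in one step (an alternative to the seq/sort approach above):

# shuf -i 0-9 -n 1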

You can count up the instances of each digit and check that the distribution looks reasonably uniform to the eye.

# for i in `seq 1 100000`; do seq 0 9 | sort -R | head -1 >> /tmp/rando; done;

# cat /tmp/rando | sort -n | uniq -c | sort -nk2
   9896 0
  10140 1
   9928 2
   9975 3
   9929 4
  10129 5
   9951 6
  10007 7
   9882 8
  10163 9

TODO: test with a chi-square goodness-of-fit test? sort -R orders lines by a random hash of their keys rather than doing a true random shuffle, so it may not be truly random?
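
Following up on that TODO, here's a rough chi-square check (a sketch; it assumes the /tmp/rando file from the loop above, 10 buckets with an expected count of total/10 each, and compares against 16.92, the 5% critical value for 9 degrees of freedom):

# sort -n /tmp/rando | uniq -c | awk '{ obs[$2]=$1; n+=$1 } END { e=n/10; for (d=0; d<10; d++) chi += (obs[d]-e)^2/e; printf "chi-square = %.2f (5%% critical value at 9 df is 16.92)\n", chi }'

For the counts above this works out to roughly 10.1, which is consistent with a uniform distribution.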

map CPU ID to threads to cores to sockets with hyperthreading

The Linux OS uses a predictable method to map hyperthread-enabled CPU threads to CPU IDs. Here's a reminder if you need a quick way to map them:

# egrep 'processor|core id|physical id' /proc/cpuinfo | cut -d : -f 2 | paste - - -  | awk '{print "CPU"$1"\tsocket "$2" core "$3}'
CPU0    socket 0 core 0
CPU1    socket 0 core 1
CPU2    socket 0 core 2
CPU3    socket 0 core 3
CPU4    socket 1 core 0
CPU5    socket 1 core 1
CPU6    socket 1 core 2
CPU7    socket 1 core 3
CPU8    socket 0 core 0
CPU9    socket 0 core 1
CPU10   socket 0 core 2
CPU11   socket 0 core 3
CPU12   socket 1 core 0
CPU13   socket 1 core 1
CPU14   socket 1 core 2
CPU15   socket 1 core 3

This is a dual quad-core system with hyperthreading enabled: 2 physical CPUs with 4 cores each and 2 threads per core, so the OS sees 16 CPUs. CPU0 and CPU8 are the 2 threads on the first core (core 0) of the first physical processor (socket 0).
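
If lscpu from util-linux is available, lscpu -e reports the same mapping directly, one row per logical CPU with its socket and core IDs:

# lscpu -e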