Check a list of servers for filesystems mounted read-only:
# for i in `cat servers`; do echo "$i: " ; ssh $i "awk '\$4~/(^|,)ro($|,)/' /proc/mounts"; done;
www1:
www2:
www3:
/dev/mapper/vg01-tmp /tmp ext4 ro 0 0
www4:
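If one turns up, the usual cause is ext4's errors=remount-ro reacting to an I/O error (dmesg will say). Once the underlying problem is dealt with, you can flip it back without a reboot:
# mount -o remount,rw /tmp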
Someone added a bunch of iface stanzas for configuration but forgot the auto part, so the aliases won't come up at boot. This inserts a matching auto line above each one:
# sed -i 's/iface eth0:\([0-9]\{3\}\)/auto eth0:\1\niface eth0:\1/' /etc/network/interfaces
<snip>
auto eth0:196
iface eth0:196 inet static
address 1.2.3.4
netmask 255.255.255.0
auto eth0:197
iface eth0:197 inet static
address 1.2.3.5
netmask 255.255.255.0
auto eth0:198
iface eth0:198 inet static
address 1.2.3.6
netmask 255.255.255.0
</snip>
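With the auto lines in place, you can raise the new aliases without a reboot; ifup -a brings up everything marked auto that isn't already up:
# ifup -a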
If you have access to the MySQL server and query logging is turned on, then you have access to the queries as they are logged. Many production databases leave logging off simply because there are too many queries to handle, and with hundreds of clients hitting the database at any given time it's hard to pick out the activity of a particular one. To take a look at MySQL queries as they leave a webserver, you can use tcpdump and massage the output to see what queries are being sent from that host.
# tcpdump -i eth0 -l -s 0 -w - dst port 3306 | stdbuf -o0 strings | stdbuf -o0 grep "SELECT\|INSERT\|UPDATE\|FROM\|WHERE\|ORDER\|AND\|LIMIT\|SET\|COMMIT\|ROLLBACK"
Sometimes the query gets broken up into pieces when WHERE or LIMIT is used, and those pieces wind up on separate lines, so we need to grep for the keywords separately. Use stdbuf to force every stage of the pipeline NOT to buffer its output, i.e. to print in pseudo real time.
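The same trick works in any pipeline: without it, a middle stage writing into a pipe block-buffers (typically 4 KB) and output arrives in delayed bursts. A quick illustration, assuming a syslog path that may differ on your system:
# tail -f /var/log/syslog | grep sshd | awk '{print $5}'              # nothing appears for a long while
# tail -f /var/log/syslog | stdbuf -o0 grep sshd | awk '{print $5}'   # lines print as they are logged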
Install 7z and extract the ISO to the current directory:
# apt-get install -y p7zip-full
# 7z x VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
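If you just want to peek inside first, 7z can list the archive contents without extracting:
# 7z l VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso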
Anyone who has done a lot of migrations has some snippets jotted down to help streamline the process. On a NetApp filer with many volumes, you can use this to generate the create commands for the destination volumes on a new aggregate:
# for i in `ssh filer01 "vol status" | awk '/raid_dp/ {print $1}' | grep -v Volume | grep -v Warning | grep -v _new`; do j=`ssh filer01 "vol size $i" | grep 'has size' | sed -e "s/'//g" -e "s/\.//g"`; k=`echo $j | awk '{print "vol create "$5"_new newaggr "$NF}'`; echo $k ; done;
vol create vol12_new newaggr 10g
vol create vol13_new newaggr 70g
vol create vol14_new newaggr 1600g
<snip...>
If you trust your hackery enough, you might even send the commands over to actually create the vols…
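A minimal sketch of that last step, assuming you've reviewed the generated commands and saved them to create_cmds.txt (a made-up filename), since the filer will execute commands passed over ssh:
# while read cmd; do ssh filer01 "$cmd"; done < create_cmds.txt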
Get a rough estimate of the row count from a mysqldump of a single table that was split into multiple files:
# cat table1.*.sql | sed -e 's/),(/\n/g' | wc -l
4699008
Here's the count from the actual table:
mysql> select count(id) from table1;
+-----------+
| count(id) |
+-----------+
|   4692064 |
+-----------+
1 row in set (0.00 sec)
Counting the row tuples in the mysqldump is a good rough estimate. In this case it's off by about 0.15%, either from the pattern matching inside string data or from the database shifting between the dump and the count.
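For the curious, the arithmetic:
# echo "scale=6; (4699008-4692064)/4692064*100" | bc
.147900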
Poll a mail host every five seconds with the Nagios check_smtp plugin, prefixing each result with a timestamp:
# while true; do date | tr '\n' ' ' && /usr/local/nagios/libexec/check_smtp -H somemailhost -p 25 ; sleep 5s; done;
Fri Aug 7 11:15:35 MST 2015 SMTP OK - 0.108 sec. response time|time=0.108085s;;;0.000000
Fri Aug 7 11:15:41 MST 2015 SMTP OK - 0.111 sec. response time|time=0.111096s;;;0.000000
Fri Aug 7 11:15:46 MST 2015 SMTP OK - 0.110 sec. response time|time=0.110013s;;;0.000000
Create the sequence from 0 to 9, sort it randomly, and take the first entry to get a random digit:
# seq 0 9 | sort -R | head -1
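With GNU coreutils you can get the same thing in one step from shuf:
# shuf -i 0-9 -n 1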
You can count up the instances of each digit and see that the distribution looks reasonably uniform to a human:
# for i in `seq 1 100000`; do seq 0 9 | sort -R | head -1 >> /tmp/rando; done;
# cat /tmp/rando | sort -n | uniq -c | sort -nk2
9896 0
10140 1
9928 2
9975 3
9929 4
10129 5
9951 6
10007 7
9882 8
10163 9
TODO: test with chi-square? Note that sort -R sorts by a random hash of the keys, so identical lines always land next to each other; it's a shuffle of distinct lines, not a true per-line RNG, which is fine for picking one of ten distinct digits.
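A quick pass at that TODO: with 100,000 draws over 10 bins the expected count is 10,000 per bin, and the chi-square statistic is the sum of (observed - expected)^2 / expected, compared against 16.92, the 0.05 critical value at 9 degrees of freedom:
# sort -n /tmp/rando | uniq -c | awk '{chi += ($1-10000)^2/10000} END {print "chi-square:", chi}'
chi-square: 10.085
On the counts above that's comfortably below 16.92, so sort -R looks uniform enough for this kind of use.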