count rows in mysqldump

Get a rough estimate of the row count from multiple mysqldump files of a single table that was split into multiple sections:

# cat table1.*.sql |sed -e 's/),(/\n/g' | wc -l
4699008

Here’s the count from the actual table:

mysql> select count(id) from table;
+-----------+
| count(id) |
+-----------+
|   4692064 |
+-----------+
1 row in set (0.00 sec)

Counting the inserted rows in the mysqldump gives a good rough estimate. In this case it’s off by about 0.1%, either because the pattern matches something it shouldn’t or because the table shifted between the dump and the count.
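The trick is easy to verify on a miniature fake dump (file path and table name are made up for illustration; the `\n` replacement assumes GNU sed):

```shell
# Two INSERT statements carrying 5 row tuples total.
printf "INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');\nINSERT INTO t VALUES (4,'d'),(5,'e');\n" > /tmp/fake.sql

# Splitting on "),(" puts one row tuple per line, so wc -l counts rows.
sed -e 's/),(/\n/g' /tmp/fake.sql | wc -l   # 5
```

Each `),(` separates two tuples inside one extended INSERT, so splitting on it yields one line per row (the final tuple of each statement supplies its own line).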

nagios aggressive cli smtp monitoring

# while true; do date | tr '\n' ' ' && /usr/local/nagios/libexec/check_smtp -H somemailhost -p 25 ; sleep 5s; done;
Fri Aug  7 11:15:35 MST 2015 SMTP OK - 0.108 sec. response time|time=0.108085s;;;0.000000
Fri Aug  7 11:15:41 MST 2015 SMTP OK - 0.111 sec. response time|time=0.111096s;;;0.000000
Fri Aug  7 11:15:46 MST 2015 SMTP OK - 0.110 sec. response time|time=0.110013s;;;0.000000

bash random number generator using seq and sort

Create the sequence from 0 to 9, sort it randomly, and take the first entry.

# seq 0 9 | sort -R | head -1
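If GNU coreutils is available, shuf collapses the whole pipeline into one step (same result, different tool):

```shell
# Pick one random integer from the inclusive range 0-9.
shuf -i 0-9 -n 1
```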

You can count the instances of each digit and eyeball whether the distribution looks uniform.

# for i in `seq 1 100000`; do seq 0 9 | sort -R | head -1 >> /tmp/rando; done;

# cat /tmp/rando | sort -n | uniq -c | sort -nk2
   9896 0
  10140 1
   9928 2
   9975 3
   9929 4
  10129 5
   9951 6
  10007 7
   9882 8
  10163 9

TODO: test with chi-square? Can sort’s -R flag be truly random?
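A quick chi-square goodness-of-fit against the uniform expectation (the awk script and the pasted-in counts are mine, not part of the original run; with 10 digits, df = 9 and the 5% critical value is about 16.92):

```shell
# Expected count per digit is n/10; sum (observed - expected)^2 / expected.
awk '{ n += $1; c[NR] = $1 }
     END {
       e = n / NR
       for (i = 1; i <= NR; i++) chi2 += (c[i] - e) ^ 2 / e
       printf "chi2 = %.3f with df = %d\n", chi2, NR - 1
     }' <<'EOF'
9896 0
10140 1
9928 2
9975 3
9929 4
10129 5
9951 6
10007 7
9882 8
10163 9
EOF
```

That prints chi2 = 10.085, well under 16.92, so these counts are consistent with a uniform distribution.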

count character occurrence rates in filenames

find all the files in a directory. Strip the leading dot that find prints, and replace slashes with spaces (a slash can’t be a character in a filename). Use fold -w 1 (the –width option limits output to 1 column) to put each character on its own line. Drop the space lines (we don’t care about them). Sort the output, count the occurrences of each character, then sort from least to most frequent.

find . -type f | sed -e 's/\.//' -e 's/\// /g' | fold -w 1 | grep -v '^ $' | sort | uniq -c | sort -nk1
      1 '
      7 ^
     22 ,
     29 (
     29 )
     40 #
     51 =
     72 ~
    214 @
    312 :
    672 Y
   1141 +
   1217 J
   1497 Z
   2813 G
   3696 U
   3727 H
   5168 O
   5654 N
   5700 X
   5721 K
  10185 R
  10590 W
  11414 F
  12412 A
  13114 E
  13424 C
  13904 z
  15369 Q
  15698 j
  18746 I
  20582 S
  30232 M
  39547 q
  44301 B
  44946 P
  54675 7
  74749 9
  74777 L
  78077 T
  83720 8
  86739 D
  87151 4
  92824 k
  93168 y
  94261 5
  96495 w
 105734 V
 135527 6
 193306 f
 215943 0
 239003 g
 274810 3
 284082 v
 291777 1
 305769 h
 329499 _
 353852 2
 397075 b
 493086 m
 513388 p
 523439 d
 539160 x
 654812 -
 697485 l
 717868 a
 728134 n
 843460 t
 862742 u
 883640 .
1059771 i
1060749 c
1109991 o
1227620 r
1326244 s
1440326 e
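The pipeline can be sanity-checked on a tiny, controlled set of filenames (the scratch directory and names here are made up for illustration):

```shell
# Two files whose characters we can count by hand: aa.txt and ab.txt.
cd "$(mktemp -d)"
touch aa.txt ab.txt
find . -type f | sed -e 's/\.//' -e 's/\// /g' | fold -w 1 \
  | grep -v '^ $' | sort | uniq -c | sort -nk1
```

By hand: a appears 3 times, b once, . twice, t four times, x twice — and that’s exactly what the pipeline reports, with t on top.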

get live config values on MySQL Cluster ndbcluster

Sometimes it’s necessary to double-check the configuration values actually in use by the live ndbd/ndbmtd processes.

Some operational configs for one node:

# ndb_config -r'\n' -f: --nodeid=12 -q id,MaxNoOfConcurrentIndexOperations,MaxNoOfFiredTriggers --config-from-node=12 
12:81920:40000

Memory allocations on data (ndbd) nodes:

# ndb_config -r'\n' -f: --type ndbd -q id,DataMemory,IndexMemory --config-from-node=12
11:51539607552:4294967296
12:51539607552:4294967296
13:51539607552:4294967296
14:51539607552:4294967296
15:51539607552:4294967296
16:51539607552:4294967296
17:51539607552:4294967296
18:51539607552:4294967296
19:51539607552:4294967296
20:51539607552:4294967296
21:51539607552:4294967296
22:51539607552:4294967296
23:51539607552:4294967296
24:51539607552:4294967296
25:51539607552:4294967296
26:51539607552:4294967296

Due to config caching, pending restarts, or a mismatch between management server (ndb_mgmd) nodes, the values in config.ini may differ from what the ndbcluster processes are actually using.
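Those raw byte counts are easier to eyeball in GiB. A small awk filter for the id:DataMemory:IndexMemory lines (sample values from the output above pasted in here; in practice you’d pipe ndb_config straight into the awk):

```shell
# Divide the byte fields by 2^30 to get GiB.
printf '11:51539607552:4294967296\n12:51539607552:4294967296\n' \
  | awk -F: '{ printf "node %s: DataMemory %d GiB, IndexMemory %d GiB\n", $1, $2 / 2^30, $3 / 2^30 }'
# node 11: DataMemory 48 GiB, IndexMemory 4 GiB
# node 12: DataMemory 48 GiB, IndexMemory 4 GiB
```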

make large directory tree

I needed to create a large directory structure for some testing, so I hacked together this quick script that makes a small or large directory tree easily. You define how many levels deep the tree goes and how many subdirectories each directory gets, all the way from the trunk to the leaves.

#!/bin/bash

i=0
l=0
levels=2        # depth of the tree
dirsperlevel=3  # subdirectories created in each directory

rm -rf tree
mkdir tree && cd tree || exit 1

while [ "$i" -lt "$levels" ]
do
  # walk every directory at the current depth and branch from it
  for j in $(find . -mindepth "$i" -maxdepth "$i" -type d)
  do
    pushd "$j" > /dev/null 2>&1
    for k in $(seq 1 "$dirsperlevel")
    do
      mkdir "$k"
    done
    popd > /dev/null 2>&1
  done
  i=$((i+1))
  l=$(echo "($dirsperlevel^$i)+$l" | bc)
done

echo "$l dirs created in ./tree"

Using 2 levels and 3 directories per level, we get 12 total directories, like so:

# mktree.sh 
12 dirs created in ./tree
# find tree    
tree
tree/2
tree/2/2
tree/2/3
tree/2/1
tree/3
tree/3/2
tree/3/3
tree/3/1
tree/1
tree/1/2
tree/1/3
tree/1/1

Using something like 6 levels and 6 directories per level would give us 55,986 total directories.
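The totals are just a geometric series — the sum of dirsperlevel^i for i = 1..levels — which you can check without making a single directory (bash arithmetic sketch, using the 6/6 case above):

```shell
# 6 + 36 + 216 + 1296 + 7776 + 46656 = 55986
levels=6
dirsperlevel=6
total=0
for i in $(seq 1 "$levels")
do
  total=$(( total + dirsperlevel ** i ))
done
echo "$total"   # 55986
```

The same loop with levels=2 and dirsperlevel=3 gives 12, matching the find output above.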