mysql log slow queries

It’s really helpful to know which queries are choking a database. To enable the MySQL slow query log, add these lines to the [mysqld] section of your my.cnf or MySQL configuration file and restart mysqld:

slow_query_log=1
slow_query_log_file=/var/log/mysql_slow_queries/dbserver1.log
long_query_time=10

This will log queries that take longer than 10 seconds to the specified file. After you collect some logs, work with the application developers to see if any of the queries can be optimized: rewriting them with more efficient SQL, adding indices, etc.
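
On MySQL 5.1 and newer you can also turn this on at runtime without a restart; these settings don’t survive a restart, so keep the my.cnf lines above as well:

mysql> set global slow_query_log = 1;
mysql> set global long_query_time = 10;

Once the log has some data, the mysqldumpslow utility that ships with MySQL will summarize it by query pattern:

# mysqldumpslow /var/log/mysql_slow_queries/dbserver1.log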

Linux rebuild software RAID1

Set the bash field separator (IFS) to a newline, so the generated commands below are split per line rather than on every space:

IFS="
"

See which disk and partitions are currently up, generate commands to re-add the missing disk’s partitions, then run them:

for i in `cat /proc/mdstat | grep md | cut -d [ -f1 | sed -e 's/\(md[0-9]\).*\(sd[a-z][0-9]\)/mdadm --add \/dev\/\1 \/dev\/\2/' | sed -e 's/sdb/sda/'`;  do eval $i; done;

TODO: make it determine which disk to add (/dev/sda or /dev/sdb)
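
Here’s a rough, untested sketch of that idea, assuming a plain two-disk sda/sdb mirror where one whole disk dropped out; whichever disk still appears in /proc/mdstat is the survivor, so we re-add the other one’s partitions:

# figure out which disk survived; the other one is the disk to re-add
present=$(grep -o 'sd[ab]' /proc/mdstat | sort -u | head -1)
if [ "$present" = "sda" ]; then missing=sdb; else missing=sda; fi
IFS="
"
for i in $(grep md /proc/mdstat | cut -d [ -f1 | sed -e "s/\(md[0-9]\).*sd[a-z]\([0-9]\)/mdadm --add \/dev\/\1 \/dev\/${missing}\2/"); do eval $i; done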

awk average multiple columns

If you have some output lined up in columns, use awk to average the columns. Here’s some sample output (from a NetApp “toaster> stats show -p flexscale-access”):

# cat sample.txt
    73   5480      0   1040  84     0     0      0     0      0     0      0       541
    73   6038     39   1119  84     0     0      0     0      0     0      0       475
    73   5018     19    859  85     0     0      0     0      0     0      0       348
    73   5960     20   1480  80   120     0    320     0      0     0      0       427
    73   6098      0   1019  85     0     0      0     0      0     0      0       486
    73   5220      0   1220  81     0     0      0     0      0     0      0       288
    73   5758     79   1319  81    59    39    319     0      0     0      0       500
    73   4419      0   2039  68     0     0      0     0      0     0      0       279
    73   5400      0    840  86     0     0      0     0      0     0      0       382
    73   5238      0   1299  80     0     0      0     0      0     0      0       389
    73   5449      0   1696  76    59     0    199     0      0     0      0       340
    73   5478      0   1419  79     0     0      0     0      0     0      0       414
    73   5020     20   1000  83     0     0      0     0      0     0      0       405
    73   4359      0   1059  80     0     0      0     0      0     0      0       295
    73   5838     39   1139  83     0    19      0     0      0     0      0       494
    73   6100     40   1720  78     0     0      0     0      0     0      0       480
    73   5398     19   1239  81     0     0      0     0      0     0      0       398
    73   5089     79   1097  82     0     0      0     0      0     0      0       459
    73   6178     19   1159  84     0    39    159     0      0     0      0       487
    73   4999      0   1239  80     0     0      0     0      0     0      0       345
    73   4820      0    880  84     0     0      0     0      0     0      0       339
    73   5467      0   1177  82     0     0      0     0      0     0      0       413
    73   4700     60   1480  76     0     0      0     0      0     0      0       337
#

And the column averages:

# cat sample.txt | awk '{for (i=1;i<=NF;i++){a[i]+=$i;}} END {for (i=1;i<=NF;i++){printf "%.0f", a[i]/NR; printf "\t"};printf "\n"}'
73      5371    19      1241    81      10      4       43      0       0       0       0       405
#

Here awk loops through each field in a row and adds the value to an array (a[i]) keyed by field number. At the end it takes each total, divides it by the number of rows (NR), and prints the result without decimals. It separates each field with a tab (\t) and prints a newline (\n) after the last field.

You could make it print totals as well as averages. You could also make it print out the original data first, or a field header so you know what each column represents...
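
For instance, here’s an untested variation of the same one-liner that prints the totals on one line and the averages on the next:

# awk '{for (i=1;i<=NF;i++){a[i]+=$i}} END {for (i=1;i<=NF;i++){printf "%.0f\t", a[i]}; printf "total\n"; for (i=1;i<=NF;i++){printf "%.0f\t", a[i]/NR}; printf "avg\n"}' sample.txt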

NetApp migrate root volume to new aggregate

Moving the root volume from one aggregate to another is fairly straightforward, but be meticulous about each step. First, identify where the root volume is and what size it is. In this case our root volume still has the default name, vol0.

toaster> vol container vol0
Volume 'vol0' is contained in aggregate 'aggr1'
toaster> vol size vol0
vol size: Flexible volume 'vol0' has size 14g.
toaster>

Create a new volume the same size (or larger) as vol0:

toaster> vol create root2 aggr2 14g
Creation of volume 'root2' with size 14g on containing aggregate
'aggr2' has completed.
toaster>
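
One prerequisite worth noting: snapmirror must be licensed and enabled before the next step. If it isn’t already, that looks something like this (XXXXXXX is a placeholder for your license code):

toaster> license add XXXXXXX
toaster> options snapmirror.enable on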

Now restrict the volume and use snapmirror to mirror the root volume to the new root volume you just created:

toaster> vol restrict root2
Volume 'root2' is now restricted.
toaster> snapmirror initialize -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Uninitialized  -          Transferring  (150 MB done)

Once the transfer is complete, be paranoid and do one last update:

toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:09:41   Idle
toaster>
toaster> snapmirror update -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status              
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:00:12   Idle

Now break the snapmirror, making the destination (the new root volume) writable:

toaster> snapmirror break root2
snapmirror break: Destination root2 is now writable.
Volume size is being retained for potential snapmirror resync.  If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
toaster>
toaster>

Mark the new root volume as ‘root’ using vol options:

toaster> vol options root2 root         
Wed Jul 31 09:36:06 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0a.32 removed from local mailbox set.
Wed Jul 31 09:36:07 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0c.16 removed from local mailbox set.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 2a.16 is a local HA mailbox disk.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 0c.48 is a local HA mailbox disk.
Volume 'root2' will become root at the next boot.
toaster>

Here you can reboot or fail over. If you need to keep the cluster serving data while performing this procedure, do a cf takeover from the other head in the cluster, then a cf giveback when ready to complete the reboot.
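
From the partner head that looks something like this (assuming a standard active/active pair):

partner> cf takeover
(wait for the takeover to complete and the other head to reboot)
partner> cf giveback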

Now verify the root volume:

toaster> vol status
          vol0  online          raid_dp, flex     
          root2 online          raid_dp, flex     root, fs_size_fixed=on

As you can see, root2 now has the root option listed. Offline the old root volume and destroy it:

toaster> vol offline vol0
Wed Jul 31 09:48:48 PDT [toaster: wafl.vvol.offline:info]: Volume 'vol0' has been set temporarily offline
Volume 'vol0' is now offline.
toaster> 
toaster> vol destroy vol0
Are you sure you want to destroy this volume? yes
Wed Jul 31 09:48:55 PDT [toaster: wafl.vvol.destroyed:info]: Volume vol0 destroyed.
Volume 'vol0' destroyed.
toaster> 
toaster> vol rename root2 vol0
'root2' renamed to 'vol0'
toaster> 

The migration is complete: your new root volume is in a different aggregate, the system has booted from it, and the old volume has been destroyed, so you can now destroy the old aggregate, detach shelves, etc. Finally, check /etc/exports and your CIFS configuration; since the new root is actually a copy, the NFS and CIFS configurations may need fixing up.
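
For example, on 7-mode you can read the exports file right from the console, and reload it after any edits:

toaster> rdfile /etc/exports
toaster> exportfs -a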

mysql count and group

In MySQL you can count the total number of rows, or you can count rows grouped by a column. Here we take a table and count the rows for each type of fruit:

mysql> select fruit,count(id) from inventory group by fruit;

+-------------+------------+
| fruit       | count(id)  |
+-------------+------------+
| apple       |       3496 |
| orange      |       3783 |
| mango       |         11 |
+-------------+------------+

Add the ‘with rollup’ modifier to get the total at the bottom:

mysql> select fruit,count(id) from inventory group by fruit with rollup;

+-------------+------------+
| fruit       | count(id)  |
+-------------+------------+
| apple       |       3496 |
| orange      |       3783 |
| mango       |         11 |
| NULL        |       7290 |
+-------------+------------+

You can add other conditional statements to refine what you need. Here we only want the fresh fruits that we have more than 3500 of:

mysql> select count(id) as cnt,fruit from inventory where status='fresh' group by fruit having cnt >3500;
+------+-----------+
| cnt  | fruit     |
+------+-----------+
| 3783 | orange    |
+------+-----------+

vyatta generate config set commands

To see the set commands that would rebuild your Vyatta configuration as it currently stands, use the vyatta-config-gen-sets.pl script:

# sudo /opt/vyatta/sbin/vyatta-config-gen-sets.pl

This will output every command needed to bring a system to your current configuration. This is great for copying and pasting common bits to all your Vyatta servers, such as static routes, the package repository config, firewall config, etc. You can put an alias in your .bashrc as a shortcut:

alias gensets='/opt/vyatta/sbin/vyatta-config-gen-sets.pl'
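
Then, for example, pull out just the firewall bits to paste into another box:

# gensets | grep firewall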

redmine get list of user emails for project

If you need to email everyone on a project, it’s probably no big deal to look up people’s account information and cut and paste the addresses into an email. But if you have hundreds of users on a project, just go to the database and get their emails. First find the project id, then get the emails of everyone on that project.

mysql> select users.mail from members left join users on members.user_id = users.id where members.project_id = 14;
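
If you don’t know the project id offhand, it’s in the projects table; ‘myproject’ here is a placeholder for your project’s identifier:

mysql> select id,name from projects where identifier = 'myproject';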

get current client IP addresses from web farm

To see which client IPs are connecting to your web farm the most, ssh to all of the servers and get a list of connected clients, then sort and count it to find the busiest ones.

# for i in `seq 401 436`; do ssh www$i "netstat -natp | grep EST | grep apa | grep \":80 \" | awk '{print \$5}' | cut -d : -f1"; done | sort | uniq -c | sort -nk1 | tail
      3 10.0.0.1
      3 10.0.0.10
      3 10.245.34.2
      4 10.29.45.89
      5 10.111.111.111
      5 10.239.234.234
      5 10.1.1.1
      5 10.2.2.2
      6 10.3.3.3
     10 10.100.100.100
#

The list shows the number of connections and the client IP.
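
The same pipeline works on a single box too, if you just want the busiest clients of one server:

# netstat -natp | grep EST | grep apa | grep ":80 " | awk '{print $5}' | cut -d : -f1 | sort | uniq -c | sort -nk1 | tail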