NetApp migrate root volume to new aggregate

Moving the root volume from one aggregate to another is fairly straightforward, but be meticulous about each step. First, identify which aggregate contains the root volume and how large it is. In this case the root volume still has the default name, vol0.

toaster> vol container vol0
Volume 'vol0' is contained in aggregate 'aggr1'
toaster> vol size vol0
vol size: Flexible volume 'vol0' has size 14g.
toaster>

Create a new volume the same size (or larger) as vol0:

toaster> vol create root2 aggr2 14g
Creation of volume 'root2' with size 14g on containing aggregate
'aggr2' has completed.
toaster>

Now restrict the volume and use snapmirror to mirror the root volume to the new root volume you just created:

toaster> vol restrict root2
Volume 'root2' is now restricted.
toaster> snapmirror initialize -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Uninitialized  -          Transferring  (150 MB done)

Once the transfer is complete, be paranoid and do one last update:

toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:09:41   Idle
toaster>
toaster> snapmirror update -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status              
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:00:12   Idle

Now break the snapmirror, making the destination new root volume writable:

toaster> snapmirror break root2
snapmirror break: Destination root2 is now writable.
Volume size is being retained for potential snapmirror resync.  If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
toaster>
toaster>

Mark the new root volume as ‘root’ using vol options:

toaster> vol options root2 root         
Wed Jul 31 09:36:06 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0a.32 removed from local mailbox set.
Wed Jul 31 09:36:07 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0c.16 removed from local mailbox set.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 2a.16 is a local HA mailbox disk.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 0c.48 is a local HA mailbox disk.
Volume 'root2' will become root at the next boot.
toaster>

Here you can reboot, or fail over. If you need to keep the cluster up while performing this procedure, do a cf takeover from the other head in the cluster, then when ready do a cf giveback to complete the reboot.

Now verify the root volume:

toaster> vol status
          vol0  online          raid_dp, flex     
          root2 online          raid_dp, flex     root, fs_size_fixed=on

As you can see, root2 has the root option listed. Now offline the old root volume and destroy it:

toaster> vol offline vol0
Wed Jul 31 09:48:48 PDT [toaster: wafl.vvol.offline:info]: Volume 'vol0' has been set temporarily offline
Volume 'vol0' is now offline.
toaster> 
toaster> vol destroy vol0
Are you sure you want to destroy this volume? yes
Wed Jul 31 09:48:55 PDT [toaster: wafl.vvol.destroyed:info]: Volume vol0 destroyed.
Volume 'vol0' destroyed.
toaster> 
toaster> vol rename root2 vol0
'root2' renamed to 'vol0'
toaster> 

The migration is complete: the new root volume lives in a different aggregate and the system has booted from it. The old root volume has been destroyed, and you may now destroy the old aggregate, detach shelves, etc. Finally, check /etc/exports and the CIFS configuration; since the new root is actually a copy, the NFS and CIFS configurations may need fixing up.

mysql count and group

In mysql you can count the total number of rows, or you can count rows after grouping on a column. Here we take a table and count the rows for each type of fruit:

mysql> select fruit,count(id) as cnt from inventory group by fruit;

+-------------+------------+
| fruit       | cnt        |
+-------------+------------+
| apple       |       3496 |
| orange      |       3783 |
| mango       |         11 |
+-------------+------------+

Add ‘with rollup’ to the group by clause to get the total at the bottom:

mysql> select fruit,count(id) as cnt from inventory group by fruit with rollup;

+-------------+------------+
| fruit       | cnt        |
+-------------+------------+
| apple       |       3496 |
| orange      |       3783 |
| mango       |         11 |
| NULL        |       7290 |
+-------------+------------+

You can add other clauses to refine what you need. Here we only want the fresh fruits that we have more than 3500 of:

mysql> select count(id) as cnt,fruit from inventory where status='fresh' group by fruit having cnt >3500;
+------+-----------+
| cnt  | fruit     |
+------+-----------+
| 3783 | orange    |
+------+-----------+
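The same shape of result can be reproduced outside MySQL. Here is a minimal awk sketch of GROUP BY plus a rollup total; the /tmp/fruits.txt file and its contents are made up to stand in for the inventory table:

```shell
# Stand-in data: one row per fruit in inventory (made-up values).
printf 'apple\napple\norange\nmango\norange\n' > /tmp/fruits.txt

# Count rows per fruit, then print a grand total, like GROUP BY ... WITH ROLLUP.
awk '{ cnt[$1]++; total++ }
     END { for (f in cnt) print f, cnt[f]; print "TOTAL", total }' /tmp/fruits.txt
```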

vyatta generate config set commands

To see the set commands that produce your current Vyatta configuration, use the vyatta-config-gen-sets.pl script:

# sudo /opt/vyatta/sbin/vyatta-config-gen-sets.pl

This will output every command needed to bring a system to its current configuration. This is great for copying and pasting common bits to all your Vyatta servers, such as static routes config, package repository config, firewall config, etc. You can put an alias in your .bashrc as a shortcut to it:

alias gensets='/opt/vyatta/sbin/vyatta-config-gen-sets.pl'

redmine get list of user emails for project

If you need to email everyone on a project, it’s probably no big deal to find each person’s account information and cut and paste it into an email. But if you have hundreds of users on a project, just go to the database and get their emails. First find the project id, then get the emails of everyone on that project.

mysql> select users.mail from members left join users on members.user_id = users.id where members.project_id = 14;
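Once the query output is saved one address per line, joining it into a single comma-separated To: line is a one-liner. The file name and addresses below are made up for illustration:

```shell
# Made-up addresses standing in for the query output, one per line.
printf 'alice@example.com\nbob@example.com\ncarol@example.com\n' > /tmp/mails.txt

# Serialize all lines into one string (-s) using , as the delimiter (-d,).
paste -sd, /tmp/mails.txt
# -> alice@example.com,bob@example.com,carol@example.com
```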

get current client IP addresses from web farm

To see which client IPs are connecting to your web farm most, ssh to all of the servers and gather the remote addresses of established connections, then sort the combined list to find the busiest clients.

# for i in `seq 401 436`; do ssh www$i "netstat -natp | grep EST | grep apa | grep ':80 ' | awk '{print \$5}' | cut -d : -f1"; done | sort | uniq -c | sort -nk1 | tail
      3 10.0.0.1
      3 10.0.0.10
      3 10.245.34.2
      4 10.29.45.89
      5 10.111.111.111
      5 10.239.234.234
      5 10.1.1.1
      5 10.2.2.2
      6 10.3.3.3
     10 10.100.100.100
#

The list shows the number of connections, and the client IP.
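The counting half of that pipeline can be tried on its own with a made-up list of client IPs:

```shell
# Stand-in for the collected netstat output: one line per connection.
printf '10.0.0.1\n10.0.0.2\n10.0.0.1\n10.0.0.3\n10.0.0.1\n10.0.0.2\n' > /tmp/clients.txt

# sort groups identical IPs, uniq -c counts each group, and sort -nk1 puts
# the busiest clients last -- the same tail end as the one-liner above.
sort /tmp/clients.txt | uniq -c | sort -nk1
```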

monitor host for slow ping times

When there is intermittent network latency to a host, it’s important to monitor it for a pattern. Using ping can help narrow down what is causing the latency. VMware load, bandwidth limitations, employee work patterns, backups, and many other sources could be the cause.

while true; do j=`ping -n -c1 <slowhost> 2>&1 | grep 'time=' | awk '{print $7}' | cut -d = -f2 | cut -d . -f1`; if [ -n "$j" ] && [ "$j" -gt 30 ]; then date | tr '\n' ' '; echo $j; fi; sleep 1s; done

This does a ping every second, and if the reply is over a threshold (30ms in this case) it is considered unacceptable and logged with the date.
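The threshold check is the heart of the loop. Pulled into a small function (the name too_slow is made up), it can be tried against sample round-trip times without pinging anything:

```shell
# Return success when a round-trip time (whole milliseconds) exceeds the
# 30 ms cutoff used in the one-liner above.
too_slow() {
    [ "$1" -gt 30 ]
}

# Feed it a fake series of samples; only the slow ones get logged with a date.
for ms in 12 45 8 31 30; do
    if too_slow "$ms"; then
        date | tr '\n' ' '
        echo "$ms"
    fi
done
```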