NetApp migrate root volume to new aggregate

Moving the root volume from one aggregate to another is fairly straightforward, but be meticulous about each step. First, identify which aggregate contains the root volume and how large it is. In this case the root volume still has the default name, vol0.

toaster> vol container vol0
Volume 'vol0' is contained in aggregate 'aggr1'
toaster> vol size vol0
vol size: Flexible volume 'vol0' has size 14g.
toaster>
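
Before creating the new volume, it's worth confirming the target aggregate has room for it; df -A shows aggregate usage:

toaster> df -A aggr2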

Create a new volume the same size (or larger) as vol0:

toaster> vol create root2 aggr2 14g
Creation of volume 'root2' with size 14g on containing aggregate
'aggr2' has completed.
toaster>

Now restrict the volume and use snapmirror to mirror the root volume to the new root volume you just created:

toaster> vol restrict root2
Volume 'root2' is now restricted.
toaster> snapmirror initialize -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Uninitialized  -          Transferring  (150 MB done)
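
Note that the initialize will only run if snapmirror is licensed and enabled on the filer. If it errors out immediately, check those first (the license code below is a placeholder):

toaster> license add <snapmirror-license-code>
toaster> options snapmirror.enable on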

Once the transfer is complete, be paranoid and do one last update:

toaster> snapmirror status
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:09:41   Idle
toaster>
toaster> snapmirror update -S vol0 root2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
toaster>
toaster> snapmirror status              
Snapmirror is on.
Source                Destination           State          Lag        Status
toaster:vol0           toaster:root2          Snapmirrored   00:00:12   Idle

Now break the snapmirror, making the destination new root volume writable:

toaster> snapmirror break root2
snapmirror break: Destination root2 is now writable.
Volume size is being retained for potential snapmirror resync.  If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
toaster>
toaster>
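
As the break message says, the volume size stays pinned in case you want to snapmirror resync later. Once the migration is finished and you know you won't resync, you can release it:

toaster> vol options root2 fs_size_fixed off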

Mark the new root volume as ‘root’ using vol options:

toaster> vol options root2 root         
Wed Jul 31 09:36:06 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0a.32 removed from local mailbox set.
Wed Jul 31 09:36:07 PDT [toaster: fmmb.lock.disk.remove:info]: Disk 0c.16 removed from local mailbox set.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 2a.16 is a local HA mailbox disk.
Wed Jul 31 09:36:08 PDT [toaster: fmmb.current.lock.disk:info]: Disk 0c.48 is a local HA mailbox disk.
Volume 'root2' will become root at the next boot.
toaster>

Here you can reboot or fail over. If you need to keep the cluster serving data while performing this procedure, do a cf takeover from the other head in the cluster, then a cf giveback when ready to complete the reboot.
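
A rough sketch of the takeover path, assuming an HA pair whose other head I'll call toaster2 (name hypothetical):

toaster2> cf status
toaster2> cf takeover
(toaster reboots and comes back waiting for giveback)
toaster2> cf giveback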

Now verify the root volume:

toaster> vol status
         Volume State           Status            Options
           vol0 online          raid_dp, flex
          root2 online          raid_dp, flex     root, fs_size_fixed=on

As you can see, root2 now carries the root option. Offline the old root volume and destroy it:

toaster> vol offline vol0
Wed Jul 31 09:48:48 PDT [toaster: wafl.vvol.offline:info]: Volume 'vol0' has been set temporarily offline
Volume 'vol0' is now offline.
toaster> 
toaster> vol destroy vol0
Are you sure you want to destroy this volume? yes
Wed Jul 31 09:48:55 PDT [toaster: wafl.vvol.destroyed:info]: Volume vol0 destroyed.
Volume 'vol0' destroyed.
toaster> 
toaster> vol rename root2 vol0
'root2' renamed to 'vol0'
toaster> 

The migration is complete: your new root volume is in a different aggregate and the system has booted from it. The old root volume has been destroyed, so you may now destroy the old aggregate, detach shelves, and so on. Finally, check /etc/exports and the CIFS configuration; since the new root is actually a copy, you need to make sure the NFS and CIFS configurations are still correct.
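
The checks and teardown look something like this (aggr1 being the old root aggregate from this example):

toaster> rdfile /etc/exports
toaster> exportfs -a
toaster> cifs shares
toaster> aggr offline aggr1
toaster> aggr destroy aggr1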

NetApp access NTFS CIFS share from Unix host via NFS

NTFS vs. Unix style volume settings have nothing to do with which hosts can mount the volume; they govern permissions. To access an NTFS volume via NFS, first allow rw or root mounting in /etc/exports (you do have your root vol mounted on your admin boxes, right?):

# sed -i '/cifsshare/d' /mnt/toaster/vol0/etc/exports
# echo '/vol/cifsshare -sec=sys,rw,root=someadminhost:anotherlinuxbox,anon=0,nosuid' >> /mnt/toaster/vol0/etc/exports
# ssh toaster
toaster> exportfs -a
toaster> Connection to toaster closed by remote host.
Connection to toaster closed.
#
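
Alternatively, exportfs -p should be able to write the rule into /etc/exports for you; syntax here is from memory, so verify it against your ONTAP version:

toaster> exportfs -p sec=sys,rw,root=someadminhost:anotherlinuxbox,anon=0,nosuid /vol/cifsshare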

Mount the volume on your administration host and list the directory:

# mkdir -p /mnt/toaster/cifsshare
# mount toaster:/vol/cifsshare /mnt/toaster/cifsshare
# cd /mnt/toaster/cifsshare 
# ls
ls: .: Permission denied
#
# whoami
root
#

So even though we are able to mount this share via NFS, the NTFS permissions do not let us see what's there. Check the filer to see what permission context it has for 'root'.

toaster> wcc -u root
Tue Jul 16 09:11:57 PDT [toaster: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: LSA lookup: Lookup of account "DOMAINNAME\root" failed: STATUS_NONE_MAPPED (0xc0000073).
(NT - UNIX) account name(s):  (DOMAINNAME\guest - root)
        ***************
        UNIX uid = 0
        user is a member of group daemon (1)
        user is a member of group daemon (1)

        NT membership
                DOMAINNAME\Guest
                DOMAINNAME\Domain Guests
                DOMAINNAME\Domain Users
                BUILTIN\Guests
                BUILTIN\Users
        User is also a member of Everyone, Network Users,
        Authenticated Users
        ***************
toaster> 

The filer doesn't recognize the user 'root' and sees it as a guest, which explains why we have no permissions in the 'cifsshare' mount. The solution is to add a user mapping so that the Unix user 'root' is recognized as 'administrator' in the domain 'DOMAINNAME'. Make an entry in usermap.cfg (again via the root vol mounted on your admin box):

# echo 'DOMAINNAME\administrator == root' >> /mnt/toaster/vol0/etc/usermap.cfg
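
usermap.cfg is consulted on new lookups, but the filer caches credentials for a while. If the old guest mapping seems to stick around, flushing the WAFL credential cache should pick up the change (wcc -x, if memory serves):

toaster> wcc -x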

Now let’s see what user ‘root’ is seen as from the view of the filer:

toaster> wcc -u root
Tue Jul 16 09:12:30 PDT [toaster: auth.trace.authenticateUser.loginTraceMsg:info]: AUTH: LSA lookup: Located account "DOMAINNAME\administrator" in domain "DOMAINNAME"..
(NT - UNIX) account name(s):  (DOMAINNAME\administrator - root)
        ***************
        UNIX uid = 0
        user is a member of group daemon (1)
        user is a member of group daemon (1)

        NT membership
                DOMAINNAME\administrator
                DOMAINNAME\Enterprise Admins
                DOMAINNAME\Exchange Recovery Administrators
                DOMAINNAME\Schema Admins
<a ton of other stuff here>
                BUILTIN\Administrators
                BUILTIN\Users
        User is also a member of Everyone, Network Users,
        Authenticated Users
        ***************
toaster>

Now we have all the privileges the domain administrator has, and we can view, list, and alter any files the domain administrator has permissions for. In a production environment you could map a Linux admin jdoe to DOMAINNAME\jdoe, assuming that account has domain admin permissions.
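
The corresponding usermap.cfg entry mirrors the one above ('jdoe' standing in for a real account):

# echo 'DOMAINNAME\jdoe == jdoe' >> /mnt/toaster/vol0/etc/usermap.cfg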

set NetApp administration hosts

When you create a volume on a NetApp system with NFS licensed, an entry for the new volume is created in /etc/exports, giving the administration hosts (configured at setup) root access to it. If you change admin hosts or add new ones, you can update /etc/exports by hand to reflect the change; however, any subsequent volume creations will still use the old admin hosts list unless you also update the option. Use the hidden option 'admin.hosts' to see the current admin hosts:

toaster> options admin.hosts
admin.hosts                  10.14.33.141,10.14.22.141 

Update the list:

toaster> options admin.hosts 10.14.33.141,172.16.11.23,192.168.1.3
toaster>
toaster> options admin.hosts
admin.hosts 10.14.33.141,172.16.11.23,192.168.1.3
toaster>
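
From here on, a freshly created volume should land in /etc/exports with the new list, something like this (volume name hypothetical):

/vol/newvol  -sec=sys,rw,root=10.14.33.141:172.16.11.23:192.168.1.3,nosuid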

get drive serial numbers from NetApp DS4243 shelves strapped to a Linux server

When doing data recovery on a failed 3Ware RAID group, I used a couple of spare NetApp DS4243 shelves. I put all the SATA drives into brackets, popped a quad-port SAS card into a 1U server, and booted my rescue image from the network. The Debian OS could see each individual drive, at which point I could make ddrescue copies and edit the 3Ware DCB metadata using hexedit. Keeping everything straight was quite a challenge; this one-liner helped me find all the serials and get the copies sorted out:

# for i in $(ls /dev | grep '^sd' | sed -e 's/[0-9]*$//' | sort -u); do echo -n "/dev/$i "; smartctl --all /dev/$i | grep -i "serial number" | awk '{print " --- " $3}'; done
/dev/sdaa  --- 9VS0R3T4
/dev/sdab  --- 9VS3SSJE
/dev/sdac  --- 9VS2K1DY
/dev/sdad  --- 9VS07FMW
/dev/sdae  --- 9VS402AA
/dev/sdaf  --- 9VS3V74N
/dev/sdag  --- 9VS388JX
/dev/sdah  --- 9VS2AQ9A
/dev/sdai  --- 9VS3THY9
/dev/sdaj  --- 9VS4EE3T
/dev/sdak  --- 9VS0FXWA
/dev/sdal  --- 9VS34DAD
/dev/sdam  --- 9VS45C9J
/dev/sdan  --- 9VS4L45S
/dev/sdao  --- 9VS2K9K1
/dev/sdap  --- 9VS2D631
/dev/sdaq  --- 9VS2L1BK
/dev/sdar  --- 9VS4DYDB
/dev/sdas  --- 9VS3T33R
/dev/sdat  --- 9VS3YE1K
/dev/sdc  --- 9VS1HWAM
/dev/sdd  --- 9VS1J9F3
/dev/sde  --- 9VS1L0FY
/dev/sdf  --- 9VS1H3RB
/dev/sdg  --- 9VS1JNPW
/dev/sdh  --- 9VS1GWGK
/dev/sdi  --- 9VS1DLZZ
/dev/sdj  --- 9VS1FSRD
/dev/sdk  --- 9VS3A8GZ
/dev/sdl  --- 9VS1L8ZZ
/dev/sdm  --- 9VS1JE7E
/dev/sdn  --- 9VS1CHE1
/dev/sdo  --- 9VS295R5
/dev/sdp  --- 9VS1HR8P
/dev/sdq  --- 9VS1EJQW
/dev/sdr  --- 9VS1A4V5
/dev/sds  --- 9VS1JGP8
/dev/sdt  --- 9VS1HPGB
/dev/sdu  --- 9VS1JAWZ
/dev/sdv  --- 9VS1JG8K
/dev/sdw  --- 9VS1JA51
/dev/sdx  --- WD-WMATV4441330
/dev/sdy  --- 9VS38GT3
/dev/sdz  --- 9VS4SQJX
#
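
With the device-to-serial map sorted, each rescue image can be named after its drive serial so nothing gets mixed up. A minimal sketch, one drive at a time (the /recovery paths are just my convention):

# i=sdaa
# sn=$(smartctl --all /dev/$i | grep -i "serial number" | awk '{print $3}')
# ddrescue /dev/$i /recovery/$sn.img /recovery/$sn.map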