rageek

A place for Unix Thoughts and Ideas

Monthly Archives: February 2012

Script to update eeprom boot_devices to boot from mirrors

Here is a script I wrote to automatically update the eeprom settings on Sun servers so that they boot from the configured ZFS or DiskSuite mirror.

update_eeprom.sh

I recently updated the logic, and it should now work on T3/T4-based systems.

Here is the output:

root@testserver # ./Update_eeprom.sh
***Old contents of nvramrc***
devalias net /pci@500/pci@0/pci@8/network@0
." ChassisSerialNumber BEL0823M4G " cr
***Old boot-device settings***
/pci@400/pci@0/pci@8/scsi@0/disk@0,0:a disk net

***New contents of nvramrc***
devalias net /pci@500/pci@0/pci@8/network@0
." ChassisSerialNumber BEL0823M4G " cr
devalias disk0 /pci@400/pci@0/pci@8/scsi@0/disk@0
devalias disk1 /pci@400/pci@0/pci@8/scsi@0/disk@1
***New boot-device settings***
disk0 disk1 net

Update EEPROM? Y
saving eeprom config to /etc/eeprom_orig.012412_1021
updating eeprom
updating default boot-device

root@testserver #

You can optionally use the -f argument to have it run without prompting, which is useful for running it from a script during the JumpStart process.
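For reference, here is roughly the manual equivalent of what the script automates, shown with eeprom(1M) and the device paths from the example output above; this is only a sketch, so adjust the paths for your own hardware and keep any existing nvramrc content when you set it:

# Sketch: set devaliases for both sides of the mirror and boot from them
eeprom "nvramrc=devalias net /pci@500/pci@0/pci@8/network@0
devalias disk0 /pci@400/pci@0/pci@8/scsi@0/disk@0
devalias disk1 /pci@400/pci@0/pci@8/scsi@0/disk@1"
eeprom "use-nvramrc?=true"
eeprom boot-device="disk0 disk1 net"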

Turn on Locate LED for SAS Drive on HP-UX

Here is the process for turning on the locate LED for a drive on HP-UX.

On my server, EMS has indicated that drive 0/4/1/0.0.0.3.0 is experiencing errors.
I'm going to turn on the LED so I can easily locate it in my datacenter.

root@testserver # sasmgr get_info -D /dev/sasd0 -v -q lun=all -q lun_locate
LUN LUN HW Path Enc Bay Locate LED
=== =========== === === ==========
/dev/rdsk/c1t0d0 0/4/1/0.0.0.0.0 1 5 OFF
/dev/rdsk/c1t1d0 0/4/1/0.0.0.1.0 1 6 OFF
/dev/rdsk/c1t2d0 0/4/1/0.0.0.2.0 1 7 OFF
/dev/rdsk/c1t3d0 0/4/1/0.0.0.3.0 1 8 OFF

RAID VOL ID is 1 :
LUN LUN HW Path
=== ===========
/dev/rdsk/c1t6d0 0/4/1/0.0.0.6.0

Physical disks in volume are :
Enc Bay Locate LED VendorID ProductID Revision
=== === ========== ======== ========= ========
1 4 OFF HP EG0146FARTR HPD5
1 1 OFF HP EG0146FARTR HPD5

root@testserver # sasmgr set_attr -D /dev/sasd0 -q lun=/dev/rdsk/c1t3d0 -q locate_led=on
Locate LED set to ON.

root@testserver # sasmgr get_info -D /dev/sasd0 -v -q lun=all -q lun_locate
LUN LUN HW Path Enc Bay Locate LED
=== =========== === === ==========
/dev/rdsk/c1t0d0 0/4/1/0.0.0.0.0 1 5 OFF
/dev/rdsk/c1t1d0 0/4/1/0.0.0.1.0 1 6 OFF
/dev/rdsk/c1t2d0 0/4/1/0.0.0.2.0 1 7 OFF
/dev/rdsk/c1t3d0 0/4/1/0.0.0.3.0 1 8 ON

RAID VOL ID is 1 :
LUN LUN HW Path
=== ===========
/dev/rdsk/c1t6d0 0/4/1/0.0.0.6.0

Physical disks in volume are :
Enc Bay Locate LED VendorID ProductID Revision
=== === ========== ======== ========= ========
1 4 OFF HP EG0146FARTR HPD5
1 1 OFF HP EG0146FARTR HPD5

To turn off, run the following:
root@testserver # sasmgr set_attr -D /dev/sasd0 -q lun=/dev/rdsk/c1t3d0 -q locate_led=off
Locate LED set to OFF.
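If you need to light more than one drive at once, the same sasmgr syntax can be wrapped in a small loop. A minimal sketch, using the example device files from the listing above:

# Turn on the locate LED for a list of LUNs (substitute your own device files)
for LUN in /dev/rdsk/c1t2d0 /dev/rdsk/c1t3d0
do
    sasmgr set_attr -D /dev/sasd0 -q lun=$LUN -q locate_led=on
done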

Seeing a summary of I/O Throughput on a Solaris server

I recently migrated 16TB of storage between 2 systems & arrays using parallel copies with star.

As part of this, I wanted to know my total I/O bandwidth so I could tell when I hit the optimal number of parallel copy jobs and to estimate a completion time (btw the optimal number was 10 jobs).

Here is a simple way of seeing the I/O throughput of the Fibre Channel/SAS/SCSI controllers on Solaris:

iostat -xCnM 5 | egrep '%|c.?$'
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    1.0    0.0    0.0  0.0  0.0    0.1    7.1   0   1 c0
  410.8  178.3   62.5    8.8 13.6  4.5   23.2    7.7   0 125 c1
  410.8  178.5   62.5    8.8 13.6  4.5   23.2    7.7   0 125 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.4    0.0    0.0  0.0  0.0    0.0    5.7   0   0 c0
  692.9  227.1  151.0   20.4  0.0  8.1    0.0    8.8   0 395 c1
  678.1  253.9  151.9   21.3  0.0  8.1    0.0    8.7   0 377 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0
  816.3  274.4  164.3   24.0  0.0  7.9    0.0    7.2   0 364 c1
  830.7  280.0  169.9   23.8  0.0  8.5    0.0    7.6   0 378 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    6.6    0.0    0.1    0.0  0.0  0.0    0.0    4.2   0   2 c0
  785.8  273.8  153.1   24.4  0.2  7.6    0.2    7.2   0 355 c1
  832.0  260.8  168.4   21.8  0.1  8.1    0.1    7.5   0 377 c3
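If you would rather see a single total than per-controller lines, you can pipe the same output through nawk and sum the Mr/s and Mw/s columns for each interval. A rough sketch (note that the first total printed covers iostat's initial since-boot sample):

iostat -xCnM 5 | egrep '%|c.?$' | nawk '
    /device/ { if (n++) printf("total: %.1f MB/s read  %.1f MB/s write\n", r, w); r = w = 0; next }
    { r += $3; w += $4 }'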

If all your storage is configured in Veritas, you can use vxstat to get a complete summary of reads and writes.

You can also modify the vxstat command to show throughput only for a specific disk group (see the example after the output below).

INT=5; while /bin/true; do CNT=$((`vxstat -o alldgs |wc -l ` + 2)); vxstat -o alldgs -i$INT -c2 | tail +$CNT | nawk -v secs=$INT 'BEGIN{ writes=0; reads=0}{writes+=$6;reads +=$5} END {printf ("%.2f MB/s Reads %.2f MB/s Writes\n", reads/2/1024/secs, writes/2/1024/secs) }'; done
46.58 MB/s Reads 334.42 MB/s Writes
47.25 MB/s Reads 320.77 MB/s Writes
47.55 MB/s Reads 340.67 MB/s Writes
45.85 MB/s Reads 498.19 MB/s Writes
52.51 MB/s Reads 478.23 MB/s Writes
42.32 MB/s Reads 465.49 MB/s Writes
31.30 MB/s Reads 439.65 MB/s Writes
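For example, to restrict it to a single disk group, swap -o alldgs for -g and the group name (appdg below is just a placeholder):

INT=5; while /bin/true; do CNT=$((`vxstat -g appdg | wc -l` + 2)); vxstat -g appdg -i$INT -c2 | tail +$CNT | nawk -v secs=$INT 'BEGIN{ writes=0; reads=0}{writes+=$6;reads+=$5} END {printf ("%.2f MB/s Reads %.2f MB/s Writes\n", reads/2/1024/secs, writes/2/1024/secs) }'; done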

Every once in a while the loop will produce a blip like:
15937437.54 MB/s Reads 9646673.26 MB/s Writes

That is obviously wrong. I'm not sure why it happens, but it can be ignored or filtered out by piping the output through:

perl -ne 'split; print if (!(@_[0] > 100000) || !(@_[3] > 100000))'

Putting it together:

INT=5; while /bin/true; do CNT=$((`vxstat -o alldgs |wc -l ` + 2)); vxstat -o alldgs -i$INT -c2 | tail +$CNT | nawk -v secs=$INT 'BEGIN{ writes=0; reads=0}{writes+=$6;reads +=$5} END {printf ("%.2f MB/s Reads %.2f MB/s Writes\n", reads/2/1024/secs, writes/2/1024/secs) }'; done | perl -ne 'split; print if (!(@_[0] > 100000) || !(@_[3] > 100000))'

You could also tweak this a little for use with snmpd to graph the data (see the sketch below).
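For instance, assuming Net-SNMP, you could point a couple of extend entries in snmpd.conf at wrapper scripts that each run one vxstat sample and print a single MB/s number; the names and paths below are purely hypothetical:

# Hypothetical snmpd.conf entries; each script prints one MB/s value
extend vxReadMB  /usr/local/bin/vxstat_read_mb.sh
extend vxWriteMB /usr/local/bin/vxstat_write_mb.sh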

Updated 2/9/11: Reads and Writes were swapped in text output.
Updated 2/6/12: Didn’t realize the iostat one-liner pasted was flat-out wrong and didn’t work.

Configuring ZFS using Native DMP on Veritas Storage Foundation 5.1

Veritas Storage Foundation 5.1 introduced a new feature called Native DMP, which provides native OS devices that are multipathed via the DMP driver. This is very useful if you need a raw device for ASM or for a ZFS pool. Previously, you could only run ZFS or ASM on top of DMP if you used a Veritas volume for its storage.

Alternatively, you could simply enable MPxIO and have it handle all multipathing. However, I believe DMP does a better job of multipathing on my database servers.
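For comparison, enabling MPxIO on Solaris is normally just a matter of running stmsboot and rebooting; a minimal sketch:

# Enable MPxIO on all supported controllers; stmsboot prompts for the
# reboot it needs to update the device paths
stmsboot -e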

Unfortunately, Native DMP is not supported on SF Basic edition, which is a shame, as I originally wanted to use this in my Tibco and OBIEE environments, where I needed SAN connectivity for my zones but did not have the huge I/O requirements of the databases.

Here is an example of how to enable it:

root@testserver # vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR        
disk_0       auto:ZFS       -            -           ZFS                  c0t3d0s2         -            
disk_1       auto:ZFS       -            -           ZFS                  c0t1d0s2         -            
hitachi_usp-v0_04b0 auto:none      -            -           online invalid       c2t50060E8005477215d0s2 hdprclm fc   
hitachi_usp-v0_04d4 auto:none      -            -           online invalid       c2t50060E8005477215d1s2 hdprclm fc   
hitachi_usp-v0_04d5 auto:none      -            -           online invalid       c2t50060E8005477215d2s2 hdprclm fc   
hitachi_usp-v0_04d6 auto:none      -            -           online invalid       c2t50060E8005477215d3s2 hdprclm fc   

root@testserver # vxdmpadm settune dmp_native_support=on

root@testserver # zpool create zonepool hitachi_usp-v0_04b0

root@testserver # zpool status zonepool
  pool: zonepool
 state: ONLINE
 scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        zonepool                 ONLINE       0     0     0
          hitachi_usp-v0_04b0s0  ONLINE       0     0     0

errors: No known data errors

root@testserver # vxdisk scandisks
root@testserver # vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR        
disk_0       auto:ZFS       -            -           ZFS                  c0t3d0s2         -            
disk_1       auto:ZFS       -            -           ZFS                  c0t1d0s2         -            
hitachi_usp-v0_04b0 auto:ZFS       -            -           ZFS                  c2t50060E8005477215d0 hdprclm fc   
hitachi_usp-v0_04d4 auto:none      -            -           online invalid       c2t50060E8005477215d1s2 hdprclm fc   
hitachi_usp-v0_04d5 auto:none      -            -           online invalid       c2t50060E8005477215d2s2 hdprclm fc   
hitachi_usp-v0_04d6 auto:none      -            -           online invalid       c2t50060E8005477215d3s2 hdprclm fc

If you are planning to reuse disks/LUNs that were previously under Veritas control and do not show up in vxdisk list as invalid or ZFS, the DMP devices will not have been created for those disks yet and the zpool create will fail. In this case, dd over the drive label, relabel the disk, and run vxdisk scandisks; the zpool creation will then succeed.
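A rough sketch of that clean-up, using one of the example LUNs from the listing above (double-check the device name first, since the dd wipes the label):

# Destructive: zero the old label at the start of the LUN
dd if=/dev/zero of=/dev/rdsk/c2t50060E8005477215d1s2 bs=512 count=1024
# Relabel the disk (run the label command inside format), then rescan
format c2t50060E8005477215d1
vxdisk scandisks
# The zpool create on the corresponding DMP device should now succeed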