
Configuring ZFS using Native DMP on Veritas Storage Foundation 5.1

Veritas Storage Foundation 5.1 introduced a feature called Native DMP, which provides native OS devices that are multipathed by the DMP driver. This is very useful if you need a raw device for ASM or for a ZFS pool. Previously, you could only run ZFS or ASM on top of DMP by putting them on a Veritas volume.

Alternatively, you could simply enable MPxIO and have it handle all multipathing. However, I believe that DMP does a better job of multipathing on my database servers.
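
For comparison, enabling MPxIO on Solaris 10 is a one-liner with stmsboot (this is just a generic sketch, not taken from this box, and it requires a reboot to take effect):

root@testserver # stmsboot -e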

Unfortunately, Native DMP is not supported on SF Basic edition, which is a shame, as I originally wanted to use this in my Tibco and OBIEE environments where I needed SAN connectivity for my zones but didn’t have the huge I/O requirements of the databases.

Here is an example of how to enable it:

root@testserver # vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR        
disk_0       auto:ZFS       -            -           ZFS                  c0t3d0s2         -            
disk_1       auto:ZFS       -            -           ZFS                  c0t1d0s2         -            
hitachi_usp-v0_04b0 auto:none      -            -           online invalid       c2t50060E8005477215d0s2 hdprclm fc   
hitachi_usp-v0_04d4 auto:none      -            -           online invalid       c2t50060E8005477215d1s2 hdprclm fc   
hitachi_usp-v0_04d5 auto:none      -            -           online invalid       c2t50060E8005477215d2s2 hdprclm fc   
hitachi_usp-v0_04d6 auto:none      -            -           online invalid       c2t50060E8005477215d3s2 hdprclm fc   

root@testserver # vxdmpadm settune dmp_native_support=on
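
If you want to double-check the tunable before creating the pool, you can read it back (output not shown here, but it should report the value as on):

root@testserver # vxdmpadm gettune dmp_native_support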

root@testserver # zpool create zonepool hitachi_usp-v0_04b0

root@testserver # zpool status zonepool
  pool: zonepool
 state: ONLINE
 scan: none requested
config:

        NAME                     STATE     READ WRITE CKSUM
        zonepool                 ONLINE       0     0     0
          hitachi_usp-v0_04b0s0  ONLINE       0     0     0

errors: No known data errors

root@testserver # vxdisk scandisks
root@testserver # vxdisk -e list
DEVICE       TYPE           DISK        GROUP        STATUS               OS_NATIVE_NAME   ATTR        
disk_0       auto:ZFS       -            -           ZFS                  c0t3d0s2         -            
disk_1       auto:ZFS       -            -           ZFS                  c0t1d0s2         -            
hitachi_usp-v0_04b0 auto:ZFS       -            -           ZFS                  c2t50060E8005477215d0 hdprclm fc   
hitachi_usp-v0_04d4 auto:none      -            -           online invalid       c2t50060E8005477215d1s2 hdprclm fc   
hitachi_usp-v0_04d5 auto:none      -            -           online invalid       c2t50060E8005477215d2s2 hdprclm fc   
hitachi_usp-v0_04d6 auto:none      -            -           online invalid       c2t50060E8005477215d3s2 hdprclm fc
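
To confirm DMP really is managing multiple paths behind the device the pool now sits on, something like this should list the subpaths (output not shown here):

root@testserver # vxdmpadm getsubpaths dmpnodename=hitachi_usp-v0_04b0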

If you are planning to reuse disks/LUNs that were previously under Veritas and are not shown in vxdisk list as invalid or ZFS, the DMP devices will not have been created for those disks yet and the zpool create will fail. In this case you can dd over the drive label, relabel the disk, and then run vxdisk scandisks; after that the zpool creation will succeed.
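
As a rough sketch of that cleanup, using one of the LUNs from the listing above purely as an illustration (double-check the device before running dd; the dd count is just enough to clear the label area, format is interactive so you write the new label from its prompt, and the pool name at the end is made up):

root@testserver # dd if=/dev/zero of=/dev/rdsk/c2t50060E8005477215d1s2 bs=512 count=1024
root@testserver # format c2t50060E8005477215d1
root@testserver # vxdisk scandisks
root@testserver # zpool create datapool hitachi_usp-v0_04d4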



2 responses to “Configuring ZFS using Native DMP on Veritas Storage Foundation 5.1”

  1. bradhudsonjr May 14, 2012 at 7:42 pm

    This is a really good feature. Thanks for sharing. It may be useful for some of my local disks. However, I wouldn’t pay the license fees for Symantec if I were only using zpools/ZFS. The only reason I use Symantec on Solaris anymore is for CVM/CFS (Clustered Volume Manager/Clustered File Systems). IMHO, those are the only products that can’t be replaced by native Solaris (ZFS/mpathadm/Sun Cluster).

  2. jflaster May 14, 2012 at 8:28 pm

    It is hard to replace CVM/CFS; we have a ton of it deployed and it works really well. But some applications really don’t have the I/O and response-time requirements and would be served just fine by Sun Cluster GFS. In terms of databases, we are large enough to justify Symantec for the databases, for the ease the cooked filesystems give operations and DR. Once ZFS gains the ability to remove LUNs from existing pools, it could become very interesting.

    I have been curious about the licensing costs for just DMP. We have been operating under an Oracle ULA, and while the cost of our SPARC hardware has been going down recently, the Symantec licensing costs have been jumping with each generation as they get defined on higher tiers.
