rageek

A place for Unix Thoughts and Ideas

Adding Storage Foundation CFS mount points via the Command line.

For the longest time I used VEA for my CFS operations because it saved me time updating the main.cf, etc…

Then I figured out that VEA has command-line utilities that it calls to do all of its dirty work (check out /var/vx/isis/command.log), and when it comes to adding cluster filesystems, the utility is cfsmntadm.

Here are quick instructions on how to use it.

In this example I’m adding a new shared diskgroup with a single mount point.

Here are its command line options.

root@testnode1 # cfsmntadm
  Error: V-35-1: cfsmntadm: Incorrect usage
  Usage:
       cfsmntadm add <shared_volume> <mount_point>
               [service_group_name] <node_name=[mount_options]> ...
       cfsmntadm add <shared_volume> <mount_point>
              [service_group_name] all=[mount_options] [node_name ...]
       cfsmntadm add ckpt <ckpt_name> <mount_point> <primary_mount_point>
                all=[mount_options] [node_name ...]
       cfsmntadm add snapshot <cache_object> <snapshot_mount_point> <primary_mount_point> <node_name=[mount_options]>
       cfsmntadm add snapshot dev=<snapshot_device> <snapshot_mount_point>
               <primary_mount_point> <node_name>=[mount_options]
       cfsmntadm delete [-f] <mount_point>
       cfsmntadm modify <mount_point> <node_name>=[mount_options]
       cfsmntadm modify <mount_point> <node_name>+=<mount_options>
       cfsmntadm modify <mount_point> <node_name>-=<mount_options>
       cfsmntadm modify <mount_point> all=[mount_options]
       cfsmntadm modify <mount_point> all+=<mount_options>
       cfsmntadm modify <mount_point> all-=<mount_options>
       cfsmntadm modify <mount_point> add <node_name=[mount_options]> ...
       cfsmntadm modify <mount_point> delete <node_name> ...
       cfsmntadm modify <mount_point> vol <new_volume_name>
       cfsmntadm display [-v] { <mount_point> | <node_name> }
       cfsmntadm setpolicy <mount_point> [node_name ...]

1. Discover disks on both nodes:

root@testnode2 # cfgadm | grep fc-fabric | awk '{print $1}' | xargs -I {} cfgadm -c configure {};vxdiskconfig
  VxVM  INFO V-5-2-1401 This command may take a few minutes to complete execution
  Executing Solaris command: devfsadm (part 1 of 2) at 08:59:57 PDT
  Executing VxVM command: vxdctl enable (part 2 of 2) at 08:59:59 PDT
  Command completed at 09:00:02 PDT
root@testnode1 # cfgadm | grep fc-fabric | awk '{print $1}' | xargs -I {} cfgadm -c configure {};vxdiskconfig;vxdisk list
  VxVM  INFO V-5-2-1401 This command may take a few minutes to complete execution
  Executing Solaris command: devfsadm (part 1 of 2) at 08:52:38 PDT
  Executing VxVM command: vxdctl enable (part 2 of 2) at 08:52:40 PDT
  Command completed at 08:52:43 PDT
  DEVICE       TYPE            DISK         GROUP        STATUS
c2t50060E8005486B1Ad0s2 auto:cdsdisk    ipmqa4dg01   ipmqa4dg     online clone_disk thinrclm shared
..
..
..
c2t50060E8005486B1Ad34s2 auto            -            -            nolabel
c2t50060E8005486B1Ad35s2 auto            -            -            nolabel
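Before labeling anything, it's worth confirming that both nodes see the same devices. Here's a minimal sketch using simulated capture files (the /tmp/node*.disks paths and their contents are hypothetical stand-ins for `vxdisk list` output collected on each node):

```shell
# Compare the device lists captured from each node. In practice you would
# populate these files with something like:
#   vxdisk list | awk 'NR>1 {print $1}' > /tmp/node1.disks
# (and the same via ssh on the second node). Simulated here:
cat > /tmp/node1.disks <<'EOF'
c2t50060E8005486B1Ad34s2
c2t50060E8005486B1Ad35s2
EOF
cat > /tmp/node2.disks <<'EOF'
c2t50060E8005486B1Ad34s2
c2t50060E8005486B1Ad35s2
EOF

if diff /tmp/node1.disks /tmp/node2.disks >/dev/null; then
    echo "both nodes see the same devices"
else
    echo "device lists differ -- rescan before proceeding" >&2
fi
```

If the lists differ, rerun the cfgadm/vxdiskconfig step on the node that is missing devices before going any further.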

2. We are now going to label and initialize LUNs 34 and 35 for Veritas:

root@testnode1 # echo label > /tmp/cmd
root@testnode1 # for i in 34 35; do format -d c2t50060E8005486B1Ad$i -f /tmp/cmd; vxdisk scandisks; /etc/vx/bin/vxdisksetup -i c2t50060E8005486B1Ad$i;done

Now we have Veritas rescan the disks to pick up the new labels on the other node:

root@testnode2 # vxdisk scandisks
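After the rescan, both LUNs should have flipped from "nolabel" to "online" in `vxdisk list` on both nodes. A quick scripted check, sketched here against a simulated capture (the /tmp/vxdisk.after file and its contents are hypothetical; on a real node you would feed it from `vxdisk list | grep 'd3[45]s2'`):

```shell
# Simulated post-rescan vxdisk output for the two new LUNs:
cat > /tmp/vxdisk.after <<'EOF'
c2t50060E8005486B1Ad34s2 auto:cdsdisk    -            -            online
c2t50060E8005486B1Ad35s2 auto:cdsdisk    -            -            online
EOF

# Any lingering "nolabel" entry means the format/vxdisksetup loop
# didn't take (or the rescan hasn't happened yet on this node).
if grep -q nolabel /tmp/vxdisk.after; then
    echo "some LUNs are still unlabeled" >&2
else
    echo "all LUNs labeled and online"
fi
```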

3. We are now going to create a shared disk group called niqa3dg, add both of our disks to it, and use vxassist to create our volume.

root@testnode1 # vxdg -s init niqa3dg niqa3dg01=c2t50060E8005486B1Ad34s2
root@testnode1 # vxdg list
NAME         STATE           ID
..
..
niqa3dg      enabled,shared,cds   1337357092.60.testnode1.testdomain.net
root@testnode1 # vxdg -g niqa3dg adddisk niqa3dg02=c2t50060E8005486B1Ad35s2
root@testnode1 # vxassist -g niqa3dg maxsize
Maximum volume size: 314402816 (153517Mb)
root@testnode1 # vxassist -g niqa3dg make niqa3 153517M
root@testnode1 # mkfs -F vxfs -o largefiles,bsize=8192 /dev/vx/rdsk/niqa3dg/niqa3
    version 7 layout
    314402816 sectors, 19650176 blocks of size 8192, log size 8192 blocks
    largefiles supported
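The numbers above hang together: `vxassist maxsize` reports 512-byte sectors, while mkfs reports 8192-byte filesystem blocks. A quick arithmetic sanity check:

```shell
# vxassist maxsize reported 314402816 sectors (512 bytes each).
sectors=314402816
mb=$(( sectors / 2048 ))       # 2048 sectors per megabyte -> 153517
blocks=$(( sectors / 16 ))     # 16 sectors per 8192-byte block -> 19650176
echo "${mb}Mb, ${blocks} blocks"   # prints: 153517Mb, 19650176 blocks
```

Both values match what vxassist and mkfs printed, which is a cheap way to confirm you sized the volume to use the whole disk group.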

4. Add the volume to the cluster configuration.

Note: for this operation I'm creating a new service group. If I were adding storage to an existing group, I could specify that group here instead. However, I would not advise adding it to a critical active group: I have seen adding new volumes to existing service groups through VEA/cfsmntadm create a critical fault in the group and take databases offline.

root@testnode1 # cfsmntadm add niqa3dg niqa3 /niqa3 niqa3 all=rw,suid
  Mount Point is being added...
  /niqa3 added to the cluster-configuration
root@testnode1 # hagrp -online niqa3 -any
VCS NOTICE V-16-1-50735 Attempting to online group on system testnode1
VCS NOTICE V-16-1-50735 Attempting to online group on system testnode2
root@testnode1 #  hagrp -state
#Group         Attribute             System     Value
..
..
niqa3          State                 testnode1 |ONLINE|
niqa3          State                 testnode2 |ONLINE|
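If you script these bring-ups, the `hagrp -state` output is easy to check mechanically. A sketch using the sample output above (the /tmp/state.out file is a hypothetical stand-in for `hagrp -state niqa3` filtered to the State rows):

```shell
# Simulated "hagrp -state" rows for the new group:
cat > /tmp/state.out <<'EOF'
niqa3          State                 testnode1 |ONLINE|
niqa3          State                 testnode2 |ONLINE|
EOF

# The group is healthy only if every row shows ONLINE.
if grep -qv 'ONLINE' /tmp/state.out; then
    echo "niqa3 is not fully online" >&2
else
    echo "niqa3 ONLINE on all nodes"
fi
```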

Now the storage is online and can be used. The auto-generated resource names still need to be corrected in hagui via copy/paste or by modifying the main.cf; I'll let you know if I find a good way to do that from the command line.
