A place for Unix Thoughts and Ideas

Category Archives: Solaris

SAN Booting SPARC Systems with Emulex HBAs

While booting from the SAN is more common on domained systems, it is less frequently used on smaller systems.

I have been using it quite a bit over the last year as part of converting my existing servers from UFS to ZFS and upgrading them to Solaris 10u9 to support Oracle 11gR2.

Here is the process for enabling SAN boot on SPARC systems with Emulex HBAs.

This will change your fibre device paths, so you will want to do this before creating any Live Upgrade environments on the SAN.

Here are the Steps:

1. Enable the boot_code setting in HBAnyware or emlxadm on all your adapters. This is crucial for enabling the sfs boot support. In practice, it may be advisable to power-cycle the server/domain after enabling this.
This can be found, checked, and set through emlxadm:

root@testserver # /opt/EMLXemlxu/bin/emlxadm -iall -y boot_code

Found 2 HBA ports.
HBA 1: /devices/pci@1,700000/emlx@0/fp@0,0
Boot code: Disabled
HBA 2: /devices/pci@3,700000/emlx@0/fp@0,0
Boot code: Disabled

root@testserver # /opt/EMLXemlxu/bin/emlxadm -iall -y boot_code enable

Found 2 HBA ports.
HBA 1: /devices/pci@1,700000/emlx@0/fp@0,0
Boot code: Enabled
HBA 2: /devices/pci@3,700000/emlx@0/fp@0,0
Boot code: Enabled



Downloading Solaris images from Oracle using wget

I came across this in a post by Steve Scargall in the Oracle forums; this post is a rehash of his solution.

I recently needed to download the Solaris 11 repository image for seeding a new auto install server.

Oracle provides a wget script for downloading their patches. However, Oracle's OTN doesn't allow you to log in via wget/curl, which makes the Oracle-provided scripts of no use here.

Downloading the image locally and then uploading it over a cable modem is no solution either.

The trick is to authenticate to OTN in a browser and then export the session cookie. You can then use that cookie to download the files.

Requirements:

- wget 1.8+
- The system must be able to ping download.oracle.com
- Firefox and the Export Cookies add-on (https://addons.mozilla.org/en-US/firefox/addon/export-cookies/)
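With the cookie file exported from Firefox, the download boils down to pointing wget at it. A sketch only: the cookie filename and the URL below are placeholders, not real download links; substitute the link from the actual download page.

```shell
#!/bin/sh
# Reuse the OTN session cookie exported from Firefox.
COOKIES=cookies.txt
# Placeholder URL -- replace with the real link from the download page.
URL="https://download.oracle.com/otn/solaris/sol-11-repo-full.iso"
# Build the command first so it can be inspected before running.
CMD="wget --no-check-certificate --continue --load-cookies $COOKIES $URL"
echo "$CMD"
# Uncomment to actually run it:
# $CMD
```

The --continue flag lets a dropped transfer resume, which matters for multi-gigabyte repository images.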



Adding Storage Foundation CFS mount points via the command line

For the longest time I used VEA for my CFS operations because it saved me time updating the main.cf, etc.

Then I figured out that VEA has command-line utilities that it calls to do all of its dirty work (check out /var/vx/isis/command.log), and when it comes to adding cluster filesystems, it is cfsmntadm.

Here are quick instructions on how to use it.

In this example I’m adding a new shared diskgroup with a single mount point.

Here are its command line options.

root@testnode1 # cfsmntadm
  Error: V-35-1: cfsmntadm: Incorrect usage
       cfsmntadm add <shared_diskgroup> <shared_volume> <mount_point>
                [service_group_name] <node_name=[mount_options]> ...
       cfsmntadm add <shared_diskgroup> <shared_volume> <mount_point>
               [service_group_name] all=[mount_options] [node_name ...]
       cfsmntadm add ckpt <checkpoint_name> <mount_point> <ckpt_mount_point>
                 all=[mount_options] [node_name ...]
       cfsmntadm add snapshot <snapshot_name> <mount_point> <snapshot_mount_point> [node_name]
       cfsmntadm add snapshot dev=<device> <mount_point> <snapshot_mount_point> [node_name]
       cfsmntadm delete [-f] <mount_point>
       cfsmntadm modify <mount_point> <node_name>=[mount_options]
       cfsmntadm modify <mount_point> <node_name>+=<mount_options>
       cfsmntadm modify <mount_point> <node_name>-=<mount_options>
       cfsmntadm modify <mount_point> all=[mount_options]
       cfsmntadm modify <mount_point> all+=<mount_options>
       cfsmntadm modify <mount_point> all-=<mount_options>
       cfsmntadm modify <mount_point> add <node_name>=[mount_options] ...
       cfsmntadm modify <mount_point> delete <node_name> ...
       cfsmntadm modify <mount_point> vol <volume>
       cfsmntadm display [-v] { mount_point | node_name }
       cfsmntadm setpolicy <mount_point> [node_name ...]
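For a new shared diskgroup with one mount point, the sequence is roughly cfsdgadm, then cfsmntadm, then cfsmount. A sketch with hypothetical names (diskgroup newdg, volume vol01, mount point /shared01); the commands are printed rather than executed, since the cfs* utilities only exist on a Storage Foundation CFS node:

```shell
#!/bin/sh
# Hypothetical names throughout; printed, not run, since the cfs*
# utilities are only present on an SFCFS cluster node.
DG=newdg; VOL=vol01; MNT=/shared01
CMDS="cfsdgadm add $DG all=sw
cfsmntadm add $DG $VOL $MNT all=rw
cfsmount $MNT"
printf '%s\n' "$CMDS"
```

cfsmntadm updates the main.cf and creates the CFSMount resource for you, which is exactly the busywork VEA used to hide.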


Managing VCS zone dependencies in Veritas 5.1

I have been provisioning all my new servers with VCS 5.1sp1 and somewhere between 5.0mp4 and 5.1sp1 they changed the Zones Agent in some fundamental ways. Previously, zones were defined as a resource in the group and could be linked to other resources such as proxy/group/mount resources.

In 5.1, there is a zone resource, but the definition is handled via the ContainerInfo property on the Service group:

5.0 Config:

group ems_zones (
        SystemList = { testnode01 = 0, testnode02 = 1 }
        Parallel = 1
        AutoStartList = { testnode01, testnode02 }
        )

        Zone ems_zones_01_02 (
                Critical = 0
                ZoneName @testnode02 = testzn_ems-02
                ZoneName @testnode01 = testzn_ems-01
                )

        requires group cvm_mounts online local firm

5.1 Config:

group ems_zones (
        SystemList = { testnode02 = 1, testnode01 = 0 }
        ContainerInfo @testnode02 = { Name = testzn_ems-02, Type = Zone, Enabled = 1 }
        ContainerInfo @testnode01 = { Name = testzn_ems-01, Type = Zone, Enabled = 1 }
        Parallel = 1
        AutoStartList = { testnode02, testnode01 }
        )

        Zone ems_zones (
                Critical = 0
                )

        requires group cvm_mounts online local firm

Although there is still a Zone resource and resource dependencies are still supported, dependencies only work in one direction: other resources can require the zone resource, not the reverse.
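For example, a resource that runs inside the zone can require the Zone resource. A hypothetical main.cf fragment, using the 5.1 group above (the Application resource name is invented for illustration):

```
        Application ems_app (
                Critical = 0
                )

        ems_app requires ems_zones
```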


Creating a derived profile jumpstart Installation

As a matter of philosophy, I have kept a single profile/manifest for all of my Solaris baselines, and this has worked well for me for many years across SPARC/x86, numerous updates, and a couple of major releases.

With the release of the T3/T4-based systems and their WWN-based device naming, my reliance on the rootdisk variable in my profile went out the window, as I could no longer simply double-check with confidence that the install had gone to the correct internal disks.

This led me to create a derived jumpstart profile which rewrites the jumpstart profile to specify the correct boot devices.

In my previous post I outlined my disk_slot.sh script, which translates disk slot numbers to disk IDs on T3- and T4-based systems.

I’m going to use that script with a derived jumpstart profile to specify the proper disk(s) for installing Solaris 10.

Creating my begin script.
This script assumes that the disk_slot.sh has been placed in the install_config directory which also has the rules.ok file.

This script copies an existing profile of mine to the jumpstart profile defined by the $SI_PROFILE variable.
My script then strips out the rootdisk.s0 line of the profile so I can re-specify it.

Note: the s10_derived_profile used in my script is actually a symbolic link pointing to the current profile I'm using to deploy my systems.
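The copy-and-strip portion of such a begin script can be sketched as a small function. This is a sketch, not the author's exact script; in a real begin script the SI_CONFIG_DIR and SI_PROFILE variables are supplied by jumpstart at run time.

```shell
#!/bin/sh
# Sketch of the copy-and-strip step of a derived-profile begin script.
derive_profile() {
    src=$1   # e.g. ${SI_CONFIG_DIR}/s10_derived_profile
    dst=$2   # e.g. ${SI_PROFILE}
    cp "$src" "$dst"
    # Strip the existing rootdisk.s0 filesys line so the proper
    # boot disk(s) can be appended afterwards.
    grep -v 'rootdisk\.s0' "$dst" > "$dst.tmp" && mv "$dst.tmp" "$dst"
}
# A real begin script would then run disk_slot.sh and append the new
# filesys line, e.g.:
# derive_profile "${SI_CONFIG_DIR}/s10_derived_profile" "${SI_PROFILE}"
```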

After that, I determine how many SAS devices the system has.


Mapping Disk Device to Slot number on T3/T4 based Systems

I'm starting to play around with my new T4-2 & T4-4 systems and have noticed that disk device names have changed from the previous c?t?d? format to a WWN-based format on the T3- and T4-based systems.

This causes issues with some of my scripts and also in knowing which device is in which slot.

In Solaris 10 update 10, the diskinfo utility will print the mapping, but only for the first controller, which makes it problematic for the T4-1 and T4-4 systems, which have 2 controllers. This is supposed to be fixed in update 11.

Oracle outlines the process for finding the information through prtconf, which is good, but a one-liner is much easier to use.

Here is the output of diskinfo:

root@testserver # diskinfo -a

Enclosure path:         1Xxxxxxx-physical-hba-0
Chassis Serial Number:  1xxxxxx-physical-hba-0
Chassis Model:          ORCL,SPARC-T4-2

Label            Disk name               Vendor   Product          Vers
---------------  ----------------------  -------  ---------------- ----
/SYS/SASBP/HDD0  c0t5000CCA012B66D58d0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD1  c0t5000CCA012B7368Cd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD2  c0t5000CCA012B73594d0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD3  c0t5000CCA012B7314Cd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD4  c0t5000CCA012B66DDCd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD5  c0t5000CCA012B613B4d0   HITACHI  H106030SDSUN300G A2B0

Not too bad.
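The label-to-device pairs can be pulled out of that output with a quick awk filter. This is just a sketch against the diskinfo format shown above (a two-line sample is inlined so the filter can be seen working), not the prtconf-based one-liner:

```shell
#!/bin/sh
# Extract "slot -> device" pairs from diskinfo -a style output.
sample='/SYS/SASBP/HDD0  c0t5000CCA012B66D58d0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD1  c0t5000CCA012B7368Cd0   HITACHI  H106030SDSUN300G A2B0'
# Keep only the disk rows (they start with /SYS) and print label + device.
MAP=$(printf '%s\n' "$sample" | awk '/^\/SYS/ {print $1 " -> " $2}')
printf '%s\n' "$MAP"
```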

But on a T4-4 it is a bit more problematic: it isn't showing the drives behind the second controller.

root@testserver # diskinfo -a

Enclosure path:         1xxxxxxxxx-physical-hba-0
Chassis Serial Number:  1xxxxxxxxx-physical-hba-0
Chassis Model:          ORCL,SPARC-T4-4

Label            Disk name               Vendor   Product          Vers
---------------  ----------------------  -------  ---------------- ----
/SYS/MB/HDD0 c0t5000CCA02522A838d0   HITACHI  H106030SDSUN300G A2B0


Bootp-like setup for the Solaris 11 auto installer

Jumpstarting SPARC servers with bootp on Solaris 10 was very simple and straightforward, especially for those who didn't or couldn't run DHCP on their server networks.

However, with Solaris 11, if you want to use the auto installer, you will need to set up or configure DHCP to hand out addresses to your Solaris hosts.

Here is a straightforward method for managing a DHCP server that functions similarly to bootp, handing out addresses only to your Auto Install clients.

I will demonstrate installing and configuring the DHCP server, adding a network, and then adding a permanent address entry for a specific MAC address.

1. Install DHCP server

on Solaris 11:

# pkg install pkg:/service/network/dhcp/isc-dhcp

The DHCP server can also be an existing one and/or run on Solaris 10.

2. Configure the DHCP Server

I was fortunate to have no conflicting DHCP servers on my subnet; if there were one, and it was not allowing enough time for my auto installer's DHCP server to respond, I would have had to kindly ask its administrator to insert a delay on that server.

We will be booting a server that is on the local network, and we are going to configure the DHCP server to use local files. Note that the DHCP server seems to need the network's entry in /etc/netmasks to be present; on Solaris 11 this file has no entries in it by default.

/usr/sbin/dhcpconfig -D -r SUNWfiles -p /var/dhcp
echo "<network> <netmask>" >> /etc/netmasks
dhcpconfig -N <network> -m <netmask> -t <router> -g
pntadm -C <network>
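If you go with the isc-dhcp server installed in step 1 rather than the legacy Sun DHCP commands above, the same bootp-like behavior can be sketched as a minimal dhcpd4.conf with only fixed-address host entries. The subnet, router, and MAC values below are made up for illustration:

```
# /etc/inet/dhcpd4.conf sketch -- subnet, router, and MAC are examples
subnet 192.168.10.0 netmask 255.255.255.0 {
  option routers 192.168.10.1;
}

# Permanent entry for one Auto Install client; no dynamic range is
# declared, so only listed hosts ever get an address.
host ai-client01 {
  hardware ethernet 00:14:4f:aa:bb:cc;
  fixed-address 192.168.10.50;
}
```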

3. Add client entry.


Growing a Veritas Filesystem via the command line.

In my career I have gone from building volumes with the bottom-up approach, to vxassist, to VEA (with CFS clusters), and back to the command line. My Symantec reps have been raving about a new management console to replace VEA, but I'm leery of new software that comes with warnings about triggering kernel panics on existing older CFS clusters.

In the last year I have switched back to using the command line almost exclusively, and I'm now going to illustrate the easiest and quickest way to add LUNs and grow filesystems.

In this example I'm going to use LUNs 7 and 8 to grow my filesystem by 300GB. Although this resize can be done in one step, I'm splitting it into two commands for illustration purposes.

The first thing I'm going to check is that there is free space left in the filesystem for the new inode structures.

If the filesystem is 100% full with no space left, do not proceed: if you attempt the resize, the operation will likely freeze and you'll need to reboot to be able to complete the grow. If you are close to 100% but have some space left over, you can try growing slowly, in chunks of MBs, until you comfortably have enough free space to grow the volume.

I had read somewhere that this behavior should be gone by now, but members of my team still encounter it on occasion.

root@testserver # df -h /zones/.zonemounts/testzone-01/niqa3
Filesystem             size   used  avail capacity  Mounted on
                       200G   5.0G   183G     3%    /zones/.zonemounts/testzone-01/niqa3

In this case I have plenty of space so I’m going to proceed.
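With space confirmed, the add-and-grow sequence can be sketched as follows. The diskgroup, volume, and enclosure-based LUN names are hypothetical, and the commands are printed rather than executed, since the vx* utilities only exist on a Veritas host:

```shell
#!/bin/sh
# Hypothetical names: diskgroup appdg, volume appvol01, luns emc0_7/emc0_8.
DG=appdg; VOL=appvol01
CMDS="vxdisksetup -i emc0_7
vxdisksetup -i emc0_8
vxdg -g $DG adddisk ${DG}07=emc0_7 ${DG}08=emc0_8
vxresize -b -g $DG $VOL +300g"
printf '%s\n' "$CMDS"
```

vxresize grows the volume and the VxFS filesystem in one step, which is why the split into two commands above is only for illustration.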


Adding Swap To a Solaris Server

Unless you spend your time managing only database servers or have dedicated an entire disk to swap, eventually you are most likely going to need to increase your swap to handle an overzealous Java application.

I had to do this the other night and figured it would be good to post the process.

If you are on a ZFS root, while you can technically resize the ZFS swap volume, Solaris will not pick up the change without a reboot.

You could always drop the swap volume and re-add it, but if you are having to increase swap, that is probably a really bad idea.

So here is a quick and dirty procedure for adding ZFS swap to a system; this system had less than 2GB of swap left and was starting to get sluggish.

root@testserver $ swap -s
total: 47704096k bytes allocated + 10022424k reserved = 57726520k used, 1726824k available
root@testserver # swap -l
swapfile             dev  swaplo blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 16779248 16779248
root@testserver # zfs create -V 20G rpool/swap_1
root@testserver # swap -a /dev/zvol/dsk/rpool/swap_1
root@testserver # swap -l
swapfile             dev  swaplo blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 16779248 16779248
/dev/zvol/dsk/rpool/swap_1 256,3      16 41943024 41943024
root@testserver # swap -s
total: 47697768k bytes allocated + 10022152k reserved = 57719920k used, 22704208k available
echo "/dev/zvol/dsk/rpool/swap_1      -       -       swap    -       no      -" >> /etc/vfstab
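As a sanity check, swap -l reports 512-byte blocks, so the new volume's 41943024 free blocks should work out to roughly the 20GB just created:

```shell
#!/bin/sh
# Convert a swap -l block count (512-byte blocks) to GiB.
GIB=$(awk 'BEGIN { printf "%.1f", 41943024 * 512 / (1024 * 1024 * 1024) }')
echo "$GIB GiB"
```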

Here is the procedure on UFS:

root@testserver:~# mkfile 20G /var/DO_NOT_DELETE_swapfile1
root@testserver # swap -a  /var/DO_NOT_DELETE_swapfile1
root@testserver # swap -l
swapfile             dev  swaplo blocks   free
/dev/md/dsk/d10     85,10     16 16780208 16780208
/var/DO_NOT_DELETE_swapfile1  -       16 41943024 41943024
root@testserver:~# echo "/var/DO_NOT_DELETE_swapfile1    -         -       swap    -       no      -" >> /etc/vfstab

Configuring IPMP on Solaris 11

Configuring IPMP on Solaris 11 has become very straightforward and simple.

However, most of the examples I have seen online assume that both of the IPMP interfaces are unused and don't have your system IP on them, which probably isn't the case.

To get past this, you just need to run an ipadm delete-addr on the existing interface(s) that already have IPs assigned.

Here are the steps for configuring IPMP active/standby, with test addresses.

In this example testserver-nic0 and testserver-nic1 are the DNS names for the test addresses on each network card and are defined in the hosts file.

1. Identify the net devices to be used. In this case I will be using bge0 and bge2, which map to net0 and net2.

root@testserver-01:/# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net1              Ethernet             unknown    0      unknown   bge1
net3              Ethernet             unknown    0      unknown   bge3
net0              Ethernet             up         1000   full      bge0
net2              Ethernet             unknown    0      unknown   bge2

2. Remove any addresses, if defined.

ipadm delete-addr net0/v4

3. Create the IPMP device and assign both network cards to it.

ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net2 ipmp0

4. Configure probe-based failure detection and the data address.

For this you will either assign test addresses to the adapters in the IPMP group (as in Solaris 10) or enable transitive probing, which doesn't require test addresses.

Using test addresses:

ipadm create-addr -T static -a testserver-nic0/23 net0/test
ipadm create-addr -T static -a testserver-nic1/23 net2/test
ipadm set-ifprop -p standby=on -m ip net2
ipadm create-addr -T static -a testserver/23 ipmp0/v4

Using Transitive probing:
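A sketch of the transitive-probing variant, based on the config/transitive-probing property of the svc:/network/ipmp service. The commands are printed rather than executed, since svccfg and ipadm only exist on a Solaris host:

```shell
#!/bin/sh
# Transitive probing is enabled through the ipmp service property, so no
# per-interface test addresses are needed; only the data address is set.
CMDS="svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
svcadm refresh svc:/network/ipmp:default
ipadm set-ifprop -p standby=on -m ip net2
ipadm create-addr -T static -a testserver/23 ipmp0/v4"
printf '%s\n' "$CMDS"
```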
