rageek

A place for Unix Thoughts and Ideas

Monthly Archives: May 2012

Customizing Your Solaris 11 Auto Installer builds using a first boot script

Unlike Solaris 10, there is no equivalent to finish scripts in Solaris 11.

Instead, you create a script that is installed as an IPS package during the install, run on first boot, and then removed.

Oracle outlines this here
http://docs.oracle.com/cd/E23824_01/html/E21798/firstboot-1.html

I will go over the process, give the example from my site, and cover adding it into your AI manifest.

The first thing you will need to do is create the first boot SMF manifest and the script itself.

The manifest is described here:
http://docs.oracle.com/cd/E23824_01/html/E21798/firstboot-2.html

And the logic behind the first boot script here:
http://docs.oracle.com/cd/E23824_01/html/E21798/glcfe.html

As with all things Unix, there are multiple ways and philosophies for how to manage build scripts.

For my Solaris 11 baseline, I chose to write a first boot script that determines my closest package repository and then mounts and copies over my directory of admin scripts.
It then runs a script called first_boot_config.sh, which actually does all of the work on the first boot. Some could argue that it would be better to have all the work done in the original script run at boot and then version the IPS package, but I was looking to keep things simple, consistent with my previous builds, and easy to update, especially while I was refining my Solaris 11 baseline. I might move it all into the one script in the future.
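
To give a flavor of the pattern, here is a minimal sketch of such a first boot script (the NFS server, paths, and the service/package names here are placeholders, not my actual build):

#!/bin/sh
# Minimal first boot sketch -- all names below are placeholders.
. /lib/svc/share/smf_include.sh

# One-time site setup: mount the admin share, copy the scripts over,
# then hand off to the script that does the real work.
mount -F nfs installserver:/export/admin /mnt
cp -r /mnt/scripts /var/adm/scripts
umount /mnt
/var/adm/scripts/first_boot_config.sh

# Never run again: disable this service and remove our IPS package.
svcadm disable svc:/site/first-boot-script-svc:default
pkg uninstall pkg:/site/first-boot-script-config

exit $SMF_EXIT_OK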

The general flow of my system build is:
Read more of this post


SAN booting SPARC systems with Emulex HBAs

While booting from the SAN is more common on domained systems, it is less used on smaller systems.

I have been using it quite a bit over the last year as part of converting my existing servers from UFS to ZFS and upgrading to Solaris 10u9 to support Oracle 11gR2.

Here is the process for enabling SAN boot on SPARC systems with Emulex HBAs.

This will change your fibre device paths, so you will want to do this before creating any Live Upgrade environments on the SAN.

Here are the Steps:

1. Enable the boot_code setting in HBAnyware or emlxadm on all your adapters. This is crucial for enabling the sfs boot support. In practice, it may be advisable to power-cycle the server/domain after enabling this.
This can be found, checked, and set through emlxadm:

root@testserver # /opt/EMLXemlxu/bin/emlxadm -iall -y boot_code

Found 2 HBA ports.

HBA 1: /devices/pci@1,700000/emlx@0/fp@0,0

Boot code: Disabled

HBA 2: /devices/pci@3,700000/emlx@0/fp@0,0

Boot code: Disabled

root@testserver # /opt/EMLXemlxu/bin/emlxadm -iall -y boot_code enable

Found 2 HBA ports.

HBA 1: /devices/pci@1,700000/emlx@0/fp@0,0

Boot code: Enabled

HBA 2: /devices/pci@3,700000/emlx@0/fp@0,0

Boot code: Enabled

Read more of this post

Downloading Solaris images from Oracle using wget

I came across this in a post by Steve Scargall in the Oracle forums; this post is a rehash of his solution.

I recently needed to download the Solaris 11 repository image for seeding a new auto install server.

Oracle provides a wget script for downloading their patches. However, Oracle's OTN doesn't allow you to log in via wget/curl, which means the Oracle-provided scripts are of no use.

Downloading it locally and then uploading it over a cable modem is no solution either.

The trick is to authenticate to OTN and then export the cookie. Then you can use that cookie to download the files.
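
In sketch form (the download URL is whatever link you copy from the OTN download page; cookies.txt is the file exported from Firefox):

# Log in to OTN with Firefox, accept the license, export cookies.txt,
# then hand the cookie file to wget:
DOWNLOAD_URL="http://download.oracle.com/otn/..."   # paste the real link here
wget --load-cookies=cookies.txt --continue "$DOWNLOAD_URL"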

Requirements:
-wget 1.8+
-System must be able to ping download.oracle.com
-Firefox and the Export Cookies plugin (https://addons.mozilla.org/en-US/firefox/addon/export-cookies/)

Steps

Read more of this post

Adding Storage Foundation CFS mount points via the command line

For the longest time I used VEA for my CFS operations because it saved me time updating the main.cf, etc.

Then I figured out that VEA has command-line utilities that it calls to do all of its dirty work (check out /var/vx/isis/command.log), and when it comes to adding cluster filesystems, the one to use is cfsmntadm.

Here are quick instructions on how to use it.

In this example I’m adding a new shared diskgroup with a single mount point.

Here are its command line options.

root@testnode1 # cfsmntadm
  Error: V-35-1: cfsmntadm: Incorrect usage
  Usage:
       cfsmntadm add <shared_diskgroup> <shared_volume> <mount_point>
               [service_group_name] <node_name=[mount_options]> ...
       cfsmntadm add <shared_diskgroup> <shared_volume> <mount_point>
              [service_group_name] all=[mount_options] [node_name ...]
       cfsmntadm add ckpt <checkpoint_name> <mount_point_of_source_fs>
                <mount_point> all=[mount_options] [node_name ...]
       cfsmntadm add snapshot <cache_object> <mount_point_of_source_fs>
               <mount_point> <node_name>=[mount_options]
       cfsmntadm add snapshot dev=<snapshot_volume> <mount_point_of_source_fs>
               <mount_point> <node_name>=[mount_options]
       cfsmntadm delete [-f] <mount_point>
       cfsmntadm modify <mount_point> <node_name>=[mount_options]
       cfsmntadm modify <mount_point> <node_name>+=<mount_options>
       cfsmntadm modify <mount_point> <node_name>-=<mount_options>
       cfsmntadm modify <mount_point> all=[mount_options]
       cfsmntadm modify <mount_point> all+=<mount_options>
       cfsmntadm modify <mount_point> all-=<mount_options>
       cfsmntadm modify <mount_point> add <node_name=[mount_options]> ...
       cfsmntadm modify <mount_point> delete <node_name> ...
       cfsmntadm modify <mount_point> vol <volume_name>
       cfsmntadm display [-v] { mount_point | node_name }
       cfsmntadm setpolicy <mount_point> [node_name ...]
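
As a quick hedged example ahead of the full walkthrough (the diskgroup, volume, and mount point names here are made up), adding a shared mount on all nodes and bringing it online looks like:

# Add a cluster mount for volume vol01 in shared diskgroup datadg,
# mounted at /data01 on all nodes with default mount options:
cfsmntadm add datadg vol01 /data01 all=

# Bring the new mount online cluster-wide:
cfsmount /data01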

Read more of this post

Managing VCS zone dependencies in Veritas 5.1

I have been provisioning all my new servers with VCS 5.1sp1 and somewhere between 5.0mp4 and 5.1sp1 they changed the Zones Agent in some fundamental ways. Previously, zones were defined as a resource in the group and could be linked to other resources such as proxy/group/mount resources.

In 5.1, there is a zone resource, but the definition is handled via the ContainerInfo property on the Service group:

5.0 Config:

group ems_zones (
        SystemList = { testnode01 = 0, testnode02 = 1 }
        Parallel = 1
        AutoStartList = { testnode01, testnode02 }
        )

        Zone ems_zones_01_02 (
                Critical = 0
                ZoneName @testnode02 = testzn_ems-02
                ZoneName @testnode01 = testzn_ems-01
                )

        requires group cvm_mounts online local firm

5.1 Config:

group ems_zones (
        SystemList = { testnode02 = 1, testnode01 = 0 }
        ContainerInfo @testnode02 = { Name = testzn_ems-02, Type = Zone, Enabled = 1 }
        ContainerInfo @testnode01 = { Name = testzn_ems-01, Type = Zone, Enabled = 1 }
        Parallel = 1
        AutoStartList = { testnode02, testnode01 }
        )

        Zone ems_zones (
                Critical = 0
                )

        requires group cvm_mounts online local firm

Despite there still being a Zone resource, and resource dependencies still being supported, dependencies only work for resources that require the zone resource.
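
For example (a sketch with made-up resource names), an application resource that must wait for the zone would be added to the group and linked so that it requires the zone resource:

haconf -makerw
hares -add ems_app Application ems_zones          # add to the ems_zones group
hares -modify ems_app StartProgram "/opt/ems/bin/start_ems"
hares -modify ems_app StopProgram "/opt/ems/bin/stop_ems"
hares -modify ems_app Critical 0
hares -link ems_app ems_zones                     # ems_app requires the Zone resource
haconf -dump -makero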

Read more of this post

Creating a derived profile jumpstart installation

As a matter of philosophy, I have kept a single profile/manifest for all of my Solaris baselines, and this has worked well for me for many years across SPARC/x86, numerous updates, and a couple of major releases.

With the release of the T3/T4-based systems and their WWN-based naming, my reliance on the rootdisk variable in my profile went out the window, as I could no longer simply double-check with confidence that it had installed to the correct internal disks.

This led me to create a derived jumpstart profile that rewrites the jumpstart profile to specify the correct boot devices.

In my previous post I outlined my disk_slot.sh script, which translates disk slot numbers to disk IDs on T3- and T4-based systems.

I’m going to use that script with a derived jumpstart profile to specify the proper disk(s) for installing Solaris 10.

Creating my begin script.
This script assumes that disk_slot.sh has been placed in the install_config directory, which also holds the rules.ok file.

This script copies an existing profile of mine to the jumpstart profile defined by the predefined $SI_PROFILE variable.
My script then strips out the rootdisk.s0 line of the profile so I can re-specify it.

Note: the s10_derived_profile used in my script is actually a symbolic link that points to the current profile that I'm using to deploy to my systems.

After that, I determine how many SAS devices the system has.
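
Here is a hedged sketch of that begin script (disk_slot.sh's interface is an assumption here; SI_PROFILE and SI_CONFIG_DIR are the standard jumpstart begin script variables):

#!/bin/sh
# Start from the baseline profile (a symlink to the current profile),
# dropping its rootdisk.s0 line so the boot disk can be re-specified.
grep -v rootdisk.s0 ${SI_CONFIG_DIR}/s10_derived_profile > ${SI_PROFILE}

# Translate slot 0 to its WWN-based device name (assumed interface)
# and re-add the root filesystem line pointing at that disk.
ROOTDISK=`${SI_CONFIG_DIR}/disk_slot.sh 0`
echo "filesys ${ROOTDISK}s0 free /" >> ${SI_PROFILE}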

Read more of this post

Mapping Disk Device to Slot number on T3/T4 based Systems

I'm starting to play around with my new T4-2 & T4-4 systems and have noticed the change in disk devices from the previous c?t?d? format to the WWN format on T3- and T4-based systems.

This causes issues with some of my scripts, and also with knowing which device is in which slot.

In Solaris 10 update 10, the diskinfo utility will print the mapping, but only for the first controller, which can make it problematic for the T4-1 and T4-4 systems, which have two controllers. This is supposed to be fixed in update 11.

Oracle outlines the process for finding the information through prtconf
http://docs.oracle.com/cd/E22985_01/html/E22986/z40000091523716.html

That is good, but a one-liner is much easier to use.

Here is the output of diskinfo:

root@testserver # diskinfo -a

Enclosure path:         1Xxxxxxx-physical-hba-0
Chassis Serial Number:  1xxxxxx-physical-hba-0
Chassis Model:          ORCL,SPARC-T4-2

Label            Disk name               Vendor   Product          Vers
---------------  ----------------------  -------- ---------------- ----
/SYS/SASBP/HDD0  c0t5000CCA012B66D58d0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD1  c0t5000CCA012B7368Cd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD2  c0t5000CCA012B73594d0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD3  c0t5000CCA012B7314Cd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD4  c0t5000CCA012B66DDCd0   HITACHI  H106030SDSUN300G A2B0
/SYS/SASBP/HDD5  c0t5000CCA012B613B4d0   HITACHI  H106030SDSUN300G A2B0

Not too bad.

But on a T4-4 it is a bit more problematic; it isn't showing the second drive:

root@testserver # diskinfo -a

Enclosure path:         1xxxxxxxxx-physical-hba-0
Chassis Serial Number:  1xxxxxxxxx-physical-hba-0
Chassis Model:          ORCL,SPARC-T4-4

Label      Disk name               Vendor   Product          Vers
------------ ----------------------  -------- ---------------- ----
/SYS/MB/HDD0 c0t5000CCA02522A838d0   HITACHI  H106030SDSUN300G A2B0
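
My one-liner is after the jump, but as a rough sketch of the prtconf approach from the Oracle doc linked above, you can walk the disks and pull each one's obp-path property, which identifies the controller and PHY behind it (and therefore the slot):

# Print each disk's obp-path, which encodes its controller/PHY:
for d in /dev/rdsk/c*t*d0s0; do
        echo "$d:"
        prtconf -v "$d" | nawk '/obp-path/ { getline; print "  " $1 }'
done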

Read more of this post

bootp-like setup for Solaris 11 auto installer

Jumpstarting SPARC servers with bootp on Solaris 10 was very simple and straightforward, especially for those who didn't or couldn't run DHCP on their server networks.

However, with Solaris 11, if you want to use the auto installer, you will need to set up or configure DHCP to hand out addresses to your Solaris hosts.

Here is a straightforward method for managing a DHCP server that functions much like bootp, handing out addresses only to your Auto Install clients.

What I will be demonstrating is installing and configuring the DHCP server, adding a network, and then adding a permanent address entry for a specific MAC address.

1. Install DHCP server

on Solaris 11:

# pkg install pkg:/service/network/dhcp/isc-dhcp

The server can also be an existing one and/or run on Solaris 10.

2. Configure the DHCP Server

I was fortunate to have no conflicting DHCP servers on my subnet; if there had been one that wasn't allowing enough time for my auto installer DHCP server to respond, I would have had to kindly ask the DHCP server administrator to insert a delay in the server.

We will be booting a server that is on the 10.0.10.0 network, and we are going to configure the DHCP server to use local files. Note that the DHCP server seems to need the network entry in /etc/netmasks to be present; on Solaris 11 this file has no entries by default.

/usr/sbin/dhcpconfig -D -r SUNWfiles -p /var/dhcp
echo "10.0.10.0 255.255.255.0" >> /etc/netmasks
dhcpconfig -N 10.0.10.0 -m 255.255.255.0 -t 10.0.0.1 -g
pntadm -C 10.0.10.0

3. Add client entry.
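
As a sketch of what this looks like with the files-based server configured above (the MAC and IP below are placeholders), the client ID is 01 followed by the Ethernet address, and PERMANENT+MANUAL pins the entry to that client:

# Reserve 10.0.10.50 for the client with MAC 0:14:4f:aa:bb:cc:
pntadm -A 10.0.10.50 -i 0100144FAABBCC -f 'PERMANENT+MANUAL' \
       -m 10.0.10.0 10.0.10.0

# Verify the network table:
pntadm -P 10.0.10.0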

Read more of this post